• The USSR Once Tried Reversing a River's Direction with 'Peaceful Nuclear Explosions'

    "In the 1970s, the USSR used nuclear devices to try to send water from Siberia's rivers flowing south, instead of its natural route north..." remembers the BBC.he Soviet Union simultaneously fired three nuclear devices buried 127munderground. The yield of each device was 15 kilotonnes. The experiment, codenamed "Taiga", was part of a two-decade long Soviet programme of carrying out peaceful nuclear explosions.

    In this case, the blasts were supposed to help excavate a massive canal to connect the basin of the Pechora River with that of the Kama, a tributary of the Volga. Such a link would have allowed Soviet scientists to siphon off some of the water destined for the Pechora, and send it southward through the Volga. It would have diverted a significant flow of water destined for the Arctic Ocean to go instead to the hot, heavily populated regions of Central Asia and southern Russia. This was just one of a planned series of gargantuan "river reversals" that were designed to alter the direction of Russia's great Eurasian waterways...

    Years later, Leonid Volkov, a scientist involved in preparing the Taiga explosions, recalled the moment of detonation. "The final countdown began: ...3, 2, 1, 0... then fountains of soil and water shot upward," he wrote. "It was an impressive sight." Despite Soviet efforts to minimise the fallout by using a low-fission explosive, which produces fewer atomic fragments, the blasts were detected as far away as the United States and Sweden, whose governments lodged formal complaints, accusing Moscow of violating the Limited Test Ban Treaty...

    Ultimately, the nuclear explosions that created Nuclear Lake, one of the few physical traces left of river reversal, were deemed a failure because the crater was not big enough. Although similar PNE canal excavation tests were planned, they were never carried out. In 2024, the leader of a scientific expedition to the lake announced radiation levels were normal.

    "Perhaps the final nail in the coffin was the Chernobyl nuclear disaster in 1986, which not only consumed a huge amount of money, but pushed environmental concerns up the political agenda," the article notes.

    "Four months after the Number Four Reactor at the Chernobyl Nuclear Power Plant exploded, Soviet Premier Mikhail Gorbachev cancelled the river reversal project."
    And a Russian blogger who travelled to Nuclear Lake in the summer of 2024 told the BBC that nearly 50 years later, there were some places where the radiation was still significantly elevated.

  • Trump warns Apple and Samsung that 25% smartphone tariffs could land in June

    Samsung and other companies would be affected, but perhaps not as much as Apple.
    Credit: Li Hongbo/VCG via Getty Images

    In a Friday morning Truth Social post, President Donald Trump threatened to levy a 25 percent tariff on iPhones if Tim Cook didn't move manufacturing to the United States. It turns out the situation is a little more complicated than that. While speaking to the media later that day, Trump clarified that the tariff would apply to any company selling foreign-made phones in the U.S., not just Apple. The president said the new 25% smartphone tariff could arrive by the end of June, per Bloomberg. He also made sure to single out Samsung, the second-most popular smartphone brand in the U.S. market. This broader approach makes a bit more sense than the initial threat against Apple, as it was unclear how the Trump administration planned to place tariffs on one single company's products. Even so, it wasn't the first time Trump made a tariff threat against a specific company, and it may not be the last. Besides Apple, President Trump previously threatened to target toymaker Mattel with tariffs. Milan Miric, PhD, Associate Professor of Data Sciences and Operations at the University of Southern California's Marshall School of Business, explained to Mashable how President Trump could effectively target a single company with tariffs.

    "For Apple, hardware products are their most important business line. All of the other competitors in the U.S. that can compete on hardware would be foreign companies manufacturing abroad," Miric told Mashable via email. "Therefore, if you wanted to target consumer electronics coming from China, the U.S. company that would be the most directly affected is Apple."For context, Apple relies far more heavily on hardware sales to bolster its business than U.S.-based competitors like Google and Microsoft, which are primarily service companies that happen to sell some hardware too.


    According to Miric, all of this could be a prelude to something resembling a trade deal with Apple, as Trump has negotiated with foreign governments. "You could imagine a scenario where large American companies strike a compromise with the government, where some of their own products that are very popular with American consumers and important to American business get exceptions, but then tariffs apply broadly to foreign companies, effectively providing additional protection to these American companies," Miric said.

    Earlier this year, Apple promised to spend $500 billion in the U.S. over the next four years and build a new factory in Texas, but iPhone manufacturing specifically is unlikely to return to the United States. As Mashable's Stan Schroeder previously reported, a U.S.-made iPhone would likely cost at least $3,000. While new tariffs on smartphones could be arriving as soon as June, the president's tariff policy has included a few surprising reversals. Wall Street is paying attention, however. Samsung and Apple stock both fell on Friday after the president's remarks.

  • Enhancing Language Model Generalization: Bridging the Gap Between In-Context Learning and Fine-Tuning

    Language models (LMs) have great capabilities as in-context learners when pretrained on vast internet text corpora, allowing them to generalize effectively from just a few task examples. However, fine-tuning these models for downstream tasks presents significant challenges. While fine-tuning requires hundreds to thousands of examples, the resulting generalization patterns show limitations. For example, models fine-tuned on statements like “B’s mother is A” struggle to answer related questions like “Who is A’s son?” However, the LMs can handle such reverse relations in context. This raises questions about the differences between in-context learning and fine-tuning generalization patterns, and how these differences should inform adaptation strategies for downstream tasks.
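    To make the contrast concrete, here is a minimal, hypothetical Python sketch of the two adaptation settings (the statement, question, and prompt format are illustrative, not taken from the paper): fine-tuning consumes the forward statement as a training example, while in-context learning simply places that same statement in the prompt ahead of the reversed question.

    ```python
    # Hypothetical illustration of the reversal setup: a forward fact used for training,
    # and a reversed question the adapted model must answer.
    forward_statement = "B's mother is A."
    reversed_question = "Who is A's son?"  # answering requires inverting the trained relation

    # Fine-tuning setting: the statement becomes a training document; at test time the
    # reversed question is asked with no supporting context, which is where fine-tuned
    # models tend to fail.
    finetuning_examples = [{"text": forward_statement}]
    finetuned_eval_prompt = f"Question: {reversed_question}\nAnswer:"

    # In-context setting: no weight updates; the same statement is prepended to the
    # question, and pretrained models typically answer such reversals correctly.
    in_context_prompt = f"{forward_statement}\n\nQuestion: {reversed_question}\nAnswer:"

    print(finetuned_eval_prompt)
    print(in_context_prompt)
    ```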
    Research into improving LMs’ adaptability has followed several key approaches. In-context learning studies have examined learning and generalization patterns through empirical, mechanistic, and theoretical analyses. Out-of-context learning research explores how models utilize information not explicitly included in prompts. Data augmentation techniques use LLMs to enhance performance from limited datasets, with specific solutions targeting issues like the reversal curse through hardcoded augmentations, deductive closure training, and generating reasoning pathways. Moreover, synthetic data approaches have evolved from early hand-designed data to improve generalization in domains like linguistics or mathematics to more recent methods that generate data directly from language models.
    Researchers from Google DeepMind and Stanford University have constructed several datasets that isolate knowledge from pretraining data to create clean generalization tests. Performance is evaluated across various generalization types by exposing pretrained models to controlled information subsets, both in-context and through fine-tuning. Their findings reveal that in-context learning shows more flexible generalization than fine-tuning in data-matched settings, though there are some exceptions where fine-tuning can generalize to reversals within larger knowledge structures. Building on these insights, researchers have developed a method that enhances fine-tuning generalization by incorporating in-context inferences into the fine-tuning data.
    Researchers employ multiple datasets carefully designed to isolate specific generalization challenges or insert them within broader learning contexts. Evaluation relies on multiple-choice likelihood scoring without providing answer choices in context. The experiments involve fine-tuning Gemini 1.5 Flash using batch sizes of 8 or 16. For in-context evaluation, the researchers combine training documents as context for the instruction-tuned model, randomly subsampling by 8x for larger datasets to minimize interference issues. The key innovation is a dataset augmentation approach using in-context generalization to enhance fine-tuning dataset coverage. This includes local and global strategies, each employing distinct contexts and prompts.
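    The augmentation step can be pictured with a short, hedged sketch. The code below assumes a generic generate(prompt) -> str text-generation call and illustrative prompt wording, since the paper's exact prompts are not reproduced here: the local strategy conditions on one training document at a time, the global strategy conditions on the (possibly subsampled) dataset as a whole, and the model's in-context inferences are added to the fine-tuning set alongside the original documents.

    ```python
    # Hedged sketch of in-context-inference augmentation; prompts and function names are
    # assumptions for illustration, not the authors' implementation.
    from typing import Callable, List

    def local_augment(generate: Callable[[str], str], doc: str) -> List[str]:
        """Local strategy (assumed form): condition on a single training document and ask
        the model to restate it, including rephrasings and reversed relations."""
        prompt = (
            "Document:\n" + doc + "\n\n"
            "Rewrite the facts above, including rephrasings and reversed relations, one per line."
        )
        return [line.strip() for line in generate(prompt).splitlines() if line.strip()]

    def global_augment(generate: Callable[[str], str], docs: List[str], n: int = 20) -> List[str]:
        """Global strategy (assumed form): condition on the full (possibly subsampled)
        document set and ask for inferences that link documents to one another."""
        prompt = (
            "Documents:\n" + "\n".join(docs) + "\n\n"
            f"Write {n} statements that follow from combining the documents above, one per line."
        )
        return [line.strip() for line in generate(prompt).splitlines() if line.strip()]

    def build_augmented_finetuning_set(generate: Callable[[str], str], docs: List[str]) -> List[str]:
        """Union of the original documents and the model's own in-context inferences,
        used as the fine-tuning corpus."""
        augmented = list(docs)
        for doc in docs:
            augmented.extend(local_augment(generate, doc))
        augmented.extend(global_augment(generate, docs))
        return augmented
    ```

    In practice, generate would call the same pretrained model that is later fine-tuned on the augmented set; a toy stub that returns a fixed string is enough to exercise the plumbing.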
    On the Reversal Curse dataset, in-context learning achieves near-ceiling performance on reversals, while conventional fine-tuning shows near-zero accuracy as models favor incorrect celebrity names seen during training. Fine-tuning with data augmented by in-context inferences matches the high performance of pure in-context learning. Testing on simple nonsense reversals reveals similar patterns, though with less pronounced benefits. For simple syllogisms, while the pretrained model performs at chance level (indicating no data contamination), fine-tuning does produce above-chance generalization for certain syllogism types where logical inferences align with simple linguistic patterns. However, in-context learning outperforms fine-tuning, with augmented fine-tuning showing the best overall results.

    In conclusion, this paper explores generalization differences between in-context learning and fine-tuning when LMs face novel information structures. Results show in-context learning’s superior generalization for certain inference types, prompting the researchers to develop methods that enhance fine-tuning performance by incorporating in-context inferences into training data. Despite promising outcomes, several limitations affect the study. The first one is the dependency on nonsense words and implausible operations. Second, the research focuses on specific LMs, limiting the results’ generality. Future research should investigate learning and generalization differences across various models to expand upon these findings, especially newer reasoning models.

Government">
  • Government Furiously Trying to Undo Elon Musk's Damage
    Federal agencies scrambled to bring back over $220 million worth of contracts after Elon Musk's so-called Department of Government Efficiency cancelled them, the New York Times reports. However, of those 44 contracts that were cancelled and eventually reinstated, DOGE is still citing all but one of them as examples of the government spending the group supposedly saved on its website's error-plagued "Wall of Receipts." The White House told the NYT that this is "paperwork lag" that will be fixed. Clerical errors or not, the "zombie contracts" are a damning sign of the chaos sowed by the billionaire's hasty and sweeping cost-cutting that would seem antithetical to its stated goals of efficiency. "They should have used a scalpel," Rachel Dinkes of the Knowledge Alliance, an association of education companies that includes one that lost a contract, told the NYT.
    "But instead they went in with an axe and chopped it all down." Musk brought the Silicon Valley ethos of "move fast and break things" he uses at his business ventures, like SpaceX, to his cleaning house of the federal government.
    And this, it seems, resulted in a lot of wasted time and effort. Some of the contracts DOGE cancelled were required by law, according to the NYT, and some were for skills that the government needed but didn't have.
    The whiplash was most felt at the Department of Veterans Affairs, which reversed 16 cancelled contracts — the highest of any agency in the NYT's analysis. Many of the contracts that DOGE cancelled were reinstated almost immediately.
    The Environmental Protection Agency, for example, revived a contract just two and a half hours after Musk's team cancelled it, the paper found.
    Others were brought back within days. After losing a contract with the US Department of Agriculture in February, Raquel Romero and her husband gained it back four days later.
    The USDA told the NYT that it reinstated the contract after discovering that it was "required by statute," but declined to specify which one.
    Romero believes that a senior lawyer at the agency, who was a supporter of the couple's work, intervened on their behalf. "All I know is, she retired two weeks later," Romero told the NYT. The waste doesn't end there.
    Since the contracts are necessary, it puts the fired contractors in a stronger bargaining position when the government comes crawling back.
    In the case of the EPA contract, the agency agreed to pay $171,000 more than before the cancellation.
    In other words, these cuts are costing, not saving, the government money. A White House spokesperson, however, tried to spin the flurry of reversals as a positive sign that the agencies are complying with Musk's chaotic directions, while also playing down the misleading savings claims on DOGE's website. "The DOGE Wall of Receipts provides the latest and most accurate information following a thorough assessment, which takes time," White House spokesman Harrison Fields told the NYT.
    "Updates to the DOGE savings page will continue to be made promptly, and departments and agencies will keep highlighting the massive savings DOGE is achieving."Harrison also called the over $220 million of zombie contracts "very, very small potatoes" compared to the supposed $165 billion Musk has saved American taxpayers.If this latest analysis is any indication, however, that multibillion-dollar sum warrants significant skepticism.
    We're only beginning to see a glimmer of the true fallout from Musk tornadoing through the federal government.
المصدر">
    Source: https://futurism.com/government-undo-elon-musk-doge-damage
#government">
  • Going ‘AI first’ appears to be backfiring on Klarna and Duolingo

    Artificial intelligence might be the future of the workplace, but companies that are trying to get a head start on that future are running into all sorts of problems.



    Klarna and Duolingo have been some of the poster children for the “AI-first” workplace.
    Two years ago, Klarna CEO Sebastian Siemiatkowski announced he wanted his company to be the “favorite guinea pig” of OpenAI, instituting a hiring freeze and replacing as many workers as possible with AI systems.
    Last month, Duolingo announced an AI-first shift, saying it would stop using contractors to do work AI can handle and only increase head count when teams have maximized all possible automation.



    Klarna, while still investing in AI, seems to have renewed appreciation for the human touch.
    And Duolingo is finding itself under attack on social media for its decision.



    Klarna’s Siemiatkowski tells Bloomberg the fintech company is about to go on a hiring spree in order to ensure customers will always have the option to speak to a live representative.
    The company did not say how many people it plans to add, but Siemiatkowski indicated Klarna would look at students and rural areas to boost its workforce.



    Last year, Klarna, in an announcement, said AI was doing the work of 700 customer service agents.
    Now it’s focusing on adding human connections.



    “As cost unfortunately seems to have been a too predominant evaluation factor when organizing this, what you end up having is lower quality,” Siemiatkowski said.
    “Really investing in the quality of the human support is the way of the future for us.” But Klarna says it is still enthusiastic about AI.



    Duolingo’s AI-first push is much newer, and there have been no policy reversals on its part as of yet.
    But the company is facing a tsunami of pushback from the general public on social media after announcing the move, particularly on TikTok.



    The top comments on virtually every recent post have nothing to do with the video or the company—and everything to do with the company’s embrace of AI.




    For example, a Duolingo TikTok video jumping on board the “Mama, may I have a cookie” trend saw replies like “Mama, may I have real people running the company 💔” (with 69,000 likes) and “How about NO ai, keep your employees.” Another video that tied into the How to Train Your Dragon character Hiccup brought comments like “Was firing all your employees and replacing them with AI also a hiccup?”



    Other comments are more serious: “Using AI is disgusting,” wrote one user.
    “Language learning should be pioneered by PEOPLE.
    By making this decision, Duolingo is actively harming the environment, their customers, and employees when it hurts the most.”



    Another wrote: “What kind of audience do you think you’ve built that it’s okay to go ‘AI first.’ We don’t want AI, we want real people doing good work.
    Goodbye, Duo.
    If this is the way you’re going, you wont be missed.”



    Others claimed to have deleted the app: “Deleted Duolingo last week.
    A 650+ day streak never felt so meaningless once I saw the news.”



    Duolingo says much of that feedback is coming from people who don’t understand what AI-first means. “A lot of the feedback we’ve seen comes from a place of passion for Duolingo, which we really appreciate,” a representative told Fast Company.
    “To clarify, AI isn’t replacing our learning experts—it’s a tool they use to make Duolingo better.
    Everything we create with AI is guided by our team of learning design experts...
    We’re committed to using AI with human oversight, to help us deliver on our mission to make the best education in the world available to everyone.”



    Companies, in general, remain excited about the potential cost savings of AI, sometimes for good reason.
    Duolingo’s stock is at an all-time high, and the company recently raised its sales forecast for 2025.
    A study by the World Economic Forum (WEF) found that 40% of employers expect to reduce their workforce and hand tasks that can be automated over to the technology.
    As bullish as executives might be, however, that excitement has not made its way into the consumer space.



    Almost half of Generation Z job hunters told the WEF they believed AI has reduced the value of their college education.
    And researchers at Harvard University say the technology still feels threatening to people.



    “From early on in life, humans strive to manage their surroundings to achieve their goals.
    So they’re naturally reluctant to adopt innovations that seem to reduce their control over a situation,” wrote Julian De Freitas, an assistant professor in the marketing unit at Harvard Business School.
    “The possibility that AI tools might completely take over tasks previously handled by humans, rather than just assist with them, stirs up deep concerns and worries.”

    Update, May 12, 2025: This article has been updated with comments from Klarna and Duolingo.

    Source: https://www.fastcompany.com/91332763/going-ai-first-appears-to-be-backfiring-on-klarna-and-duolingo

    #Going #first #appears #backfiring #Klarna #and #Duolingo