• As AI faces court challenges from Disney and Universal, legal battles are shaping the industry's future | Opinion

    Silicon advances and design innovations do still push us forward – but the future landscape of the industry is also being sculpted in courtrooms and parliaments

    Image credit: Disney / Epic Games

    Opinion

    by Rob Fahey
    Contributing Editor

    Published on June 13, 2025

    In some regards, the past couple of weeks have felt rather reassuring.
    We've just seen a hugely successful launch for a new Nintendo console, replete with long queues for midnight sales events. Over the past few days, the various summer events and showcases that have sprouted amongst the scattered bones of E3 have generated waves of interest and hype for a host of new games.
    It all feels like old times. It's enough to make you imagine that while change is the only constant, at least we're facing change that's fairly well understood, change in the form of faster, cheaper silicon, or bigger, more ambitious games.
    If only the winds that blow through this industry all came from such well-defined points on the compass. Nestled in amongst the week's headlines, though, was something that's likely to have profound but much harder to understand impacts on this industry and many others over the coming years – a lawsuit being brought by Disney and NBC Universal against Midjourney, operators of the eponymous generative AI image creation tool.
    In some regards, the lawsuit looks fairly straightforward; the arguments made and considered in reaching its outcome, though, may have a profound impact on both the ability of creatives and media companies (including game studios and publishers) to protect their IP rights from a very new kind of threat, and the ways in which a promising but highly controversial and risky new set of development and creative tools can be used commercially.
    I say the lawsuit looks straightforward from some angles, but honestly overall it looks fairly open and shut – the media giants accuse Midjourney of replicating their copyrighted characters and material, and of essentially building a machine for churning out limitless copyright violations.
    The evidence submitted includes screenshot after screenshot of Midjourney generating pages of images of famous copyrighted and trademarked characters ranging from Yoda to Homer Simpson, so "no we didn't" isn't going to be much of a defence strategy here.
    A more likely tack on Midjourney's side will be the argument that they are not responsible for what their customers create with the tool – you don't sue the manufacturers of oil paints or canvases when artists use them to paint something copyright-infringing, nor does Microsoft get sued when someone writes something libellous in Word. Midjourney may try to argue that its software belongs in that tool category, with users alone being ultimately responsible for how they use it.

    If that argument prevails and survives appeals and challenges, it would be a major triumph for the nascent generative AI industry and a hugely damaging blow to IP holders and creatives, since it would seriously undermine their argument that AI companies shouldn't be able to include copyrighted material in training data sets without licensing or compensation.
    The reason Disney and NBCU are going after Midjourney specifically seems to be partially down to Midjourney being especially reticent to negotiate with them about licensing fees and prompt restrictions; other generative AI firms have at least started talking about paying for content licenses for training data, and have imposed various limitations on their software to prevent the most egregious and obvious forms of copyright violation (at least for famous characters belonging to rich companies; if you're an individual or a smaller company, it's entirely the Wild West out there as regards your IP rights).
    In the process, though, the media giants are essentially risking a court showdown over the set of not-quite-clear legal questions at the heart of this dispute; if Midjourney were to prevail, other AI companies would likely back off from engaging with IP holders on this topic.
    To be clear, though, it seems highly unlikely that Midjourney will win that argument, at least not in the medium to long term. Yet depending on how this case moves forward, losing the argument could have equally dramatic consequences – especially if the courts find themselves compelled to consider the question of how, exactly, a generative AI system reproduces a copyrighted character with such precision without storing copyright-infringing data in some manner.
    AI advocates have been trying to handwave around this notion from the outset, but at some point a court is going to have to sit down and confront the fact that the precision with which these systems can replicate copyrighted characters, scenes, and other materials means they must have stored that infringing material in some form.
    That it's stored as a scattered mesh of probabilities across the vertices of a high-dimensional vector array, rather than a straightforward, monolithic media file, is clearly important but may ultimately be considered moot. If the data is in the system and can be replicated on request, how that differs from Napster or The Pirate Bay is arguably just a matter of technical obfuscation.
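    To make that point concrete, here's a deliberately tiny sketch (in Python, and nothing like the scale or architecture of a real image model): a "model" whose only storage is a table of next-character probabilities, with no media file anywhere, yet which can hand its training text back on request. The training line is just an illustrative stand-in for protected material.

        # Toy sketch: a "model" that stores its training text only as
        # next-character probabilities, never as a file, yet can still
        # reproduce that text on demand. Illustrative only; real generative
        # models are vastly more complex, but the storage question is analogous.
        from collections import defaultdict, Counter
        import random

        training_text = "Do, or do not. There is no try. "  # stand-in for protected material
        k = 4  # characters of context

        # "Training": count how often each character follows each k-character context.
        counts = defaultdict(Counter)
        for i in range(len(training_text) - k):
            counts[training_text[i:i + k]][training_text[i + k]] += 1

        # "Generation": sample the next character from the stored probabilities.
        def generate(seed, max_chars=40):
            out = seed
            while len(out) < max_chars:
                successors = counts.get(out[-k:])
                if not successors:
                    break
                chars, weights = zip(*successors.items())
                out += random.choices(chars, weights=weights)[0]
            return out

        print(generate("Do, "))  # recovers the full training text from probabilities alone

    If a handful of counts can do that for a single sentence, whether billions of trained weights "store" the images they can regenerate on demand is precisely the question a court will eventually have to answer.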
    Not having to defend that technical argument in court thus far has been a huge boon to the generative AI field; if it is knocked over in that venue, it will have knock-on effects on every company in the sector and on every business that uses their products.
    Nobody can be quite sure which of the various rocks and pebbles being kicked on this slope is going to set off the landslide, but there seems to be an increasing consensus that a legal and regulatory reckoning is coming for generative AI.
    Consequently, a lot of what's happening in that market right now has the feel of companies desperately trying to establish products and lock in revenue streams before that happens, because it'll be harder to regulate a technology that's genuinely integrated into the world's economic systems than it is to impose limits on one that's currently only clocking up relatively paltry sales and revenues.

    Keeping an eye on this is crucial for any industry that's started experimenting with AI in its workflows – none more than a creative industry like video games, where various forms of AI usage have been posited, although the enthusiasm and buzz so far massively outweighs any tangible benefits from the technology.
    Regardless of what happens in legal and regulatory contexts, AI is already a double-edged sword for any creative industry.
    Used judiciously, it might help to speed up development processes and reduce overheads. Applied in a slapdash or thoughtless manner, it can and will end up wreaking havoc on development timelines, filling up storefronts with endless waves of vaguely copyright-infringing slop, and potentially making creative firms, from the industry's biggest companies to its smallest indie developers, into victims of impossibly large-scale copyright infringement rather than beneficiaries of a new wave of technology-fuelled productivity.
    The legal threat now hanging over the sector isn't new, merely amplified. We've known for a long time that AI-generated artwork, code, and text have significant problems from the perspective of intellectual property rights (you can infringe someone else's copyright with it, but generally can't impose your own copyright on its creations – opening careless companies up to the risk of having key assets in their games turn out to be technically public domain and impossible to protect).
    Even if you're not using AI yourself, however – even if you're vehemently opposed to it on moral and ethical grounds (which is entirely valid given the highly dubious land-grab these companies have done for their training data) – the Midjourney judgement and its fallout may well impact the creative work you produce and how it ends up being used and abused by these products in future.
    This all has huge ramifications for the games business and will shape everything from how games are created to how IP can be protected for many years to come – a wind of change that's very different and vastly more unpredictable than those we're accustomed to. It's a reminder of just how much of the industry's future is currently being shaped not in development studios and semiconductor labs, but rather in courtrooms and parliamentary committees.
    The ways in which generative AI can be used and how copyright can persist in the face of it will be fundamentally shaped in courts and parliaments, but it's far from the only crucially important topic being hashed out in those venues.
    The ongoing legal turmoil over the opening up of mobile app ecosystems, too, will have huge impacts on the games industry. Meanwhile, the debates over loot boxes, gambling, and various consumer protection aspects related to free-to-play models continue to rumble on in the background.
    Because the industry moves fast while governments move slow, it's easy to forget that these remain live topics as far as governments are concerned, and hammers may come down at any time.
    Regulation by governments, whether through the passage of new legislation or the interpretation of existing laws in the courts, has always loomed over any major industry, especially one with strong cultural relevance, and the games industry is no stranger to it being part of the background heartbeat of the business.
    The 2020s, however, are turning out to be the decade in which many key regulatory issues come to a head all at once, whether it's AI and copyright, app stores and walled gardens, or loot boxes and IAP-based business models.
    Rulings on those topics in various different global markets will create a complex new landscape that will shape the winds that blow through the business, and how things look in the 2030s and beyond will be fundamentally impacted by those decisions.
  • The Download: US climate studies are being shut down, and building cities from lava

    This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

    The Trump administration has shut down more than 100 climate studies

    The Trump administration has terminated National Science Foundation grants for more than 100 research projects related to climate change, according to an MIT Technology Review analysis of a database that tracks such cuts.

    The move will cut off what’s likely to amount to tens of millions of dollars for studies that were previously approved and, in most cases, already in the works. Many believe the administration’s broader motivation is to undermine the power of the university system and prevent research findings that cut against its politics. Read the full story.

    —James Temple

    This architect wants to build cities out of lava

    Arnhildur Pálmadóttir is an architect with an extraordinary mission: to harness molten lava and build cities out of it. Pálmadóttir believes the lava that flows from a single eruption could yield enough building material to lay the foundations of an entire city. She has been researching this possibility for more than five years as part of a project she calls Lavaforming. Together with her son and colleague Arnar Skarphéðinsson, she has identified three potential techniques that could change how future homes are designed and built from repurposed lava. Read the full story.

    —Elissaveta M. Brandon

    This story is from the most recent edition of our print magazine, which is all about how technology is changing creativity. Subscribe now to read it and to receive future print copies once they land.

    The must-reads

    I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

    1 America is failing to win the tech race against China
    In fields as diverse as drones and energy. (WSJ $)
    + Humanoid robots are an area of particular interest. (Bloomberg $)
    + China has accused the US of violating the pair’s trade truce. (FT $)

    2 Who is really in charge of DOGE?
    According to a fired staffer, it wasn’t Elon Musk. (Wired $)
    + DOGE’s tech takeover threatens the safety and stability of our critical data. (MIT Technology Review)

    3 Brazilians will soon be able to sell their digital data
    It’s the first time citizens will be able to monetize their digital footprint. (Rest of World)

    4 The Trump administration’s anti-vaccine stance is stoking fear among scientists
    It’s slashing funding for mRNA trials, and experts are afraid to speak out. (The Atlantic $)
    + This annual shot might protect against HIV infections. (MIT Technology Review)

    5 Tech companies want us to spend longer talking to chatbots
    Those conversations can easily veer into dangerous territory. (WP $)
    + How we use AI in the future is up to us. (New Yorker $)
    + This benchmark used Reddit’s AITA to test how much AI models suck up to us. (MIT Technology Review)

    6 TikTok’s mental health videos are rife with misinformation
    A lot of the advice is useless at best, and harmful at worst. (The Guardian)

    7 Lawyers are hooked on ChatGPT
    Even though it’s inherently unreliable. (The Verge)
    + Yet another lawyer has been found referencing nonexistent citations. (The Guardian)
    + How AI is introducing errors into courtrooms. (MIT Technology Review)

    8 How chefs are using generative AI
    They’re starting to experiment with using it to create innovative new dishes. (NYT $)
    + Watch this robot cook shrimp and clean autonomously. (MIT Technology Review)

    9 The influencer suing her rival has dropped her lawsuit
    The legal fight over ownership of a basic aesthetic has come to an end. (NBC News)

    10 Roblox’s new game has sparked a digital fruit underground market
    And players are already spending millions of dollars every week. (Bloomberg $)

    Quote of the day

    “We can’t substitute complex thinking with machines. AI can’t replace our curiosity, creativity or emotional intelligence.”

    —Mateusz Demski, a journalist in Poland, tells the Guardian about how his radio station employer laid him off, only to later launch shows fronted by AI-generated presenters.

    One more thing

    Adventures in the genetic time machine

    An ancient-DNA revolution is turning the high-speed equipment used to study the DNA of living things on to specimens from the past. The technology is being used to create genetic maps of saber-toothed cats, cave bears, and thousands of ancient humans, including Vikings, Polynesian navigators, and numerous Neanderthals. The total number of ancient humans studied is more than 10,000 and rising fast. The old genes have already revealed remarkable stories of human migrations around the globe. But researchers are hoping ancient DNA will be more than a telescope on the past—they hope it will have concrete practical use in the present. Read the full story.

    —Antonio Regalado

    We can still have nice things

    A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.)

    + The ancient Persians managed to keep cool using an innovative breeze-catching technique that could still be useful today.
    + Knowledge is power—here’s a helpful list of hoaxes to be aware of.
    + Who said it: Homer Simpson or Pete Hegseth?
    + I had no idea London has so many cat statues.
  • The Download: introducing the AI energy package

    This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.

    We did the math on AI’s energy footprint. Here’s the story you haven’t heard.

    It’s well documented that AI is a power-hungry technology. But there has been far less reporting on the extent of that hunger, how much its appetite is set to grow in the coming years, where that power will come from, and who will pay for it. 

    For the past six months, MIT Technology Review’s team of reporters and editors have worked to answer those questions. The result is an unprecedented look at the state of AI’s energy and resource usage, where it is now, where it is headed in the years to come, and why we have to get it right. 

    The centerpiece of this package is an entirely novel line of reporting into the demands of inference—the way human beings interact with AI when we make text queries or ask AI to come up with new images or create videos. Experts say inference is set to eclipse the already massive amount of energy required to train new AI models. Here’s everything we found out.
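    As a very rough illustration of why inference can come to dwarf a one-off training run, here is a back-of-envelope sketch; every number in it is an assumption invented for the example, not a figure from our reporting.

        # Back-of-envelope sketch: one-off training cost vs. ongoing inference cost.
        # All numbers are illustrative assumptions, not reported data.
        TRAINING_ENERGY_MWH = 1_000        # assumed one-time energy to train a model
        ENERGY_PER_QUERY_WH = 3.0          # assumed energy per text query
        QUERIES_PER_DAY = 100_000_000      # assumed daily query volume

        daily_inference_mwh = QUERIES_PER_DAY * ENERGY_PER_QUERY_WH / 1_000_000
        days_to_eclipse = TRAINING_ENERGY_MWH / daily_inference_mwh

        print(f"Inference: ~{daily_inference_mwh:,.0f} MWh per day")
        print(f"Matches the entire training run after ~{days_to_eclipse:.1f} days")

    Under those made-up numbers, serving queries equals the whole training cost within days and then keeps accruing indefinitely; scale the assumptions however you like and the shape of the problem stays the same.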

    Here’s what you can expect from the rest of the package:

    + We were so startled by what we learned reporting this story that we also put together a brief on everything you need to know about estimating AI’s energy and emissions burden. 

    + We went out into the world to see the effects of this energy hunger—from the deserts of Nevada, where data centers in an industrial park the size of Detroit demand ever more water to keep their processors cool and running. 

    + In Louisiana, where Meta plans its largest-ever data center, we expose the dirty secret that will fuel its AI ambitions—along with those of many others. 

    + Why the clean energy promise of powering AI data centers with nuclear energy will long remain elusive. 

    + But it’s not all doom and gloom. Check out the reasons to be optimistic, and examine why future AI systems could be far less energy intensive than today’s.

    AI can do a better job of persuading people than we do

    The news: Millions of people argue with each other online every day, but remarkably few of them change someone’s mind. New research suggests that large language models (LLMs) might do a better job, especially when they’re given the ability to adapt their arguments using personal information about individuals. The finding suggests that AI could become a powerful tool for persuading people, for better or worse.

    The big picture: The findings are the latest in a growing body of research demonstrating LLMs’ powers of persuasion. The authors warn they show how AI tools can craft sophisticated, persuasive arguments if they have even minimal information about the humans they’re interacting with. Read the full story.

    —Rhiannon Williams

    How AI is introducing errors into courtrooms

    It’s been quite a couple of weeks for stories about AI in the courtroom. You might have heard about the deceased victim of a road rage incident whose family created an AI avatar of him to show as an impact statement (possibly the first time this has been done in the US). But there’s a bigger, far more consequential controversy brewing, legal experts say. AI hallucinations are cropping up more and more in legal filings. And it’s starting to infuriate judges. Just consider these three cases, each of which gives a glimpse into what we can expect to see more of as lawyers embrace AI. Read the full story.

    —James O’Donnell

    This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

    The must-reads

    I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

    1 Donald Trump has signed the Take It Down Act into US law
    It criminalizes the distribution of non-consensual intimate images, including deepfakes.
    + Tech platforms will be forced to remove such material within 48 hours of being notified.
    + It’s only the sixth bill he’s signed into law during his second term.

    2 There’s now a buyer for 23andMe
    Pharma firm Regeneron has swooped in and offered to help it keep operating.
    + The worth of your genetic data?
    + Regeneron promised to prioritize security and ethical use of that data.

    3 Microsoft is adding Elon Musk’s AI models to its cloud platform
    Err, is that a good idea?
    + Musk wants to sell Grok to other businesses.

    4 Autonomous cars trained to react like humans cause fewer road injuries
    A study found they were more cautious around cyclists, pedestrians and motorcyclists.
    + Waymo is expanding its robotaxi operations out of San Francisco.
    + How Wayve’s driverless cars will meet one of their biggest challenges yet.

    5 Hurricane season is on its way
    DOGE cuts mean we’re less prepared.
    + COP30 may be in crisis before it’s even begun.

    6 Telegram handed over data from more than 20,000 users
    In the first three months of 2025 alone.

    7 GM has stopped exporting cars to China
    Trump’s tariffs have put an end to its export plans.

    8 Blended meats are on the rise
    Plants account for up to 70% of these new meats—and consumers love them.
    + Alternative meat could help the climate. Will anyone eat it?

    9 SAG-AFTRA isn’t happy about Fortnite’s AI-voiced Darth Vader
    It’s slapped Fortnite’s creators with an unfair labor practice charge.
    + How Meta and AI companies recruited striking actors to train AI.

    10 This AI model can swiftly build Lego structures
    Thanks to nothing more than a prompt.

    Quote of the day

    “Platforms have no incentive or requirement to make sure what comes through the system is non-consensual intimate imagery.”

    —Becca Branum, deputy director of the Center for Democracy and Technology, says the new Take It Down Act could fuel censorship, Wired reports.

    One more thing

    Are friends electric?

    Thankfully, the difference between humans and machines in the real world is easy to discern, at least for now. While machines tend to excel at things adults find difficult—playing world-champion-level chess, say, or multiplying really big numbers—they find it hard to accomplish stuff a five-year-old can do with ease, such as catching a ball or walking around a room without bumping into things. This fundamental tension—what is hard for humans is easy for machines, and what’s hard for machines is easy for humans—is at the heart of three new books delving into our complex and often fraught relationship with robots, AI, and automation. They force us to reimagine the nature of everything from friendship and love to work, health care, and home life. Read the full story.

    —Bryan Gardiner

    We can still have nice things

    A place for comfort, fun and distraction to brighten up your day.

    + Congratulations to William Goodge, who ran across Australia in just 35 days!
    + A British horticulturist has created a garden at this year’s Chelsea Flower Show just for dogs.
    + The Netherlands just loves a sidewalk garden.
    + Did you know the T. rex is a North American hero? Me neither.
    #download #introducing #energy #package
    The Download: introducing the AI energy package
    This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology. We did the math on AI’s energy footprint. Here’s the story you haven’t heard. It’s well documented that AI is a power-hungry technology. But there has been far less reporting on the extent of that hunger, how much its appetite is set to grow in the coming years, where that power will come from, and who will pay for it.  For the past six months, MIT Technology Review’s team of reporters and editors have worked to answer those questions. The result is an unprecedented look at the state of AI’s energy and resource usage, where it is now, where it is headed in the years to come, and why we have to get it right.  At the centerpiece of this package is an entirely novel line of reporting into the demands of inference—the way human beings interact with AI when we make text queries or ask AI to come up with new images or create videos. Experts say inference is set to eclipse the already massive amount of energy required to train new AI models. Here’s everything we found out. Here’s what you can expect from the rest of the package, including: + We were so startled by what we learned reporting this story that we also put together a brief on everything you need to know about estimating AI’s energy and emissions burden.  + We went out into the world to see the effects of this energy hunger—from the deserts of Nevada, where data centers in an industrial park the size of Detroit demand ever more water to keep their processors cool and running.  + In Louisiana, where Meta plans its largest-ever data center, we expose the dirty secret that will fuel its AI ambitions—along with those of many others.  + Why the clean energy promise of powering AI data centers with nuclear energy will long remain elusive.  + But it’s not all doom and gloom. Check out the reasons to be optimistic, and examine why future AI systems could be far less energy intensive than today’s. AI can do a better job of persuading people than we do The news: Millions of people argue with each other online every day, but remarkably few of them change someone’s mind. New research suggests that large language modelsmight do a better job, especially when they’re given the ability to adapt their arguments using personal information about individuals. The finding suggests that AI could become a powerful tool for persuading people, for better or worse. The big picture: The findings are the latest in a growing body of research demonstrating LLMs’ powers of persuasion. The authors warn they show how AI tools can craft sophisticated, persuasive arguments if they have even minimal information about the humans they’re interacting with. Read the full story. —Rhiannon Williams How AI is introducing errors into courtrooms It’s been quite a couple weeks for stories about AI in the courtroom. You might have heard about the deceased victim of a road rage incident whose family created an AI avatar of him to show as an impact statement.But there’s a bigger, far more consequential controversy brewing, legal experts say. AI hallucinations are cropping up more and more in legal filings. And it’s starting to infuriate judges. Just consider these three cases, each of which gives a glimpse into what we can expect to see more of as lawyers embrace AI. Read the full story. —James O’Donnell This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here. 
The must-reads I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology. 1 Donald Trump has signed the Take It Down Act into US lawIt criminalizes the distribution of non-consensual intimate images, including deepfakes.+ Tech platforms will be forced to remove such material within 48 hours of being notified.+ It’s only the sixth bill he’s signed into law during his second term.2 There’s now a buyer for 23andMe Pharma firm Regeneron has swooped in and offered to help it keep operating.+ The worth of your genetic data?+ Regeneron promised to prioritize security and ethical use of that data.3 Microsoft is adding Elon Musk’s AI models to its cloud platformErr, is that a good idea?+ Musk wants to sell Grok to other businesses.4 Autonomous cars trained to react like humans cause fewer road injuriesA study found they were more cautious around cyclists, pedestrians and motorcyclists.+ Waymo is expanding its robotaxi operations out of San Francisco.+ How Wayve’s driverless cars will meet one of their biggest challenges yet.5 Hurricane season is on its wayDOGE cuts means we’re less prepared.+ COP30 may be in crisis before it’s even begun.6 Telegram handed over data from more than 20,000 users In the first three months of 2025 alone.7 GM has stopped exporting cars to ChinaTrump’s tariffs have put an end to its export plans.8 Blended meats are on the risePlants account for up to 70% of these new meats—and consumers love them.+ Alternative meat could help the climate. Will anyone eat it?9 SAG-AFTRA isn’t happy about Fornite’s AI-voiced Darth VaderIt’s slapped Fortnite’s creators with an unfair labor practice charge.+ How Meta and AI companies recruited striking actors to train AI.10 This AI model can swiftly build Lego structuresThanks to nothing more than a prompt.Quote of the day “Platforms have no incentive or requirement to make sure what comes through the system is non-consensual intimate imagery.” —Becca Branum, deputy director of the Center for Democracy and Technology, says the new Take It Down Act could fuel censorship, Wired reports. One more thing Are friends electric?Thankfully, the difference between humans and machines in the real world is easy to discern, at least for now. While machines tend to excel at things adults find difficult—playing world-champion-level chess, say, or multiplying really big numbers—they find it hard to accomplish stuff a five-year-old can do with ease, such as catching a ball or walking around a room without bumping into things.This fundamental tension—what is hard for humans is easy for machines, and what’s hard for machines is easy for humans—is at the heart of three new books delving into our complex and often fraught relationship with robots, AI, and automation. They force us to reimagine the nature of everything from friendship and love to work, health care, and home life. Read the full story. —Bryan Gardiner We can still have nice things A place for comfort, fun and distraction to brighten up your day.+ Congratulations to William Goodge, who ran across Australia in just 35 days!+ A British horticulturist has created a garden at this year’s Chelsea Flower Show just for dogs.+ The Netherlands just loves a sidewalk garden.+ Did you know the T Rex is a north American hero? Me neither #download #introducing #energy #package
  • How AI is introducing errors into courtrooms

    It’s been quite a couple weeks for stories about AI in the courtroom. You might have heard about the deceased victim of a road rage incident whose family created an AI avatar of him to show as an impact statement (possibly the first time this has been done in the US). But there’s a bigger, far more consequential controversy brewing, legal experts say. AI hallucinations are cropping up more and more in legal filings. And it’s starting to infuriate judges. Just consider these three cases, each of which gives a glimpse into what we can expect to see more of as lawyers embrace AI.

    A few weeks ago, a California judge, Michael Wilner, became intrigued by a set of arguments some lawyers made in a filing. He went to learn more about those arguments by following the articles they cited. But the articles didn’t exist. He asked the lawyers’ firm for more details, and they responded with a new brief that contained even more mistakes than the first. Wilner ordered the attorneys to give sworn testimonies explaining the mistakes, in which he learned that one of them, from the elite firm Ellis George, used Google Gemini as well as law-specific AI models to help write the document, which generated false information. As detailed in a filing on May 6, the judge fined the firm $31,000.

    Last week, another California-based judge caught another hallucination in a court filing, this time submitted by the AI company Anthropic in the lawsuit that record labels have brought against it over copyright issues. One of Anthropic’s lawyers had asked the company’s AI model Claude to create a citation for a legal article, but Claude included the wrong title and author. Anthropic’s attorney admitted that the mistake was not caught by anyone reviewing the document. 

    Lastly, and perhaps most concerning, is a case unfolding in Israel. After police arrested an individual on charges of money laundering, Israeli prosecutors submitted a request asking a judge for permission to keep the individual’s phone as evidence. But they cited laws that don’t exist, prompting the defendant’s attorney to accuse them of including AI hallucinations in their request. The prosecutors, according to Israeli news outlets, admitted that this was the case, receiving a scolding from the judge. 

    Taken together, these cases point to a serious problem. Courts rely on documents that are accurate and backed up with citations—two traits that AI models, despite being adopted by lawyers eager to save time, often fail miserably to deliver. 

    Those mistakes are getting caught (for now), but it’s not a stretch to imagine that at some point soon, a judge’s decision will be influenced by something that’s totally made up by AI, and no one will catch it.

    I spoke with Maura Grossman, who teaches at the School of Computer Science at the University of Waterloo as well as Osgoode Hall Law School, and has been a vocal early critic of the problems that generative AI poses for courts. She wrote about the problem back in 2023, when the first cases of hallucinations started appearing. She said she thought courts’ existing rules requiring lawyers to vet what they submit to the courts, combined with the bad publicity those cases attracted, would put a stop to the problem. That hasn’t panned out.

    Hallucinations “don’t seem to have slowed down,” she says. “If anything, they’ve sped up.” And these aren’t one-off cases with obscure local firms, she says. These are big-time lawyers making significant, embarrassing mistakes with AI. She worries that such mistakes are also cropping up more in documents not written by lawyers themselves, like expert reports (in December, a Stanford professor and expert on AI admitted to including AI-generated mistakes in his testimony).

    I told Grossman that I find all this a little surprising. Attorneys, more than most, are obsessed with diction. They choose their words with precision. Why are so many getting caught making these mistakes?

    “Lawyers fall in two camps,” she says. “The first are scared to death and don’t want to use it at all.” But then there are the early adopters. These are lawyers tight on time or without a cadre of other lawyers to help with a brief. They’re eager for technology that can help them write documents under tight deadlines. And their checks on the AI’s work aren’t always thorough. 

    The fact that high-powered lawyers, whose very profession it is to scrutinize language, keep getting caught making mistakes introduced by AI says something about how most of us treat the technology right now. We’re told repeatedly that AI makes mistakes, but language models also feel a bit like magic. We put in a complicated question and receive what sounds like a thoughtful, intelligent reply. Over time, AI models develop a veneer of authority. We trust them.

    “We assume that because these large language models are so fluent, it also means that they’re accurate,” Grossman says. “We all sort of slip into that trusting mode because it sounds authoritative.” Attorneys are used to checking the work of junior attorneys and interns, but for some reason, Grossman says, they don’t apply this skepticism to AI.

    We’ve known about this problem ever since ChatGPT launched nearly three years ago, but the recommended solution has not evolved much since then: Don’t trust everything you read, and vet what an AI model tells you. As AI models get thrust into so many different tools we use, I increasingly find this to be an unsatisfying counter to one of AI’s most foundational flaws.
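    The “vet what an AI model tells you” advice can be partly mechanized. Below is a minimal sketch of one such check, assuming a locally maintained index of verified sources; the VERIFIED_SOURCES table and the simplified citation pattern are hypothetical, and a real system would query a citator or docket database instead.

        # Minimal "trust but verify" pass over sources cited in an AI-drafted brief.
        # VERIFIED_SOURCES is a hypothetical stand-in for a real authority index.
        import re

        VERIFIED_SOURCES = {
            "Smith v. Jones, 123 F.3d 456": {"title": "Smith v. Jones", "year": 1997},
        }

        # Simplified: matches single-word party names and F.2d/F.3d reporters only.
        CITATION_RE = re.compile(r"\b[A-Z][a-z]+ v\. [A-Z][a-z]+, \d+ F\.\d?d \d+")

        def flag_unverified(draft: str) -> list[str]:
            """Return every case citation in the draft that is not in the index."""
            return [c for c in CITATION_RE.findall(draft) if c not in VERIFIED_SOURCES]

        draft = "As held in Smith v. Jones, 123 F.3d 456, and in Doe v. Acme, 999 F.2d 1..."
        for citation in flag_unverified(draft):
            print(f"UNVERIFIED, requires human check: {citation}")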

    Hallucinations are inherent to the way that large language models work. Despite that, companies are selling generative AI tools made for lawyers that claim to be reliably accurate. “Feel confident your research is accurate and complete,” reads the website for Westlaw Precision, and the website for CoCounsel promises its AI is “backed by authoritative content.” That didn’t stop their client, Ellis George, from being fined $31,000.

    Increasingly, I have sympathy for people who trust AI more than they should. We are, after all, living in a time when the people building this technology are telling us that AI is so powerful it should be treated like nuclear weapons. Models have learned from nearly every word humanity has ever written down and are infiltrating our online life. If people shouldn’t trust everything AI models say, they probably deserve to be reminded of that a little more often by the companies building them.

    This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
  • Anthropic blames Claude AI for ‘embarrassing and unintentional mistake’ in legal filing

    Not the best publicity for Anthropic’s chatbot.

    Anthropic has responded to allegations that it used an AI-fabricated source in its legal battle against music publishers, saying its Claude chatbot made an “honest citation mistake.”

    An erroneous citation was included in a filing submitted by Anthropic data scientist Olivia Chen on April 30th, as part of the AI company’s defense against claims that copyrighted lyrics were used to train Claude. An attorney representing Universal Music Group, ABKCO, and Concord said in a hearing that sources referenced in Chen’s filing were a “complete fabrication,” and implied they were hallucinated by Anthropic’s AI tool.

    In a response filed on Thursday, Anthropic defense attorney Ivana Dukanovic said that the scrutinized source was genuine and that Claude had indeed been used to format legal citations in the document. While incorrect volume and page numbers generated by the chatbot were caught and corrected by a “manual citation check,” Anthropic admits that wording errors had gone undetected.

    Dukanovic said, “unfortunately, although providing the correct publication title, publication year, and link to the provided source, the returned citation included an inaccurate title and incorrect authors,” and that the error wasn’t a “fabrication of authority.” The company apologized for the inaccuracy and confusion caused by the citation error, calling it “an embarrassing and unintentional mistake.”
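    The failure mode described here, where a check catches volume and page numbers but misses a wrong title and authors, follows directly from how such checks are scoped: a verification pass only protects the fields it actually compares. A small sketch with hypothetical records makes the point.

        # Why a citation check scoped to volume/page misses title/author errors.
        # Both records below are hypothetical.

        source_of_truth = {
            "title": "A Hypothetical Survey of Machine Learning",
            "authors": ["A. Author", "B. Author"],
            "volume": 12, "page": 345,
        }
        model_generated = {
            "title": "Surveying Machine Learning, Hypothetically",  # wrong title
            "authors": ["C. Someone", "D. Else"],                   # wrong authors
            "volume": 12, "page": 345,                              # these happen to be right
        }

        def mismatched(fields):
            """Compare only the named fields; return those that disagree."""
            return [f for f in fields if source_of_truth[f] != model_generated[f]]

        print(mismatched(["volume", "page"]))                      # [] -> looks clean
        print(mismatched(["volume", "page", "title", "authors"]))  # ['title', 'authors']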

    This is one of a growing number of examples of AI tools causing problems with legal citations in courtrooms. Last week, a California judge chastised two law firms for failing to disclose that AI was used to create a supplemental brief rife with “bogus” materials that “didn’t exist.” A misinformation expert admitted in December that ChatGPT had hallucinated citations in a legal filing he’d submitted.
  • Congress proposes 10-year ban on state AI regulations

    House Republicans have proposed banning states from regulating AI for the next ten years. If passed, the sweeping moratorium, quietly tucked into the Budget Reconciliation Bill last Sunday, would block most state and local governments from enforcing AI regulations until 2035.

    The proposed legislation states that “no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems” for 10 years.

    Industry experts warn that this potential regulatory vacuum would come precisely when AI systems are becoming more powerful and pervasive across US society.

    Oversight gap raises concerns

    The moratorium would create an unprecedented situation: rapidly evolving AI technology would operate without state-level guardrails during what may be its most transformative decade.

    “The proposed decade-long moratorium on state-level AI regulations presents a double-edged sword,” said Abhivyakti Sengar, practice director at Everest Group. “On one hand, it aims to prevent a fragmented regulatory environment that could stifle innovation, on the other hand, it risks creating a regulatory vacuum, leaving critical decisions about AI governance in the hands of private entities without sufficient oversight.”

    The proposed legislation includes specific exceptions. According to the bill text, states would still be allowed to enforce laws “the primary purpose and effect of which is to remove legal impediments to, or facilitate the deployment or operation of, an artificial intelligence model, artificial intelligence system, or automated decision system.”

    States could also enforce laws that streamline “licensing, permitting, routing, zoning, procurement, or reporting procedures” for AI systems.

    However, the bill explicitly prohibits states from imposing “any substantive design, performance, data-handling, documentation, civil liability, taxation, fee, or other requirement” specifically on AI unless such requirements are applied equally to non-AI systems with similar functions.

    This limitation would prevent states from creating AI-specific oversight frameworks that address the technology’s unique capabilities and risks.
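    Read together, the carve-outs reduce to a simple rule: a state law survives if it facilitates AI deployment or streamlines administrative procedure, and falls only when it imposes an AI-specific substantive requirement that comparable non-AI systems escape. The toy predicate below is an illustrative paraphrase of the bill text quoted above, not its operative language; the field names are my own.

        # Toy encoding of the moratorium's carve-outs as described above.
        # An illustrative paraphrase of the bill text, not its operative language.
        from dataclasses import dataclass

        @dataclass
        class StateLaw:
            facilitates_ai_deployment: bool  # removes legal impediments to AI
            streamlines_procedure: bool      # licensing, permitting, zoning, procurement
            ai_specific_requirement: bool    # substantive rule aimed only at AI
            applies_equally_to_non_ai: bool  # same burden on comparable non-AI systems

        def enforceable_under_moratorium(law: StateLaw) -> bool:
            if law.facilitates_ai_deployment or law.streamlines_procedure:
                return True
            if law.ai_specific_requirement and not law.applies_equally_to_non_ai:
                return False  # preempted: AI-specific substantive requirement
            return True

        # An algorithmic-transparency mandate aimed only at AI systems:
        print(enforceable_under_moratorium(StateLaw(False, False, True, False)))  # False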

    State AI regulations threatened

    If enacted, the impact could be significant. Several states have been developing AI oversight frameworks that would likely become unenforceable under the federal provision.

    Various state-level efforts to regulate AI systems — from algorithmic transparency requirements to data privacy protections for AI training — could be effectively neutralized without public debate or input.

    The moratorium particularly threatens state data privacy protections. Without these state laws, consumers have few guarantees regarding how AI systems use their data, obtain consent, or make decisions affecting their lives.

    Global standards diverge

    The US approach now stands in stark contrast to the European Union’s comprehensive AI Act, which imposes strict requirements on high-risk AI systems.

    “As the US diverges from the EU’s stringent AI regulatory framework, multinational enterprises may face the challenge of navigating conflicting standards,” Sengar noted. This divergence potentially leads to “increased compliance costs and operational complexities.”

    Sanchit Vir Gogia, chief analyst and CEO at Greyhound Research, sees a splintering global AI landscape ahead.

    “America’s moratorium will likely deepen the regulatory divergence with Europe,” said Gogia. “This will accelerate the fragmentation of global AI product design, where use-case eligibility and ethical thresholds vary dramatically by geography.”

    Enterprises face a new reality

    For businesses, the regulatory clarity comes with difficult strategic decisions. Companies must determine how aggressively to implement AI systems during this regulation-free decade.

    Many large companies aren’t waiting for government guidance. “Even before public oversight being put on hold, large enterprises have already launched internal AI governance councils,” Gogia explained. “These internal regimes — led by CISOs, legal, and risk teams — are becoming the primary referees for responsible AI use.”

    But Gogia cautioned against over-reliance on self-regulation: “While these structures are necessary, they are not a long-term substitute for statutory accountability.”

    Legal uncertainty remains

    Despite the moratorium on regulations, experts warn that companies still face significant liability risks.

    “The absence of clear legal guidelines could result in heightened legal uncertainty, as courts grapple with AI-related disputes without established precedents,” said Sengar.

    Gogia puts it more bluntly: “Even in a regulatory freeze, enterprises remain legally accountable. I believe the lack of specific laws does not eliminate legal exposure — it merely shifts the battleground from compliance desks to courtrooms.”

    While restricting state action, the legislation simultaneously expands the federal government’s AI footprint. The bill allocates $500 million to the Department of Commerce for AI modernization through 2035.

    The money targets legacy system replacement, operational efficiency improvements, and cybersecurity enhancements using AI technologies.

    This dual approach positions the federal government as both the primary AI regulator and a major AI customer, consolidating tremendous influence over the technology’s direction.

    Finding balance

    Industry observers emphasize the need for thoughtful governance despite the moratorium.

    “In this rapidly evolving landscape, a balanced approach that fosters innovation while ensuring accountability and public trust is paramount,” Sengar noted. Gogia offers a succinct assessment of the situation: “The 10-year moratorium on US state and local AI regulation removes complexity but not risk. I believe innovation does need room, but room without direction risks misalignment between corporate ethics and public interest.”