MIT Technology Review
Our in-depth reporting on innovation reveals and explains what’s really happening now to help you know what’s coming next. Get our journalism: http://technologyreview.com/newsletters.
Recent Updates
  • The first trial of generative AI therapy shows it might help with depression
    www.technologyreview.com
    The first clinical trial of a therapy bot that uses generative AI suggests it was as effective as human therapy for participants with depression, anxiety, or risk for developing eating disorders. Even so, it doesn't give a go-ahead to the dozens of companies hyping such technologies while operating in a regulatory gray area.
    A team led by psychiatric researchers and psychologists at the Geisel School of Medicine at Dartmouth College built the tool, called Therabot, and the results were published on March 27 in the New England Journal of Medicine. Many tech companies have built AI tools for therapy, promising that people can talk with a bot more frequently and cheaply than they can with a trained therapist, and that this approach is safe and effective.
    Many psychologists and psychiatrists have shared the vision, noting that fewer than half of people with a mental disorder receive therapy, and those who do might get only 45 minutes per week. Researchers have tried to build tech so that more people can access therapy, but they have been held back by two things.
    One, a therapy bot that says the wrong thing could result in real harm. That's why many researchers have built bots using explicit programming: the software pulls from a finite bank of approved responses (as was the case with Eliza, a mock-psychotherapist computer program built in the 1960s). But this makes them less engaging to chat with, and people lose interest. The second issue is that the hallmarks of good therapeutic relationships, shared goals and collaboration, are hard to replicate in software.
    In 2019, as early large language models like OpenAI's GPT were taking shape, the researchers at Dartmouth thought generative AI might help overcome these hurdles. They set about building an AI model trained to give evidence-based responses. They first tried building it from general mental-health conversations pulled from internet forums. Then they turned to thousands of hours of transcripts of real sessions with psychotherapists.
    "We got a lot of 'hmm-hmms,' 'go ons,' and then 'Your problems stem from your relationship with your mother,'" said Michael Heinz, a research psychiatrist at Dartmouth College and Dartmouth Health and first author of the study, in an interview. "Really tropes of what psychotherapy would be, rather than actually what we'd want."
    Dissatisfied, they set to work assembling their own custom data sets based on evidence-based practices, which is what ultimately went into the model. Many AI therapy bots on the market, in contrast, might be just slight variations of foundation models like Meta's Llama, trained mostly on internet conversations. That poses a problem, especially for topics like disordered eating.
    "If you were to say that you want to lose weight," Heinz says, "they will readily support you in doing that, even if you will often have a low weight to start with." A human therapist wouldn't do that.
    To test the bot, the researchers ran an eight-week clinical trial with 210 participants who had symptoms of depression or generalized anxiety disorder or were at high risk for eating disorders. About half had access to Therabot, and a control group did not. Participants responded to prompts from the AI and initiated conversations, averaging about 10 messages per day.
    Participants with depression experienced a 51% reduction in symptoms, the best result in the study. Those with anxiety experienced a 31% reduction, and those at risk for eating disorders saw a 19% reduction in concerns about body image and weight. These measurements are based on self-reporting through surveys, a method that's not perfect but remains one of the best tools researchers have.
    These results, Heinz says, are about what one finds in randomized control trials of psychotherapy with 16 hours of human-provided treatment, but the Therabot trial accomplished it in about half the time. "I've been working in digital therapeutics for a long time, and I've never seen levels of engagement that are prolonged and sustained at this level," he says.
    Jean-Christophe Bélisle-Pipon, an assistant professor of health ethics at Simon Fraser University who has written about AI therapy bots but was not involved in the research, says the results are impressive but notes that, just like any other clinical trial, this one doesn't necessarily represent how the treatment would act in the real world.
    "We remain far from a greenlight for widespread clinical deployment," he wrote in an email.
    One issue is the supervision that wider deployment might require. During the beginning of the trial, Heinz says, he personally oversaw all the messages coming in from participants (who consented to the arrangement) to watch out for problematic responses from the bot. If therapy bots needed this oversight, they wouldn't be able to reach as many people.
    I asked Heinz if he thinks the results validate the burgeoning industry of AI therapy sites.
    "Quite the opposite," he says, cautioning that most don't appear to train their models on evidence-based practices like cognitive behavioral therapy, and they likely don't employ a team of trained researchers to monitor interactions. "I have a lot of concerns about the industry and how fast we're moving without really kind of evaluating this," he adds.
    When AI sites advertise themselves as offering therapy in a legitimate, clinical context, Heinz says, it means they fall under the regulatory purview of the Food and Drug Administration. Thus far, the FDA has not gone after many of the sites. If it did, Heinz says, "my suspicion is almost none of them, probably none of them, that are operating in this space would have the ability to actually get a claim clearance," that is, a ruling backing up their claims about the benefits provided.
    Bélisle-Pipon points out that if these types of digital therapies are not approved and integrated into health-care and insurance systems, it will severely limit their reach. Instead, the people who would benefit from using them might seek emotional bonds and therapy from types of AI not designed for those purposes (indeed, new research from OpenAI suggests that interactions with its AI models have a very real impact on emotional well-being).
    "It is highly likely that many individuals will continue to rely on more affordable, nontherapeutic chatbots, such as ChatGPT or Character.AI, for everyday needs, ranging from generating recipe ideas to managing their mental health," he wrote.
  • How a bankruptcy judge can stop a genetic privacy disaster
    www.technologyreview.com
    Stop me if you've heard this one before: A tech company accumulates a ton of user data, hoping to figure out a business model later. That business model never arrives, the company goes under, and the data is in the wind.
    The latest version of that story emerged on March 24, when the onetime genetic testing darling 23andMe filed for bankruptcy. Now the fate of 15 million people's genetic data rests in the hands of a bankruptcy judge. At a hearing on March 26, the judge gave 23andMe permission to seek offers for its users' data. But there's still a small chance of writing a better ending for users.
    After the bankruptcy filing, the immediate take from policymakers and privacy advocates was that 23andMe users should delete their accounts to prevent genetic data from falling into the wrong hands. That's good advice for the individual user (and you can read how to do so here). But the reality is most people won't do it. Maybe they won't see the recommendations to do so. Maybe they don't know why they should be worried. Maybe they have long since abandoned an account that they don't even remember exists. Or maybe they're just occupied with the chaos of everyday life.
    This means the real value of this data comes from the fact that people have forgotten about it. Given 23andMe's meager revenue (fewer than 4% of people who took tests pay for subscriptions), it seems inevitable that the new owner, whoever it is, will have to find some new way to monetize that data.
    This is a terrible deal for users who just wanted to learn a little more about themselves or their ancestry. Because genetic data is forever. Contact information can go stale over time: you can always change your password, your email, your phone number, or even your address. But a bad actor who has your genetic data, whether a cybercriminal selling it to the highest bidder, a company building a profile of your future health risk, or a government trying to identify you, will have it tomorrow and the next day and all the days after that.
    Users with exposed genetic data are not only vulnerable to harm today; they're vulnerable to exploits that might be developed in the future.
    While 23andMe promises that it will not voluntarily share data with insurance providers, employers, or public databases, its new owner could unwind those promises at any time with a simple change in terms.
    In other words: If a bankruptcy court makes a mistake authorizing the sale of 23andMe's user data, that mistake is likely permanent and irreparable.
    All this is possible because American lawmakers have neglected to meaningfully engage with digital privacy for nearly a quarter-century. As a result, services are incentivized to make flimsy, deceptive promises that can be abandoned at a moment's notice. And the burden falls on users to keep track of it all, or just give up.
    Here, a simple fix would be to reverse that burden. A bankruptcy court could require that users individually opt in before their genetic data can be transferred to 23andMe's new owners, regardless of who those new owners are. Anyone who didn't respond or who opted out would have the data deleted.
    Bankruptcy proceedings involving personal data don't have to end badly. In 2000, the Federal Trade Commission settled with the bankrupt retailer ToySmart to ensure that its customer data could not be sold as a stand-alone asset, and that customers would have to affirmatively consent to unexpected new uses of their data. And in 2015, the FTC intervened in the bankruptcy of RadioShack to ensure that it would keep its promises never to sell the personal data of its customers. (RadioShack eventually agreed to destroy it.)
    The ToySmart case also gave rise to the role of the consumer privacy ombudsman. Bankruptcy judges can appoint an ombuds to help the court consider how the sale of personal data might affect the bankruptcy estate, examining the potential harms or benefits to consumers and any alternatives that might mitigate those harms. The U.S. Trustee has requested the appointment of an ombuds in this case. While scholars have called for the role to have more teeth and for the FTC and states to intervene more often, a framework for protecting personal data in bankruptcy is available. And ultimately, the bankruptcy judge has broad power to make decisions about how (or whether) property in bankruptcy is sold.
    Here, 23andMe has a more permissive privacy policy than ToySmart or RadioShack. But the risks incurred if genetic data falls into the wrong hands or is misused are severe and irreversible. And given 23andMe's failure to build a viable business model from testing kits, it seems likely that a new business would use genetic data in ways that users wouldn't expect or want.
    An opt-in requirement for genetic data solves this problem. Genetic data (and other sensitive data) could be held by the bankruptcy trustee and released as individual users gave their consent. If users failed to opt in after a period of time, the remaining data would be deleted. This would incentivize 23andMe's new owners to earn user trust and build a business that delivers value to users, instead of finding unexpected ways to exploit their data. And it would impose virtually no burden on the people whose genetic data is at risk: after all, they have plenty more DNA to spare.
    Consider the alternative. Before 23andMe went into bankruptcy, its then-CEO made two failed attempts to buy it, at reported valuations of $74.7 million and $12.1 million. Using the higher offer, and with 15 million users, that works out to a little under $5 per user. Is it really worth it to permanently risk a person's genetic privacy just to add a few dollars in value to the bankruptcy estate?
    Of course, this raises a bigger question: Why should anyone be able to buy the genetic data of millions of Americans in a bankruptcy proceeding? The answer is simple: Lawmakers allow them to. Federal and state inaction allows companies to dissolve promises about protecting Americans' most sensitive data at a moment's notice. When 23andMe was founded, in 2006, the promise was that personalized health care was around the corner. Today, 18 years later, that era may really be almost here. But with privacy laws like ours, who would trust it?
    Keith Porcaro is the Rueben Everett Senior Lecturing Fellow at Duke Law School.
  • The Download: peering inside an LLM, and the rise of Signal
    www.technologyreview.com
    This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology.
    Anthropic can now track the bizarre inner workings of a large language model
    The news: The AI firm Anthropic has developed a way to peer inside a large language model and watch what it does as it comes up with a response, revealing key new insights into how the technology works. The takeaway: LLMs are even stranger than we thought.
    Why it matters: It's no secret that large language models work in mysterious ways. Shedding some light on how they work would expose their weaknesses, revealing why they make stuff up and can be tricked into going off the rails. It would help resolve deep disputes about exactly what these models can and can't do. And it would show how trustworthy (or not) they really are. Read the full story.
    Will Douglas Heaven
    What is Signal? The messaging app, explained.
    With the recent news that the Atlantic's editor in chief was accidentally added to a group Signal chat for American leaders planning a bombing in Yemen, many people are wondering: What is Signal? Is it secure? If government officials aren't supposed to use it for military planning, does that mean I shouldn't use it either?
    The answer is: Yes, you should use Signal, but government officials having top-secret conversations shouldn't use Signal. Read the full story to find out why.
    Jack Cushman
    This story is part of our MIT Technology Review Explains series, in which our writers untangle the complex, messy world of technology to help you understand what's coming next. You can read more of them here.
    Spare living human bodies might provide us with organs for transplantation
    Jessica Hamzelou
    This week, MIT Technology Review published a piece on bodyoids: living bodies that cannot think or feel pain. In the piece, a trio of scientists argue that advances in biotechnology will soon allow us to create spare human bodies that could be used for research, or to provide organs for donation.
    If you find your skin crawling at this point, you're not the only one. It's a creepy idea, straight from the more horrible corners of science fiction. But bodyoids could be used for good. And if they are truly unaware and unable to think, the use of bodyoids wouldn't cross most people's ethical lines, the authors argue.
    I'm not so sure. Read the full story.
    This article first appeared in The Checkup, MIT Technology Review's weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
    The must-reads
    I've combed the internet to find you today's most fun/important/scary/fascinating stories about technology.
    1 A judge has ordered Trump's officials to preserve their secret Signal chat
    While officials are required by law to keep chats detailing government business, Signal's messages can be set to auto-disappear. (USA Today)
    + The conversation detailed an imminent attack against Houthi rebels in Yemen. (The Hill)
    + A government accountability group has sued the agencies involved. (Reuters)
    + The officials involved in the chat appear to have public Venmo accounts. (Wired $)
    2 The White House is prepared to cut up to 50% of agency staff
    But the final cuts could end up exceeding even that. (WP $)
    + The sweeping cuts could threaten vital US statistics, too. (FT $)
    + Can AI help DOGE slash government budgets? It's complex. (MIT Technology Review)
    3 OpenAI is struggling to keep up with demand for ChatGPT's image generation
    The fervor around its Studio Ghibli pictures has sent its GPUs into overdrive. (The Verge)
    + Ghibli's founder is no fan of AI art. (404 Media)
    + Four ways to protect your art from AI. (MIT Technology Review)
    4 Facebook is pivoting back towards friends and family
    Less news, fewer posts from people you don't know. (NYT $)
    + A new tab shows purely updates from friends, with no other recommendations. (Insider $)
    5 Africa is set to build its first AI factory
    A specialized powerhouse for AI computing, to be precise. (Rest of World)
    + What Africa needs to do to become a major AI player. (MIT Technology Review)
    6 A TikTok network spread Spanish-language immigration misinformation
    Including clips of the doctored voices of well-known journalists. (NBC News)
    7 Your TV is desperate for your data
    Streamers are scrambling around for new ways to make money off the information they gather on you. (Vox)
    8 This startup extracts rare earth oxides from industrial magnets
    It's a less intrusive way of accessing minerals vital to EV and wind turbine production. (FT $)
    + The race to produce rare earth elements. (MIT Technology Review)
    9 NASA hopes to launch its next Starliner flight as soon as later this year
    After its latest mission stretched from a projected eight days to nine months. (Reuters)
    + Europe is finally getting serious about commercial rockets. (MIT Technology Review)
    10 The Sims has been the world's favorite life simulation game for 25 years
    But a new Korean game is both more realistic and multicultural. (Bloomberg $)
    Quote of the day
    "It's like, can you tell the difference between a person and a person-shaped sock puppet that is holding up a sign saying, 'I am a sock puppet'?"
    Laura Edelson, a computer science professor at Northeastern University, is skeptical about brands' abilities to ensure their ads are being shown to real humans and not bots, she tells the Wall Street Journal.
    The big story
    The race to fix space-weather forecasting before the next big solar storm hits
    April 2024
    As the number of satellites in space grows, and as we rely on them for increasing numbers of vital tasks on Earth, the need to better predict stormy space weather is becoming more and more urgent.
    Scientists have long known that solar activity can change the density of the upper atmosphere. But it's incredibly difficult to precisely predict the sorts of density changes that a given amount of solar activity would produce.
    Now, experts are working on a model of the upper atmosphere to help scientists improve their models of how solar activity affects the environment in low Earth orbit. If they succeed, they'll be able to keep satellites safe even amid turbulent space weather, reducing the risk of potentially catastrophic orbital collisions. Read the full story.
    Tereza Pultarova
    We can still have nice things
    A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet 'em at me.)
    + This is very cool: a nearly-infinite virtual museum entirely generated from Wikipedia.
    + How to let go of that grudge you've been harboring (you know the one).
    + If your social media feeds have been plagued by hot men making bad art, you're not alone.
    + It's Friday, so enjoy this 1992 recording of a very fresh-faced Pearl Jam.
  • Spare living human bodies might provide us with organs for transplantation
    www.technologyreview.com
    This week, MIT Technology Review published a piece on bodyoids: living bodies that cannot think or feel pain. In the piece, a trio of scientists argue that advances in biotechnology will soon allow us to create spare human bodies that could be used for research, or to provide organs for donation.
    If you find your skin crawling at this point, you're not the only one. It's a creepy idea, straight from the more horrible corners of science fiction. But bodyoids could be used for good. And if they are truly unaware and unable to think, the use of bodyoids wouldn't cross most people's ethical lines, the authors argue. I'm not so sure.
    Either way, there's no doubt that developments in science and biotechnology are bringing us closer to the potential reality of bodyoids. And the idea is already stirring plenty of ethical debate and controversy.
    One of the main arguments made for bodyoids is that they could provide spare human organs. There's a huge shortage of organs for transplantation. More than 100,000 people in the US are waiting for a transplant, and 17 people on that waiting list die every day. Human bodyoids could serve as a new source.
    Scientists are working on other potential solutions to this problem. One approach is the use of gene-edited animal organs. Animal organs don't typically last inside human bodies; our immune systems will reject them as foreign. But a few companies are creating pigs with a series of gene edits that make their organs more acceptable to human bodies.
    A handful of living people have received gene-edited pig organs. David Bennett Sr. was the first person to get a gene-edited pig heart, in 2022, and Richard Slayman was the first to get a kidney, in early 2024. Unfortunately, both men died around two months after their surgery.
    But Towana Looney, the third living person to receive a gene-edited pig kidney, has been doing well. She had her transplant surgery in late November of last year. "I am full of energy. I got an appetite I've never had in eight years," she said at the time. "I can put my hand on this kidney and feel it buzzing." She returned home in February.
    At least one company is taking more of a bodyoid-like approach. Renewal Bio, a biotech company based in Israel, hopes to grow embryo-stage versions of people for replacement organs.
    Their approach is based on advances in the development of "synthetic embryos." (I'm putting that term in quotation marks because, while it's the simplest descriptor of what they are, a lot of scientists hate the term.) Embryos start with the union of an egg cell and a sperm cell. But scientists have been working on ways to make embryos using stem cells instead. Under the right conditions, these cells can divide into structures that look a lot like a typical embryo.
    Scientists don't know how far these embryo-like structures will be able to develop. But they're already using them to try to get cows and monkeys pregnant.
    And no one really knows how to think about synthetic human embryos. Scientists don't even really know what to call them. Rules stipulate that typical human embryos may be grown in the lab for a maximum of 14 days. Should the same rules apply to synthetic ones?
    The very existence of synthetic embryos is throwing into question our understanding of what a human embryo even is. "Is it the thing that is only generated from the fusion of a sperm and an egg?" Naomi Moris, a developmental biologist at the Crick Institute in London, said to me a couple of years ago. "Is it something to do with the cell types it possesses, or the [shape] of the structure?"
    The authors of the new MIT Technology Review piece also point out that bodyoids could help speed scientific and medical research.
    At the moment, most drug research must be conducted in lab animals before clinical trials can start. But nonhuman animals may not respond the same way people do, and the vast majority of treatments that look super-promising in mice fail in humans. Such research can feel like a waste of both animal lives and time.
    Scientists have been working on solutions to these problems, too. Some are creating organs on chips: miniature collections of cells organized on a small piece of polymer that may resemble full-size organs and can be used to test the effects of drugs.
    Others are creating digital representations of human organs for the same purpose. Such digital twins can be extensively modeled, and can potentially be used to run clinical trials in silico.
    Both of these approaches seem somehow more palatable to me, personally, than running experiments on a human created without the capacity to think or feel pain. The idea reminds me of the recent novel Tender Is the Flesh by Agustina Bazterrica, in which humans are bred for consumption. In the book, their vocal cords are removed so that others do not have to hear them scream.
    When it comes to real-world biotechnology, though, our feelings about what is acceptable tend to shift. In vitro fertilization was demonized when it was first developed, for instance, with opponents arguing that it was unnatural, a perilous insult, and the biggest threat since the atom bomb. It is estimated that more than 12 million people have been born through IVF since Louise Brown became the first "test tube baby" 46 years ago. I wonder how we'll all feel about bodyoids 46 years from now.
    This article first appeared in The Checkup, MIT Technology Review's weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
  • Anthropic can now track the bizarre inner workings of a large language model
    www.technologyreview.com
    The AI firm Anthropic has developed a way to peer inside a large language model and watch what it does as it comes up with a response, revealing key new insights into how the technology works. The takeaway: LLMs are even stranger than we thought.
    The Anthropic team was surprised by some of the counterintuitive workarounds that large language models appear to use to complete sentences, solve simple math problems, suppress hallucinations, and more, says Joshua Batson, a research scientist at the company.
    It's no secret that large language models work in mysterious ways. Few, if any, mass-market technologies have ever been so little understood. That makes figuring out what makes them tick one of the biggest open challenges in science.
    But it's not just about curiosity. Shedding some light on how these models work would expose their weaknesses, revealing why they make stuff up and can be tricked into going off the rails. It would help resolve deep disputes about exactly what these models can and can't do. And it would show how trustworthy (or not) they really are.
    Batson and his colleagues describe their new work in two reports published today. The first presents Anthropic's use of a technique called circuit tracing, which lets researchers track the decision-making processes inside a large language model step by step. Anthropic used circuit tracing to watch its LLM Claude 3.5 Haiku carry out various tasks. The second (titled "On the Biology of a Large Language Model") details what the team discovered when it looked at 10 tasks in particular.
    "I think this is really cool work," says Jack Merullo, who studies large language models at Brown University in Providence, Rhode Island, and was not involved in the research. "It's a really nice step forward in terms of methods."
    Circuit tracing is not itself new. Last year Merullo and his colleagues analyzed a specific circuit in a version of OpenAI's GPT-2, an older large language model that OpenAI released in 2019. But Anthropic has now analyzed a number of different circuits as a far larger and far more complex model carries out multiple tasks. "Anthropic is very capable at applying scale to a problem," says Merullo.
    Eden Biran, who studies large language models at Tel Aviv University, agrees. "Finding circuits in a large state-of-the-art model such as Claude is a nontrivial engineering feat," he says. "And it shows that circuits scale up and might be a good way forward for interpreting language models."
    Circuits chain together different parts, or components, of a model. Last year, Anthropic identified certain components inside Claude that correspond to real-world concepts. Some were specific, such as "Michael Jordan" or "greenness"; others were more vague, such as "conflict between individuals." One component appeared to represent the Golden Gate Bridge. Anthropic researchers found that if they turned up the dial on this component, Claude could be made to self-identify not as a large language model but as the physical bridge itself.
    The latest work builds on that research and the work of others, including Google DeepMind, to reveal some of the connections between individual components. Chains of components are the pathways between the words put into Claude and the words that come out.
    "It's tip-of-the-iceberg stuff. Maybe we're looking at a few percent of what's going on," says Batson. "But that's already enough to see incredible structure."
    Growing LLMs
    Researchers at Anthropic and elsewhere are studying large language models as if they were natural phenomena rather than human-built software. That's because the models are trained, not programmed.
    "They almost grow organically," says Batson. "They start out totally random. Then you train them on all this data and they go from producing gibberish to being able to speak different languages and write software and fold proteins. There are insane things that these models learn to do, but we don't know how that happened because we didn't go in there and set the knobs."
    Sure, it's all math. But it's not math that we can follow. "Open up a large language model and all you will see is billions of numbers, the parameters," says Batson. "It's not illuminating."
    Anthropic says it was inspired by brain-scan techniques used in neuroscience to build what the firm describes as a kind of microscope that can be pointed at different parts of a model while it runs. The technique highlights components that are active at different times. Researchers can then zoom in on different components and record when they are and are not active.
    Take the component that corresponds to the Golden Gate Bridge. It turns on when Claude is shown text that names or describes the bridge or even text related to the bridge, such as "San Francisco" or "Alcatraz." It's off otherwise.
    Yet another component might correspond to the idea of smallness: "We look through tens of millions of texts and see it's on for the word 'small,' it's on for the word 'tiny,' it's on for the word 'petite,' it's on for words related to smallness, things that are itty-bitty, like thimbles, you know, just small stuff," says Batson.
    Having identified individual components, Anthropic then follows the trail inside the model as different components get chained together. The researchers start at the end, with the component or components that led to the final response Claude gives to a query. Batson and his team then trace that chain backwards.
    Odd behavior
    So: What did they find? Anthropic looked at 10 different behaviors in Claude. One involved the use of different languages. Does Claude have a part that speaks French and another part that speaks Chinese, and so on?
    The team found that Claude used components independent of any language to answer a question or solve a problem and then picked a specific language when it replied. Ask it "What is the opposite of small?" in English, French, and Chinese and Claude will first use the language-neutral components related to smallness and opposites to come up with an answer. Only then will it pick a specific language in which to reply. This suggests that large language models can learn things in one language and apply them in other languages.
    Anthropic also looked at how Claude solved simple math problems. The team found that the model seems to have developed its own internal strategies that are unlike those it will have seen in its training data. Ask Claude to add 36 and 59 and the model will go through a series of odd steps, including first adding a selection of approximate values (add 40ish and 60ish, add 57ish and 36ish). Towards the end of its process, it comes up with the value 92ish. Meanwhile, another sequence of steps focuses on the last digits, 6 and 9, and determines that the answer must end in a 5. Putting that together with 92ish gives the correct answer of 95.
    And yet if you then ask Claude how it worked that out, it will say something like: "I added the ones (6+9=15), carried the 1, then added the 10s (3+5+1=9), resulting in 95." In other words, it gives you a common approach found everywhere online rather than what it actually did. Yep! LLMs are weird. (And not to be trusted.)
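    For readers who want the gist of that two-path strategy spelled out, here is a minimal, hypothetical Python sketch of the idea as described above: one path produces only a rough magnitude (the "92ish"), a second path works out the exact final digit, and combining the two pins down the answer. The function names and the fuzz factor are invented for illustration; this is not Anthropic's circuit-tracing method or Claude's actual internal computation.

    ```python
    import random

    def fuzzy_magnitude(a, b):
        # Stand-in for the approximate path the article describes
        # ("add 40ish and 60ish" ... arriving at roughly "92ish").
        return a + b + random.randint(-3, 3)

    def exact_last_digit(a, b):
        # Stand-in for the parallel path that looks only at the units digits
        # (6 + 9 means the answer must end in 5).
        return (a % 10 + b % 10) % 10

    def combine(estimate, last_digit):
        # Snap the fuzzy estimate to the nearest integer ending in the required digit.
        base = estimate - (estimate % 10) + last_digit
        return min((base - 10, base, base + 10), key=lambda c: abs(c - estimate))

    print(combine(fuzzy_magnitude(36, 59), exact_last_digit(36, 59)))  # 95
    ```

    Even when the magnitude path is off by a few units, the last-digit constraint recovers the exact sum, which is the gist of the behavior the researchers report.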
    The steps that Claude 3.5 Haiku used to solve a simple math problem were not what Anthropic expected, and they're not the steps Claude claimed it took either. (Image: Anthropic)
    This is clear evidence that large language models will give reasons for what they do that do not necessarily reflect what they actually did. But this is true for people too, says Batson: "You ask somebody, 'Why did you do that?' And they're like, 'Um, I guess it's because I was ...' You know, maybe not. Maybe they were just hungry and that's why they did it."
    Biran thinks this finding is especially interesting. Many researchers study the behavior of large language models by asking them to explain their actions. But that might be a risky approach, he says: "As models continue getting stronger, they must be equipped with better guardrails. I believe, and this work also shows, that relying only on model outputs is not enough."
    A third task that Anthropic studied was writing poems. The researchers wanted to know if the model really did just wing it, predicting one word at a time. Instead they found that Claude somehow looked ahead, picking the word at the end of the next line several words in advance.
    For example, when Claude was given the prompt "A rhyming couplet: He saw a carrot and had to grab it," the model responded, "His hunger was like a starving rabbit." But using their microscope, they saw that Claude had already hit upon the word "rabbit" when it was processing "grab it." It then seemed to write the next line with that ending already in place.
    This might sound like a tiny detail. But it goes against the common assumption that large language models always work by picking one word at a time in sequence. "The planning thing in poems blew me away," says Batson. "Instead of at the very last minute trying to make the rhyme make sense, it knows where it's going."
    "I thought that was cool," says Merullo. "One of the joys of working in the field is moments like that. There's been maybe small bits of evidence pointing toward the ability of models to plan ahead, but it's been a big open question to what extent they do."
    Anthropic then confirmed its observation by turning off the placeholder component for "rabbitness." Claude responded with "His hunger was a powerful habit." And when the team replaced "rabbitness" with "greenness," Claude responded with "freeing it from the garden's green."
    Anthropic also explored why Claude sometimes made stuff up, a phenomenon known as hallucination. "Hallucination is the most natural thing in the world for these models, given how they're just trained to give possible completions," says Batson. "The real question is, 'How in God's name could you ever make it not do that?'"
    The latest generation of large language models, like Claude 3.5 and Gemini and GPT-4o, hallucinate far less than previous versions, thanks to extensive post-training (the steps that take an LLM trained on the internet and turn it into a usable chatbot). But Batson's team was surprised to find that this post-training seems to have made Claude refuse to speculate as a default behavior. When it did respond with false information, it was because some other component had overridden the "don't speculate" component.
    This seemed to happen most often when the speculation involved a celebrity or other well-known entity. It's as if the amount of information available pushed the speculation through, despite the default setting. When Anthropic overrode the "don't speculate" component to test this, Claude produced lots of false statements about individuals, including claiming that Batson was famous for inventing the Batson principle (he isn't).
    Still unclear
    Because we know so little about large language models, any new insight is a big step forward. "A deep understanding of how these models work under the hood would allow us to design and train models that are much better and stronger," says Biran.
    But Batson notes there are still serious limitations. "It's a misconception that we've found all the components of the model or, like, a God's-eye view," he says. "Some things are in focus, but other things are still unclear, a distortion of the microscope."
    And it takes several hours for a human researcher to trace the responses to even very short prompts. What's more, these models can do a remarkable number of different things, and Anthropic has so far looked at only 10 of them.
    Batson also says there are big questions that this approach won't answer. Circuit tracing can be used to peer at the structures inside a large language model, but it won't tell you how or why those structures formed during training. "That's a profound question that we don't address at all in this work," he says.
    But Batson sees this as the start of a new era in which it is possible, at last, to find real evidence for how these models work: "We don't have to be, like: Are they thinking? Are they reasoning? Are they dreaming? Are they memorizing? Those are all analogies. But if we can literally see step by step what a model is doing, maybe now we don't need analogies."
  • What is Signal? The messaging app, explained.
    www.technologyreview.com
    MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what's coming next. You can read more from the series here.
    With the recent news that the Atlantic's editor in chief was accidentally added to a group Signal chat for American leaders planning a bombing in Yemen, many people are wondering: What is Signal? Is it secure? If government officials aren't supposed to use it for military planning, does that mean I shouldn't use it either?
    The answer is: Yes, you should use Signal, but government officials having top-secret conversations shouldn't use Signal. Read on to find out why.
    What is Signal?
    Signal is an app you can install on your iPhone or Android phone, or on your computer. It lets you send secure texts, images, and phone or video chats with other people or groups of people, just like iMessage, Google Messages, WhatsApp, and other chat apps.
    Installing Signal is a two-minute process; again, it's designed to work just like other popular texting apps.
    Why is it a problem for government officials to use Signal?
    Signal is very secure: as we'll see below, it's the best option out there for having private conversations with your friends on your cell phone.
    But you shouldn't use it if you have a legal obligation to preserve your messages, such as while doing government business, because Signal prioritizes privacy over ability to preserve data. It's designed to securely delete data when you're done with it, not to keep it. This makes it uniquely unsuited for following public record laws.
    You also shouldn't use it if your phone might be a target of sophisticated hackers, because Signal can only do its job if the phone it is running on is secure. If your phone has been hacked, then the hacker can read your messages regardless of what software you are running.
    This is why you shouldn't use Signal to discuss classified material or military plans. For military communication your civilian phone is always considered hacked by adversaries, so you should instead use communication equipment that is safer: equipment that is physically guarded and designed to do only one job, making it harder to hack.
    What about everyone else?
    Signal is designed from bottom to top as a very private space for conversation. Cryptographers are very sure that as long as your phone is otherwise secure, no one can read your messages.
    Why should you want that? Because private spaces for conversation are very important. In the US, the First Amendment recognizes, in the right to freedom of assembly, that we all need private conversations among our own selected groups in order to function.
    And you don't need the First Amendment to tell you that. You know, just like everyone else, that you can have important conversations in your living room, bedroom, church coffee hour, or meeting hall that you could never have on a public stage. Signal gives us the digital equivalent of that: it's a space where we can talk, among groups of our choice, about the private things that matter to us, free of corporate or government surveillance. Our mental health and social functioning require that.
    So if you're not legally required to record your conversations, and not planning secret military operations, go ahead and use Signal. You deserve the privacy.
    How do we know Signal is secure?
    People often give up on finding digital privacy and end up censoring themselves out of caution. So are there really private ways to talk on our phones, or should we just assume that everything is being read anyway?
    The good news is: For most of us who aren't individually targeted by hackers, we really can still have private conversations.
    Signal is designed to ensure that if you know your phone and the phones of other people in your group haven't been hacked (more on that later), you don't have to trust anything else. It uses many techniques from the cryptography community to make that possible.
    Most important and well-known is end-to-end encryption, which means that messages can be read only on the devices involved in the conversation and not by servers passing the messages back and forth.
    But Signal uses other techniques to keep your messages private and safe as well. For example, it goes to great lengths to make it hard for the Signal server itself to know who else you are talking to (a feature known as sealed sender), or for an attacker who records traffic between phones to later decrypt the traffic by seizing one of the phones (perfect forward secrecy).
    These are only a few of many security properties built into the protocol, which is well enough designed and vetted for other messaging apps, such as WhatsApp and Google Messages, to use the same one.
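    As a rough illustration of what end-to-end encryption means in practice, the sketch below uses the PyNaCl library (Python bindings for libsodium) to encrypt a message so that only the intended recipient's private key can decrypt it; a relay server passing the ciphertext along learns nothing about the contents. This is a toy example of the general concept, with invented keys and message, not the Signal protocol itself, which layers on much more (the double ratchet, sealed sender, perfect forward secrecy).

    ```python
    # pip install pynacl
    from nacl.public import PrivateKey, Box

    # Each party generates a key pair; only public keys are ever shared.
    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # Alice encrypts for Bob using her private key and Bob's public key.
    sending_box = Box(alice_key, bob_key.public_key)
    ciphertext = sending_box.encrypt(b"meet at the usual place at 7")

    # A server relaying the message only ever sees opaque ciphertext bytes.
    print(ciphertext.hex()[:48], "...")

    # Bob decrypts with his private key and Alice's public key.
    receiving_box = Box(bob_key, alice_key.public_key)
    print(receiving_box.decrypt(ciphertext).decode())  # meet at the usual place at 7
    ```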
    Signal is also designed so we don't have to trust the people who make it. The source code for the app is available online and, because of its popularity as a security tool, is frequently audited by experts.
    And even though its security does not rely on our trust in the publisher, it does come from a respected source: the Signal Technology Foundation, a nonprofit whose mission is "to protect free expression and enable secure global communication through open-source privacy technology." The app itself, and the foundation, grew out of a community of prominent privacy advocates. The foundation was started by Moxie Marlinspike, a cryptographer and longtime advocate of secure private communication, and Brian Acton, a cofounder of WhatsApp.
    Why do people use Signal over other text apps? Are other ones secure?
    Many apps offer end-to-end encryption, and it's not a bad idea to use them for a measure of privacy. But Signal is a gold standard for private communication because it is secure by default: Unless you add someone you didn't mean to, it's very hard for a chat to accidentally become less secure than you intended.
    That's not necessarily the case for other apps. For example, iMessage conversations are sometimes end-to-end encrypted, but only if your chat has blue bubbles, and they aren't encrypted in iCloud backups by default. Google Messages are sometimes end-to-end encrypted, but only if the chat shows a lock icon. WhatsApp is end-to-end encrypted but logs your activity, including "how you interact with others using our Services."
    Signal is careful not to record who you are talking with, to offer ways to reliably delete messages, and to keep messages secure even in online phone backups. This focus demonstrates the benefits of an app coming from a nonprofit focused on privacy rather than a company that sees security as a "nice to have" feature alongside other goals.
    (Conversely, and as a warning, using Signal makes it rather easier to accidentally lose messages! Again, it is not a good choice if you are legally required to record your communication.)
    Applications like WhatsApp, iMessage, and Google Messages do offer end-to-end encryption and can offer much better security than nothing. The worst option of all is regular SMS text messages (green bubbles on iOS); those are sent unencrypted and are likely collected by mass government surveillance.
    Wait, how do I know that my phone is secure?
    Signal is an excellent choice for privacy if you know that the phones of everyone you're talking with are secure. But how do you know that? It's easy to give up on a feeling of privacy if you never feel good about trusting your phone anyway.
    One good place to start for most of us is simply to make sure your phone is up to date. Governments often do have ways of hacking phones, but hacking up-to-date phones is expensive and risky and reserved for high-value targets. For most people, simply having your software up to date will remove you from a category that hackers target.
    If you're a potential target of sophisticated hacking, then don't stop there. You'll need extra security measures, and guides from the Freedom of the Press Foundation and the Electronic Frontier Foundation are a good place to start.
    But you don't have to be a high-value target to value privacy. The rest of us can do our part to re-create that private living room, bedroom, church, or meeting hall simply by using an up-to-date phone with an app that respects our privacy.
    Jack Cushman is a fellow of the Berkman Klein Center for Internet and Society and directs the Library Innovation Lab at Harvard Law School Library. He is an appellate lawyer, computer programmer, and former board member of the ACLU of Massachusetts.
  • The Download: how people fall for pig butchering schemes, and saving glaciers
    www.technologyreview.com
    This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology.
    Inside a romance scam compound, and how people get tricked into being there
    Gavesh's journey had started, seemingly innocently, with a job ad on Facebook promising work he desperately needed.
    Instead, he found himself trafficked into a business commonly known as "pig butchering," a form of fraud in which scammers form romantic or other close relationships with targets online and extract money from them. The Chinese crime syndicates behind the scams have netted billions of dollars, and they have used violence and coercion to force their workers, many of them people trafficked like Gavesh, to carry out the frauds from large compounds, several of which operate openly in the quasi-lawless borderlands of Myanmar.
    We spoke to Gavesh and five other workers from inside the scam industry, as well as anti-trafficking experts and technology specialists. Their testimony reveals how global companies, including American social media and dating apps and international cryptocurrency and messaging platforms, have given the fraud business the means to become industrialized.
    By the same token, it is Big Tech that may hold the key to breaking up the scam syndicates, if only these companies can be persuaded or compelled to act. Read the full story.
    Peter Guest & Emily Fishbein
    How to save a glacier
    There's a lot we don't understand about how glaciers move and how soon some of the most significant ones could collapse into the sea. That could be a problem, since melting glaciers could lead to multiple feet of sea-level rise this century, potentially displacing millions of people who live and work along the coasts.
    A new group is aiming not only to further our understanding of glaciers but also to look into options to save them if things move toward a worst-case scenario, as my colleague James Temple outlined in his latest story. One idea: refreezing glaciers in place.
    The whole thing can sound like science fiction. But once you consider how huge the stakes are, I think it gets easier to understand why some scientists say we should at least be exploring these radical interventions. Read the full story.
    Casey Crownhart
    This article is from The Spark, MIT Technology Review's weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.
    MIT Technology Review Narrated: How tracking animal movement may save the planet
    Researchers have long dreamed of creating an Internet of Animals. And they're getting closer to monitoring 100,000 creatures, and revealing hidden facets of our shared world.
    This is our latest story to be turned into a MIT Technology Review Narrated podcast, which we're publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it's released.
    The must-reads
    I've combed the internet to find you today's most fun/important/scary/fascinating stories about technology.
    1 Donald Trump has announced 25% tariffs on imported cars and parts
    The measures are likely to make new cars significantly more expensive for Americans. (NYT $)
    + Moving car manufacturing operations to the US won't be easy. (WP $)
    + It's not just big businesses that will suffer, either. (The Atlantic $)
    + How Trump's tariffs could drive up the cost of batteries, EVs, and more. (MIT Technology Review)
    2 China is developing an AI system to increase its online censorship
    A leaked dataset demonstrates how LLMs could rapidly filter undesirable material. (TechCrunch)
    3 Trump may reduce tariffs on China to encourage a TikTok deal
    The Chinese-owned company has until April 5 to find a new US owner. (Insider $)
    + The national security concerns surrounding it haven't gone away, though. (NYT $)
    4 OpenAI's new image generator can ape Studio Ghibli's distinctive style
    Which raises the question of whether the model was trained on Ghibli's images. (TechCrunch)
    + The tool's popularity means its rollout to non-paying users has been delayed. (The Verge)
    + The AI lab waging a guerrilla war over exploitative AI. (MIT Technology Review)
    5 DOGE planned to dismantle USAID from the beginning
    New court filings reveal the department's ambitions to infiltrate the system. (Wired $)
    + Can AI help DOGE slash government budgets? It's complex. (MIT Technology Review)
    6 Wildfires are getting worse in the southwest of the US
    While federal fire spending is concentrated mainly in the west, the risk is rising in South Carolina and Texas too. (WP $)
    + North and South Carolina were recovering from Hurricane Helene when the fires struck. (The Guardian)
    + How AI can help spot wildfires. (MIT Technology Review)
    7 A quantum computer has generated (and verified) truly random numbers
    Which is good news for cryptographers. (Bloomberg $)
    + Cybersecurity analysts are increasingly worried about the so-called Q-Day. (Wired $)
    + Amazon's first quantum computing chip makes its debut. (MIT Technology Review)
    8 What's next for weight-loss drugs
    Competition is heating up, but will patients be the ones to benefit? (New Scientist $)
    + Drugs like Ozempic now make up 5% of prescriptions in the US. (MIT Technology Review)
    9 At least we've still got memes
    Poking fun at the Trump administration's decisions is a form of online resistance. (New Yorker $)
    10 Can you truly be friends with a chatbot?
    People are starting to find out. (Vox)
    + The AI relationship revolution is already here. (MIT Technology Review)
    Quote of the day
    "I can't imagine any professional I know committing this egregious a lapse in judgement."
    A government technology leader tells Fast Company why top Trump officials' decision to use unclassified messaging app Signal to discuss war plans is so surprising.
    The big story
    Why one developer won't quit fighting to connect the US's grids
    September 2024
    Michael Skelly hasn't learned to take no for an answer. For much of the last 15 years, the energy entrepreneur has worked to develop long-haul transmission lines to carry wind power across the Great Plains, Midwest, and Southwest. But so far, he has little to show for the effort.
    Skelly has long argued that building such lines and linking together the nation's grids would accelerate the shift from coal- and natural-gas-fueled power plants to the renewables needed to cut the pollution driving climate change. But his previous business shut down in 2019, after halting two of its projects and selling off interests in three more.
    Skelly contends he was early, not wrong, and that the market and policymakers are increasingly coming around to his perspective. After all, the US Department of Energy just blessed his latest company's proposed line with hundreds of millions in grants. Read the full story.
    James Temple
    We can still have nice things
    A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet 'em at me.)
    + Severance's Adam Scott sure has interesting taste in music.
    + While we're not 100% sure if Millie is definitely the world's oldest cat, one thing we know for sure is that she lives a life of luxury.
    + Hiking trails are covered in beautiful wildflowers right now; just make sure you tread carefully.
    + This is a really charming look at how girls live in America right now.
  • Inside a romance scam compound, and how people get tricked into being there
    www.technologyreview.com
    Heading north in the dark, the only way Gavesh could try to track his progress through the Thai countryside was by watching the road signs zip by. The Jeeps three occupantsGavesh, a driver, and a young Chinese womanhad no languages in common, so they drove for hours in nervous silence as they wove their way out of Bangkok and toward Mae Sot, a city on Thailands western border with Myanmar.When they reached the city, the driver pulled off the road toward a small hotel, where another car was waiting. I had some suspicionslike, why are we changing vehicles? Gavesh remembers. But it happened so fast.They left the highway and drove on until, in total darkness, they parked at what looked like a private house. We stopped the vehicle. There were people gathered. Maybe 10 of them. They took the luggage and they asked us to come, Gavesh says. One was going in front, there was another one behind, and everyone said: Go, go, go.Gavesh and the Chinese woman were marched through the pitch-black fields by flashlight to a riverside where a boat was moored. By then, it was far too late to back out.Gaveshs journey had started, seemingly innocently, with a job ad on Facebook promising work he desperately needed. Instead, he found himself trafficked into a business commonly known as pig butcheringa form of fraud in which scammers form romantic or other close relationships with targets online and extract money from them. The Chinese crime syndicates behind the scams have netted billions of dollars, and they have used violence and coercion to force their workers, many of them people trafficked like Gavesh, to carry out the frauds from large compounds, several of which operate openly in the quasi-lawless borderlands of Myanmar.We spoke to Gavesh and five other workers from inside the scam industry, as well as anti-trafficking experts and technology specialists. Their testimony reveals how global companies, including American social media and dating apps and international cryptocurrency and messaging platforms, have given the fraud business the means to become industrialized. By the same token, it is Big Tech that may hold the key to breaking up the scam syndicatesif only these companies can be persuaded or compelled to act.Were identifying Gavesh using a pseudonym to protect his identity. He is from a country in South Asia, one he asked us not to name. He hasnt shared his story much, and he still hasnt told his family. He worries about how theyd handle it.Until the pandemic, he had held down a job in the tourism industry. But lockdowns had gutted the sector, and two years later he was working as a day laborer to support himself and his father and sister. I was fed up with my life, he says. I was trying so hard to find a way to get out.When he saw the Facebook post in mid-2022, it seemed like a godsend. A company in Thailand was looking for English-speaking customer service and data entry specialists. The monthly salary was $1,500far more than he could earn at homewith meals, travel costs, a visa, and accommodation included. I knew if I got this job, my life would turn around. I would be able to give my family a good life, Gavesh says.What came next was life-changing, but not in the way Gavesh had hoped. 
The advert was a fraudand a classic tactic syndicates use to force workers like Gavesh into an economy that operates as something like a dark mirror of the global outsourcing industry.The true scale of this type of fraud is hard to estimate, but the United Nations reported in 2023 that hundreds of thousands of people had been trafficked to work as online scammers in Southeast Asia. One 2024 study, from the University of Texas, estimates that the criminal syndicates that run these businesses have stolen at least $75 billion since 2020.These schemes have been going on for more than two decades, but theyve started to capture global attention only recently, as the syndicates running them increasingly shift from Chinese targets toward the West. And even as investigators, international organizations, and journalists gradually pull back the curtain on the brutal conditions inside scamming compounds and document their vast scale, what is far less exposed is the pivotal role platforms owned by Big Tech play throughout the industryfrom initially coercing individuals to become scammers to, finally, duping scam targets out of their life savings.As losses mount, governments and law enforcement agencies have looked for ways to disrupt the syndicates, which have become adept at using ungoverned spaces in lawless borderlands and partnering with corrupt regimes. But on the whole, the syndicates have managed to stay a step ahead of law enforcementin part by relying on services from the worlds tech giants. Apple iPhones are their preferred scamming tools. Meta-owned Facebook and WhatsApp are used to recruit people into forced labor, as is Telegram. Social media and messaging platforms, including Facebook, Instagram, WhatsApp, WeChat, and X, provide spaces for scammers to find and lure targets. So do dating apps, including Tinder. Some of the scam compounds have their own Starlink terminals. And cryptocurrencies like tether and global crypto platforms like Binance have allowed the criminal operations to move money with little or no oversight.Scam workers sit inside Myanmars KK Park, a notorious fraud hub near the border with Thailand, following a recent crackdown by law enforcement.REUTERSPrivate-sector corporations are, unfortunately, inadvertently enabling this criminal industry, says Andrew Wasuwongse, the Thailand country director at the anti-trafficking nonprofit International Justice Mission (IJM). The private sector holds significant tools and responsibility to disrupt and prevent its further growth.Yet while the tech sector has, slowly, begun to roll out anti-scam tools and policies, experts in human trafficking, platform integrity, and cybercrime tell us that these measures largely focus on the downstream problem: the losses suffered by the victims of the scams. That approach overlooks the other set of victims, often from lower-income countries, at the far end of a fraud supply chain that is built on human miseryand on Big Tech. Meanwhile, the scams continue on a mass scale.Tech companies could certainly be doing more to crack down, the experts say. Even relatively small interventions, they argue, could start to erode the business model of the scam syndicates; with enough of these, the whole business could start to founder.The trick is: How do you make it unprofitable? says Eric Davis, a platform integrity expert and senior vice president of special projects at the Institute for Security and Technology (IST), a think tank in California. 
How do you create enough friction? That question is only becoming more urgent as many tech companies pull back on efforts to moderate their platforms, artificial intelligence supercharges scam operations, and the Trump administration signals broad support for deregulation of the tech sector while withdrawing support from organizations that study the scams and support the victims. All these trends may further embolden the syndicates. And even as the human costs keep building, global governments exert ineffectual pressure, if any at all, on the tech sector to turn its vast financial and technical resources against a criminal economy that has thrived in the spaces Silicon Valley built.

Capturing a vulnerable workforce

The roots of pig butchering scams reach back to the offshore gambling industry that emerged from China in the early 2000s. Online casinos had become hugely popular in China, but the government cracked down, forcing the operators to relocate to Cambodia, the Philippines, Laos, and Myanmar. There, they could continue to target Chinese gamblers with relative impunity. Over time, the casinos began to use social media to entice people back home, deploying scam-like tactics that frequently centered on attractive and even nude dealers.

Often the romance scam was a part of that: building romantic relationships with people that you eventually would aim to hook, says Jason Tower, Myanmar country director at the United States Institute of Peace (USIP), a research and diplomacy organization funded by the US government, who researches the cyber scam industry. (USIP's leadership was recently targeted by the Trump administration and Elon Musk's Department of Government Efficiency task force, leaving the organization's future uncertain; its website, which previously housed its research, is also currently offline.)

By the late 2010s, many of the casinos were big, professional operations. Gradually, says Tower, the business model turned more sinister, with a tactic called sha zhu pan in Chinese emerging as a core strategy. Scamming operatives work to fatten up or cultivate a target by building a relationship before going in for the slaughter: persuading them to invest in a supposedly once-in-a-lifetime scheme and then absconding with the money. That actually ended up being much, much more lucrative than online gambling, Tower says. (The international law enforcement organization Interpol no longer uses the graphic term pig butchering, citing concerns that it dehumanizes and stigmatizes victims.)

Like other online industries, the romance scamming business was supercharged by the pandemic. There were simply more isolated people to defraud, and more people out of work who might be persuaded to try scamming others, or who were vulnerable to being trafficked into the industry.

Initially, most of the workers carrying out the frauds were Chinese, as were the fraud victims. But after the government in Beijing tightened travel restrictions, making it hard to recruit Chinese laborers, the syndicates went global. They started targeting more Western markets and turning, Tower says, to much more malign types of approaches to tricking people into scam centers.

Getting recruited

Gavesh was scrolling through Facebook when he saw the ad. He sent his résumé to a Telegram contact number.
A human resources representative replied and had him demonstrate his English and typing skills over video. It all felt very professional. I didnt have any reason to suspect, he says.The doubts didnt really start until after he reached Bangkoks Suvarnabhumi Airport. After being met at arrivals by a man who spoke no English, he was left to wait. As time ticked by, it began to occur to Gavesh that he was alone, with no money, no return ticket, and no working SIM card. Finally, the Jeep arrived to pick him up.Hours later, exhausted, he was on a boat crossing the Moei River from Thailand into Myanmar. On the far bank, a group was waiting. One man was in military uniform and carried a gun. In my country, if we see an army guy when we are in trouble, we feel safe, Gavesh says. So my initial thoughts were: Okay, theres nothing to be worried about.They hiked a kilometer across a sodden paddy field and emerged at the other side caked in mud. There a van was parked, and the driver took them to what he called, in broken English, the office. They arrived at the gate of a huge compound, surrounded by high walls topped with barbed wire.While some people are drawn into online scamming directly by friends and relatives, Facebook is, according to IJMs Wasuwongse, the most common entry point for people recruited on social media.Meta has known for years that its platforms host this kind of content. Back in 2019, the BBC exposed slave markets that were running on Instagram; in 2021, the Wall Street Journal reported, drawing on documents leaked by a whistleblower, that Meta had long struggled to rein in the problem but took meaningful action only after Apple threatened to pull Instagram from its app store.Today, years on, ads like the one that Gavesh responded to are still easy to find on Facebook if you know what to look for.Examples of fraudulent Facebook ads, shared by International Justice Mission.They are typically posted in job seekers groups and usually seem to be advertising legitimate jobs in areas like customer service. They offer attractive wages, especially for people with language skillsusually English or Chinese.The traffickers tend to finish the recruitment process on encrypted or private messaging apps. In our research, many experts said that Telegram, which is notorious for hosting terrorist content, child sexual abuse material, and other communication related to criminal activity, was particularly problematic. Many spoke with a combination of anger and resignation about its apparent lack of interest in working with them to address the problem; Mina Chiang, founder of Humanity Research Consultancy, an anti-trafficking organization, accuses the app of being very much complicit in human trafficking and proactively facilitating these scams. (Telegram did not respond to a request for comment.)But while Telegram users have the option of encrypting their messages end to end, making them almost impossible to monitor, social media companies are of course able to access users posts. And its here, at the beginning of the romance scam supply chain, where Big Tech could arguably make its most consequential intervention.Social media is monitored by a combination of human moderators and AI systems, which help flag users and contentads, posts, pagesthat break the law or violate the companies own policies. Dangerous content is easiest to police when it follows predictable patterns or is posted by users acting in distinctive and suspicious ways.They have financial resources. 
You can hire the most talented coding engineers in the world. Why cant you just find people who understand the issue properly?Anti-trafficking experts say the scam advertising tends to follow formulaic templates and use common language, and that they routinely report the ads to Meta and point out the markers they have identified. Their hope is that this information will be fed into the data sets that train the content moderation models.While individual ads may be taken down, even in big waveslast November, Meta said it had purged 2 million accounts connected to scamming syndicates over the previous yearexperts say that Facebook still continues to be used in recruiting. And new ads keep appearing.(In response to a request for comment, a Meta spokesperson shared links to policies about bans on content or advertisements that facilitate human trafficking, as well as company blog posts telling users how to protect themselves from romance scams and sharing details about the companys efforts to disrupt fraud on its platforms, one statingthat it is constantly rolling out new product features to help protect people on [its] apps from known scam tactics at scale. The spokesperson also said that WhatsApp has spam detection technology, and millions of accounts are banned per month.)Anti-trafficking experts we spoke with say that as recently as last fall, Meta was engaging with them and had told them it was ramping up its capabilities. But Chiang says there still isnt enough urgency from tech companies. Theres a question about speed. They might be able to say Thats the goal for the next two years. No. But thats not fast enough. We need it now, she says. They have financial resources. You can hire the most talented coding engineers in the world. Why cant you just find people who understand the issue properly?Part of the answer comes down to money, according to experts we spoke with. Scaling up content moderation and other processes that could cause users to be kicked off a platform requires not only technological staff but also legal and policy expertswhich not everyone sees as worth the cost.The vast majority of these companies are doing the minimum or less, says Tower of USIP. If not properly incentivized, either through regulatory action or through exposure by media or other forms of pressure often, these companies will underinvest in keeping their platforms safe.Getting set upGaveshs new office turned out to be one of the most infamous scamming hubs in Southeast Asia: KK Park in Myanmars Myawaddy region. Satellite imagery shows it as a densely packed cluster of buildings, surrounded by fields. Most of it has been built since late 2019.Inside, it runs like a hybrid of a company campus and a prison.When Gavesh arrived, he handed over his phone and passport and was assigned to a dormitory and an employer. He was allowed his own phone back only for short periods, and his calls were monitored. Security was tight. He had to pass through airport-style metal detectors when he went in or out of the office. Black-uniformed personnel patrolled the buildings, while armed men in combat fatigues watched the perimeter fences from guard posts.On his first full day, he was put in front of a computer with just four documents on it, which he had to read over and overguides on how to approach strangers. On his second day, he learned to build fake profiles on social media and dating apps. 
The trick was to find real people on Instagram or Facebook who were physically attractive, posted often, and appeared to be wealthy and living a luxurious life, he says, and use their photos to build a new account: There are so many Instagram models that pretend they have a lot of money.After Gavesh was trafficked into Myanmar, he was taken to KK Park. Most of the compound has been built since late 2019.LUKE DUGGLEBY/REDUXNext, he was given a batch of iPhone 8smost people on his team used between eight and 10 devices eachloaded with local SIM cards and apps that spoofed their location so that they appeared to be in the US. Using male and female aliases, he set up dozens of accounts on Facebook, WhatsApp, Telegram, Instagram, and X and profiles on several dating platforms, though he cant remember exactly which ones.Different scamming operations teach different techniques for finding and reaching out to potential victims, several people who worked in the compounds tell us. Some people used direct approaches on dating apps, Facebook, Instagram, orfor those targeting Chinese victimsWeChat. One worker from Myanmar sent out mass messages on WhatsApp, pretending to have accidentally messaged a wrong number, in the hope of striking up a conversation. (Tencent, which owns WeChat, declined to comment.)Some scamming workers we spoke to were told to target white, middle-aged or older men in Western countries who seemed to be well off. Gavesh says he would pretend to be white men and women, using information found from Google to add verisimilitude to his claims of living in, say, Miami Beach. He would chat with the targets, trying to figure out from their jobs, spending habits, and ambitions whether theyd be worth investing time in.One South African woman, trafficked to Myanmar in 2022, says she was given a script and told to pose as an Asian woman living in Chicago. She was instructed to study her assigned city and learn quotidian details about life there. They kept on punishing people all the time for not knowing or for forgetting that theyre staying in Chicago, she says, or for forgetting whats Starbucks or whats [a] latte.Fake users have, of course, been a problem on social media platforms and dating sites for years. Some platforms, such as X, allow practically anyone to create accounts and even to have them verified for a fee. Others, including Facebook, have periodically conducted sweeps to get rid of fake accounts engaged in what Meta calls coordinated inauthentic behavior. (X did not respond to requests for comment.)But scam workers tell us they were advised on simple ways to circumvent detection mechanisms on social media. They were given basic training in how to avoid suspicious behavior such as adding too many contacts too quickly, which might trigger the company to review whether someones profile is authentic. The South African woman says she was shown how to manipulate the dates on a Facebook account to seem as if you opened the account in 2019 or whatever, making it easier to add friends. (Metas spam filtersmeant to reduce the spread of unwanted contentinclude limits on friend requests and bulk messaging.)Wang set up a Tinder profile with a picture of a dog and a bio that read, I am a dog. It passed through the platforms verification system without a hitch.Dating apps, whose users generally hope to meet other users in real life, have a particular need to make sure that people are who they say they are. 
But Match Group, the parent company of Tinder, ended its partnership with a company doing background checks in 2023. It now encourages users to verify their profile with a selfie and further ID checks, though insiders say these systems are often rudimentary. They just check a box and [do] what is legally required or what will make the media get off of [their] case, says one tech executive who has worked with multiple dating apps on safety systems, speaking on the condition of anonymity because they were not permitted to speak about their work with certain companies.Fangzhou Wang, an assistant professor at the University of Texas at Arlington who studies romance scams, ran a test: She set up a Tinder profile with a picture of a dog and a bio that read, I am a dog. It passed through the platforms verification system without a hitch. They are not providing enough security measures to filter out fraudulent profiles, Wang says. Everybody can create anything.Like recruitment ads, the scam profiles tend to follow patterns that should raise red flags. They use photos copied from existing users or made by artificial intelligence, and the accounts are sometimes set up using phone numbers generated by voice-over-internet-protocol services. Then theres the scammers behavior: They swipe too fast, or spend too much time logged in. A normal human doesnt spend eight hours on a dating app a day, the tech executive says.Whats more, scammers use the same language over and over again as they reach out to potential targets. The majority of them are using predesigned scripts, says Wang.It would be fairly easy for platforms to detect these signs and either stop accounts from being created or make the users go through further checks, experts tell us. Signals of some of these behaviors can potentially be embedded into a type of machine-learning algorithm, Wang says. She approached Tinder a few years ago with her research into the language that scammers use on the platforms, and offered to help build data sets for its moderation models. She says the company didnt reply.(In a statement, Yoel Roth, vice president of trust and safety at Match Group, said that the company invests in proactive tools, advanced detection systems and user education to help prevent harm. He wrote, We use proprietary AI-powered tools to help identify scammer messaging, and unlike many platforms, we moderate messages, which allows us to detect suspicious patterns early and act quickly, adding that the company has recently worked with Reality Defender, a provider of deepfake detection tools, to strengthen its ability to detect AI-generated content. A company spokesperson reported having no record of Wangs outreach but said that the company welcome[s] collaboration and [is] always open to reviewing research that can help strengthen user safety.)A recent investigation published in The Markup found that Match Group has long possessed the tools and resources to track sex offenders and other bad actors but has resisted efforts to roll out safety protocols for fear they might slow growth.This tension, between the desire to keep increasing the number of users and the need to ensure that these users and their online activity are authentic, is often behind safety issues on platforms. While no platform wants to be a haven for fraudsters, identity verification creates friction for users, which stops real people as well as impostors from signing up. 
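To make those signals concrete, here is a minimal illustrative sketch, in Python, of the kind of heuristic screening Wang describes. Every field name and threshold below is hypothetical rather than drawn from any platform's actual systems; a real service would tune such rules against labeled data or fold the signals into a trained model.

```python
# Illustrative only: a toy risk score built from the behavioral signals described
# above. Every field name and threshold here is hypothetical; a real platform
# would tune such rules against labeled data or use a trained model instead.
from dataclasses import dataclass

@dataclass
class ProfileActivity:
    photo_reused_elsewhere: bool     # photo matches another user's or a known AI-generated image
    voip_phone_number: bool          # registered with a number from a voice-over-internet service
    swipes_per_minute: float
    hours_online_per_day: float
    opener_script_similarity: float  # 0-1 similarity of first messages to known scam scripts

def risk_score(p: ProfileActivity) -> int:
    """Count how many red-flag signals an account trips (0-5)."""
    return sum([
        p.photo_reused_elsewhere,
        p.voip_phone_number,
        p.swipes_per_minute > 30,         # "they swipe too fast"
        p.hours_online_per_day > 8,       # "a normal human doesn't spend eight hours on a dating app a day"
        p.opener_script_similarity > 0.9  # "the majority of them are using predesigned scripts"
    ])

suspect = ProfileActivity(photo_reused_elsewhere=True, voip_phone_number=True,
                          swipes_per_minute=55, hours_online_per_day=11,
                          opener_script_similarity=0.95)
if risk_score(suspect) >= 3:
    print("route account to extra verification or human review")
```

In practice, a flag like this would trigger extra verification or human review rather than an automatic ban, since false positives lock out real users.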
And again, cracking down on platform violations costs money.According to Josh Kim, an economist who works in Big Tech, it would be costly for tech companies to build out the legal, policy, and operational teams for content moderation tools that could get users kicked off a platformand the expense is one companies may find hard to justify in the current business climate. The shift toward profitability means that you have to be very selective in where you invest the resources that you have, he says.My intuition here is that unless there are fines or pressure from governments or regulatory agencies or the public themselves, he adds, the current atmosphere in the tech ecosystem is to focus on building a product that is profitable and grows fast, and things that dont contribute to those two points are probably being deprioritized.Getting onlineand staying in lineAt work, Gavesh wore a blue tag, marking him as belonging to the lowest rank of workers. On top of us are the ones who are wearing the yellow tagsthey call themselves HR or translators, or office guys, he says. Red tags are team leaders, managers And then moving from that, they have black and ash tags. Those are the ones running the office. Most of the latter were Chinese, Gavesh says, as were the really big bosses, who didnt wear tags at all.Within this hierarchy operated a system of incentives and punishments. Workers who followed orders and proved successful at scamming could rise through the ranks to training or supervisory positions, and gain access to perks like restaurants and nightclubs. Those who failed to meet the targets or broke the rules faced violence and humiliation.Gavesh says he was once beaten because he broke an unwritten rule that it was forbidden to cross your legs at work. Yawning was banned, and bathroom breaks were limited to two minutes at a time.KATHERINE LAMBeatings were usually conducted in the open, though the most severe punishments at Gaveshs company happened in a room called the water jail. One day a coworker was there alongside the others, and the next day he was not, Gavesh recalls. When the colleague was brought back to the office, he had been so badly beaten he couldnt walk or speak. They took him to the front, and they said: If you do not listen to us, this is what will happen to you.Gavesh was desperate to leave but felt there was no chance of escaping. The armed guards seemed ready to shoot, and there were rumors in the compound that some people who jumped the fence had been found drowned in the river.This kind of physical and psychological abuse is routine across the industry. Gavesh and others we spoke to describe working 12 hours or more a day, without days off. They faced strict quotas for the number of scam targets they had to have on the hook. If they failed to reach them, they were punished. The UN has documented cases of torture, arbitrary detention, and sexual violence in the compounds. We heard accounts of people made to perform calisthenics and being thrashed on the backside in front of other workers.Even if someone could escape, there is often no authority to appeal to on the outside. KK Park and other scam factories in Myanmar are situated in a geopolitical gray zoneborderlands where criminal enterprises have based themselves for decades, trading in narcotics and other unlawful industries. 
Armed groups, some of them operating under the command of the military, are credibly believed to profit directly from the trade in people and contraband in these areas, in some cases facing international sanctions as a result. Illicit industries in Myanmar have only expanded since a military coup in 2021. By August 2023, according to UN estimates, more than 120,000 people were being held in the country for the purposes of forced scamming, making it the largest hub for the frauds in Southeast Asia.Workers who followed orders and proved successful at scamming could rise through the ranks and gain access to perks like restaurants and nightclubs. Those who failed to meet the targets or broke the rules faced violence and humiliation.In at least some attempt to get a handle on this lawlessness, Thailand tried to cut off internet services for some compounds across its western border starting last May. Syndicates adapted by running fiber-optic cables across the river. When some of those were discovered, they were severed by Thai authorities. Thailand again ramped up its crackdowns on the industry earlier this year, with tactics that included cutting off internet, gas, and electricity to known scamming enclaves, following the trafficking of a Chinese celebrity through Thailand into Myanmar.Still, the scammers keep adaptingagain, using Western technology. Weve started to see and hear of Starlink systems being used by these compounds, says Eric Heintz, a global analyst at IJM.While the military junta has criminalized the use of unauthorized satellite internet service, intercepted shipments and raids on scamming centers over the past year indicate that syndicates smuggle in equipment. The crackdowns seem to have had a limited impacta Wired investigation published in February found that scamming networks appeared to be widely using Starlink in Myanmar. The journalist, using mobile-phone connection data collected by an online advertising industry tool, identified eight known scam compounds on the Myanmar-Thailand border where hundreds of phones had used Starlink more than 40,000 times since November 2024. He also identified photos that appeared to show dozens of Starlink satellite dishes on a scamming compound rooftop.Starlink could provide another prime opportunity for systematic efforts to interrupt the scams, particularly since it requires a subscription and is able to geofence its services. I could give you coordinates of where some of these [scamming operations] are, like IP addresses that are connecting to them, Heintz says. That should make a huge paper trail.Starlinks parent company, SpaceX, has previously limited access in areas of Ukraine under Russian occupation, after all. Its policies also state that SpaceX may terminate Starlink services to users who participate in fraudulent activities. (SpaceX did not respond to a request for comment.)Knowing the locations of scam compounds could also allow Apple to step in: Workers rely on iPhones to make contact with victims, and these have to be associated with an Apple ID, even if the workers use apps to spoof their addresses.As Heintz puts it, [If] you have an iCloud account with five phones, and you know that those phones GPS antenna locates those phones inside a known scam compound, then all of those phones should be bricked. 
The account should be locked.(Apple did not provide a response to a request for comment.)This isnt like the other trafficking cases that weve worked on, where were trying to find a boat in the middle of the ocean, Heintz adds. These are city-size compounds. We all know where they are, and weve watched them being built via satellite imagery. We should be able to do something location-based to take these accounts offline.Getting paidOnce Gavesh developed a relationship on social media or a dating site, he was supposed to move the conversation to WhatsApp. That platform is end-to-end encrypted, meaning even Meta cant read the content of messagesalthough it should be possible for the company to spot a users unusual patterns of behavior, like opening large numbers of WhatsApp accounts or sending numerous messages in a short span of time. If you have an account that is suddenly adding people in large quantities all over the world, should you immediately flag it and freeze that account or require that that individual verify his or her information? USIPs Tower says.After cultivating targets trust, scammers would inevitably shift the conversation to the subject of money. Having made themselves out to be living a life of luxury, they would offer a chance to share in the secrets of their wealth. Gavesh was taught to make the approach as if it were an extension of an existing intimacy. I would not show this platform to anyone else, he says he was supposed to say. But since I feel like you are my life partner, I feel like you are my future.Lower-level workers like Gavesh were only expected to get scamming targets on the hook; then theyd pass off the relationship to a manager. From there, there is some variation in the approach, but the target is sometimes encouraged to set up an account with a mainstream crypto exchange and buy some tokens. Then the scammer sends the victimor customer, as some workers say they called these targetsa link to a convincing, but fake, crypto investment platform.After the target invests an initial amount of money, the scammer typically sends fake investment return charts that seem to show the value of that stake rising and rising. To demonstrate good faith, the scammer sends a few hundred dollars back to the victims crypto wallet, all the while working to convince the mark to keep investing. Then, once the customer is all in, the scammer goes in for the kill, using every means possible to take more money. We [would] pull out bigger amounts from the customers and squeeze them out of their possessions, one worker tells us.The design of cryptocurrency allows some degree of anonymity, but with enough time, persistence, and luck, its possible to figure out where tokens are flowing. Its also possible, though even more difficult, to discover who owns the crypto wallets.In early 2024, University of Texas researchers John M. Griffin and Kevin Mei published a paper that followed money from crypto wallets associated with scammers. 
They tracked hundreds of thousands of transactions, collectively worth billions of dollarsmoney that was transferred in and out of mainstream exchanges, including Binance, Coinbase, and Crypto.com.Scam workers spend time gaining the trust of their targets, often by deploying fraudulent personas and developing romantic relationships.REUTERS/CARLOS BARRIASome scam syndicates would move crypto off these big exchanges, launder it through anonymous platforms known as mixers (which can be used to obscure crypto transactions), and then come back to the exchanges to cash out into fiat currency such as dollars.Griffin and Mei were able to identify deposit addresses on Binance and smaller platforms, including Hong Kongbased Huobi and Seychelles-based OKX, that were collectively receiving billions of dollars from suspected scams. These addresses were being used over and over again to send and receive money, suggesting limited monitoring by crypto exchanges, the authors wrote.(We were unable to reach OKX for comment; Coinbase and Huobi did not respond to requests for comment. A Binance spokesperson said that the company disputes the findings of the University of Texas study, alleging that they are misleading at best and, at worst, wildly inaccurate. The spokesperson also said that the company has extensive know-your-customer requirements, uses internal and third-party tools to spot illicit activity, freezes funds, and works with law enforcement to help reclaim stolen assets, claiming to have proactively prevented $4.2 billion in potential losses for 2.8 million users from scams and frauds and recovered $88 million in stolen or misplaced funds last year. A Crypto.com spokesperson said that the company is committed to security, compliance and consumer protection and that it uses robust transaction monitoring and fraud detection controls, rigorously investigates accounts flagged for potential fraudulent activity or victimization, and has internal blacklisting processes for wallet addresses known to be linked to scams.)But while tracking illicit payments through the crypto ecosystem is possible, its messy and complicated to actually pin down who owns a scam wallet, according to Griffin Hotchkiss, a writer and use-case researcher at the Ethereum Foundation who has worked on crypto projects in Myanmar and who spoke in his personal capacity. Investigators have to build models that connect users to accounts by the flows of money going through them, which involves a degree of guesswork and red string and sticky notes on the board trying to trace the flow of funds, he says.There are, however, certain actors within the crypto ecosystem who should have a good vantage point for observing how money moves through it. The most significant of these is Tether Holdings, a company formerly based in the British Virgin Islands (it has since relocated to El Salvador) that issues tether or USDT, a so-called stablecoin whose value is nominally pegged to the US dollar. Tether is widely used by crypto traders to park their money in dollar-denominated assets without having to convert cryptocurrencies into fiat currency. It is also widely used in criminal activity.There was this one guy I was chatting with, [using] a girls profile. He was trying to make a living. He was working in a cafe. He had a daughter who was living with [her] mother. That story was really touching. 
And, like, you dont want to get these people [involved].There is more than $140 billion worth of USDT in circulation; in 2023, TRM Labs, a firm that traces crypto fraud, estimated that $19.3 billion worth of tether transactions was associated with illicit activity. In January 2024, the UNs Office on Drugs and Crime said that tether was a leading means of exchange for fraudsters and money launderers operating in Southeast Asia. In October, US federal investigators reportedly opened an investigation alleging possible sanctions violations and complicity in money laundering (though at the time, Tether Holdings CEO said there was no indication the company was under investigation).Tech experts tell us that USDT is ever-present in the scam business, used to move money and as the main medium of exchange on anonymous marketplaces such as Cambodia-based Huione Guarantee, which has been accused of allowing romance scammers to launder the proceeds of their crimes. (Cambodia revoked the banking license of Huione Pay in March of this year. Huione, which did not respond to a request for comment, has previously denied engaging in criminal activity.)While much of the crypto ecosystem is decentralized, USDT does have a central authority that could intervene, Hotchkiss says. Tethers code has functions that allow the company to blacklist users, freeze accounts, and even destroy tokens, he adds. (Tether Holdings did not respond to requests for comment.)In practice, Hotchkiss says, the company has frozen very few accountsand, like other experts we spoke to, he thinks its unlikely to happen at scale. If it were to start acting like a regulator or a bank, the currency would lose a fundamental part of its appeal: its anonymity and independence from the mainstream of finance. The more you intervene, the less trust people have in your coin, he says. The incentives are kind of misaligned.Getting outGavesh really wasnt very good at scamming. The knowledge that the person on the other side of the conversation was working hard for money that he was trying to steal weighed heavily on him. There was this one guy I was chatting with, [using] a girls profile, he says. He was trying to make a living. He was working in a cafe. He had a daughter who was living with [her] mother. That story was really touching. And, like, you dont want to get these people [involved].The nature of the work left him racked with guilt. I believe in karma, he says. What goes around comes around.Twice during Gaveshs incarceration, he was sold on from one employer to another, but he still struggled with scamming. In February 2023, he was put up for sale a third time, along with some other workers.We went to the boss and begged him not to sell [us] and to please let us go home, Gavesh says. The boss eventually agreed but told them it would cost them. As well as forgoing their salaries, they had to pay a ransomGaveshs was set at 72,000 Thai baht, more than $2,000.Gavesh managed to scrape the money together, and he and around a dozen others were driven to the river in a military vehicle. We had to be very silent, he says. They were told not to make any sounds or anythingjust to get on the boat. They slipped back into Thailand the way they had come.KATHERINE LAMTo avoid checkpoints on the way to Bangkok, the smugglers took paths through the jungle and changed vehicles around 10 times.The group barely had enough money to survive a couple of days in the city, so they stuck together, staying in a cheap hotel while figuring out what to do next. 
With the help of a compatriot, Gavesh got in touch with IJM, which offered to help him navigate the legal bureaucracy ahead.The traffickers hadnt given him back his passport, and he was in Thailand without authorization. It was April before he was finally able to board a flight home, where he faced yet more questioning from police and immigration officials. He told his family he had a small visa issue and that he had lost his passport in Bangkok. He has never told them about his ordeal. It would be very hard for them to process, he says.Recent history shows its very unlikely Gavesh will get any justice. Thats part of the reason why disrupting scams technology supply chain is so important: Its incredibly challenging to hold the people operating the syndicates accountable. They straddle borders and jurisdictions. They have trafficked people from more than 60 countries, according to research from USIP, and scam targets come from all over the world. Much of the stolen money is moved through crypto wallets based in secrecy jurisdictions. This thing is really like an onion. Youve got layer after layer after layer of it, and its just really difficult to see where jurisdiction starts and where jurisdiction ends, Tower says.Chinese authorities are often more willing to cooperate with the military junta and armed groups in Myanmar that Western governments will not deal with, and they have cracked down where they can on operations involving their nationals. Thailand has also stepped up its efforts to address the human trafficking crisis and shut down scamming operations across its border in recent months. But when it comes to regulating tech platforms, the reaction from governments has been slower.The few legislative efforts in the US, which are still in the earliest stages, focus on supporting law enforcement and financial institutions, not directly on ways to address the abuse of American tech platforms for scamming. And they probably wont take that on anytime soon. Trump, who has been boosted and courted by several high-profile tech executives, has indicated that his administration opposes heavier online moderation. One executive order, signed in February, vows to impose tariffs on foreign governments if they introduce measures that could inhibit the growth of US companiesparticularly those in techor compel them to moderate online content.The Trump White House also supports reducing regulation in the crypto industry; it has halted major investigations into crypto companies and just this month removed sanctions on the crypto mixer Tornado Cash. In what was widely seen as a nod to libertarian-leaning crypto-enthusiasts, Trump pardoned Ross Ulbricht, the founder of the dark web marketplace Silk Road and one of the earlier adopters of crypto for large-scale criminal activity. The administrations embrace of crypto could indeed have implications for the scamming industry, notes Kim, the economist: It makes it much easier for crypto services to proliferate and have wider-spread adoption, and that might make it easier for criminal enterprises to tap into that and exploit that for their own means.Whats more, the new US administration has overseen the rollback of funding for myriad international aid programs, primarily programs run through the US Agency for International Development and including those working to help the people whove been trafficked into scam compounds. 
In late February, CNN reports, every one of the agencys anti-trafficking projects was halted.This all means its up to the tech companies themselves to act on their own initiative. And Big Tech has rarely acted without legislative threats or significant social or financial pressure. Companies wont do anything if its not mandatory, its not enforced by the government, and most important, if companies dont profit from it, says Wang, from the University of Texas. While a group of tech companies, including Meta, Match, and Coinbase, last year announced the formation of Tech Against Scams, a collaboration to share tips and best practices, experts tell us there are no concrete actions to point to yet.And at a time when more resources are desperately needed to address the growing problems on their platforms, social media companies like X, Meta, and others have laid off hundreds of people from their trust and safety departments in recent years, reducing their capacity to tackle even the most pressing issues. Since the reelection of Trump, Meta has signaled an even greater rollback of its moderation and fact checking, a decision that earned praise from the president.Still, companies may feel pressure given that a handful of entities and executives have in recent years been held legally responsible for criminal activity on their platforms. Changpeng Zhao, who founded Binance, the worlds largest cryptocurrency exchange, was sentenced to four months in jail last April after pleading guilty to breaking US money-laundering laws, and the company had to forfeit some $4 billion for offenses that included allowing users to bypass sanctions. Then last May, Alexey Pertsev, a Tornado Cash cofounder, was sentenced to more than five years in a Dutch prison for facilitating the laundering of money stolen by, among others, the Lazarus Group, North Koreas infamous state-backed hacking team. And in August last year, French authorities arrested Pavel Durov, the CEO of Telegram, and charged him with complicity in drug trafficking and distribution of child sexual abuse material.I think all social media [companies] should really be looking at the case of Telegram right now, USIPs Tower says. At that CEO level, youre starting to see states try to hold a company accountable for its role in enabling major transnational criminal activity on a global scale.Compounding all the challenges, however, is the integration of cheap and easy-to-use artificial intelligence into scamming operations. The trafficked individuals we spoke to, who had mostly left the compounds before the widespread adoption of generative AI, said that if targets suggested a video call they would deflect or, as a last resort, play prerecorded video clips. Only one described the use of AI by his company; he says he was paid to record himself saying various sentences in ways that reflected different emotions, for the purposes of feeding the audio into an AI model. Recently, reports have emerged of scammers who have used AI-powered face swap and voice-altering products so that they can impersonate their characters more convincingly. Malicious actors can exploit these models, especially open-source models, to produce content at an unprecedented scale, says Gabrielle Tran, senior analyst for technology and society at IST. 
These models are purposefully being fine-tuned to serve as convincing humans.Experts we spoke with warn that if platforms dont pick up the pace on enforcement now, theyre likely to fall even further behind.Every now and again, Gavesh still goes on Facebook to report pages he thinks are scams. He never hears back.But he is working again in the tourism industry and on the path to recovering from his ordeal. I cant say that Im 100% out of the trauma, but Im trying to survive because I have responsibilities, he says.He chose to speak out because he doesnt want anyone else to be trickedinto a scamming compound, or into giving up their life savings to a stranger. Hes seen behind the scenes into a brutal industry that exploits peoples real needs for work, connection, and human contact, and he wants to make sure no one else ends up where he did.Theres a very scary world, he says. A world beyond what we have seen.Peter Guest is a journalist based in London. Emily Fishbein is a freelance journalist focusing on Myanmar.Additional reporting by Nu Nu Lusan.
  • How to save a glacier
    www.technologyreview.com
    Glaciers generally move so slowly you cant see their progress with the naked eye. (Their pace is glacial.) But these massive bodies of ice do march downhill, with potentially planet-altering consequences.Theres a lot we dont understand about how glaciers move and how soon some of the most significant ones could collapse into the sea. That could be a problem, since melting glaciers could lead to multiple feet of sea-level rise this century, potentially displacing millions of people who live and work along the coasts.A new group is aiming not only to further our understanding of glaciers but also to look into options to save them if things move toward a worst-case scenario, as my colleague James Temple outlined in his latest story. One idea: refreezing glaciers in place.The whole thing can sound like science fiction. But once you consider how huge the stakes are, I think it gets easier to understand why some scientists say we should at least be exploring these radical interventions.Its hard to feel very optimistic about glaciers these days. (The Thwaites Glacier in West Antarctica is often called the doomsday glaciernot alarming at all!)Take two studies published just in the last month, for example. The British Antarctic Survey released the most detailed map to date of Antarcticas bedrockthe foundation under the continents ice. With twice as many data points as before, the study revealed that more ice than we thought is resting on bedrock thats already below sea level. That means seawater can flow in and help melt ice faster, so Antarcticas ice is more vulnerable than previously estimated.Another study examined subglacial riversstreams that flow under the ice, often from subglacial lakes. The team found that the fastest-moving glaciers have a whole lot of water moving around underneath them, which speeds melting and lubricates the ice sheet so it slides faster, in turn melting even more ice.And those are just two of the most recent surveys. Look at any news site and its probably delivered the same gnarly message at some point recently: The glaciers are melting faster than previously realized. (Our site has one, too: Greenlands ice sheet is less stable than we thought, from 2016.)The new group is joining the race to better understand glaciers. Arte Glacier Initiative, a nonprofit research organization founded by scientists at MIT and Dartmouth, has already awarded its first grants to researchers looking into how glaciers melt and plans to study the possibility of reversing those fortunes, as James exclusively reported last week.Brent Minchew, one of the groups cofounders and an associate professor of geophysics at MIT, was drawn to studying glaciers because of their potential impact on sea-level rise. But over the years, I became less content with simply telling a more dramatic story about how things were goingand more open to asking the question of what can we do about it, he says.Minchew is among the researchers looking into potential plans to alter the future of glaciers. Strategies being proposed by groups around the world include building physical supports to prop them up and installing massive curtains to slow the flow of warm water that speeds melting. Another approach, which will be the focus of Arte, is called basal intervention. It basically involves drilling holes in glaciers, which would allow water flowing underneath the ice to be pumped out and refrozen, hopefully slowing them down.If you have questions about how all this would work, youre not alone. 
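A rough way to see why draining water from under the ice might slow it down: in textbook sliding laws, the speed at which a glacier slides over its bed falls as the effective pressure (the weight of the ice minus the water pressure beneath it) rises. The toy calculation below, in Python, uses a standard Budd-style sliding relation but entirely made-up coefficients; it illustrates the principle and is not a model of any real glacier.

```python
# Toy illustration only: made-up coefficients, not a model of any real glacier.
# In a Budd-style sliding law, sliding speed u_b = C * tau_b**m / N**q, where
# N is effective pressure: ice overburden minus the water pressure under the ice.
RHO_ICE, G = 917.0, 9.81  # density of ice (kg/m^3), gravity (m/s^2)

def sliding_speed(ice_thickness_m, water_pressure_pa,
                  basal_stress_pa=1.0e5, C=1.0e-9, m=3, q=1):
    """Sliding speed in arbitrary units; C, basal_stress_pa, m, and q are illustrative."""
    overburden_pa = RHO_ICE * G * ice_thickness_m
    effective_pressure = max(overburden_pa - water_pressure_pa, 1.0)
    return C * basal_stress_pa**m / effective_pressure**q

thickness = 1000.0  # a hypothetical kilometer-thick glacier
overburden = RHO_ICE * G * thickness
lubricated = sliding_speed(thickness, water_pressure_pa=0.95 * overburden)
drained = sliding_speed(thickness, water_pressure_pa=0.50 * overburden)
print(f"well-lubricated bed slides ~{lubricated / drained:.0f}x faster than a partly drained one")
```

Even in this cartoon version, the slowdown depends on changing water pressure across a vast area of the bed, which hints at the scale of the engineering involved.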
These are almost inconceivably huge engineering projects, theyd be expensive, and theyd face legal and ethical questions. Nobody really owns Antarctica, and its governed by a huge treatyhow could we possibly decide whether to move forward with these projects?Then theres the question of the potential side effects. Just look at recent news from the Arctic Ice Project, which was researching how to slow the melting of sea ice by covering it with substances designed to reflect sunlight away. (Sea ice is different from glaciers, but some of the key issues are the same.)One of the projects largest field experiments involved spreading tiny silica beads, sort of like sand, over 45,000 square feet of ice in Alaska. But after new research revealed that the materials might be disrupting food chains, the organization announced that its concluding its research and winding down operations.Cutting our emissions of greenhouse gases to stop climate change at the source would certainly be more straightforward than spreading beads on ice, or trying to stop a 74,000-square-mile glacier in its tracks.But were not doing so hot on cutting emissionsin fact, levels of carbon dioxide in the atmosphere rose faster than ever in 2024. And even if the world stopped polluting the atmosphere with planet-warming gases today, things may have already gone too far to save some of the most vulnerable glaciers.The longer I cover climate change and face the situation were in, the more I understand the impulse to at least consider every option out there, even if it sounds like science fiction.This article is from The Spark, MIT Technology Reviews weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.
  • The Download: China's empty data centers, and OpenAI's new practical image generator
    www.technologyreview.com
    This is todays edition ofThe Download,our weekday newsletter that provides a daily dose of whats going on in the world of technology.China built hundreds of AI data centers to catch the AI boom. Now many stand unused.Just months ago, Chinas boom in data center construction was at its height, fueled by both government and private investors. Renting out GPUs to companies that need them for training AI models was once seen as a sure bet.But with the rise of DeepSeek and a sudden change in the economics around AI, the industry is faltering. Prices for GPUs are falling and many newly built facilities are now sitting empty. Read the full story to find out why.Caiwei ChenOpenAIs new image generator aims to be practical enough for designers and advertisersWhats new? OpenAI has released a new image generator thats designed less for typical surrealist AI art and more for highly controllable and practical creation of visualsa sign that OpenAI thinks its tools are ready for use in fields like advertising and graphic design.Why it matters: While most AI models have been great at creating fantastical images or realistic deepfakes, theyve been terrible at identifying certain objects correctly and putting them in their proper place. OpenAIs new model makes progress on technical issues that have plagued AI image generators for years.But in entering this domain, OpenAI has two paths, both difficult. Read the full story.The AI Hype Index: DeepSeek mania, Israels spying tool, and cheating at chessSeparating AI reality from hyped-up fiction isnt always easy. Thats why weve created the AI Hype Indexa simple, at-a-glance summary of everything you need to know about the state of the industry. Take a look at the full index here.The must-readsIve combed the internet to find you todays most fun/important/scary/fascinating stories about technology.1 The Trump administration has barred 80 companies from buying US techThe list of primarily Chinese firms is forbidden from buying American chips. (NYT $)+ The list included a server maker that buys chips from Nvidia. (WSJ $)+ China disputed claims the firms were seeking knowledge for military purposes. (AP News)2 A DOGE staffer provided tech support to a cybercrime ringAnd bragged about trafficking in stolen data and cyberstalking an FBI agent. (Reuters)+ Elon Musk could use DOGEs cuts to steer contracts towards his own firms. (The Guardian)+ Can AI help DOGE slash government budgets? Its complex. (MIT Technology Review)3 The US government has hired a vaccine skeptic to conduct a major vaccine studyThe long-discredited David Geier will oversee analysis of whether jabs cause autism. (WP $)+ The White House appears to be targeting mRNA vaccines. (FT $)+ Why childhood vaccines are a public health success story. (MIT Technology Review)4 Microsoft has unveiled two deep reasoning Copilot AI agentsThe two agents, called Researcher and Analyst, are designed to do just that. (The Verge)+ How ChatGPT search paves the way for AI agents. (MIT Technology Review)5 Inside the rise of Chinese hackingThe cyber threat posed by the country is increasingly sophisticatedand aggressive. (Economist $)6 Google has instructed workers to remove DEI terms from their workThe company has offered up alternative language to use in its place.(The Information $)7 Synthesia is offering shares to reward human actors for its AI avatarsThe compensation scheme is the first of its kind. (FT $)+ Synthesias hyperrealistic deepfakes will soon have full bodies. 
(MIT Technology Review)8 Chinas RedNote is working to keep its influx of TikTok refugeesTo do so, itll need to expand its user base outside the Chinese diaspora. (Rest of World)9 This operating system is designed to keep running during civilizations collapseCollapse OS is designed to give us access to lost knowledge in case of disaster. (Wired $)10 No one really knows how long people liveLongevity research is bogged down in bad record-keeping. (NY Mag $)+ The quest to legitimize longevity medicine. (MIT Technology Review)Quote of the dayThere are so many great reasons to be on Signal. Now including the opportunity for the vice president of the United States of America to randomly add you to a group chat for coordination of sensitive military operations.Moxie Marlinspike, founder of secure messaging platform Signal, pokes fun at the fallout surrounding US officials accidentally adding a journalist to a private military group chat in a post on X.The big storyLongevity enthusiasts want to create their own independent state. Theyre eyeing Rhode Island.May 2023Jessica HamzelouI recently traveled to Montenegro for a gathering of longevity enthusiasts. All the attendees were super friendly, and the sense of optimism was palpable. Theyre all confident well be able to find a way to slow or reverse agingand they have a bold plan to speed up progress.Around 780 of these people have created a pop-up city that hopes to circumvent the traditional process of clinical trials. They want to create an independent state where like-minded innovators can work together in an all-new jurisdiction that gives them free rein to self-experiment with unproven drugs. Welcome to Zuzalu. Read the full story.We can still have nice thingsA place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet em at me.)+ Good newsit turns out that fungi are actually pretty good at saving imperiled plants.+ Ever wondered what ancient Egyptian mummy remains smell like? These intrepid scientists found out.+ Kudos to this terrible artist, who is a surprise smash hit.+ Check out this handy guide to walking the path of everyday enlightenment.
  • The AI Hype Index: DeepSeek mania, Israel's spying tool, and cheating at chess
    www.technologyreview.com
    Separating AI reality from hyped-up fiction isn't always easy. That's why we've created the AI Hype Index, a simple, at-a-glance summary of everything you need to know about the state of the industry. While AI models are certainly capable of creating interesting and sometimes entertaining material, their output isn't necessarily useful. Google DeepMind is hoping that its new robotics model could make machines more receptive to verbal commands, paving the way for us to simply speak orders to them aloud. Elsewhere, the Chinese startup Monica has created Manus, which it claims is the very first general AI agent to complete truly useful tasks. And burnt-out coders are allowing AI to take the wheel entirely in a new practice dubbed "vibe coding."
  • China built hundreds of AI data centers to catch the AI boom. Now many stand unused.
    www.technologyreview.com
    A year or so ago, Xiao Li was seeing floods of Nvidia chip deals on WeChat. A real estate contractor turned data center project manager, he had pivoted to AI infrastructure in 2023, drawn by the promise of Chinas AI craze.At that time, traders in his circle bragged about securing shipments of high-performing Nvidia GPUs that were subject to US export restrictions. Many were smuggled through overseas channels to Shenzhen. At the height of the demand, a single Nvidia H100 chip, a kind that is essential to training AI models, could sell for up to 200,000 yuan ($28,000) on the black market.Now, his WeChat feed and industry group chats tell a different story. Traders are more discreet in their dealings, and prices have come back down to earth. Meanwhile, two data center projects Li is familiar with are struggling to secure further funding from investors who anticipate poor returns, forcing project leads to sell off surplus GPUs. It seems like everyone is selling, but few are buying, he says.Just months ago, a boom in data center construction was at its height, fueled by both government and private investors. However, many newly built facilities are now sitting empty. According to people on the ground who spoke to MIT Technology Reviewincluding contractors, an executive at a GPU server company, and project managersmost of the companies running these data centers are struggling to stay afloat. The local Chinese outlets Jiazi Guangnian and 36Kr report that up to 80% of Chinas newly built computing resources remain unused.Renting out GPUs to companies that need them for training AI modelsthe main business model for the new wave of data centerswas once seen as a sure bet. But with the rise of DeepSeek and a sudden change in the economics around AI, the industry is faltering.The growing pain Chinas AI industry is going through is largely a result of inexperienced playerscorporations and local governmentsjumping on the hype train, building facilities that arent optimal for todays need, says Jimmy Goodrich, senior advisor for technology at the RAND Corporation.The upshot is that projects are failing, energy is being wasted, and data centers have become distressed assets whose investors are keen to unload them at below-market rates. The situation may eventually prompt government intervention, he says: The Chinese government is likely to step in, take over, and hand them off to more capable operators.A chaotic building boomWhen ChatGPT exploded onto the scene in late 2022, the response in China was swift. The central government designated AI infrastructure as a national priority, urging local governments to accelerate the development of so-called smart computing centersa term coined to describe AI-focused data centers.In 2023 and 2024, over 500 new data center projects were announced everywhere from Inner Mongolia to Guangdong, according to KZ Consulting, a market research firm. According to the China Communications Industry Association Data Center Committee, a state-affiliated industry association, at least 150 of the newly built data centers were finished and running by the end of 2024. State-owned enterprises, publicly traded firms, and state-affiliated funds lined up to invest in them, hoping to position themselves as AI front-runners. Local governments heavily promoted them in the hope theyd stimulate the economy and establish their region as a key AI hub.However, as these costly construction projects continue, the Chinese frenzy over large language models is losing momentum. 
In 2024 alone, over 144 companies registered with the Cyberspace Administration of Chinathe countrys central internet regulatorto develop their own LLMs. Yet according to the Economic Observer, a Chinese publication, only about 10% of those companies were still actively investing in large-scale model training by the end of the year.Chinas political system is highly centralized, with local government officials typically moving up the ranks through regional appointments. As a result, many local leaders prioritize short-term economic projects that demonstrate quick resultsoften to gain favor with higher-upsrather than long-term development. Large, high-profile infrastructure projects have long been a tool for local officials to boost their political careers.The post-pandemic economic downturn only intensified this dynamic. With Chinas real estate sectoronce the backbone of local economiesslumping for the first time in decades, officials scrambled to find alternative growth drivers. In the meantime, the countrys once high-flying internet industry was also entering a period of stagnation. In this vacuum, AI infrastructure became the new stimulus of choice.AI felt like a shot of adrenaline, says Li. A lot of money that used to flow into real estate is now going into AI data centers.By 2023, major corporationsmany of them with little prior experience in AIbegan partnering with local governments to capitalize on the trend. Some saw AI infrastructure as a way to justify business expansion or boost stock prices, says Fang Cunbao, a data center project manager based in Beijing. Among them were companies like Lotus, an MSG manufacturer, and Jinlun Technology, a textile firmhardly the names one would associate with cutting-edge AI technology.This gold-rush approach meant that the push to build AI data centers was largely driven from the top down, often with little regard for actual demand or technical feasibility, say Fang, Li, and multiple on-the-ground sources, who asked to speak anonymously for fear of political repercussions. Many projects were led by executives and investors with limited expertise in AI infrastructure, they say. In the rush to keep up, many were constructed hastily and fell short of industry standards.Putting all these large clusters of chips together is a very difficult exercise, and there are very few companies or individuals who know how to do it at scale, says Goodrich. This is all really state-of-the-art computer engineering. Id be surprised if most of these smaller players know how to do it. A lot of the freshly built data centers are quickly strung together and dont offer the stability that a company like DeepSeek would want.To make matters worse, project leaders often relied on middlemen and brokerssome of whom exaggerated demand forecasts or manipulated procurement processes to pocket government subsidies, sources say.By the end of 2024, the excitement that once surrounded Chinas data center boom was curdling into disappointment. The reason is simple: GPU rental is no longer a particularly lucrative business.The DeepSeek reckoningThe business model of data centers is in theory straightforward: They make money by renting out GPU clusters to companies that need computing capacity for AI training. In reality, however, securing clients is proving difficult. Only a few top tech companies in China are now drawing heavily on computing power to train their AI models. 
Many smaller players have been giving up on pretraining their models or otherwise shifting their strategy since the rise of DeepSeek, which broke the internet with R1, its open-source reasoning model that matches the performance of ChatGPT o1 but was built at a fraction of its cost.DeepSeek is a moment of reckoning for the Chinese AI industry. The burning question shifted from Who can make the best large language model? to Who can use them better? says Hangcheng Cao, an assistant professor of information systems at Emory University.The rise of reasoning models like DeepSeeks R1 and OpenAIs ChatGPT o1 and o3 has also changed what businesses want from a data center. With this technology, most of the computing needs come from conducting step-by-step logical deductions in response to users queries, not from the process of training and creating the model in the first place. This reasoning process often yields better results but takes significantly more time. As a result, hardware with low latency (the time it takes for data to pass from one point on a network to another) is paramount. Data centers need to be located near major tech hubs to minimize transmission delays and ensure access to highly skilled operations and maintenance staff.This change means many data centers built in central, western, and rural Chinawhere electricity and land are cheaperare losing their allure to AI companies. In Zhengzhou, a city in Lis home province of Henan, a newly built data center is even distributing free computing vouchers to local tech firms but still struggles to attract clients.Additionally, a lot of the new data centers that have sprung up in recent years were optimized for pretraining workloadslarge, sustained computations run on massive data setsrather than for inference, the process of running trained reasoning models to respond to user inputs in real time. Inference-friendly hardware differs from whats traditionally used for large-scale AI training.GPUs like Nvidia H100 and A100 are designed for massive data processing, prioritizing speed and memory capacity. But as AI moves toward real-time reasoning, the industry seeks chips that are more efficient, responsive, and cost-effective. Even a minor miscalculation in infrastructure needs can render a data center suboptimal for the tasks clients require.In these circumstances, the GPU rental price has dropped to an all-time low. A recent report from the Chinese media outlet Zhineng Yongxian said that an Nvidia H100 server configured with eight GPUs now rents for 75,000 yuan per month, down from highs of around 180,000. Some data centers would rather leave their facilities sitting empty than run the risk of losing even more money because they are so costly to run, says Fan: The revenue from having a tiny part of the data center running simply wouldnt cover the electricity and maintenance cost.Its paradoxicalChina faces the highest acquisition costs for Nvidia chips, yet GPU leasing prices are extraordinarily low, Li says. Theres an oversupply of computational power, especially in central and west China, but at the same time, theres a shortage of cutting-edge chips.However, not all brokers were looking to make money from data centers in the first place. Instead, many were interested in gaming government benefits all along. Some operators exploit the sector for subsidized green electricity, obtaining permits to generate and sell power, according to Fang and some Chinese media reports. 
Instead of using the energy for AI workloads, they resell it back to the grid at a premium. In other cases, companies acquire land for data center development to qualify for state-backed loans and credits, leaving facilities unused while still benefiting from state funding, according to the local media outlet Jiazi Guangnian.Towards the end of 2024, no clear-headed contractor and broker in the market would still go into the business expecting direct profitability, says Fang. Everyone I met is leveraging the data center deal for something else the government could offer.A necessary evilDespite the underutilization of data centers, Chinas central government is still throwing its weight behind a push for AI infrastructure. In early 2025, it convened an AI industry symposium, emphasizing the importance of self-reliance in this technology.Major Chinese tech companies are taking note, making investments aligning with this national priority. Alibaba Group announced plans to invest over $50 billion in cloud computing and AI hardware infrastructure over the next three years, while ByteDance plans to invest around $20 billion in GPUs and data centers.In the meantime, companies in the US are doing likewise. Major tech firms including OpenAI, Softbank, and Oracle have teamed up to commit to the Stargate initiative, which plans to invest up to $500 billion over the next four years to build advanced data centers and computing infrastructure. Given the AI competition between the two countries, experts say that China is unlikely to scale back its efforts. If generative AI is going to be the killer technology, infrastructure is going to be the determinant of success, says Goodrich, the tech policy advisor to RAND.The Chinese central government will likely see [underused data centers] as a necessary evil to develop an important capability, a growing pain of sorts. You have the failed projects and distressed assets, and the state will consolidate and clean it up. They see the end, not the means, Goodrich says.Demand remains strong for Nvidia chips, and especially the H20 chip, which was custom-designed for the Chinese market. One industry source, who requested not to be identified under his company policy, confirmed that the H20, a lighter, faster model optimized for AI inference, is currently the most popular Nvidia chip, followed by the H100, which continues to flow steadily into China even though sales are officially restricted by US sanctions. Some of the new demand is driven by companies deploying their own versions of DeepSeeks open-source models.For now, many data centers in China sit in limbobuilt for a future that has yet to arrive. Whether they will find a second life remains uncertain. For Fang Cunbao, DeepSeeks success has become a moment of reckoning, casting doubt on the assumption that an endless expansion of AI infrastructure guarantees progress. Thats just a myth, he now realizes. At the start of this year, Fang decided to quit the data center industry altogether. The market is too chaotic. The early adopters profited, but now its just people chasing policy loopholes, he says. Hes decided to go into AI education next.What stands between now and a future where AI is actually everywhere, he says, is not infrastructure anymore, but solid plans to deploy the technology.
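To make the rental-price collapse concrete, here is a rough back-of-the-envelope sketch in Python using the figures quoted above (a roughly 200,000 yuan black-market H100 and monthly rents for an eight-GPU server falling from about 180,000 to 75,000 yuan). The utilization, electricity, and maintenance numbers below are illustrative assumptions, not reported figures.

```python
# Rough sketch of why falling rents hurt operators. Rental and chip prices are
# the figures cited in the article; utilization and operating costs are assumptions.
H100_PRICE_YUAN = 200_000                        # per GPU, black-market peak cited above
GPUS_PER_SERVER = 8
SERVER_COST = H100_PRICE_YUAN * GPUS_PER_SERVER  # ignores chassis, networking, building

RENT_PEAK = 180_000      # yuan per server per month at the height of demand (cited)
RENT_NOW = 75_000        # yuan per server per month today (cited)

OPEX_PER_MONTH = 20_000  # assumed electricity + maintenance per server
UTILIZATION = 0.5        # assumed share of the month the server is actually rented

def payback_months(rent: float) -> float:
    """Months needed to recoup the GPU outlay at a given monthly rent."""
    margin = rent * UTILIZATION - OPEX_PER_MONTH
    return float("inf") if margin <= 0 else SERVER_COST / margin

print(f"Payback at peak rents:    {payback_months(RENT_PEAK):.1f} months")
print(f"Payback at current rents: {payback_months(RENT_NOW):.1f} months")
```

Under these assumptions the payback period roughly quadruples, from about two years to well over seven, which is consistent with operators preferring to idle facilities rather than run them at a loss.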
  • OpenAI's new image generator aims to be practical enough for designers and advertisers
    www.technologyreview.com
    OpenAI has released a new image generator thats designed less for typical surrealist AI art and more for highly controllable and practical creation of visualsa sign that OpenAI thinks its tools are ready for use in fields like advertising and graphic design.The image generator, which is now part of the companys GPT-4o model, was promised by OpenAI last May but wasnt released. Requests for generated images on ChatGPT were filled by an older image generator called DALL-E. OpenAI has been tweaking the new model since then and will now release it over the coming weeks to all tiers of users starting today, replacing the older one.The new model makes progress on technical issues that have plagued AI image generators for years. While most have been great at creating fantastical images or realistic deepfakes, theyve been terrible at something called binding, which refers to the ability to identify certain objects correctly and put them in their proper place (like a sign that says hot dogs properly placed above a food cart, not somewhere else in the image).It was only a few years ago that models started to succeed at things like Put the red cube on top of the blue cube, a feature that is essential for any creative professional use of AI. Generators also struggle with text generation, typically creating distorted jumbles of letter shapes that look more like captchas than readable text.OPENAIExample images from OpenAI show progress here. The model is able to generate 12 discrete graphics within a single imagelike a cat emoji or a lightning boltand place them in proper order. Another shows four cocktails accompanied by recipe cards with accurate, legible text. More images show comic strips with text bubbles, mock advertisements, and instructional diagrams. The model also allows you to upload images to be modified, and it will be available in the video generator Sora as well as in GPT-4o.OPENAIIts a new tool for communication, says Gabe Goh, the lead designer on the generator at OpenAI. Kenji Hata, a researcher at OpenAI who also worked on the tool, puts it a different way: I think the whole idea is that were going away from, like, beautiful art. It can still do that, he clarifies, but it will do more useful things too. You can actually make images work for you, he says, and not just just look at them.Its a clear sign that OpenAI is positioning the tool to be used more by creative professionals: think graphic designers, ad agencies, social media managers, or illustrators. But in entering this domain, OpenAI has two paths, both difficult.One, it can target the skilled professionals who have long used programs like Adobe Photoshop, which is also investing heavily in AI tools that can fill images with generative AI.Adobe really has a stranglehold on this market, and theyre moving fast enough that I dont know how compelling it is for people to switch, says David Raskino, the cofounder and chief technical officer of Irreverent Labs, which works on AI video generation.The second option is to target casual designers who have flocked to tools like Canva (which has also been investing in AI). This is an audience that may not have ever needed technically demanding software like Photoshop but would use more casual design tools to create visuals. 
To succeed here, OpenAI would have to lure people away from platforms built for design in hopes that the speed and quality of its own image generator would make the switch worth it (at least for part of the design process).Its also possible the tool will simply be used as many image generators are now: to create quick visuals that are good enough to accompany social media posts. But with OpenAI planning massive investments, including participation in the $500 billion Stargate project to build new data centers at unprecedented scale, its hard to imagine that the image generator wont play some ambitious moneymaking role.Regardless, the fact that OpenAIs new image generator has pushed through notable technical hurdles has raised the bar for other AI companies. Clearing those hurdles likely required lots of very specific data, Raskino says, like millions of images in which text is properly displayed at lots of different angles and orientations. Now competing image generators will have to match those achievements to keep up.The pace of innovation should increase here, Raskino says.
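As a rough illustration of the kind of request a designer might make once the model is exposed through OpenAI's API, here is a minimal sketch using the image endpoint of the OpenAI Python SDK. The article does not name an API identifier for the new model, so "image-model-placeholder" below is a stand-in, and the prompt simply exercises the binding behavior described above: specific text placed on a specific object.

```python
# Minimal sketch, assuming an API-accessible version of the new image model.
# "image-model-placeholder" is a stand-in identifier, not a confirmed model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="image-model-placeholder",  # assumption: substitute the actual model name
    prompt=(
        "A street food cart on a sunny corner with a sign above it that reads "
        "'HOT DOGS' in clear block letters, photorealistic"
    ),
    n=1,
    size="1024x1024",
)

# The endpoint returns hosted URLs (or base64 data) for the generated images.
print(result.data[0].url)
```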
  • The Download: creating spare human bodies, and ditching US AI models
    www.technologyreview.com
    This is todays edition ofThe Download,our weekday newsletter that provides a daily dose of whats going on in the world of technology.Ethically sourced spare human bodies could revolutionize medicineMany challenges in medicine stem, in large part, from a common root cause: a severe shortage of ethically-sourced human bodies.There might be a way to get out of this moral and scientific deadlock. Recent advances in biotechnology now provide a pathway to producing living human bodies without the neural components that allow us to think, be aware, or feel pain.Many will find this possibility disturbing, but if researchers and policymakers can find a way to pull these technologies together, we may one day be able to create spare bodies, both human and nonhuman.These could revolutionize medical research and drug development, greatly reducing the need for animal testing, rescuing many people from organ transplant lists, and allowing us to produce more effective drugs and treatments. All without crossing most peoples ethical lines. Read the full story.Why the world is looking to ditch US AI modelsEileen GuoA few weeks ago, when I was at the digital rights conference RightsCon in Taiwan, I watched in real time as civil society organizations from around the world, including the US, grappled with the loss of one of the biggest funders of global digital rights work: the United States government.Some policymakers and business leadersin Europe, in particularare reconsidering their reliance on US-based tech and asking whether they can quickly spin up better, homegrown alternatives. This is particularly true for AI. Read the full story.This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.How to delete your 23andMe dataConsumer DNA testing company 23andMe has filed for bankruptcy protection, following months of speculation around CEO Anne Wojcickis plans to take the firm private. The news means that 23andMeand the genetic data of millions of its customerscould soon be put up for sale.But although customers worried about the security of their DNA data can request its deletion, truly scrubbing your information from the companys archives is easier said than done. Read the full story.Rhiannon WilliamsThe must-readsIve combed the internet to find you todays most fun/important/scary/fascinating stories about technology.1 US security leaders accidentally added a journalist to a secret Signal chatThe group used the unapproved platform to discuss classified military strikes in Yemen. (The Atlantic $)+ It raises questions over how the US government is handling sensitive information. (Vox)+ The Trump administration has embraced the encrypted messaging app. (WP $)2 Donald Trumps H-1B visa crackdown could seriously harm US tech firmsAmazon is likely to be hit particularly hard. (Rest of World)+ US visa and green-card holders are being detained and deported. (NY Mag $)+ Tariffs, DOGE and scams are weighing heavily on the tech industry. (Insider $)+ America relies heavily on skilled overseas workers. (The Conversation)3 DeepSeeks runaway success is shaking up Chinas AI startupsTheyre overhauling their business models in an effort to keep up. (FT $)+ The AI development gap between China and the US is narrowing. (Reuters)+ How DeepSeek ripped up the AI playbookand why everyones going to follow its lead. 
(MIT Technology Review)4 AI companies dont want to be regulated anymoreEmboldened by the Trump administration, the industrys biggest firms are lobbying for fewer rules. (NYT $)5 Colorado is experimenting with psychedelic mushroomsIt plans to administer them in healing centers across the state. (Undark)+ Job titles of the future: Pharmaceutical-grade mushroom grower. (MIT Technology Review)6 Tesla sales are plummeting in EuropeAs customers turn to its Chinese rival BYD. (The Guardian)+ Elon Musks companies are under increasing pressure from their rivals. (Economist $)+ BYD was one of our 2024 Climate Tech Companies to Watch. (MIT Technology Review)7 This Indian city relies on the wind to stay coolPalava City is a living testbed of technological innovation. (WP $)+ No power, no fans, no AC: The villagers fighting to survive Indias deadly heatwaves. (MIT Technology Review)8 Filming your online routine is not for the faint of heartAbsurd clips are doing the rounds on social media yet again. (NY Mag $)9 Floating wood could help to refreeze the ArcticBy helping to seed the formation of new ice. (New Scientist $)+ Inside a new quest to save the doomsday glacier. (MIT Technology Review)10 Silicon Valley workers are ditching dating appsInstead, theyre attending carefully vetted dating meetups IRL. (Wired $)Quote of the dayThe path to saving TikTok should run through Capitol Hill.Three Democratic senators urge Donald Trump to work with Congress to save TikTok from shutting down in the US, the Verge reports.The big storyHow AI is changing gymnastics judgingJanuary 2024The 2023 World Championships last October marked the first time an AI judging system was used on every apparatus in a gymnastics competition. There are obvious upsides to using this kind of technology: AI could help take the guesswork out of the judging technicalities. It could even help to eliminate biases, making the sport both more fair and more transparent.At the same time, others fear AI judging will take away something that makes gymnastics special. Gymnastics is a subjective sport, like diving or dressage, and technology could eliminate the judges role in crafting a narrative.For better or worse, AI has officially infiltrated the world of gymnastics. The question now is whether it really makes it fairer. Read the full story.Jessica Taylor PriceWe can still have nice thingsA place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet em at me.)+ These plants are quite possibly math geniuses.+ Inside the weird and wonderful world of animal art.+ Get me on a (sustainable) trip to the Cook Islands immediately.+ Its officially cherry blossom season around the world!
  • How to delete your 23andMe data
    www.technologyreview.com
    This story was originally published in October 2024. In March 2025, 23andMe filed for bankruptcy and announced its plans to facilitate a sale process to maximize the value of its business.MIT Technology ReviewsHow Toseries helps you get things done.Things arent looking good for 23andMe. The consumer DNA testing company recently parted ways with all its board members but CEO Anne Wojcicki over her plans to take the company private. Its also still dealing with the fallout of a major security breach last October, which saw hackers access the personal data of around 5.5 million customers.23andMes business is built on taking saliva samples from its customers. The DNA from those samples is processed and analyzed in its labs to produce personalized genetic reports detailing a users unique health and ancestry. The uncertainty swirling around the companys future and potential new ownership has prompted privacy campaigners to urge users to delete their data.Its not just you. If anyone in your family gave their DNA to 23&Me, for all of your sakes, close your/their account now, Meredith Whittaker, president of the encrypted messaging platform Signal, posted on X after the boards resignation.Customers should consider current threats to their privacy as well as threats that may exist in the futuresome of which may be magnified if 23AndMe were sold to a new owner, says Jason Kelley, activism director at the Electronic Frontier Foundation. 23AndMe has protections around this much of this. But a potential sale could put your data in the hands of a far less scrupulous company.A spokesperson for 23andMe said that the company has strong customer privacy protections in place, and does not share customer data with third parties without customers consent. Our research program is opt-in, requiring customers to go through a separate, informed consent process before joining, they say. We are committed to protecting customer data and are consistently focused on maintaining the privacy of our customers. That will not change.Why deleting your account comes with a caveatDeleting your data from 23andMe is permanent and cannot be reversed. But some of that data will be retained to comply with the companys legal obligations, according to its privacy statement.That means 23andMe and its third-party genotyping laboratory will hang onto some of your genetic information, plus your date of birth and sexalongside data linked to your account deletion request, including your email address and deletion request identifier. When MIT Technology Review asked 23andMe about the nature of the genetic information it retains, it referred us to its privacy policy but didnt provide any other details.Any information youve previously provided and consented to being used in 23andMe research projects also cannot be removed from ongoing or completed studies, although it will not be used in any future ones.Beyond the laboratories that process the saliva samples, the company does not share customer information with anyone else unless the user has given permission for it to do so, the spokesperson says, including employers, insurance companies, law enforcement agencies, or any public databases.We treat law enforcement inquiries, such as a valid subpoena or court order, with the utmost seriousness. We use all legal measures to resist any and all requests in order to protect our customers privacy, the spokesperson says. 
To date, we have successfully challenged these requests and have not released any information to law enforcement.

For those who still want their data deleted, here's how you go about it.

How to delete your data from 23andMe

1. Log into your account and navigate to Settings.
2. Under Settings, scroll to the section titled "23andMe data." Select "View."
3. You may be asked to enter your date of birth for extra security.
4. In the next section, you'll be asked which, if any, personal data you'd like to download from the company (onto a personal, not public, computer). Once you're finished, scroll to the bottom and select "Permanently delete data."
5. You should then receive an email from 23andMe detailing its account deletion policy and requesting that you confirm your request. Once you confirm you'd like your data to be deleted, the deletion will begin automatically and you'll immediately lose access to your account.

What about your genetic sample?

When you set up your 23andMe account, you're given the option either to have your saliva sample securely destroyed or to have it stored for future testing. If you've previously opted to store your sample but now want to delete your 23andMe account, the company says, it will destroy the sample for you as part of the account deletion process.

What if you want to keep your genetic data, just not on 23andMe?

Even if you want your data taken off 23andMe, there are reasons why you might still want to have it hosted on other DNA sites, for genealogical research, for example. And some people like the idea of having their DNA results stored on more than one database in case something happens to any one company. This is where downloading your data comes into play. FamilyTreeDNA, MyHeritage, GEDmatch, and Living DNA are among the DNA testing companies that allow you to upload existing DNA results from other companies, although Ancestry and 23andMe don't accept uploads.

How to download your raw genetic data

1. Navigate directly to you.23andme.com/tools/data/.
2. Click on your profile name in the top right-hand corner. Then select "Resources" from the menu.
3. Select "Browse raw genotyping data" and then "Download."
4. Visit Account settings and click on "View" under "23andMe data."
5. Enter your date of birth for security purposes.
6. Tick the box indicating that you understand the limitations and risks associated with uploading your information to third-party sites and press "Submit request."

23andMe warns its users that uploading their data to other services could put genetic data privacy at risk. For example, bad actors could use someone else's DNA data to create fake genetic profiles. They could use these profiles to match with a relative and access personal identifying information and specific DNA variants, such as information about any disease risk variants you might carry, the spokesperson says, adding: "This is one reason why we don't support uploading DNA to 23andMe at this time."

Update: This article has been updated to reflect that when asked about the nature of the genetic information it retains, 23andMe referred us to its privacy policy but didn't provide any other details.
  • Ethically sourced spare human bodies could revolutionize medicine
    www.technologyreview.com
    Why do we hear about medical breakthroughs in mice, but rarely see them translate into cures for human disease? Why do so few drugs that enter clinical trials receive regulatory approval? And why is the waiting list for organ transplantation so long? These challenges stem in large part from a common root cause: a severe shortage of ethically-sourced human bodies.It may be disturbing to characterize human bodies in such commodifying terms, but the unavoidable reality is that human biological materials are an essential commodity in medicine, and persistent shortages of these materials create a major bottleneck to progress. This imbalance between supply and demand is the underlying cause of the organ shortage crisis, with more than 100,000 patients currently waiting for a solid organ transplant in the US alone. It also forces us to rely heavily on animals in medical research, a practice that cant replicate major aspects of human physiology and necessitates the infliction of harm to sentient creatures. In addition, the safety and efficacy of any experimental drug must still be confirmed in clinical trials on living human bodies. These costly trials risk harm to patients, can take a decade or longer to complete, and make it through to approval less than 15% of the time.There might be a way to get out of this moral and scientific deadlock. Recent advances in biotechnology now provide a pathway to producing living human bodies without the neural components that allow us to think, be aware, or feel pain. Many will find this possibility disturbing, but if researchers and policymakers can find a way to pull these technologies together, we may one day be able to create spare bodies, both human and nonhuman. These could revolutionize medical research and drug development, greatly reducing the need for animal testing, rescuing many people from organ transplant lists, and allowing us to produce more effective drugs and treatments. All without crossing most peoples ethical lines.Bringing technologies togetherAlthough it may seem like science fiction, recent technological progress has pushed this concept into the realm of plausibility. Pluripotent stem cells, one of the earliest cell types to form during development, can give rise to every type of cell in the adult body. Recently, researchers have used these stem cells to create structures that seem to mimic the early development of actual human embryos. At the same time, artificial uterus technology is rapidly advancing, and other pathways may be opening to allow for the development of fetuses outside of the body.By integrating these different technologies and using established genetic techniques to inhibit brain development, it is possible to envision the creation of bodyoids a potentially unlimited source of human bodies, developed entirely outside of a human body from stem cells, that lack sentience or the ability to feel pain.There are still many technical roadblocks to achieving this vision, but we have reason to expect that bodyoids couldradically transform biomedical research by addressing critical limitations in the current models of research, drug development and medicine. 
Among many other benefits, theywould offer an almost unlimited source of organs, tissues and cells for use in transplantation.It could even be possible to generate organs directly from a patients own cells, essentially cloning their biological material to ensure that transplanted tissues are a perfect immunological match to a patient and thus eliminating the need for lifelong immunosuppression. Bodyoids developed from a patients cells could also allow for personalized screening of drugs, allowing physicians to directly assess the effect of different interventions in a biological model that accurately reflects a patients own personal genetics and physiology. We can even envision using animal bodyoidsin agriculture, as a substitute for the use of sentient animal species.Of course, exciting possibilities are not certainties. We do not know whether the embryo models recently created from stem cells could give rise to living people or, thus far, even to living mice. We do not know when, or whether, an effective technique will be found for successfully gestating human bodies entirely outside a person. We cannot be sure whether such bodyoids can survive without ever having developed brains or the parts of brains associated with consciousness, or whether they would still serve as accurate models for living people without those brain functions. Even if it all works, it may not be practical or economical to grow bodyoids, possibly for many years, until they can be mature enough to be useful for our ends. Each of these questions will require substantial research and time. But we believe this idea is now plausible enough to justify discussing both the technical feasibility and ethical implications.Ethical considerations and societal implicationsBodyoids could address many ethical problems in modern medicine, offering ways to avoid unnecessary pain and suffering. For example, they could offer an ethical alternative to the way we currently use nonhuman animals for research and food, providing meat or other products with no animal suffering or awareness.But when we come to human bodyoids, the issues become harder. Many will find the concept grotesque or appalling. And for good reason. We have an innate respect for human life in all its forms. We do not allow broad research on people who no longer have consciousness or, in some cases, never had it.At the same time, we know much can be gained from studying the human body. We learn much from the bodies of the dead, which these days are used for teaching and research only with consent. In laboratories, we study cells and tissues that were taken, with consent, from the bodies of the dead and the living. Recently we have even begun using for experiments the animated cadavers of people who have been declared legally dead, who have lost all brain function but whose other organs continue to function with mechanical assistance. Genetically modified pig kidneys have been connected to, or transplanted into, these legally dead but physiologically active cadavers to help researchers determine whether they would work in living people.In all these cases, nothing was, legally, a living human being at the time it was used for research. Human bodyoids would also fall into that category. But there are still a number of issues worth considering. The first is consent: The cells used to make bodyoids would have to come from someone, and wed have to make sure that this someone consented to this particular, likely controversial, use. 
But perhaps the deepest issue is that bodyoids might diminish the human status of real people who lack consciousness or sentience. Thus far, we have held to a standard that requires us to treat all humans born alive as people, entitled to life and respect. Would bodyoidscreated without pregnancy, parental hopes, or indeed parentsblur that line? Or would we consider a bodyoid a human being, entitled to the same respect? If so, whyjust because it looks like us? A sufficiently detailed mannequin can meet that test. Because it looks like us and is alive? Because it is alive and has our DNA? These are questions that will require careful thought.A call to actionUntil recently, the idea of making something like a bodyoid would have been relegated to the realms of science fiction and philosophical speculation. But now it is at least plausibleand possibly revolutionary. It is time for it to be explored.The potential benefitsfor both human patients and sentient animal speciesare great. Governments, companies and private foundations should start thinking about bodyoids as a possible path for investment.There is no need to start with humanswe can begin exploring the feasibility of this approach with rodents or other research animals.As we proceed, the ethical and social issues are at least as important as the scientific ones. Just because something can be done does not mean it should be done. Even if it looks possible, determining whether we should make bodyoids, nonhuman or human, will require considerable thought, discussion, and debate. Some of that will be by scientists, ethicists, and others with special interest or knowledge. But ultimately, the decisions will be made by societies and governments.The time to start those discussions is now, when a scientific pathway seems clear enough for us to avoid pure speculation but before the world is presented with a troubling surprise. The announcement of the birth of Dolly the cloned sheep back in the 1990s launched a hysterical reaction, complete with speculation about armies of cloned warrior slaves. Good decisions require more preparation. The path toward realizing the potential of bodyoids will not be without challenges; indeed, it may never be possible to get there, or even if it is possible, the path may never be taken. Caution is warranted, but so is bold vision; the opportunity is too important to ignore.Carsten T. Charlesworth is a postdoctoral fellow at the Institute of Stem Cell Biology and Regenerative Medicine (ISCBRM) at Stanford University. Henry T. Greely is the Deane F. and Kate Edelman Johnson Professor of Law and director of the Center for Law and the Biosciences at Stanford University. Hiromitsu Nakauchi is a professor of genetics and an ISCBRM faculty member at Stanford University and a distinguished university professor at the Institute of Science Tokyo.
  • Why the world is looking to ditch US AI models
    www.technologyreview.com
    This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first,sign up here.This weeks edition of The Algorithm is brought to you not by your usual host, James ODonnell, but Eileen Guo, an investigative reporter at MIT Technology Review.A few weeks ago, when I was at the digital rights conference RightsCon in Taiwan, I watched in real time as civil society organizations from around the world, including the US, grappled with the loss of one of the biggest funders of global digital rights work: the United States government.As I wrote in my dispatch, the Trump administrations shocking, rapid gutting of the US government (and its push into what some prominent political scientists call competitive authoritarianism) also affects the operations and policies of American tech companiesmany of which, of course, have users far beyond US borders. People at RightsCon said they were already seeing changes in these companies willingness to engage with and invest in communities that have smaller user basesespecially non-English-speaking ones.As a result, some policymakers and business leadersin Europe, in particularare reconsidering their reliance on US-based tech and asking whether they can quickly spin up better, homegrown alternatives. This is particularly true for AI.One of the clearest examples of this is in social media. Yasmin Curzi, a Brazilian law professor who researches domestic tech policy, put it to me this way: Since Trumps second administration, we cannot count on [American social media platforms] to do even the bare minimum anymore.Social media content moderation systemswhich already use automation and are also experimenting with deploying large language models to flag problematic postsare failing to detect gender-based violence in places as varied as India, South Africa, and Brazil. If platforms begin to rely even more on LLMs for content moderation, this problem will likely get worse, says Marlena Wisniak, a human rights lawyer who focuses on AI governance at the European Center for Not-for-Profit Law. The LLMs are moderated poorly, and the poorly moderated LLMs are then also used to moderate other content, she tells me. Its so circular, and the errors just keep repeating and amplifying.Part of the problem is that the systems are trained primarily on data from the English-speaking world (and American English at that), and as a result, they perform less well with local languages and context.Even multilingual language models, which are meant to process multiple languages at once, still perform poorly with non-Western languages. For instance, one evaluation of ChatGPTs response to health-care queries found that results were far worse in Chinese and Hindi, which are less well represented in North American data sets, than in English and Spanish.For many at RightsCon, this validates their calls for more community-driven approaches to AIboth in and out of the social media context. These could include small language models, chatbots, and data sets designed for particular uses and specific to particular languages and cultural contexts. These systems could be trained to recognize slang usages and slurs, interpret words or phrases written in a mix of languages and even alphabets, and identify reclaimed language (onetime slurs that the targeted group has decided to embrace). All of these tend to be missed or miscategorized by language models and automated systems trained primarily on Anglo-American English. 
The founder of the startup Shhor AI, for example, hosted a panel at RightsCon and talked about its new content moderation API focused on Indian vernacular languages.Many similar solutions have been in development for yearsand weve covered a number of them, including a Mozilla-facilitated volunteer-led effort to collect training data in languages other than English, and promising startups like Lelapa AI, which is building AI for African languages. Earlier this year, we even included small language models on our 2025 list of top 10 breakthrough technologies.Still, this moment feels a little different. The second Trump administration, which shapes the actions and policies of American tech companies, is obviously a major factor. But there are others at play.First, recent research and development on language models has reached the point where data set size is no longer a predictor of performance, meaning that more people can create them. In fact, smaller language models might be worthy competitors of multilingual language models in specific, low-resource languages, says Aliya Bhatia, a visiting fellow at the Center for Democracy & Technology who researches automated content moderation.Then theres the global landscape. AI competition was a major theme of the recent Paris AI Summit, which took place the week before RightsCon. Since then, theres been a steady stream of announcements about sovereign AI initiatives that aim to give a country (or organization) full control over all aspects of AI development.AI sovereignty is just one part of the desire for broader tech sovereignty thats also been gaining steam, growing out of more sweeping concerns about the privacy and security of data transferred to the United States. The European Union appointed its first commissioner for tech sovereignty, security, and democracy last November and has been working on plans for a Euro Stack, or digital public infrastructure. The definition of this is still somewhat fluid, but it could include the energy, water, chips, cloud services, software, data, and AI needed to support modern society and future innovation. All these are largely provided by US tech companies today. Europes efforts are partly modeled after India Stack, that countrys digital infrastructure that includes the biometric identity system Aadhaar. Just last week, Dutch lawmakers passed several motions to untangle the country from US tech providers.This all fits in with what Andy Yen, CEO of the Switzerland-based digital privacy company Proton, told me at RightsCon. Trump, he said, is causing Europe to move faster to come to the realization that Europe needs to regain its tech sovereignty. This is partly because of the leverage that the president has over tech CEOs, Yen said, and also simply because tech is where the future economic growth of any country is.But just because governments get involved doesnt mean that issues around inclusion in language models will go away. I think there needs to be guardrails about what the role of the government here is. Where it gets tricky is if the government decides These are the languages we want to advance or These are the types of views we want represented in a data set, Bhatia says. Fundamentally, the training data a model trains on is akin to the worldview it develops.Its still too early to know what this will all look like, and how much of it will prove to be hype. 
But no matter what happens, this is a space well be watching.Now read the rest of The AlgorithmDeeper LearningOpenAI has released its first research into how using ChatGPT affects peoples emotional well-beingOpenAI released two pieces of research last week that explore how ChatGPT affects people who engage with it on emotional issues, yielding some interesting results. Female study participants were slightly less likely to socialize with people than their male counterparts who used the chatbot for the same period of time, our reporter Rhiannon Williams writes. And people who used voice mode in a gender that was not their own reported higher levels of loneliness at the end of the experiment.Why it matters: AI companies have raced to build chatbots that act not just as productivity tools but also as companions, romantic partners, friends, therapists, and more. Legally, its largely still a Wild West landscape. Some have instructed users to harm themselves, and others have offered sexually charged conversations as underage characters represented by deepfakes. More research into how people, especially children, are using these AI models is essential. OpenAIs work is only a start. Read more from Rhiannon Williams.Bits and BytesOpinion Why handing over total control to AI agents would be a huge mistakeCompanies like OpenAI and Butterfly Effect (the startup in China that made Manus) are racing to build AI agents that can do tasks for you by taking over your computer. In this op-ed, some top AI researchers detail the potential missteps that could occur if we cede more control of our digital lives to decision-making AIs.A provocative experiment pitted AI against federal judgesResearch has long shown that judges are influenced by many factors, like how sympathetic they are to defendants, or when their last meal was. Despite AI models inherent problems with biases and hallucinations, researchers at the University of Chicago Law School wondered if they can present more objective opinions. They can, but that doesnt make them better judges, the researchers say. (The Washington Post)Elon Musks truth-seeking chatbot often disagrees with himMusk promised his company xAIs model Grok would be an antidote to the woke and politically influenced chatbots that he says dominate today. But in tests done by the Washington Post, the model contradicted many of Musks claims about specific issues. (The Washington Post)A Disney employee downloaded an AI tool that contained malware, and it ruined his lifeMIT Technology Review has long predicted that the proliferation of AI will enable scammers to up their productivity as never before. One victim of this trend is Matthew Van Andel, a Disney employee who downloaded malware disguised as an AI tool. It led to his firing. (Wall Street Journal)The facial recognition company Clearview attempted to buy Social Security numbers and mugshots for its databaseThree years ago, Clearview was fined for scraping images of individuals faces from the internet. Now, court records reveal that the company was attempting to buy 690 million arrest records and 390 million arrest photos in the USrecords that also contained Social Security numbers, emails, and physical addresses. The deal fell through, but Clearview nonetheless holds one of the largest databases of facial images, and its tools are used by police and federal agencies. (404 Media)
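As a sketch of what the community-driven, language-specific approach described above might look like in practice, here is a minimal example using the Hugging Face transformers library. The checkpoint name is a placeholder rather than a real model; the point is that a compact classifier fine-tuned on one language's slang, mixed scripts, and reclaimed terms can be dropped in where a general-purpose, English-centric system falls short.

```python
# Minimal sketch of a small, language-specific moderation model.
# "org/hindi-abuse-detector" is a placeholder, not a real checkpoint.
from transformers import pipeline

moderator = pipeline(
    "text-classification",
    model="org/hindi-abuse-detector",  # assumption: substitute a real fine-tuned checkpoint
)

posts = [
    "Example post written in Hindi or Hinglish goes here",
    "Another post mixing scripts and local slang",
]

# Each result is a label (e.g. abusive / not abusive, per the chosen checkpoint) plus a score.
for post in posts:
    result = moderator(post)[0]
    print(f"{result['label']:>12}  {result['score']:.2f}  {post[:40]}")
```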
  • The Download: the dangers of AI agents, and ChatGPT's effects on our wellbeing
    www.technologyreview.com
    This is todays edition ofThe Download,our weekday newsletter that provides a daily dose of whats going on in the world of technology.Why handing over total control to AI agents would be a huge mistakeMargaret Mitchell, Avijit Ghosh, Sasha Luccioni, Giada Pistilli all work for Hugging Face, an open source AI company.AI agents have set the tech industry abuzz. Unlike chatbots, these groundbreaking new systems can navigate multiple applications to execute complex tasks, like scheduling meetings or shopping online, in response to simple user commands. As agents become more capable, a crucial question emerges: How much control are we willing to surrender, and at what cost?The promise is compelling. Who doesnt want assistance with cumbersome work or tasks theres no time for? But this vision for AI agents brings significant risks that might be overlooked in the rush toward greater autonomy. In fact, our research suggests that agent development could be on the cusp of a very serious misstep. Read the full story.OpenAI has released its first research into how using ChatGPT affects peoples emotional wellbeingOpenAI says over 400 million people use ChatGPT every week. But how does interacting with it affect us? Does it make us more or less lonely?These are some of the questions OpenAI set out to investigate, in partnership with the MIT Media Lab, in a pair of new studies. They found that while only a small subset of users engage emotionally with ChatGPT, there are some intriguing differences between how men and women respond to using the chatbot. They also found that participants who trusted and bonded with ChatGPT more were likelier than others to be lonely, and to rely on it more.Chatbots powered by large language models are still a nascent technology, and difficult to study. Thats why this kind of research is an important first step toward greater insight into ChatGPTs impact on us, which could help AI platforms enable safer and healthier interactions. Read the full story.Rhiannon WilliamsThe must-readsIve combed the internet to find you todays most fun/important/scary/fascinating stories about technology.1 Genetic testing firm 23andMe has filed for bankruptcy protectionFollowing months of uncertainty over its future. (CNN)+ Tens of millions of peoples genetic data could soon belong to a new owner. (WSJ $)+ How to delete your 23andMe data. (MIT Technology Review)2 Europe wants to lessen its reliance of US cloud giantsBut thats easier said than done. (Wired $)3 Anduril is considering opening a drone factory in the UKEurope is poised to invest heavily in defenseand Anduril wants in. (Bloomberg $)+ The company recently signed a major drone contract with the UK government. (Insider $)+ We saw a demo of the new AI system powering Andurils vision for war. (MIT Technology Review)4 Bird flu has been detected in a sheep in the UKIts the first known instance of the virus infecting a sheep. (FT $)+ But the UK is yet to report any transmission to humans. (Reuters)+ How the US is preparing for a potential bird flu pandemic. (MIT Technology Review)5 A tiny town in the Alps has emerged as an ALS hotspotSuggesting that its causes may be more environmental than genetic. (The Atlantic $)+ Motor neuron diseases took their voices. AI is bringing them back. (MIT Technology Review)6 Firefly Aerospaces Blue Ghost lunar lander has completed its missionAnd captured some pretty incredible footage along the way. (NYT $)+ Europe is finally getting serious about commercial rockets. 
(MIT Technology Review)7 How the US could save billions of dollars in wasted energy Ultra tough, multi-pane windows could be the answer. (WSJ $)8 We need new ways to measure painResearchers are searching for objective biological indicators to get rid of the guesswork. (WP $)+ Brain waves can tell us how much pain someone is in. (MIT Technology Review)9 What falling in love with an AI could look likeIts unclear whether loving machines could be training grounds for future relationships, or the future of relationships themselves. (New Yorker $)+ The AI relationship revolution is already here. (MIT Technology Review)10 Could you walk in a straight line for hundreds of miles?YouTubes favorite new challenge isnt so much arduous as it is inconvenient. (The Guardian)Quote of the dayBlockbuster has collapsed. Its time for Netflix to rise.Kian Sadeghi pitches the company they founded, DNA testing firm Nucleus Genomics, as a replacement for 23andMe in a post on X.The big storyThis towns mining battle reveals the contentious path to a cleaner futureJanuary 2024In June last year, Talon, an exploratory mining company, submitted a proposal to Minnesota state regulators to begin digging up as much as 725,000 metric tons of raw ore per year, mainly to unlock the rich and lucrative reserves of high-grade nickel in the bedrock.Talon is striving to distance itself from the mining industrys dirty past, portraying its plan as a clean, friendly model of modern mineral extraction. It proclaims the site will help to power a greener future for the US by producing the nickel needed to manufacture batteries for electric cars and trucks, but with low emissions and light environmental impacts.But as the company has quickly discovered, a lot of locals arent eager for major mining operations near their towns. Read the full story.James TempleWe can still have nice thingsA place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet em at me.)+ Who are fandoms for, and who gets to escape into them?+ A long-lost Klimt painting of Prince William Nii Nortey Dowuona has gone on display in the Netherlands.+ Feeling down? These feel-good movies will pick you right up.+ Why Gen Z are dedicated followers of Old Money fashion.
  • Why handing over total control to AI agents would be a huge mistake
    www.technologyreview.com
AI agents have set the tech industry abuzz. Unlike chatbots, these groundbreaking new systems operate outside of a chat window, navigating multiple applications to execute complex tasks, like scheduling meetings or shopping online, in response to simple user commands. As agents are developed to become more capable, a crucial question emerges: How much control are we willing to surrender, and at what cost?

New frameworks and functionalities for AI agents are announced almost weekly, and companies promote the technology as a way to make our lives easier by completing tasks we can't do or don't want to do. Prominent examples include computer use, a function that enables Anthropic's Claude system to act directly on your computer screen, and the general AI agent Manus, which can use online tools for a variety of tasks, like scouting out customers or planning trips. These developments mark a major advance in artificial intelligence: systems designed to operate in the digital world without direct human oversight.

The promise is compelling. Who doesn't want assistance with cumbersome work or tasks there's no time for? Agent assistance could soon take many different forms, such as reminding you to ask a colleague about their kid's basketball tournament or finding images for your next presentation. Within a few weeks, they'll probably be able to make presentations for you.

There's also clear potential for deeply meaningful differences in people's lives. For people with hand mobility issues or low vision, agents could complete tasks online in response to simple language commands. Agents could also coordinate simultaneous assistance across large groups of people in critical situations, such as by routing traffic to help drivers flee an area en masse as quickly as possible when disaster strikes.

But this vision for AI agents brings significant risks that might be overlooked in the rush toward greater autonomy. Our research team at Hugging Face has spent years implementing and investigating these systems, and our recent findings suggest that agent development could be on the cusp of a very serious misstep.

Giving up control, bit by bit

A core issue lies at the heart of what's most exciting about AI agents: the more autonomous an AI system is, the more we cede human control. AI agents are developed to be flexible, capable of completing a diverse array of tasks that don't have to be directly programmed.

For many systems, this flexibility is made possible because they're built on large language models, which are unpredictable and prone to significant (and sometimes comical) errors. When an LLM generates text in a chat interface, any errors stay confined to that conversation. But when a system can act independently and with access to multiple applications, it may perform actions we didn't intend, such as manipulating files, impersonating users, or making unauthorized transactions. The very feature being sold, reduced human oversight, is the primary vulnerability.

To understand the overall risk-benefit landscape, it's useful to characterize AI agent systems on a spectrum of autonomy. The lowest level consists of simple processors that have no impact on program flow, like chatbots that greet you on a company website. The highest level, fully autonomous agents, can write and execute new code without human constraints or oversight; they can take action (moving files around, changing records, sending email, and so on) without your asking for anything. Intermediate levels include routers, which decide which human-provided steps to take; tool callers, which run human-written functions using agent-suggested tools; and multistep agents, which determine which functions to run, when, and how. Each represents an incremental removal of human control.

It's clear that AI agents can be extraordinarily helpful for what we do every day. But this brings clear privacy, safety, and security concerns. Agents that help bring you up to speed on someone would require access to that individual's personal information and extensive surveillance of your previous interactions, which could result in serious privacy breaches. Agents that create directions from building plans could be used by malicious actors to gain access to unauthorized areas.

And when systems can control multiple information sources simultaneously, the potential for harm explodes. For example, an agent with access to both private communications and public platforms could share personal information on social media. That information might not be true, but it would fly under the radar of traditional fact-checking mechanisms and could be amplified with further sharing to create serious reputational damage. We imagine that "It wasn't me, it was my agent!" will soon be a common refrain to excuse bad outcomes.

Keep the human in the loop

Historical precedent demonstrates why maintaining human oversight is critical. In 1980, computer systems falsely indicated that over 2,000 Soviet missiles were heading toward North America. This error triggered emergency procedures that brought us perilously close to catastrophe. What averted disaster was human cross-verification between different warning systems. Had decision-making been fully delegated to autonomous systems prioritizing speed over certainty, the outcome might have been catastrophic.

Some will counter that the benefits are worth the risks, but we'd argue that realizing those benefits doesn't require surrendering complete human control. Instead, the development of AI agents must occur alongside the development of guaranteed human oversight, in a way that limits the scope of what AI agents can do.

Open-source agent systems are one way to address risks, since these systems allow for greater human oversight of what systems can and cannot do. At Hugging Face we're developing smolagents, a framework that provides sandboxed secure environments and allows developers to build agents with transparency at their core, so that any independent group can verify whether there is appropriate human control.

This approach stands in stark contrast to the prevailing trend toward increasingly complex, opaque AI systems that obscure their decision-making processes behind layers of proprietary technology, making it impossible to guarantee safety.

As we navigate the development of increasingly sophisticated AI agents, we must recognize that the most important feature of any technology isn't increasing efficiency but fostering human well-being. This means creating systems that remain tools rather than decision-makers, assistants rather than replacements. Human judgment, with all its imperfections, remains the essential component in ensuring that these systems serve rather than subvert our interests.

Margaret Mitchell, Avijit Ghosh, Sasha Luccioni, and Giada Pistilli all work for Hugging Face, an open-source AI company.
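The autonomy spectrum the essay describes maps naturally onto code. The sketch below is my own minimal illustration of those levels; the names (call_llm, web_search, faq_lookup) are hypothetical stand-ins, not the authors' smolagents framework or any real API. Each level hands another human-written decision point over to the model, which is the progression the authors warn about.

```python
# Illustrative sketch of the autonomy levels described above.
# All names here are hypothetical placeholders, not a real framework.

def call_llm(prompt: str) -> str:
    """Placeholder for a language-model call; returns a canned answer so the sketch runs."""
    return "search"

def web_search(query: str) -> str:
    return f"search results for {query!r}"

def faq_lookup(query: str) -> str:
    return f"FAQ entry for {query!r}"

# Level 1: simple processor. The model's output never changes program flow.
def processor(message: str) -> str:
    return call_llm(f"Draft a polite reply to: {message}")

# Level 2: router. The model picks which human-written branch runs.
def router(message: str) -> str:
    choice = call_llm(f"Reply 'search' or 'faq' for: {message}")
    return web_search(message) if choice == "search" else faq_lookup(message)

# Level 3: tool caller. The model chooses a tool and its arguments,
# but only from a human-approved allowlist.
TOOLS = {"search": web_search, "faq": faq_lookup}

def tool_caller(message: str) -> str:
    tool_name = call_llm(f"Pick one tool from {sorted(TOOLS)} for: {message}")
    return TOOLS.get(tool_name, faq_lookup)(message)

# Levels 4 and up (multistep and fully autonomous agents) would let the model
# decide which actions to take, in what order, and eventually write and execute
# new code. Each step removes another human-written decision point.

if __name__ == "__main__":
    print(processor("When does the store open?"))
    print(router("When does the store open?"))
    print(tool_caller("When does the store open?"))
```

One practical use of a breakdown like this is auditing: for any agent product, ask which of these levels it actually operates at and which decisions still pass through human-written logic.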
  • OpenAI has released its first research into how using ChatGPT affects people's emotional wellbeing
    www.technologyreview.com
    OpenAI says over 400 million people use ChatGPT every week. But how does interacting with it affect us? Does it make us more or less lonely? These are some of the questions OpenAI set out to investigate, in partnership with the MIT Media Lab, in a pair of new studies.They found that only a small subset of users engage emotionally with ChatGPT. This isnt surprising given that ChatGPT isnt marketed as an AI companion app like Replika or Character.AI, says Kate Devlin, a professor of AI and society at Kings College London, who did not work on the project. ChatGPT has been set up as a productivity tool, she says. But we know that people are using it like a companion app anyway. In fact, the people who do use it that way are likely to interact with it for extended periods of time, some of them averaging about half an hour a day.The authors are very clear about what the limitations of these studies are, but its exciting to see theyve done this, Devlin says. To have access to this level of data is incredible.The researchers found some intriguing differences between how men and women respond to using ChatGPT. After using the chatbot for four weeks, female study participants were slightly less likely to socialize with people than their male counterparts who did the same. Meanwhile, participants who set ChatGPTs voice mode to a gender that was not their own for their interactions reported significantly higher levels of loneliness and more emotional dependency on the chatbot at the end of the experiment. OpenAI currently has no plans to publish either study.Chatbots powered by large language models are still a nascent technology, and its difficult to study how they affect us emotionally. A lot of existing research in the areaincluding some of the new work by OpenAI and MITrelies upon self-reported data, which may not always be accurate or reliable. That said, this latest research does chime with what scientists so far have discovered about how emotionally compelling chatbot conversations can be. For example, in 2023 MIT Media Lab researchers found that chatbots tend to mirror the emotional sentiment of a users messages, suggesting a kind of feedback loop where the happier you act, the happier the AI seems, or on the flipside, if you act sadder, so does the AI.OpenAI and the MIT Media Lab used a two-pronged method. First they collected and analyzed real-world data from close to 40 million interactions with ChatGPT. Then they asked the 4,076 users whod had those interactions how they made them feel. Next, the Media Lab recruited almost 1,000 people to take part in a four-week trial. This was more in-depth, examining how participants interacted with ChatGPT for a minimum of five minutes each day. At the end of the experiment, participants completed a questionnaire to measure their perceptions of the chatbot, their subjective feelings of loneliness, their levels of social engagement, their emotional dependence on the bot, and their sense of whether their use of the bot was problematic. 
They found that participants who trusted and bonded with ChatGPT more were likelier than others to be lonely, and to rely on it more.

This work is an important first step toward greater insight into ChatGPT's impact on us, which could help AI platforms enable safer and healthier interactions, says Jason Phang, an OpenAI policy researcher who worked on the project. "A lot of what we're doing here is preliminary, but we're trying to start the conversation with the field about the kinds of things that we can start to measure, and to start thinking about what the long-term impact on users is," he says.

Although the research is welcome, it's still difficult to identify when a human is, and isn't, engaging with technology on an emotional level, says Devlin. She says the study participants may have been experiencing emotions that weren't recorded by the researchers. "In terms of what the teams set out to measure, people might not necessarily have been using ChatGPT in an emotional way, but you can't divorce being a human from your interactions [with technology]," she says. "We use these emotion classifiers that we have created to look for certain things, but what that actually means to someone's life is really hard to extrapolate."
  • The Download: saving the doomsday glacier, and Europe's hopes for its rockets
    www.technologyreview.com
    This is todays edition ofThe Download,our weekday newsletter that provides a daily dose of whats going on in the world of technology.Inside a new quest to save the doomsday glacierThe Thwaites glacier is a fortress larger than Florida, a wall of ice that reaches nearly 4,000 feet above the bedrock of West Antarctica, guarding the low-lying ice sheet behind it.But a strong, warm ocean current is weakening its foundations and accelerating its slide into the sea. Scientists fear the waters could topple the walls in the coming decades, kick-starting a runaway process that would crack up the West Antarctic Ice Sheet, marking the start of a global climate disaster. As a result, they are eager to understand just how likely such a collapse is, when it could happen, and if we have the power to stop it.Scientists at MIT and Dartmouth College founded Arte Glacier Initiative last year in the hope of providing clearer answers to these questions. The nonprofit research organization will officially unveil itself, launch its website, and post requests for research proposals today, timed to coincide with the UNs inaugural World Day for Glaciers, MIT Technology Review can report exclusively. Read the full story.James TempleEurope is finally getting serious about commercial rocketsEurope is on the cusp of a new dawn in commercial space technology. As global political tensions intensify and relationships with the US become increasingly strained, several European companies are now planning to conduct their own launches in an attempt to reduce the continents reliance on American rockets.In the coming days, Isar Aerospace, a company based in Munich, will try to launch its Spectrum rocket from a site in the frozen reaches of Andya island in Norway. A spaceport has been built there to support small commercial rockets, and Spectrum is the first to make an attempt.Regardless of whether it succeeds or fails, the launch attempt heralds an important moment as Europe tries to kick-start its own private rocket industry. It and other launches scheduled for later this year could give Europe multiple ways to reach space without having to rely on US rockets. Read the full story.Jonathan OCallaghanAutopsies can reveal intimate health details. Should they be kept private?Jessica HamzelouOver the past couple of weeks, Ive been following news of the deaths of actor Gene Hackman and his wife, pianist Betsy Arakawa. It was heartbreaking to hear how Arakawa appeared to have died from a rare infection days before her husband, who had advanced Alzheimers disease and may have struggled to understand what had happened.But as I watched the medical examiner reveal details of the couples health, I couldnt help feeling a little uncomfortable. Media reports claim that the couple liked their privacy and had been out of the spotlight for decades. But here I was, on the other side of the Atlantic Ocean, being told what pills Arakawa had in her medicine cabinet, and that Hackman had undergone multiple surgeries.Should autopsy reports be kept private? A persons cause of death is public information. But what about other intimate health details that might be revealed in a postmortem examination? Read the full story.This article first appeared in The Checkup, MIT Technology Reviews weekly biotech newsletter. 
To receive it in your inbox every Thursday, sign up here.The must-readsIve combed the internet to find you todays most fun/important/scary/fascinating stories about technology.1 Elon Musk will be briefed on the USs top-secret plans for war with ChinaDespite Teslas reliance on China, and SpaceXs role as a US defense contractor. (WSJ $)+ Other private companies could only dream of having access to sensitive military data. (NYT $)2 Take a look inside the library of pirated books that Meta trains its AI onIt considered paying for the books, but decided to use LibGen instead. (The Atlantic $)+ Copyright traps could tell writers if an AI has scraped their work. (MIT Technology Review)3 A judge has blocked DOGE from accessing social security systemsShe accused DOGE of failing to explain why it needed to see the private data of millions of Americans. (TechCrunch)+ Federal workers grilled a Trump appointee during an all-hands meeting. (Wired $)+ Can AI help DOGE slash government budgets? Its complex. (MIT Technology Review)4 The Trump administration is poised to shut down an anti-censorship fundThe project, which helps internet users living under oppressive regimes, is under threat. (WP $)+ Tens of millions will lose access to secure and trusted VPNs. (Bloomberg $)+ Activists are reckoning with a US retreat from promoting digital rights. (MIT Technology Review)5 Tesla is recalling tens of thousands of CybertrucksAfter it used the wrong glue to attach its steel panels. (Fast Company $)+Its the largest Cybertruck recall to date. (BBC)6 This crypto billionaire has his sights set on the starsJed McCaleb is the sole backer of an ambitious space station project. (Bloomberg $)+ Is DOGE going to come for NASA? (New Yorker $)7 The irresistible allure of SpotifyMaybe algorithms arent all bad, after all. (Vox)+ By delivering what people seem to want, has Spotify killed the joy of music discovery? (MIT Technology Review)8 Dating apps and AI? Its complicated While some are buzzing at the prospect of romantic AI agents, others arent so sure. (Insider $)9 Crypto bars are becoming a thingAnd Washington is the first casualty. (The Verge)10 The ways we use emojis is evolving Are you up to date? (FT $)Quote of the dayIts an assault, and a particularly cruel one to use my work to train the monster that threatens the ruination of original literature.Author AJ West, whose books were included in the library of pirated material Meta used to train its AI model, calls for the company to compensate writers in a post on Bluesky.The big storyAre we alone in the universe?November 2023The quest to determine if anyone or anything is out there has gained a greater scientific footing over the past 50 years. Back then, astronomers had yet to spot a single planet outside our solar system. Now we know the galaxy is teeming with a diversity of worlds.Were now getting closer than ever before to learning how common living worlds like ours actually are. New tools, including artificial intelligence, could help scientists look past their preconceived notions of what constitutes life.Future instruments will sniff the atmospheres of distant planets and scan samples from our local solar system to see if they contain telltale chemicals in the right proportions for organisms to prosper. But determining whether these planets actually contain organisms is no easy task. Read the full story.Adam MannWe can still have nice thingsA place for comfort, fun and distraction to brighten up your day. (Got any ideas? 
Drop me a line or skeet 'em at me.) + Get your weekend off to a good start with these beautiful nebulas. + Justice for Mariah: a judge has ruled that she didn't steal "All I Want For Christmas Is You" from other writers. + We're no longer "extremely online" anymore, apparently, so what are we? + The fascinating tale of White Mana, one of America's oldest burger joints.
  • Inside a new quest to save the doomsday glacier
    www.technologyreview.com
    The Thwaites glacier is a fortress larger than Florida, a wall of ice that reaches nearly 4,000 feet above the bedrock of West Antarctica, guarding the low-lying ice sheet behind it.But a strong, warm ocean current is weakening its foundations and accelerating its slide into the Amundsen Sea. Scientists fear the waters could topple the walls in the coming decades, kick-starting a runaway process that would crack up the West Antarctic Ice Sheet.That would mark the start of a global climate disaster. The glacier itself holds enough ice to raise ocean levels by more than two feet, which could flood coastlines and force tens of millions of people living in low-lying areas to abandon their homes.The loss of the entire ice sheetwhich could still take centuries to unfoldwould push up sea levels by 11 feet and redraw the contours of the continents.This is why Thwaites is known as the doomsday glacierand why scientists are eager to understand just how likely such a collapse is, when it could happen, and if we have the power to stop it.Scientists at MIT and Dartmouth College founded Arte Glacier Initiative last year in the hope of providing clearer answers to these questions. The nonprofit research organization will officially unveil itself, launch its website, and post requests for research proposals today, March 21, timed to coincide with the UNs inaugural World Day for Glaciers, MIT Technology Review can report exclusively.Arte will also announce it is issuing its first grants, each for around $200,000 over two years, to a pair of glacier researchers at the University of Wisconsin-Madison.One of the organizations main goals is to study the possibility of preventing the loss of giant glaciers, Thwaites in particular, by refreezing them to the bedrock. It would represent a radical intervention into the natural world, requiring a massive, expensive engineering project in a remote, treacherous environment.But the hope is that such a mega-adaptation project could minimize the mass relocation of climate refugees, prevent much of the suffering and violence that would almost certainly accompany it, and help nations preserve trillions of dollars invested in high-rises, roads, homes, ports, and airports around the globe.About a million people are displaced per centimeter of sea-level rise, says Brent Minchew, an associate professor of geophysics at MIT, who cofounded Arte Glacier Initiative and will serve as its chief scientist. 
If were able to bring that down, even by a few centimeters, then we would safeguard the homes of millions.But some scientists believe the idea is an implausible, wildly expensive distraction, drawing money, expertise, time, and resources away from more essential polar research efforts.Sometimes we can get a little over-optimistic about what engineering can do, says Twila Moon, deputy lead scientist at the National Snow and Ice Data Center at the University of Colorado Boulder.Two possible futuresMinchew, who earned his PhD in geophysics at Caltech, says he was drawn to studying glaciers because they are rapidly transforming as the world warms, increasing the dangers of sea-level rise.But over the years, I became less content with simply telling a more dramatic story about how things were going and more open to asking the question of what can we do about it, says Minchew, who will return to Caltech as a professor this summer.Last March, he cofounded Arte Glacier Initiative with Colin Meyer, an assistant professor of engineering at Dartmouth, in the hope of funding and directing research to improve scientific understanding of two big questions: How big a risk does sea-level rise pose in the coming decades, and can we minimize that risk?Brent Minchew, an MIT professor of geophysics, co-founded Arte Glacier Initiative and will serve as its chief scientist.COURTESY: BRENT MINCHEWPhilanthropic funding is needed to address both of these challenges, because theres no private-sector funding for this kind of research and government funding is minuscule, says Mike Schroepfer, the former Meta chief technology officer turned climate philanthropist, who provided funding to Arte through his new organization, Outlier Projects.The nonprofit has now raised about $5 million from Outlier and other donors, including the Navigation Fund, the Kissick Family Foundation, the Sky Foundation, the Wedner Family Foundation, and the Grantham Foundation.Minchew says they named the organization Arte, mainly because its the sharp mountain ridge between two valleys, generally left behind when a glacier carves out the cirques on either side. It directs the movement of the glacier and is shaped by it.Its meant to symbolize two possible futures, he says. One where we do something; one where we do nothing.Improving forecastsThe somewhat reassuring news is that, even with rising global temperatures, it may still take thousands of years for the West Antarctic Ice Sheet to completely melt.In addition, sea-level rise forecasts for this century generally range from as little as 0.28 meters (11 inches) to 1.10 meters (about three and a half feet), according to the latest UN climate panel report. The latter only occurs under a scenario with very high greenhouse gas emissions (SSP5-8.5), which significantly exceeds the pathway the world is now on.But theres still a low-likelihood that ocean levels could surge nearly two meters (about six and a half feet) by 2100 that cannot be excluded, given deep uncertainty linked to ice-sheet processes, the report adds.Two meters of sea-level rise could force nearly 190 million people to migrate away from the coasts, unless regions build dikes or other shoreline protections, according to some models. 
Many more people, mainly in the tropics, would face heightened flooding dangers.Much of the uncertainty over what will happen this century comes down to scientists limited understanding of how Antarctic ice sheets will respond to growing climate pressures.The initial goal of Arte Glacier Initiative is to help narrow the forecast ranges by improving our grasp of how Thwaites and other glaciers move, melt, and break apart.Gravity is the driving force nudging glaciers along the bedrock and reshaping them as they flow. But many of the variables that determine how fast they slide lie at the base. That includes the type of sediment the river of ice slides along; the size of the boulders and outcroppings it contorts around; and the warmth and strength of the ocean waters that lap at its face.In addition, heat rising from deep in the earth warms the ice closest to the ground, creating a lubricating layer of water that hastens the glaciers slide. That acceleration, in turn, generates more frictional heat that melts still more of the ice, creating a self-reinforcing feedback effect.Minchew and Meyer are confident that the glaciology field is at a point where it could speed up progress in sea-level rise forecasting, thanks largely to improving observational tools that are producing more and better data.That includes a new generation of satellites orbiting the planet that can track the shifting shape of ice at the poles at far higher resolutions than in the recent past. Computer simulations of ice sheets, glaciers and sea ice are improving as well, thanks to growing computational resources and advancing machine learning techniques.On March 21, Arte will issue a request for proposals from research teams to contribute to an effort to collect, organize, and openly publish existing observational glacier data. Much of that expensively gathered information is currently inaccessible to researchers around the world, Minchew says.Colin Meyer, an assistant professor of engineering at Dartmouth, co-founded Arte Glacier Initiative.ELI BURAKBy funding teams working across these areas, Artes founders hope to help produce more refined ice-sheet models and narrower projections of sea-level rise.This improved understanding would help cities plan where to build new bridges, buildings, and homes, and to determine whether theyll need to erect higher seawalls or raise their roads, Meyer says. It could also provide communities with more advance notice of the coming dangers, allowing them to relocate people and infrastructure to safer places through an organized process known as managed retreat.A radical interventionBut the improved forecasts might also tell us that Thwaites is closer to tumbling into the ocean than we think, underscoring the importance of considering more drastic measures.One idea is to build berms or artificial islands to prop up fragile parts of glaciers, and to block the warm waters that rise from the deep ocean and melt them from below. 
Some researchers have also considered erecting giant, flexible curtains anchored to the seabed to achieve the latter effect.Others have looked at scattering highly reflective beads or other materials across ice sheets, or pumping ocean water onto them in the hopes it would freeze during the winter and reinforce the headwalls of the glaciers.But the concept of refreezing glaciers in place, know as a basal intervention, is gaining traction in scientific circles, in part because theres a natural analogue for it.The glacier that stalledAbout 200 years ago, the Kamb Ice Stream, another glacier in West Antarctica that had been sliding about 350 meters (1,150 feet) per year, suddenly stalled.Glaciologists believe an adjacent ice stream intersected with the catchment area under the glacier, providing a path for the water running below it to flow out along the edge instead. That loss of fluid likely slowed down the Kamb Ice Stream, reduced the heat produced through friction, and allowed water at the surface to refreeze.The deceleration of the glacier sparked the idea that humans might be able to bring about that same phenomenon deliberately, perhaps by drilling a series of boreholes down to the bedrock and pumping up water from the bottom.Minchew himself has focused on a variation he believes could avoid much of the power use and heavy operating machinery hassles of that approach: slipping long tubular devices, known as thermosyphons, down nearly to the bottom of the boreholes.These passive heat exchangers, which are powered only by the temperature differential between two areas, are commonly used to keep permafrost cold around homes, buildings and pipelines in Arctic regions. The hope is that we could deploy extremely long ones, stretching up to two kilometers and encased in steel pipe, to draw warm temperatures away from the bottom of the glacier, allowing the water below to freeze.Minchew says hes in the process of producing refined calculations, but estimates that halting Thwaites could require drilling as many as 10,000 boreholes over a 100-square-kilometer area.He readily acknowledges that would be a huge undertaking, but provides two points of comparison to put such a project into context: Melting the necessary ice to create those holes would require roughly the amount of energy all US domestic flights consume from jet fuel in about two and a half hours. Or, it would produce about the same level of greenhouse gas emissions as constructing 10 kilometers of seawalls, a small fraction of the length the world would need to build if it cant slow down the collapse of the ice sheets, he says.Kick the systemOne of Artes initial grantees is Marianne Haseloff, an assistant professor of geoscience at the University of Wisconsin-Madison. She studies the physical processes that govern the behavior of glaciers and is striving to more faithfully represent them in ice sheet models.Haseloff says she will use those funds to develop mathematical methods that could more accurately determine whats known as basal shear stress, or the resistance of the bed to sliding glaciers, based on satellite observations. 
That could help refine forecasts of how rapidly glaciers will slide into the ocean, in varying settings and climate conditions.Artes other initial grant will go to Lucas Zoet, an associate professor in the same department as Haseloff and the principal investigator with the Surface Processes group.He intends to use the funds to build the labs second ring shear device, the technical term for a simulated glacier.The existing device, which is the only one operating in the world, stands about eight feet tall and fills the better part of a walk-in freezer on campus. The core of the machine is a transparent drum filled with a ring of ice, sitting under pressure and atop a layer of sediment. It slowly spins for weeks at a time as sensors and cameras capture how the ice and earth move and deform.Lucas Zoet, an associate professor at the University of WisconsinMadison, stands in front of his labs ring shear device, a simulated glacier.ETHAN PARRISHThe research team can select the sediment, topography, water pressure, temperature, and other conditions to match the environment of a real-world glacier of interest, be it Thwaites todayor Thwaites in 2100, under a high greenhouse gas emissions scenario.Zoet says these experiments promise to improve our understanding of how glaciers move over different types of beds, and to refine an equation known as the slip law, which represents these glacier dynamics mathematically in computer models.The second machine will enable them to run more experiments and to conduct a specific kind that the current device cant: a scaled-down, controlled version of the basal intervention.Zoet says the team will be able to drill tiny holes through the ice, then pump out water or transfer heat away from the bed. They can then observe whether the simulated glacier freezes to the base at those points and experiment with how many interventions, across how much space, are required to slow down its movement.It offers a way to test out different varieties of the basal intervention that is far easier and cheaper than using water drills to bore to the bottom of an actual glacier in Antarctica, Zoet says. The funding will allow the lab to explore a wide range of experiments, enabling them to kick the system in a way we wouldnt have before, he adds.Virtually impossibleThe concept of glacier interventions is in its infancy. There are still considerable unknowns and uncertainties, including how much it would cost, how arduous the undertaking would be, and which approach would be most likely to work, or if any of them are feasible.This is mostly a theoretical idea at this point, says Katharine Ricke, an associate professor at the University of California, San Diego, who researches the international relations implications of geoengineering, among other topics.Conducting extensive field trials or moving forward with full-scale interventions may also require surmounting complex legal questions, she says. 
Antarctica isn't owned by any nation, but it's the subject of competing territorial claims among a number of countries and governed under a decades-old treaty to which dozens are a party.

The basal intervention (refreezing the glacier to its bed) faces numerous technical hurdles that would make it virtually impossible to execute, Moon and dozens of other researchers argued in a recent preprint paper, "Safeguarding the polar regions from dangerous geoengineering." Among other critiques, they stress that subglacial water systems are complex, dynamic, and interconnected, making it highly difficult to precisely identify and drill down to all the points that would be necessary to remove enough water or add enough heat to substantially slow down a massive glacier. Further, they argue that the interventions could harm polar ecosystems by adding contaminants, producing greenhouse gases, or altering the structure of the ice in ways that may even increase sea-level rise.

"Overwhelmingly, glacial and polar geoengineering ideas do not make sense to pursue, in terms of the finances, the governance challenges, the impacts, and the possibility of making matters worse," Moon says.

No easy path forward

But Douglas MacAyeal, professor emeritus of glaciology at the University of Chicago, says the basal intervention would have the lightest environmental impact among the competing ideas. He adds that nature has already provided an example of it working, and that much of the needed drilling and pumping technology is already in use in the oil industry.

"I would say it's the strongest approach at the starting gate," he says, "but we don't really know anything about it yet. The research still has to be done. It's very cutting-edge."

Minchew readily acknowledges that there are big challenges and significant unknowns, and that some of these ideas may not work. But he says it's well worth the effort to study the possibilities, in part because much of the research will also improve our understanding of glacier dynamics and the risks of sea-level rise, and in part because it's only a question of when, not if, Thwaites will collapse. Even if the world somehow halted all greenhouse gas emissions tomorrow, the forces melting that fortress of ice will continue to do so.

So one way or another, the world will eventually need to make big, expensive, difficult interventions to protect people and infrastructure. The cost and effort of doing one project in Antarctica, he says, would be small compared to the global effort required to erect thousands of miles of seawalls, ratchet up homes, buildings, and roads, and relocate hundreds of millions of people.

"One thing is challenging, and the other is even more challenging," Minchew says. "There's no easy path forward."
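The scale of the proposed intervention is easier to grasp with a little arithmetic. The sketch below uses only figures quoted in the article (10,000 boreholes over a 100-square-kilometer area, roughly a million people displaced per centimeter of sea-level rise, and a bit over two feet of potential rise from Thwaites alone); the grid-spacing and displacement estimates are my own rough illustration, not calculations published by the initiative.

```python
# Back-of-the-envelope arithmetic using figures quoted in the article.
# These are illustrative estimates, not the research group's own calculations.

boreholes = 10_000   # upper estimate quoted by Minchew
area_km2 = 100       # area of the drilling field, square kilometers

density = boreholes / area_km2               # boreholes per square kilometer
spacing_m = (1_000_000 / density) ** 0.5     # rough spacing of a square grid, meters
print(f"~{density:.0f} boreholes per km^2, roughly {spacing_m:.0f} m apart")

thwaites_rise_cm = 2 * 30.48   # "more than two feet" of sea-level rise, in centimeters
people_per_cm = 1_000_000      # displacement figure cited by Minchew
print(f"~{thwaites_rise_cm * people_per_cm / 1e6:.0f} million people displaced "
      "if Thwaites alone were lost")
```

Run as written, this gives a borehole roughly every 100 meters across the drilling field, and on the order of 60 million people displaced by the Thwaites contribution alone, which is consistent with the article's figure of nearly 190 million for two meters of rise.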
  • Autopsies can reveal intimate health details. Should they be kept private?
    www.technologyreview.com
    Over the past couple of weeks, Ive been following news of the deaths of actor Gene Hackman and his wife, pianist Betsy Arakawa. It was heartbreaking to hear how Arakawa appeared to have died from a rare infection days before her husband, who had advanced Alzheimers disease and may have struggled to understand what had happened.But as I watched the medical examiner reveal details of the couples health, I couldnt help feeling a little uncomfortable. Media reports claim that the couple liked their privacy and had been out of the spotlight for decades. But here I was, on the other side of the Atlantic Ocean, being told what pills Arakawa had in her medicine cabinet, and that Hackman had undergone multiple surgeries.It made me wonder: Should autopsy reports be kept private? A persons cause of death is public information. But what about other intimate health details that might be revealed in a postmortem examination?The processes and regulations surrounding autopsies vary by country, so well focus on the US, where Hackman and Arakawa died. Here, a medico-legal autopsy may be organized by law enforcement agencies and handled through courts, while a clinical autopsy may be carried out at the request of family members.And there are different levels of autopsysome might involve examining specific organs or tissues, while more thorough examinations would involve looking at every organ and studying tissues in the lab.The goal of an autopsy is to discover the cause of a persons death. Autopsy reports, especially those resulting from detailed investigations, often reveal health conditionsconditions that might have been kept private while the person was alive. There are multiple federal and state laws designed to protect individuals health information. For example, the Health Insurance Portability and Accountability Act (HIPAA) protects individually identifiable health information up to 50 years after a persons death. But some things change when a person dies.For a start, the cause of death will end up on the death certificate. That is public information. The public nature of causes of death is taken for granted these days, says Lauren Solberg, a bioethicist at the University of Florida College of Medicine. It has become a public health statistic. She and her student Brooke Ortiz, who have been researching this topic, are more concerned about other aspects of autopsy results.The thing is, autopsies can sometimes reveal more than what a person died from. They can also pick up what are known as incidental findings. An examiner might find that a person who died following a covid-19 infection also had another condition. Perhaps that condition was undiagnosed. Maybe it was asymptomatic. That finding wouldnt appear on a death certificate. So who should have access to it?The laws over who should have access to a persons autopsy report vary by state, and even between counties within a state. Clinical autopsy results will always be made available to family members, but local laws dictate which family members have access, says Ortiz.Genetic testing further complicates things. Sometimes the people performing autopsies will run genetic tests to help confirm the cause of death. These tests might reveal what the person died from. But they might also flag genetic factors unrelated to the cause of death that might increase the risk of other diseases.In those cases, the persons family members might stand to benefit from accessing that information. 
My health information is my health informationuntil it comes to my genetic health information, says Solberg. Genes are shared by relatives. Should they have the opportunity to learn about potential risks to their own health?This is where things get really complicated. Ethically speaking, we should consider the wishes of the deceased. Would that person have wanted to share this information with relatives?Its also worth bearing in mind that a genetic risk factor is often just that; theres often no way to know whether a person will develop a disease, or how severe the symptoms would be. And if the genetic risk is for a disease that has no treatment or cure, will telling the persons relatives just cause them a lot of stress?One 27-year-old experienced this when a 23&Me genetic test told her she had a 28% chance of developing late-onset Alzheimers disease by age 75 and a 60% chance by age 85.Im suddenly overwhelmed by this information, she posted on a dementia forum. I cant help feeling this overwhelming sense of dread and sadness that Ill never be able to un-know this information.In their research, Solberg and Ortiz came across cases in which individuals who had died in motor vehicle accidents underwent autopsies that revealed other, asymptomatic conditions. One man in his 40s who died in such an accident was found to have a genetic kidney disease. A 23-year-old was found to have had kidney cancer.Ideally, both medical teams and family members should know ahead of time what a person would have wantedwhether thats an autopsy, genetic testing, or health privacy. Advance directives allow people to clarify their wishes for end-of-life care. But only around a third of people in the US have completed one. And they tend to focus on care before death, not after.Solberg and Ortiz think they should be expanded. An advance directive could specify how people want to share their health information after theyve died. Talking about death is difficult, says Solberg. For physicians, for patients, for familiesit can be uncomfortable. But it is important.On March 17, a New Mexico judge granted a request from a representative of Hackmans estate to seal police photos and bodycam footage as well as the medical records of Hackman and Arakawa. The medical investigator is temporarily restrained from disclosing the Autopsy Reports and/or Death Investigation Reports for Mr. and Mrs. Hackman, according to Deadline.This article first appeared in The Checkup,MIT Technology Reviewsweekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first,sign up here.
  • Europe is finally getting serious about commercial rockets
    www.technologyreview.com
    Europe is on the cusp of a new dawn in commercial space technology. As global political tensions intensify and relationships with the US become increasingly strained, several European companies are now planning to conduct their own launches in an attempt to reduce the continents reliance on American rockets.In the coming days, Isar Aerospace, a company based in Munich, will try to launch its Spectrum rocket from a site in the frozen reaches of Andya island in Norway. A spaceport has been built there to support small commercial rockets, and Spectrum is the first to make an attempt.Its a big milestone, says Jonathan McDowell, an astronomer and spaceflight expert at the Harvard-Smithsonian Center for Astrophysics in Massachusetts. Its long past time for Europe to have a proper commercial launch industry.Spectrum stands 28 meters (92 feet) tall, the length of a basketball court. The rocket has two stages, or parts, the first with nine enginespowered by an unusual fuel combination of liquid oxygen and propane not seen on other rockets before, which Isar says results in higher performanceand the second with a single engine to give satellites their final kick into orbit.The ultimate goal for Spectrum is to carry satellites weighing up to 1,000 kilograms (2,200 pounds) to low Earth orbit. On this first launch, however, there are no satellites on board, because success is anything but guaranteed. Its unlikely to make it to orbit, says Malcolm Macdonald, an expert in space technology at Strathclyde University in Scotland. The first launch of any rocket tends not to work.Regardless of whether it succeeds or fails, the launch attempt heralds an important moment as Europe tries to kick-start its own private rocket industry. Two other companiesOrbex of the UK and Rocket Factory Augsburg (RFA) of Germanyare expected to make launch attempts later this year. These efforts could give Europe multiple ways to reach space without having to rely on US rockets.Europe has to be prepared for a more uncertain future, says Macdonald. The uncertainty of what will happen over the next four years with the current US administration amplifies the situation for European launch companies.Trailing in the USs wakeEurope has for years trailed behind the US in commercial space efforts. The successful launch of SpaceXs first rocket, the Falcon 1, in 2008 began a period of American dominance of the global launch market. In 2024, 145 of 263 global launch attempts were made by US entitiesand SpaceX accounted for 138 of those. SpaceX is the benchmark at the moment, says Jonas Kellner, head of marketing, communications, and political affairs at RFA. Other US companies, like Rocket Lab (which launches from both the US and New Zealand), have also become successful, while commercial rockets are ramping up in China, too.Europe has launched its own government-funded Ariane and Vega rockets for decades from the Guiana Space Centre, a spaceport it operates in French Guiana in South America. Most recently, on March 6, the European Space Agency (ESA) launched its new heavy-lift Ariane 6 rocket from there for the first time. However, the history of rocket launches from Europe itself is much more limited. In 1997 the US defense contractor Northrop Grumman air-launched a Pegasus rocket from a plane that took off from the Canary Islands. In 2023 the US company Virgin Orbit failed to reach orbit with its LauncherOne rocket after a launch attempt from Cornwall in the UK. 
No vertical orbital rocket launch has ever been attempted from Western Europe.Isar Aerospace is one of a handful of companies hoping to change that with help from agencies like ESA, which has provided funding to rocket launch companies through its Boost program since 2019. In 2024 it awarded 44.22 million ($48 million) to Isar, Orbex, RFA, and the German launch company HyImpulse. The hope is that one or more of the companies will soon begin regular launches from Europe from two potential sites: Isars chosen location in Andya and the SaxaVord Spaceport on the Shetland Islands north of the UK, where RFA and Orbex plan to make their attempts.I expect four or five companies to get to the point of launching, and then over a period of years reliability and launch cadence [or frequency] will determine which one or two of them survives, says McDowell.ISAR AEROSPACEUnique advantagesIn their initial form these rockets will not rival anything on offer from SpaceX in terms of size and cadence. SpaceX sometimes launches its 70-meter (230-foot) Falcon 9 rocket multiple times per week and is developing its much larger Starship vehicle for missions to the moon and Mars. However, the smaller European rockets can allow companies in Europe to launch satellites to orbit without having to travel all the way across the Atlantic. There is an advantage to having it closer, says Kellner, who says it will take RFA one or two days by sea to get its rockets to SaxaVord, versus one or two weeks to travel across the Atlantic.Launching from Europe is useful, too, for reaching specific orbits. Traditionally, a lot of satellite launches have taken place near the equator, in places such as Cape Canaveral in Florida, to get an extra boost from Earths rotation. Crewed spacecraft have also launched from these locations to reach space stations in equatorial orbit around Earth and the moon. From Europe, though, satellites can launch north over uninhabited stretches of water to reach polar orbit, which can allow imaging satellites to see the entirety of Earth rotate underneath them.Increasingly, says McDowell, companies want to place satellites into sun-synchronous orbit, a type of polar orbit where a satellite orbiting Earth stays in perpetual sunlight. This is useful for solar-powered vehicles. By far the bulk of the commercial market now is sun-synchronous polar orbit, says McDowell. So having a high-latitude launch site that has good transport links with customers in Europe does make a difference.Europes end goalIn the longer term, Europes rocket ambitions might grow to vehicles that are more of a match for the Falcon 9 through initiatives like ESAs European Launcher Challenge, which will award contracts later this year. We are hoping to develop [a larger vehicle] in the European Launcher Challenge, says Kellner. Perhaps Europe might even consider launching humans into space one day on larger rockets, says Thilo Kranz, ESAs program manager for commercial space transportation. We are looking into this, he says. If a commercial operator comes forward with a smart way of approaching [crewed] access to space, that would be a favorable development for Europe.A separate ESA project called Themis, meanwhile, is developing technologies to reuse rockets. This was the key innovation of SpaceXs Falcon 9, allowing the company to dramatically drive down launch costs. Some European companies, like MaiaSpace and RFA, are also investigating reusability. 
The latter is planning to use parachutes to bring the first stage of its rocket back to a landing in the sea, where it can be recovered.

"As soon as you get up to something like a Falcon 9 competitor, I think it's clear now that reusability is crucial," says McDowell. "They're not going to be economically competitive without reusability."

The end goal for Europe is to have a sovereign rocket industry that reduces its reliance on the US. "Where we are in the broader geopolitical situation probably makes this a bigger point than it might have been six months ago," says Macdonald.

The continent has already shown it can diversify from the US in other ways. Europe now operates its own successful satellite-based alternative to the US Global Positioning System (GPS), called Galileo; it began launching in 2011 and is four times more accurate than its American counterpart. Isar Aerospace, and the companies that follow, might be the first sign that commercial European rockets can break from America in a similar way.

"We need to secure access to space," says Kranz, "and the more options we have in launching into space, the higher the flexibility."
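The article notes that launching north from high-latitude sites suits sun-synchronous orbits. As a concrete illustration of what "sun-synchronous" means, the sketch below is a standard orbital-mechanics calculation of my own (not taken from the piece): the orbit's plane must precess about one degree per day to keep pace with the Sun, and for a circular orbit that fixes the inclination.

```python
import math

# Illustrative textbook calculation: inclination needed for a sun-synchronous
# orbit at a given altitude, using the J2 nodal-precession formula.
# Constants and formula are standard values; nothing here comes from the article.
MU = 398_600.4418        # Earth's gravitational parameter, km^3/s^2
R_E = 6_378.137          # Earth's equatorial radius, km
J2 = 1.08263e-3          # Earth's oblateness coefficient
OMEGA_SUN = 2 * math.pi / (365.2422 * 86_400)  # required precession rate, rad/s

def sso_inclination_deg(altitude_km: float) -> float:
    a = R_E + altitude_km                  # semi-major axis of a circular orbit
    n = math.sqrt(MU / a**3)               # mean motion, rad/s
    cos_i = -2 * OMEGA_SUN * a**2 / (3 * J2 * n * R_E**2)
    return math.degrees(math.acos(cos_i))

print(f"{sso_inclination_deg(500):.1f} deg")  # roughly 97.4 deg: a retrograde, near-polar orbit
```

That slightly retrograde, near-polar tilt is why high-latitude spaceports with open water to the north are well placed for these launches: rockets can head straight toward the required orbital plane without overflying populated areas.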
  • Roundtables: AI Chatbots Have Joined the Chat
    www.technologyreview.com
Recorded on March 20, 2025
AI Chatbots Have Joined the Chat
Speakers: Rachel Courtland, commissioning editor; Rhiannon Williams, news reporter; and Eileen Guo, features & investigations reporter.
Chatbots are quickly changing how we connect to each other and ourselves. But are these changes for the better? How should they be monitored and regulated? Hear from MIT Technology Review editor Rachel Courtland in conversation with reporter Rhiannon Williams and senior reporter Eileen Guo as they unpack the landscape around chatbots.
Related coverage:
The AI relationship revolution is already here
An AI chatbot told a user how to kill himself, but the company doesn't want to censor it
  • The Download: the future of energy, and chatting about chatbots
    www.technologyreview.com
    This is todays edition ofThe Download,our weekday newsletter that provides a daily dose of whats going on in the world of technology.4 technologies that could power the future of energyWhere can you find lasers, electric guitars, and racks full of novel batteries, all in the same giant room? This week, the answer was the 2025 ARPA-E Energy Innovation Summit just outside Washington, DC.Energy innovation can take many forms, and the variety in energy research was on display at the summit. ARPA-E, part of the US Department of Energy, provides funding for high-risk, high-reward research projects. The summit gathers projects the agency has funded, along with investors, policymakers, and journalists.Hundreds of projects were exhibited in a massive hall during the conference, featuring demonstrations and research results. Here are four of the most interesting innovations MIT Technology Review spotted on site. Read the full story.Casey CrownhartIf youre interested in hearing more about what Casey learnt from the ARPA-E Energy Innovation Summit, check out the latest edition of The Spark, our weekly climate and energy newsletter. Sign up to receive it in your inbox every Wednesday.Join us today to chat about chatbotsChatbots are changing how we connect to each other and ourselves. But are these changes for the better, and how should they be monitored and regulated?To learn more, join me for a live Roundtable session today at 12pm ET. Ill be chatting with MIT Technology Review editor Rachel Courtland and senior reporter Eileen Guo, and well be unpacking the landscape around chatbots. Register to ensure you dont miss out!The must-readsIve combed the internet to find you todays most fun/important/scary/fascinating stories about technology.1 A French scientist was denied US entry over anti-Donald Trump messagesUS authorities claimed the exchanges criticising the Trump administrations research policy qualified as terrorism. (Le Monde)+ Frances research minister is a high-profile critic of Trump policy. (The Guardian)+ Customs and Border Protection is cracking down at airports across the US. (The Verge)2 RFK Jr wants to let bird flu spread through poultry farmsExperts warn that this approach isnt just dangerousit wont work. (Scientific American $)+ A bird flu outbreak has been confirmed in Scotland. (BBC)+ How the US is preparing for a potential bird flu pandemic. (MIT Technology Review)3 Clearview AI tried to buy millions of mugshots for its databasesBut negotiations between the facial recognition company and an intelligence firm broke down. (404 Media)4 Top US graduates are desperate to work for Chinese AI startupsDeepSeeks success has sparked major interest in firms outside America. (Bloomberg $)+ Four Chinese AI startups to watch beyond DeepSeek. (MIT Technology Review)5 Reddit has become a lifeline for US federal workersUnpaid moderators are working around the clock to help answer urgent questions. (NYT $)+ The only two democrats on the board of the FTC have been fired. (Vox)+ Elon Musk, DOGE, and the Evil Housekeeper Problem. (MIT Technology Review)6 The European Commission is targeting Apple and GoogleIts proceeding with regulatory action, despite the risk of retaliation from Trump. (FT $)+ It has accused Alphabet of favoring its own services in search results. (The Information $)+ Metas AI chatbot is finally launching in Europe after all. (The Verge)7 AI agents could spell bad news for shopping appsDoorDash and Uber could suffer if humans outsource their ordering to bots. 
(The Information $)+ Dunzo was a major delivery success story in India. So what happened? (Rest of World)+ Your most important customer may be AI. (MIT Technology Review)8 This startup is making concrete using CO2It combines the gas with a byproduct from coal power plants to make lower carbon concrete. (Fast Company $)+ How electricity could help tackle a surprising climate villain. (MIT Technology Review)9 This robot dog has a functional digital nervous systemAnd will be taught to walk by a real human dog trainer, not an algorithm. (Reuters)10 Dark matter could be getting weakerIf its true, it holds major implications for our understanding of the universe. (Quanta Magazine)+ Are we alone in the universe? (MIT Technology Review)Quote of the dayThe corrupting influence of billionaires in law enforcement is an issue that affects all of us.Alvaro Bedoya, a former commissioner at the Federal Trade Commission, speaks out after being fired by Donald Trump, the Verge reports.The big storyThe arrhythmia of our current ageOctober 2025Arrhythmia means the heart beats, but not in proper timea critical rhythm of life suddenly going rogue and unpredictable. Its frightening to experience, but what if its also a good metaphor for our current times? That a pulse once seemingly so steady is now less sure.Perhaps this wobbliness might be extrapolated into a broader sense of life in the 2020s.Maybe you feel it, toothat the world seems to have skipped more than a beat or two as demagogues rant and democracy shudders, hurricanes rage, and glaciers dissolve. We cant stop watching tiny screens where influencers pitch products we dont need alongside news about senseless wars that destroy, murder, and maim tens-of-thousands.All the resulting anxiety has been hard on our heartsliterally and metaphorically. Read the full story.David Ewing DuncanWe can still have nice thingsA place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet em at me.)+ Now that David Lynch is no longer with us, who is the flagbearer for transcendental meditation?+ Who doesnt love a little mindless comedyespecially when Leslie Nielsen is involved.+ Chinas pets are seriously pampered ($)+ The worlds oldest known cerapodan dinosaur, which were massive herbivores, has been discovered in Morocco.
  • The elephant in the room for energy tech? Uncertainty.
    www.technologyreview.com
At a conference dedicated to energy technology that I attended this week, I noticed an outward attitude of optimism and excitement. But it's hard to miss the current of uncertainty just underneath.

The ARPA-E Energy Innovation Summit, held this year just outside Washington, DC, gathers some of the most cutting-edge innovators working on everything from next-generation batteries to plants that can mine for metals. Researchers whose projects have received funding from ARPA-E (the part of the US Department of Energy that gives money to high-risk research in energy) gather to show their results and mingle with each other, investors, and nosy journalists like yours truly. (For more on a few of the coolest things I saw, check out this story.)

This year, though, there was an elephant in the room, and it's the current state of the US federal government. Or maybe it's climate change? In any case, the vibes were weird.

The last time I was at this conference, two years ago, climate change was a constant refrain on stage and in conversations. The central question was undoubtedly: How do we decarbonize, generate energy, and run our lives without relying on polluting fossil fuels?

This time around, I didn't hear the phrase "climate change" once during the opening session, including in speeches from US Secretary of Energy Chris Wright and acting ARPA-E director Daniel Cunningham. The focus was on American energy dominance, on how we can get our hands on more, more, more energy to meet growing demand.

Last week, Wright spoke at an energy conference in Houston and had a lot to say about climate, calling climate change a side effect of building the modern world and climate policies irrational and quasi-religious, and saying that when it came to climate action, the cure had become worse than the disease. I was anticipating similar talking points at the summit, but this week, climate change hardly got a mention.

What I noticed in Wright's speech, and in the choice of programming throughout the conference, is that some technologies appear to be among the favored, while others are decidedly less prominent. Nuclear power and fusion were definitely on the "in" list. There was a nuclear panel in the opening session, and in his remarks Wright called out companies like Commonwealth Fusion Systems and Zap Energy. He also praised small modular reactors.

Renewables, including wind and solar, were mentioned only in the context of their inconsistency. Wright dwelled on that, rather than on other facts I'd argue are just as important, like the fact that they are among the cheapest methods of generating electricity today.

In any case, Wright seemed appropriately hyped about energy, given his role in the administration. "Call me biased, but I think there's no more impactful place to work in than energy," Wright said during his opening remarks on the first morning of the summit. He sang the praises of energy innovation, calling it a tool to drive progress, and outlined his long career in the field.

This all comes after a chaotic couple of months for the federal government that are undoubtedly affecting the industry. Mass layoffs have hit federal agencies, including the Department of Energy. President Donald Trump very quickly tried to freeze spending from the Inflation Reduction Act, which includes tax credits and other support for EVs and power plants.

As I walked around the showcase and chatted with experts over coffee, I heard a range of reactions to the opening session and feelings about this moment for the energy sector. People working in industries the Trump administration seems to favor, like nuclear energy, tended to be more positive. Some in academia who rely on federal grants to fund their work were particularly nervous about what comes next. One researcher refused to talk to me when I said I was a journalist. In response to my questions about why they weren't able to discuss the technology on display at their booth, another member of the same project said only that it's "a wild time."

Making progress on energy technology doesn't require that we all agree on exactly why we're doing it. But in a moment when we need all the low-carbon technologies we can get to address climate change (a problem scientists overwhelmingly agree is a threat to our planet), I find it frustrating that politics can create such a chilling effect in some sectors.

At the conference, I listened to smart researchers talk about their work. I saw fascinating products and demonstrations, and I'm still optimistic about where energy can go. But I also worry that uncertainty about the future of research and government support for emerging technologies will leave some valuable innovations in the dust.

This article is from The Spark, MIT Technology Review's weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.
  • 4 technologies that could power the future of energy
    www.technologyreview.com
Where can you find lasers, electric guitars, and racks full of novel batteries, all in the same giant room? This week, the answer was the 2025 ARPA-E Energy Innovation Summit just outside Washington, DC.

Energy innovation can take many forms, and the variety in energy research was on display at the summit. ARPA-E, part of the US Department of Energy, provides funding for high-risk, high-reward research projects. The summit gathers projects the agency has funded, along with investors, policymakers, and journalists. Hundreds of projects were exhibited in a massive hall during the conference, featuring demonstrations and research results. Here are four of the most interesting innovations MIT Technology Review spotted on site.

Steel made with lasers
Startup Limelight Steel has developed a process to make iron, the main component in steel, by using lasers to heat iron ore to super-high temperatures.
Steel production makes up roughly 8% of global greenhouse-gas emissions today, in part because most steel is still made in blast furnaces, which rely on coal to hit the high temperatures that kick off the required chemical reactions. Limelight instead shines lasers on iron ore, heating it to temperatures over 1,600 °C. Molten iron can then be separated from impurities, and the iron can be put through existing processes to make steel.
The company has built a small demonstration system with a laser power of about 1.5 kilowatts, which can process between 10 and 20 grams of ore. The whole system is made up of 16 laser arrays, each just a bit larger than a postage stamp. The components in the demonstration system are commercially available; this particular type of laser is used in projectors. The startup has benefited from years of progress in the telecommunications industry that has helped bring down the cost of lasers, says Andy Zhao, the company's cofounder and CTO. The next step is to build a larger-scale system that will use 150 kilowatts of laser power and could make up to 100 tons of steel over the course of a year.

Rocks that can make fuel
The hunks of rock at a booth hosted by MIT might not seem all that high-tech, but someday they could help produce fuels and chemicals.
A major topic of conversation at the ARPA-E summit was geologic hydrogen: there's a ton of excitement about efforts to find underground deposits of the gas, which can be used as a fuel across a wide range of industries, including transportation and heavy industry. Last year, ARPA-E funded a handful of projects on the topic, including one in Iwnetim Abate's lab at MIT. Abate is among the researchers who are aiming not just to hunt for hydrogen but to actually use underground conditions to help produce it. Earlier this year, his team published research showing that by using catalysts and conditions common in the subsurface, scientists can produce hydrogen as well as other chemicals, like ammonia. Abate cofounded a spinout company, Addis Energy, to commercialize the research, which has since also received ARPA-E funding. All the rocks on the table, from the chunk of dark, hard basalt to the softer talc, could be used to produce these chemicals.

An electric guitar powered by iron nitride magnets
The sound of music drifted from the Niron Magnetics booth across nearby walkways. People wandering by stopped to take turns testing out the company's magnets, in the form of an electric guitar.
Most high-powered magnets today contain neodymium, and demand for them is set to skyrocket in the coming years, especially as the world builds more electric vehicles and wind turbines. Supplies could stretch thin, and the geopolitics are complicated because most of the supply comes from China. Niron is making new magnets that don't contain rare earth metals. Instead, Niron's technology is based on more abundant materials: nitrogen and iron.
The guitar is a demonstration product. The magnets in electric guitars today are typically made with aluminum, nickel, and cobalt, and they help translate the vibrations of steel strings into an electric signal that is broadcast through an amplifier. Niron made an instrument using its iron nitride magnets instead. (See photos of the guitar from an event last year here.)
Niron opened a pilot commercial facility in late 2024 that has the capacity to produce 10 tons of magnets annually. Since we last covered Niron, in early 2024, the company has announced plans for a full-scale plant, which will have an annual capacity of about 1,500 tons of magnets once it's fully ramped up.

Batteries for powering high-performance data centers
The increasing power demand from AI and data centers was another hot topic at the summit, with server racks dotting the showcase floor to demonstrate technologies aimed at the sector. One rack stuffed with batteries caught my eye, courtesy of Natron Energy.
The company is making sodium-ion batteries to help meet power demand from data centers. Data centers' energy demands can be incredibly variable, and as their total power needs get bigger, those swings can start to affect the grid. Natron's sodium-ion batteries can be installed at these facilities to help level off the biggest peaks, allowing computing equipment to run full out without overly taxing the grid, says Natron cofounder and CTO Colin Wessells.
Sodium-ion batteries are a cheaper alternative to lithium-based chemistries. They're also made without lithium, cobalt, and nickel, materials whose production or processing is constrained. We're seeing some varieties of sodium-ion batteries popping up in electric vehicles in China. Natron opened a production line in Michigan last year, and the company plans to open a $1.4 billion factory in North Carolina.
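To make the peak-leveling idea concrete, here is a minimal Python sketch of battery peak shaving of the kind Natron describes. It is purely illustrative: the load profile, the grid-draw cap, and the battery figures are invented for the example and are not figures from Natron or the summit.

```python
# Minimal peak-shaving sketch: a battery caps how much power a facility
# draws from the grid. All numbers below are illustrative, not Natron's.

def peak_shave(load_kw, grid_cap_kw, battery_kwh, max_rate_kw, step_h=0.25):
    """Return per-step grid draw when a battery absorbs demand above grid_cap_kw."""
    soc = battery_kwh  # state of charge, start fully charged (kWh)
    grid_draw = []
    for load in load_kw:
        if load > grid_cap_kw:
            # Discharge to cover the excess, limited by power rating and stored energy.
            discharge = min(load - grid_cap_kw, max_rate_kw, soc / step_h)
            soc -= discharge * step_h
            grid_draw.append(load - discharge)
        else:
            # Recharge using the headroom below the cap.
            charge = min(grid_cap_kw - load, max_rate_kw, (battery_kwh - soc) / step_h)
            soc += charge * step_h
            grid_draw.append(load + charge)
    return grid_draw

# Illustrative two-hour profile (kW, 15-minute steps) with a compute spike in the middle.
profile = [800, 820, 1500, 1600, 1550, 900, 850, 800]
print(peak_shave(profile, grid_cap_kw=1000, battery_kwh=500, max_rate_kw=700))
# -> [800, 820, 1000, 1000, 1000, 1000, 1000, 1000]
```

The point of the sketch is simply that the grid never sees demand above the cap as long as the battery has enough energy and power headroom to absorb the spike; real systems add efficiency losses, degradation limits, and forecasting.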
  • Powering the food industry with AI
    www.technologyreview.com
There has never been a more pressing time for food producers to harness technology to tackle the sector's tough mission: to produce ever more healthy and appealing food for a growing global population in a way that is resilient and affordable, all while minimizing waste and reducing the sector's environmental impact. From farm to factory, artificial intelligence and machine learning can support these goals by increasing efficiency, optimizing supply chains, and accelerating the research and development of new types of healthy products.

In agriculture, AI is already helping farmers to monitor crop health, tailor the delivery of inputs, and make harvesting more accurate and efficient. In labs, AI is powering experiments in gene editing to improve crop resilience and enhance the nutritional value of raw ingredients. For processed foods, AI is optimizing production economics, improving the texture and flavor of products like alternative proteins and healthier snacks, and strengthening food safety processes too.

But despite this promise, industry adoption still lags. Data sharing remains limited, and companies across the value chain have vastly different needs and capabilities. There are also few standards and data governance protocols in place, and more talent and skills are needed to keep pace with the technological wave. All the same, progress is being made, and the potential for AI in the food sector is huge. Key findings from the report are as follows:

Predictive analytics are accelerating R&D cycles in crop and food science. AI reduces the time and resources needed to experiment with new food products, and turns traditional trial-and-error cycles into more efficient, data-driven discoveries. Advanced models and simulations enable scientists to explore natural ingredients and processes by simulating thousands of conditions, configurations, and genetic variations until they crack the right combination.

AI is bringing data-driven insights to a fragmented supply chain. AI can revolutionize the food industry's complex value chain by breaking operational silos and translating vast streams of data into actionable intelligence. Notably, large language models (LLMs) and chatbots can serve as digital interpreters, democratizing access to data analysis for farmers and growers, and enabling more informed, strategic decisions by food companies (a minimal illustrative sketch of this idea follows at the end of this summary).

Partnerships are crucial for maximizing respective strengths. While large agricultural companies lead in AI implementation, promising breakthroughs often emerge from strategic collaborations with academic institutions and startups that leverage complementary strengths. Large companies contribute extensive datasets and industry experience, while startups bring innovation, creativity, and a clean data slate. Combining expertise in a collaborative approach can increase the uptake of AI.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review's editorial staff.
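As promised above, here is a minimal sketch of the "digital interpreter" idea: packaging structured field readings into a plain-language prompt that a grower-facing chatbot could pass to an LLM. The field names, readings, and the prompt-building helper are all hypothetical; the report does not describe any specific interface or API.

```python
# Illustrative only: turn structured field readings into a plain-language
# question an LLM could answer for a grower. All names and values are made up.

def build_prompt(field_readings, question):
    """Format sensor readings and a grower's question into a single prompt string."""
    lines = [f"- {name}: {value}" for name, value in field_readings.items()]
    return (
        "You are an agronomy assistant. Given these readings, answer the "
        "grower's question in plain language.\n\n"
        "Readings:\n" + "\n".join(lines) + f"\n\nQuestion: {question}\n"
    )

readings = {
    "soil moisture (top 30 cm)": "18% volumetric",
    "7-day rainfall": "4 mm",
    "canopy temperature": "31 C",
    "growth stage": "V6 (maize)",
}

prompt = build_prompt(readings, "Should I irrigate this week?")
print(prompt)  # In practice, this prompt would be sent to whatever LLM service is in use.
```

The value of such an interpreter is the translation step itself: raw telemetry becomes a question-and-answer exchange a grower can act on without needing to query the data directly.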
  • The Download: US aid disruptions, and imagining the future
    www.technologyreview.com
This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology.

HIV could infect 1,400 infants every day because of US aid disruptions
Around 1,400 infants are being infected by HIV every day as a result of the new US administration's cuts to funding for AIDS organizations, new modeling suggests.
In an executive order issued January 20, President Donald Trump paused new foreign aid funding to global health programs. Four days later, US Secretary of State Marco Rubio issued a stop-work order on existing foreign aid assistance. Surveys suggest that these changes forced more than a third of global organizations that provide essential HIV services to close within days of the announcements. Hundreds of thousands of people are losing access to HIV treatments as a result. Read the full story.
Jessica Hamzelou

MIT Technology Review Narrated: What the future holds for those born today
Happy birthday, baby.
You have been born into an era of intelligent machines. They have watched over you almost since your conception. They let your parents listen in on your tiny heartbeat, track your gestation on an app, and post your sonogram on social media. Well before you were born, you were known to the algorithm.
How will you and the next generation of machines grow up together? We asked more than a dozen experts to imagine your joint future.
This is our latest story to be turned into an MIT Technology Review Narrated podcast, which we're publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it's released.

The must-reads
I've combed the internet to find you today's most fun/important/scary/fascinating stories about technology.

1 A judge has ordered DOGE to cease dismantling USAID
It's been told to reinstate employees' email access and let them return to their offices. (WP $)
+ The judge believes its efforts probably violated the US Constitution. (Reuters)
+ The department has also targeted workers who prevent tech overspending. (The Intercept)
+ Can AI help DOGE slash government budgets? It's complex. (MIT Technology Review)

2 Can Oracle save TikTok?
A security proposal from the cloud giant could reportedly allow it to keep operating in the US. (Bloomberg $)
+ The deal would leave the app's algorithm in the hands of its Chinese parent company. (Politico)

3 NASA's astronauts have touched down on Earth
They safely landed off the coast of Florida yesterday evening. (FT $)
+ A pod of dolphins dropped by to witness the spectacle. (The Guardian)

4 AI is turning cybercrime into a digital arms race
Europol warns that more criminals than ever are exploiting AI tools for nefarious ends. (FT $)
+ Five ways criminals are using AI. (MIT Technology Review)

5 An Italian newspaper has published an edition produced entirely by AI
The technology was responsible for the irony too, apparently. (The Guardian)

6 Tesla's taxi service has been greenlit in California
But the road ahead is still full of obstacles. (Wired $)
+ Chinese EVs are snapping at Tesla's heels across the world. (Rest of World)
+ It certainly seems as though Asia will birth the next EV superpower. (Economist $)
+ Robotaxis are one of our 10 Breakthrough Technologies of 2025. (MIT Technology Review)

7 Online platforms are fueling facial dysmorphia
Hours of staring at their own faces made these women anxious and depressed. (NY Mag $)
+ The fight for Instagram face. (MIT Technology Review)

8 Inside the hunt for water on Mars
We know that the red planet was once host to it, but we don't know why. (Knowable Magazine)

9 This robotic spider is shedding light on how real spiders hunt
Namely, using a form of echolocation. (Ars Technica)

10 We could be dramatically underestimating the Earth's population
New data analysis suggests it could be much higher than previously thought. (New Scientist $)

Quote of the day
"In no uncertain terms is this an audit. It's a heist, stealing a vast amount of government data."
An anonymous auditor offers a scathing review of DOGE's attempts at auditing US government departments to Wired.

The big story
The humble oyster could hold the key to restoring coastal waters. Developers hate it.
October 2023
Carol Friend has taken on a difficult job. She is one of the 10 people in Delaware currently trying to make it as a cultivated oyster farmer. Her Salty Witch Oyster Company holds a lease to grow the mollusks as part of the state's new program for aquaculture, launched in 2017. It has sputtered despite its obvious promise.
Five years after the first farmed oysters went into the Inland Bays, the aquaculture industry remains in a larval stage. Oysters themselves are almost mythical in their ability to clean and filter water. But human willpower, investment, and flexibility are all required to allow the oysters to simply do their thing, particularly when developers start to object. Read the full story.
Anna Kramer

We can still have nice things
A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet 'em at me.)
+ If you're stuck for something to do this weekend, why not host a reading hang?
+ Do baby owls really sleep on their stomachs? Like most things in life, the truth is somewhere in the middle.
+ Keep your eyes peeled the next time you're in the British countryside; you might just spot a black leopard.
+ I couldn't agree more: why When Harry Met Sally is a perfect film.