Massachusetts Institute of Technology (MIT)
The Massachusetts Institute of Technology is a world leader in research and education.
  • 2 people like this
  • 479 posts
  • 2 photos
  • 0 videos
  • 0 previews
  • Science & Technology
Recent Updates
  • WWW.TECHNOLOGYREVIEW.COM
    Inside the controversial tree farms powering Apple’s carbon neutral goal
    We were losing the light, and still about 20 kilometers from the main road, when we stopped to look at the trees. The grove grew as if indifferent to certain unspoken rules of botany. There was no understory, no foreground or background, only the trees themselves, which grew as a wall of bare trunks that rose 100 feet or so before concluding with a burst of thick foliage near the top. The rows of trees ran perhaps the length of a New York City block and fell away abruptly on either side into untidy fields of dirt and grass. The vista recalled the husk of a failed condo development, its first apartments marooned when the builders ran out of cash. Standing there against the setting sun, the trees were, in their odd way, also rather stunning.

    I had no service out here—we had just left a remote nature preserve in southwestern Brazil—but I reached for my phone anyway, for a picture. The concern on the face of my travel partner, Clariana Vilela Borzone, a geographer and translator who grew up nearby, flicked to amusement. My camera roll was already full of eucalyptus. The trees sprouted from every hillside, along every road, and more always seemed to be coming. Across the dirt path where we were stopped, another pasture had been cleared for planting. The sparse bushes and trees that had once shaded cattle in the fields had been toppled and piled up, as if in a Pleistocene gravesite.

    Borzone’s friends and neighbors were divided on the aesthetics of these groves. Some liked the order and eternal verdancy they brought to their slice of the Cerrado, a large botanical region that arcs diagonally across Brazil’s midsection. Its native savanna landscape was largely gnarled, low-slung, and, for much of the year, rather brown. And since most of that flora had been cleared decades ago for cattle pasture, it was browner and flatter still. Now that land was becoming trees. It was becoming beautiful.

    Some locals say they like the order and eternal verdancy of the eucalyptus, which often stand in stark contrast to the Cerrado’s native savanna landscape. PABLO ALBARENGA

    Others considered this beauty a mirage. “Green deserts,” they called the groves, suggesting bounty from afar but holding only dirt and silence within. These were not actually forests teeming with animals and undergrowth, they charged, but at best tinder for a future megafire in a land parched, in part, by their vigorous growth. This was in fact a common complaint across Latin America: in Chile, the planted rows of eucalyptus were called the “green soldiers.” It was easy to imagine getting lost in the timber, a funhouse mirror of trunks as far as the eye could see.

    The timber companies that planted these trees push back on these criticisms as caricatures of a genus that’s demonized all over the world. They point to their sustainable forestry certifications and their handsome spending on fire suppression, and to the microphones they’ve placed that record cacophonies of birds and prove the groves are anything but barren. Whether people like the look of these trees or not, they are meeting a human need, filling an insatiable demand for paper and pulp products all over the world. Much of the material for the world’s toilet and tissue paper is grown in Brazil, and that, they argue, is a good thing: Grow fast and furious here, as responsibly as possible, to save many more trees elsewhere.

    But I was in this region for a different reason: Apple. And also Microsoft and Meta and TSMC, and many smaller technology firms too.
    I was here because these companies were counting on trees like these to cancel out their carbon emissions. Could eucalyptus really do that? On a practical level, the answer seemed straightforward. Nobody disputed how swiftly or reliably eucalyptus could grow in the tropics. This knowledge was the product of decades of scientific study and tabulations of biomass for wood or paper. Each tree was roughly 47% carbon, which meant that many tons of it could be stored within every planted hectare. This could be observed taking place in real time, in the trees by the road. Come back and look at these young trees tomorrow, and you’d see it: fresh millimeters of carbon, chains of cellulose set into lignin.

    At the same time, Apple and the others were also investing in an industry, and a tree, with a long and controversial history in this part of Brazil and elsewhere. They were exerting their wealth and technological oversight to try to make timber operations more sustainable, more supportive of native flora, and less water intensive. Still, that was a hard sell to some here, where hundreds of thousands of hectares of pasture are already in line for planting; more trees were a bleak prospect in a land increasingly racked by drought and fire. Critics called the entire exercise an excuse to plant even more trees for profit.

    Borzone and I did not plan to stay and watch the eucalyptus grow. Garden or forest or desert, ally or antagonist—it did not matter much with the emerging stars of the Southern Cross and our gas tank empty. We gathered our things from our car and set off down the dirt road through the trees.

    A big promise

    My journey into the Cerrado had begun months earlier, in the fall of 2023, when the actress Octavia Spencer appeared as Mother Nature in an ad alongside Apple CEO Tim Cook. In 2020, the company had set a goal to go “net zero” by the end of the decade, at which point all of its products—laptops, CPUs, phones, earbuds—would be produced without increasing the level of carbon in the atmosphere. “Who wants to disappoint me first?” Mother Nature asked with a sly smile. It was a third of the way to 2030—a date embraced by many corporations aiming to stay in line with the UN’s goal of limiting warming to 1.5 °C over preindustrial levels—and where was the progress?

    Apple CEO Tim Cook stares down Octavia Spencer as “Mother Nature” in their ad spot touting the company’s claims for carbon neutrality. APPLE VIA YOUTUBE

    Cook was glad to inform her of the good news: The new Apple Watch was leading the way. A limited supply of the devices was already carbon neutral, thanks to things like recycled materials and parts that were specially sent by ship—not flown—from one factory to another. These special watches were labeled with a green leaf on Apple’s iconically soft, white boxes.

    Critics were quick to point out that declaring an individual product “carbon neutral” while the company was still polluting had the whiff of an early victory lap, achieved with some convenient accounting. But the work on the watch spoke to the company’s grand ambitions. Apple claimed that changes like procuring renewable power and using recycled materials had enabled it to cut emissions 75% since 2015. “We’re always prioritizing reductions; they’ve got to come first,” Chris Busch, Apple’s director of environmental initiatives, told me soon after the launch.

    The company also acknowledged that it could not find reductions to balance all its emissions. But it was trying something new.

    Since the 1990s, companies have purchased carbon credits based largely on avoiding emissions.
    Take some patch of forest that was destined for destruction and protect it; the stored carbon that wasn’t lost is turned into credits. But as the carbon market expanded, so did suspicion of carbon math—in some cases, because of fraud or bad science, but also because efforts to contain deforestation are often frustrated, with destruction avoided in one place simply happening someplace else. Corporations that once counted on carbon credits for “avoided” emissions can no longer trust them. (Many consumers feel they can’t either, with some even suing Apple over the ways it used past carbon projects to make its claims about the Apple Watch.)

    But that demand to cancel out carbon dioxide hasn’t gone anywhere—if anything, as AI-driven emissions knock some companies off track from reaching their carbon targets (and raise questions about the techniques used to claim emissions reductions), the need is growing. For Apple, even under the rosiest assumptions about how much it will continue to pollute, the gap is significant: In 2024, the company reported offsetting 700,000 metric tons of CO2, but the number it will need to hit in 2030 to meet its goals is 9.6 million.

    So the new move is to invest in carbon “removal” rather than avoidance. The idea implies a more solid achievement: taking carbon molecules out of the atmosphere. There are many ways to attempt that, from trying to change the pH of the oceans so that they absorb more of the molecules to building machines that suck carbon straight out of the air. But these are long-term fixes. None of these technologies work at the scale and price that would help Apple and others meet their shorter-term targets. For that, trees have emerged again as the answer. This time the idea is to plant new ones instead of protecting old ones.

    To expand those efforts in a way that would make a meaningful dent in emissions, Apple determined, it would also need to make carbon removal profitable. A big part of this effort would be driven by the Restore Fund, a $200 million partnership with Goldman Sachs and Conservation International, a US environmental nonprofit. Profits would come from responsibly turning trees into products, Goldman’s head of sustainability explained when the fund was announced in 2021. But it was also an opportunity for Apple, and future investors, to “almost look at, touch, and feel their carbon,” he said—a concreteness that carbon credits had previously failed to offer. “The aim is to generate real, measurable carbon benefits, but to do that alongside financial returns,” Busch told me. It was intended as a flywheel of sorts: more investors, more planting, more carbon—an approach to climate action that looked to abundance rather than sacrifice.

    Apple markets its watch as a carbon-neutral product, based in part on the use of carbon credits. UNSPLASH; APPLE

    The announcement of the carbon-neutral Apple Watch was the occasion to promote the Restore Fund’s three initial investments, which included a native forestry project as well as eucalyptus farms in Paraguay and Brazil. The Brazilian timber plans were by far the largest in scale, and were managed by BTG Pactual, Latin America’s largest investment bank.

    Using eucalyptus for carbon removal also offered a new opportunity. Busch connected me with Mark Wishnie, who was overseeing a planned $1 billion initiative that was set to transform BTG’s timber portfolio; it aimed at a 50-50 split between timber and native restoration on old pastureland, with an emphasis on connecting habitats along rivers and streams.
    As a “high quality” project, it was meant to do better than business as usual. The conservation areas would exceed the legal requirements for native preservation in Brazil, which range from 20% to 35% in the Cerrado. In a part of Brazil that historically gets little conservation attention, it would potentially represent the largest effort yet to actually bring back the native landscape.

    When BTG approached Conservation International with the 50% figure, the organization thought it was “too good to be true,” Miguel Calmon, the senior director of the nonprofit’s Brazilian programs, told me. With the restoration work paid for by the green financing and the sale of carbon credits, scale and longevity could be achieved. “Some folks may do this, but they never do this as part of the business,” he said. “It comes from not a corporate responsibility. It’s about, really, the business that you can optimize.”

    So far, BTG has raised $630 million for the initiative and earmarked 270,000 hectares, an area more than double the size of the city of Los Angeles. The first farm in the plan, located on a 24,000-hectare cattle ranch, was called Project Alpha. The location, Wishnie said, was confidential.

    “We talk about restoration as if it’s a thing that happens,” Mark Wishnie said, promoting BTG’s plans to intermingle new farms alongside native preserves. COURTESY OF BTG

    But a property of that size sticks out, even in a land of large farms. It didn’t take very much digging into municipal land records in the Brazilian state of Mato Grosso do Sul, where many of the company’s Cerrado holdings are located, to turn up a recently sold farm that matched the size. It was called Fazenda Engano, or “Deception Farm”—hence the rebrand. The land was registered to an LLC with links to holding companies for other BTG eucalyptus plantations located in a neighboring region that locals had taken to calling the Cellulose Valley for its fast-expanding tree farms and pulp factories.

    The area was largely seen as a land of opportunity, even as some locals had raised the alarm over concerns that the land couldn’t handle the trees. They had allies in prominent ecologists who have long questioned the wisdom of tree-planting in the Cerrado—and increasingly spar with other conservationists who see great potential in turning pasture into forest. The fight has only gotten more heated as more investors hunt for new climate solutions.

    Still, where Apple goes, others often follow. And when it comes to sustainability, other companies look to it as a leader. I wasn’t sure if I could visit Project Alpha and see whether Apple and its partners had really found a better way to plant, but I started making plans to go to the Cerrado anyway, to see the forests behind those little green leaves on the box.

    Complex calculations

    In 2015, a study by Thomas Crowther, an ecologist then at ETH Zürich, attempted a census of global tree cover, finding more than 3 trillion trees in all. A useful number, surprisingly hard to divine, like counting insects or bacteria.

    A follow-up study a few years later proved more controversial: Earth’s surface held space for at least 1 trillion more trees. That represented a chance to store 200 metric gigatons, or about 25%, of atmospheric carbon once they matured. (The paper was later corrected in multiple ways, including an acknowledgment that the carbon storage potential could be about one-third less.)
    The study became a media sensation, soon followed by a fleet of tree-planting initiatives with “trillion” in the name—most prominently through a World Economic Forum effort launched by Salesforce CEO Marc Benioff at Davos, which President Donald Trump pledged to support during his first term.

    But for as long as tree planting has been heralded as a good deed—from Johnny Appleseed to programs that promise a tree for every shoe or laptop purchased—the act has also been chased closely by a follow-up question: How many of those trees survive? Consider Trump’s most notable planting, which placed an oak on the White House grounds in 2018. It died just over a year later.

    During President Donald Trump’s first term, he and French President Emmanuel Macron planted an oak on the South Lawn of the White House. CHIP SOMODEVILLA/GETTY IMAGES

    To critics, including Bill Gates, the efforts were symbolic of short-term thinking at the expense of deeper efforts to cut or remove carbon. (Gates’s spat with Benioff descended to name-calling in the New York Times. “Are we the science people or are we the idiots?” he asked.) The lifespan of a tree, after all, is brief—a pit stop—compared with the thousand-year carbon cycle, so its progeny must carry the torch to meaningfully cancel out emissions. Most don’t last that long.

    “The number of trees planted has become a kind of currency, but it’s meaningless,” Pedro Brancalion, a professor of tropical forestry at the University of São Paulo, told me. He had nothing against the trees, which the world could, in general, use a lot more of. But to him, a lot of efforts were riding more on “good vibes” than on careful strategy.

    Soon after arriving in São Paulo last summer, I drove some 150 miles into the hills outside the city to see the outdoor lab Brancalion has filled with experiments on how to plant trees better: trees given too many nutrients or too little; saplings monitored with wires and tubes like ICU admits, or skirted with tarps that snatch away rainwater. At the center of one of Brancalion’s plots stands a tower topped with a whirling station, the size of a hobby drone, monitoring carbon going in and out of the air (and, therefore, the nearby vegetation)—a molecular tango known as flux.

    Brancalion works part-time for a carbon-focused restoration company, Re:Green, which had recently sold 3 million carbon credits to Microsoft and was raising a mix of native trees in parts of the Amazon and the Atlantic Forest. While most of the trees in his lab were native ones too, like jacaranda and brazilwood, he also studies eucalyptus. The lab in fact sat on a former eucalyptus farm; in the heart of his fields, a grove of 80-year-old trees dripped bark like molting reptiles.

    To Pedro Brancalion, a lot of tree-planting efforts were riding more on “good vibes” than on careful strategy. He experiments with new ways to grow eucalyptus interspersed with native species. PABLO ALBARENGA

    Eucalyptus planting swelled dramatically under Brazil’s military dictatorship in the 1960s. The goal was self-sufficiency—a nation’s worth of timber and charcoal, quickly—and the expansion was fraught. Many opinions of the tree were forged in a spate of dubious land seizures followed by clearing of the existing vegetation—disputes that, in some places, linger to this day. Still, that campaign is also said to have done just as Wishnie described, easing the demand that would have been put on regions like the Amazon as Rio and São Paulo were built.
The new trees also laid the foundation for Brazil to become a global hub for engineered forestry; it’s currently home to about a third of the world’s farmed eucalyptus. Today’s saplings are the products of decades of tinkering with clonal breeding, growing quick and straight, resistant to pestilence and drought, with exacting growth curves that chart biomass over time: Seven years to maturity is standard for pulp. Trees planted today grow more than three times as fast as their ancestors. 
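    The carbon arithmetic behind these plantations can be made concrete. Here is a minimal back-of-the-envelope sketch in Python, using only the figures reported in this story (47% carbon by mass, a seven-year rotation, Apple’s 9.6-million-ton 2030 target); the per-hectare biomass figure is a hypothetical placeholder, not a number from the article:

        # Rough carbon math for a planted eucalyptus stand.
        # The 47% carbon fraction, 7-year rotation, and 9.6 Mt CO2 target come
        # from the article; biomass per hectare is an ASSUMED placeholder.
        CARBON_FRACTION = 0.47        # share of dry wood that is carbon (per the article)
        CO2_PER_TON_C = 44.0 / 12.0   # molar-mass ratio converting tons of C to tons of CO2
        ROTATION_YEARS = 7            # standard time to pulp maturity (per the article)

        biomass_t_per_ha = 150.0      # hypothetical dry biomass at harvest, t/ha

        carbon_t_per_ha = biomass_t_per_ha * CARBON_FRACTION   # ~70 t C/ha
        co2_t_per_ha = carbon_t_per_ha * CO2_PER_TON_C         # ~259 t CO2/ha

        apple_2030_gap_t = 9_600_000  # t CO2 Apple must offset in 2030 (per the article)
        hectares_needed = apple_2030_gap_t / co2_t_per_ha      # ~37,000 ha

        print(f"{co2_t_per_ha:.0f} t CO2/ha per {ROTATION_YEARS}-year rotation")
        print(f"{hectares_needed:,.0f} ha of mature plantation to cover the 2030 gap once")

    Even under this generous biomass assumption, covering a single year of Apple’s gap takes tens of thousands of hectares of mature plantation, which gives a sense of why projects like BTG’s 270,000 earmarked hectares—and the fights over them—are measured at this scale.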
  • WWW.TECHNOLOGYREVIEW.COM
    Roundtables: Brain-Computer Interfaces: From Promise to Product
    Speakers: David Rotman, editor at large, and Antonio Regalado, senior editor for biomedicine. Brain-computer interfaces (BCIs) have been crowned the 11th Breakthrough Technology of 2025 by MIT Technology Review's readers. BCIs are electrodes implanted into the brain to send neural commands to computers, primarily to assist paralyzed people. Hear from MIT Technology Review editor at large David Rotman and senior editor for biomedicine Antonio Regalado as they explore the past, present, and future of BCIs.
  • WWW.TECHNOLOGYREVIEW.COM
    3 Things Caiwei Chen is into right now
    A new play about OpenAI

    I recently saw Doomers, a new play by Matthew Gasda about the aborted 2023 coup at OpenAI, here represented by a fictional company called MindMesh. The action is set almost entirely in a meeting room; the first act follows executives immediately after the firing of company CEO Seth (a stand-in for Sam Altman), and the second re-creates the board negotiations that determined his fate. It’s a solid attempt to capture the zeitgeist of Silicon Valley’s AI frenzy and the world’s moral panic over artificial intelligence, but the rapid-fire, high-stakes exchanges mean it sometimes seems to get lost in its own verbosity.

    Themed dinner parties and culinary experiments

    The vastness of Chinese cuisine defies easy categorization, and even in a city with no shortage of options, I often find myself cooking—not just to recapture something closer to home, but to create a home unlike one that ever existed. Recently, I’ve been experimenting with a Chinese take on the charcuterie board—pairing toasted steamed buns, called mantou, with furu, a fermented tofu spread that is sharp, pungent, and full of umami.

    Sewing and copying my own clothes

    I started sewing three years ago, but only in the past year have I begun making clothes from scratch. As a lover of vintage fashion—especially ’80s silhouettes—I started out with old patterns I found on Etsy. But recently, I tried something new: copying a beloved dress I bought in a thrift store in Beijing years ago. Doing this is quite literally a process of reverse-engineering—pinning the garment down, tracing its seams, deconstructing its logic, and rebuilding it. At times my brain feels like an old Mac hitting its GPU limit. But when it works, it feels like a small act of magic. It’s an exercise in certainty, the very thing that drew me to fashion in the first place—a chance to inhabit something that feels like an extension of myself.
  • WWW.TECHNOLOGYREVIEW.COM
    The Download: introducing the Creativity issue
    This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology.

    Introducing: the Creativity issue

    The university computer lab may seem like an unlikely center for creativity. We tend to think of creativity as happening more in the artist’s studio or writers’ workshop. But throughout history, very often our greatest creative leaps—and I would argue that the web and its descendants represent one such leap—have been due to advances in technology.

    But the key to artistic achievement has never been the technology itself. It has been the way artists have applied it to express our humanity.

    This latest issue of our magazine, which was entirely produced by human beings using computers, explores creativity and the tension between the artist and technology. We hope you enjoy reading it as much as we enjoyed putting it together. —Mat Honan, editor in chief

    Here’s just a taste of what you can expect:

    + AI is warping our expectations of music. New diffusion AI models that make songs from scratch are complicating our definitions of authorship and human creativity. Read the full story.
    + Meet the researchers testing the “Armageddon” approach to asteroid defense. Read the full story.
    + How the federal government is tracking changes in the supply of street drugs. A new harm reduction initiative is helping prevent needless deaths. Read the full story.
    + How AI is ushering in a new era of co-creativity, laying the groundwork for a future in which humans and machines create things together. Read the full story.
    + South Korea’s graphic artists are divided over whether AI will immortalize their work or threaten their creativity.
    + A new biosensor can detect bird flu in just five minutes. Read the full story.

    MIT Technology Review Narrated: Quantum computing is taking on its biggest challenge—noise

    For a while researchers thought they’d have to make do with noisy, error-prone systems, at least in the near term. That’s starting to change. This is our latest story to be turned into a MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

    Join us today to chat about brain-computer interfaces

    Brain-computer interfaces are electrodes implanted into the brain to send neural commands to computers, primarily to assist paralyzed people, and our readers recently named them as the 11th Breakthrough Technology of 2025 in our annual list. So what are the next steps for companies like Neuralink, Synchron, and Neuracle? And will they be able to help paralyzed people at scale? Join our editor at large David Rotman and senior editor for biomedicine Antonio Regalado today for an exclusive subscriber-only Roundtable discussion exploring the past, present, and future of brain-computer interfaces. Register here to tune in at 1pm ET this afternoon!

    The must-reads

    I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

    1 OpenAI is interested in buying Chrome from Google
    ChatGPT’s head of product Nick Turley said folding its tech into Chrome would improve it greatly. (Bloomberg $)
    + It would be just one of many prospective buyers. (Insider $)
    + Turley would also be happy with a distribution deal with Google. (The Information $)

    2 Instagram’s founder says Meta starved it of resources
    Kevin Systrom believes Mark Zuckerberg saw the app as a threat to Facebook. (NYT $)
    + It sounds as if the pair had a strained relationship. (The Verge)

    3 Elon Musk will step back from DOGE next month
    In his absence, Tesla’s profits have plummeted. (WP $)
    + But he’ll still spend a day or so a week working on US government matters. (CNBC)
    + There’s no denying that his political activities have damaged Tesla’s brand. (WSJ $)
    + DOGE’s tech takeover threatens the safety and stability of our critical data. (MIT Technology Review)

    4 Chinese scientists and students are under scrutiny in the US
    It’s a repeat of the China Initiative program launched under Trump’s first Presidency. (WSJ $)
    + US universities are starting to push back against government overreach. (Ars Technica)
    + The FBI accused him of spying for China. It ruined his life. (MIT Technology Review)

    5 Rare earth elements aren’t so rare after all
    Which is bad news for China. (Wired $)
    + But China’s export curbs are harming Tesla’s Optimus robot production. (Reuters)
    + This rare earth metal shows us the future of our planet’s resources. (MIT Technology Review)

    6 How to wean yourself off fossil fuels
    Massive home batteries are an intriguing energy alternative. (Vox)

    7 A new mission to grow food in space has blasted off
    Scientists are investigating creating food from single cells in orbit. (BBC)
    + Future space food could be made from astronaut breath. (MIT Technology Review)

    8 It’s time to bid farewell to Skype
    RIP to the OG video calling platform. (Rest of World)

    9 Analysts are using AI to psychologically profile top soccer players ⚽
    And also to spot bright young talent. (The Guardian)

    10 Saving the world’s seeds is a tricky business 🌱
    They’re the first line of defense against extinction. (Knowable Magazine)
    + The weeds are winning. (MIT Technology Review)

    Quote of the day

    “Stuffing Chrome with even more AI crap is one way to spur browser innovation, I guess.”

    —Tech critic Paris Marx isn’t convinced that OpenAI buying Chrome would improve it, in a post on Bluesky.

    The big story

    How gamification took over the world

    It’s a thought that occurs to every video-game player at some point: What if the weird, hyper-focused state I enter when playing in virtual worlds could somehow be applied to the real one? Often pondered during especially challenging or tedious tasks in meatspace (writing essays, say, or doing your taxes), it’s an eminently reasonable question to ask. Life, after all, is hard. And while video games are too, there’s something almost magical about the way they can promote sustained bouts of superhuman concentration and resolve.

    For some, this phenomenon leads to an interest in flow states and immersion. For others, it’s simply a reason to play more games. For a handful of consultants, startup gurus, and game designers in the late 2000s, it became the key to unlocking our true human potential. But instead of liberating us, gamification turned out to be just another tool for coercion, distraction, and control. Read the full story. —Bryan Gardiner

    We can still have nice things

    A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet 'em at me.)

    + Succession creator Jesse Armstrong’s new film Mountainhead looks intriguing.
    + Domestic cats have a much more complicated history than we previously realized.
    + If you enjoyed the new vampire flick Sinners, you’ll love these Indian folk horrors.
    + This hispi cabbage side dish looks incredible.
  • WWW.TECHNOLOGYREVIEW.COM
    Seeing AI as a collaborator, not a creator
    But none of that would have been possible if I hadn’t been bored and curious. And more to the point: curious about tech.

    The university computer lab may seem at first like an unlikely center for creativity. We tend to think of creativity as happening more in the artist’s studio or writers’ workshop. But throughout history, very often our greatest creative leaps—and I would argue that the web and its descendants represent one such leap—have been due to advances in technology.

    There are the big easy examples, like photography or the printing press, but it’s also true of all sorts of creative inventions that we often take for granted. Oil paints. Theaters. Musical scores. Electric synthesizers! Almost anywhere you look in the arts, perhaps outside of pure vocalization, technology has played a role.

    But the key to artistic achievement has never been the technology itself. It has been the way artists have applied it to express our humanity. Think of the way we talk about the arts. We often compliment works with words that refer to our humanity, like soul, heart, and life; we often criticize them with descriptors such as sterile, clinical, or lifeless. (And sure, you can love a sterile piece of art, but typically that’s because the artist has leaned into sterility to make a point about humanity!)

    All of which is to say I think that AI can be, will be, and already is a tool for creative expression, but that true art will always be something steered by human creativity, not machines. I could be wrong. I hope not.

    This issue, which was entirely produced by human beings using computers, explores creativity and the tension between the artist and technology. You can see it on our cover illustrated by Tom Humberstone, and read about it in stories from James O’Donnell, Will Douglas Heaven, Rebecca Ackermann, Michelle Kim, Bryan Gardiner, and Allison Arieff.

    Yet of course, creativity is about more than just the arts. All of human advancement stems from creativity, because creativity is how we solve problems. So it was important to us to bring you accounts of that as well. You’ll find those in stories from Carrie Klein, Carly Kay, Matthew Ponsford, and Robin George Andrews. (If you’ve ever wanted to know how we might nuke an asteroid, this is the issue for you!)

    We’re also trying to get a little more creative ourselves. Over the next few issues, you’ll notice some changes coming to this magazine with the addition of some new regular items (see Caiwei Chen’s “3 Things” for one such example). Among those changes, we are planning to solicit and publish more regular reader feedback and answer questions you may have about technology. We invite you to get creative and email us: newsroom@technologyreview.com. As always, thanks for reading.
  • WWW.TECHNOLOGYREVIEW.COM
    Why we still need AM radio
    Ariel Aberg-Riger is the author of America Redux: Visual Stories from Our Dynamic History.
  • WWW.TECHNOLOGYREVIEW.COM
    Building better cities
    Clara Brenner, MBA ’12, arrived in Cambridge on the lookout for a business partner. She wanted to start her own company—and never have to deal with a boss again. She would go it alone if she had to, but she hoped to find someone whose skills would complement her own.

    It’s a common MBA tale. Many people attend business school with hopes of finding the one. Building that relationship is so important to a company’s foundation that it’s been described in romantic terms: Networking is akin to dating around, and some view settling down with a business partner as a marriage of sorts.

    Brenner didn’t have to look for long. She met her match—Julie Lein, MBA ’12—soon after arriving at Sloan more than a decade ago. But their first encounter wasn’t exactly auspicious. In fact, their relationship began with an expletive.

    Lein was sitting at a card table in a hallway in E52, glumly selling tickets to a fashion show featuring work-appropriate clothes for women—at that time, the marquee event for Sloan’s Women in Management Club, and one that both Lein and Brenner thought was patently absurd. Lein had no interest in attending, but she wanted to support the club’s mission of boosting women in business. “She looked very miserable,” says Brenner. Lein asked if she wanted to buy a ticket, Brenner recalls, and “I think I said, ‘F*** no.’”

    “We both bonded over the fact that this was such a stupid idea,” says Lein. (The fashion show has since been retired, in part thanks to Lein and Brenner’s lobbying.)

    Today, the two run the Urban Innovation Fund, a San Francisco–based venture capital firm that has raised $212 million since 2016 and invested in 64 startups addressing the most pressing problems facing cities. It has supported businesses like Electriphi, a provider of EV charging and fleet management software, which was acquired by one of the biggest names in the auto industry. And it funds companies focused on helping kids learn to code, providing virtual tutoring services, offering financing for affordable housing, and more. The companies in its portfolio have a total value of $5.3 billion, and at least eight have been acquired thus far.

    Though Brenner and Lein hit it off quickly, they weren’t an obvious fit as business partners. Brenner arrived at Sloan after weathering an early career in commercial real estate just after the 2008 financial crash. She hoped to start her own company in that industry. Lein, on the other hand, had worked in political polling and consulting. She initially planned to get an advanced policy degree, until a mentor suggested an MBA. She hoped to start her own political polling firm after graduation.

    Ultimately, though, their instant kinship became more important than their subject matter expertise. Brenner, says Lein, is “methodical” and organized, while she “just goes and executes” without overthinking. Their relationship—in business, and still as close friends—is rooted in trust and a commitment to realizing the vision they’ve created together. “We were able to see that ... our skills and style were very complementary, and we just were able to do things better and faster together,” says Brenner.

    In 2012, the two teamed up to run Sloan’s second Women in Management Conference, which they had helped found the year before. It was then, they say, that they knew they would work together after graduation.

    Still, they had trouble agreeing on the type of venture that made the most sense.
    Their initial talks involved a tug-of-war over whose area of expertise would win—real estate or policy. But in the summer of 2011, they’d both happened to land internships at companies focused on challenges in cities—companies that would now be called “urban-tech startups,” says Brenner, though that term was not used at the time. The overlap was fortuitous: When they compared notes, they agreed that it made sense to investigate the potential for companies in that emerging space. Lyft was just getting its start, as was Airbnb. After exploring the idea further, the two concluded there was some “there” there.

    “We felt like all these companies had a lot in common,” says Brenner. “They were solving very interesting community challenges in cities, but in a very scalable, nontraditional way.” They were also working in highly regulated areas that VC firms were often hesitant to touch, even though these companies were attracting significant attention.

    To Brenner and Lein, some of that attention was the wrong kind; companies like Uber were making what they saw as obvious missteps that were landing in the news. “No one was helping [these companies] with, like, ‘You should hire a lobbyist’ or ‘You should have a policy team,’” says Brenner. The two saw an opportunity to fund businesses that could make a measurable positive impact on urban life—and to help them navigate regulatory and policy environments as they grew from startups to huge companies.

    Upon graduating in 2012, they launched Tumml, an accelerator program for such startups. The name was drawn from the Yiddish word tummler, often used by Brenner’s grandmother to describe someone who inspires others to action.

    At the time, Brenner says, “world-positive investing” was “not cool at all” among funders because it was perceived as yielding lower returns, even though growing numbers of tech companies were touting their efforts to improve society. In another unusual move, the partners structured their startup accelerator as a nonprofit evergreen fund, allowing them to invest in companies continuously without setting a fixed end date. By the end of their third year, they were supporting 38 startups.

    Tumml found success by offering money, mentorship, and guidance, but the pair realized that relying solely on fickle philanthropic funding meant the model had a ceiling. To expand their work, they retired Tumml and launched the Urban Innovation Fund in 2016 with $24.5 million in initial investments. While Tumml had offered relatively small checks and support to companies at the earliest stages, UIF would allow Brenner and Lein to supercharge their funding and involvement.

    Their focus has remained on startups tackling urban problems in areas such as public health, education, and transportation. The types of companies they look for are those that drive economic vitality in cities, make urban areas more livable, or make cities more sustainable. As Tumml did, UIF provides not just funding but also consistent support in navigating regulatory challenges. And, like Tumml, UIF has taken on industries or companies that other investors may see as risky. When it was raising its first fund, Lein remembers, they pitched a large institution on its vision, which includes investing in companies that work on climate and energy.
    The organization, burned by the money it lost when the first cleantech bubble burst, was extremely wary—it wasn’t interested in a fund that emphasized those areas. But Lein and Brenner pressed on. Today, climate tech remains one of the fund’s largest areas, accounting for more than a sixth of its portfolio of 64 companies (see “Urban innovation in action”). In addition to Electriphi, they have invested in Public Grid, a company that gives households access to affordable clean energy, and Optiwatt, an app that helps EV drivers schedule charging at times of day when it is cheaper or cleaner.

    “They took risks in areas, [including] mobility and transportation, where other people might not play because of policy and regulation risk. And they were willing to think about the public-private partnerships and what might be needed,” says Rachel Sheinbein, MBA ’04, SM ’04, a Bay Area–based angel investor who has worked with the Urban Innovation Fund on investments. “They weren’t afraid to take that on.”

    Lein and Brenner have also invested in health companies like Cleancard, which is working to provide at-home testing for cancers, and startups creating workflow tools, like KarmaSuite, which has built software to help nonprofits track grants.

    Meanwhile, they have cast a wide net and built a portfolio rich in companies that happen to be led by entrepreneurs from underrepresented groups: Three-quarters of the companies in UIF’s current portfolio were founded by women or people of color, and nearly 60% include an immigrant on their founding team.

    When it comes to selecting companies, Brenner says, they make “very calculated decisions” based in part on regulatory factors that may affect profits. But they’re still looking for the huge returns that drive other investors. “It’s a very, very small subset of companies that can both work on a problem that, at least in our minds, really matters and be an enormous business,” she says. “Those are really the companies that we’re looking for.”

    One of the most obvious examples of that winning combination is Electriphi. When Brenner and Lein invested in the company, in 2019, the Biden administration hadn’t mandated the electrification of federal auto fleets, and the Inflation Reduction Act, which included financial incentives for clean energy, hadn’t yet been drafted. And California had yet to announce its intention to completely phase out gas-powered cars. “It was not a hot space,” says Brenner.

    But after meeting with Electriphi’s team, both Brenner and Lein felt there was something there. The partners tracked the startup for months, saw it achieving its goals, and ended up offering it the largest investment, by several orders of magnitude, that their fund had ever made. Less than two years later, Ford acquired it for an undisclosed sum.

    “When we were originally talking about Electriphi, a lot of people were like, ‘Eh, it’s going to take too long for fleets to transition, and we don’t want to make a bet at this time,’” Sheinbein recalls. But she says the partners at Urban Innovation Fund were willing to take on an investment that other people were “still a little bit hesitant” about. Sheinbein also invested in the startup.

    Impact investing has now taken root in the building where Lein and Brenner first met.
    What was once an often overlooked investing area, says Bill Aulet, SM ’94, managing director of the Martin Trust Center for MIT Entrepreneurship, is now a core element of how Sloan teaches entrepreneurship. Aulet sees Urban Innovation Fund’s social-enterprise investing strategy as very viable in the current market. “Will it outperform cryptocurrency? Not right now,” he says, but he adds that many people want to put their money toward companies with the potential to improve the world.

    Lein, who worked as Aulet’s teaching assistant at Sloan for a class now known as Entrepreneurship 101, helped establish the mold at Sloan for a social-impact entrepreneur—that is, someone who sees doing good as a critical objective, not just a marketing strategy. “Entrepreneurs don’t just have to found startups,” says Aulet. “You can also be what we call an entrepreneurship amplifier,” which he defines as “someone who helps entrepreneurship thrive.”

    When they make investments, VCs tend to prioritize such things as the need for a company’s products and the size of its potential market. Brenner and Lein say they pay the most attention to the team when deciding whether to make a bet: Do they work together well? Are they obsessive about accomplishing their goals? Those who have watched UIF grow say Brenner and Lein’s partnership fits that profile itself.

    “I can just tell when a team really respects each other and [each] sees the value in the other one’s brain,” says Sheinbein. For Lein and Brenner, she says, their “mutual respect and admiration for each other” is obvious.

    “We went to Sloan, we spent a bunch of money, but we found each other,” says Lein. “We couldn’t agree on a new urban-tech startup to start,” she adds, so instead, they built an ecosystem of them—all in the name of improving cities for the people who live there.
  • WWW.TECHNOLOGYREVIEW.COM
    Unleashing the potential of qubits, one molecule at a time
    It all began with a simple origami model.

    As an undergrad at Harvard, Danna Freedman went to a professor’s office hours for her general chemistry class and came across an elegant paper model that depicted the fullerene molecule. The intricately folded representation of chemical bonds and atomic arrangements sparked her interest, igniting a profound curiosity about how the structure of molecules influences their function.

    She stayed and chatted with the professor after the other students left, and he persuaded her to drop his class so she could instead dive immediately into the study of chemistry at a higher level. Soon she was hooked. After graduating with a chemistry degree, Freedman earned a PhD at the University of California, Berkeley, did a postdoc at MIT, and joined the faculty at Northwestern University. In 2021, she returned to MIT as the Frederick George Keyes Professor of Chemistry.

    Freedman’s fascination with the relationship between form and function at the molecular level laid the groundwork for a trailblazing career in quantum information science, eventually leading her to be honored with a 2022 MacArthur fellowship—and the accompanying “genius” grant—as one of the leading figures in the field.

    Today, her eyes light up when she talks about the “beauty” of chemistry, which is how she sees the intricate dance of atoms that dictates a molecule’s behavior. At MIT, Freedman focuses on creating novel molecules with specific properties that could revolutionize the technology of sensing, leading to unprecedented levels of precision.

    Designer molecules

    Early in her graduate studies, Freedman noticed that many chemistry research papers claimed to contribute to the development of quantum computing, which exploits the behavior of matter at extremely small scales to deliver much more computational power than a conventional computer can achieve. While the ambition was clear, Freedman wasn’t convinced. When she read these papers carefully, she found that her skepticism was warranted. “I realized that nobody was trying to design magnetic molecules for the actual goal of quantum computing!” she says. Such molecules would be suited to acting as quantum bits, or qubits, the basic unit of information in quantum systems. But the research she was reading about had little to do with that.

    Nevertheless, that realization got Freedman thinking—could molecules be designed to serve as qubits? She decided to find out. Her work made her among the first to use chemistry in a way that demonstrably advanced the field of quantum information science, which she describes as a general term encompassing the use of quantum technology for computation, sensing, measurement, and communication.

    Unlike traditional bits, which can only equal 0 or 1, qubits are capable of “superposition”—simultaneously existing in multiple states. This is why quantum computers made from qubits can solve large problems faster than classical computers. Freedman, however, has always been far more interested in tapping into qubits’ potential to serve as exquisitely precise sensors. Qubits encode information in quantum properties—such as spin and energy—that can be easily disrupted.
    While the delicacy of those properties makes qubits hard to control, it also makes them especially sensitive and therefore very useful as sensors. Harnessing the power of qubits is notoriously tricky, though. For example, two of the most common types—superconducting qubits, which are often made of thin aluminum layers, and trapped-ion qubits, which use the energy levels of an ion’s electrons to represent 1s and 0s—must be kept at temperatures approaching absolute zero (–273 °C). Maintaining special refrigerators to keep them cool can be costly and difficult. And while researchers have made significant progress recently, both types of qubits have historically been difficult to connect into larger systems.

    Eager to explore the potential of molecular qubits, Freedman has pioneered a unique “bottom-up” approach to creating them: She designs novel molecules with specific quantum properties to serve as qubits targeted for individual applications. Instead of focusing on a general goal such as maximizing coherence time (how long a qubit can preserve its quantum state), she begins by asking what kinds of properties are needed for, say, a sensor meant to measure biological phenomena at the molecular level. Then she and her team set out to create molecules that have these properties and are suitable for the environment where they’d be used.

    To determine the precise structure of a new molecule, Freedman’s team uses software to analyze and process visualizations (such as those in teal and pink above) of data collected by an x-ray diffractometer. The diagram at right depicts an organometallic Cr(IV) complex made of a central chromium atom and four hydrocarbon ligands. COURTESY OF DANNA FREEDMAN

    Made of a central metallic atom surrounded by hydrocarbon ligands, molecular qubits store information in their spin. The encoded information is later translated into photons, which are emitted to “read out” the information. These qubits can be tuned with laser precision—imagine adjusting a radio dial—by modifying the strength of the ligands, or bonds, connecting the hydrocarbons to the metal atom. These bonds act like tiny tuning forks; by adjusting their strength, the researchers can precisely control the qubit’s spin and the wavelength of the emitted photons. That emitted light can be used to provide information about atomic-level changes in electrical or magnetic fields.

    While many researchers are eager to build reliable, scalable quantum computers, Freedman and her group devote most of their attention to developing custom molecules for quantum sensors. These ultrasensitive sensors contain particles in a state so delicately balanced that extremely small changes in their environments unbalance them, causing them to emit light differently. For example, one qubit designed in Freedman’s lab, made of a chromium atom surrounded by four hydrocarbon molecules, can be customized so that tiny changes in the strength of a nearby magnetic field will change its light emissions in a particular way.

    A key benefit of using such molecules for sensing is that they are small enough—just a nanometer or so wide—to get extremely close to the thing they are sensing. That can offer an unprecedented level of precision when measuring something like the surface magnetism of two-dimensional materials, since the strength of a magnetic field decays with distance.
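    To make the sensing principle concrete, here is a standard textbook sketch (general quantum-mechanics relations, not formulas specific to Freedman’s molecules). A qubit’s state is a superposition of two basis states; an applied magnetic field shifts a spin level’s energy via the Zeeman effect, changing the energy and hence the wavelength of the emitted photon; and a magnetic dipole’s field falls off with the cube of distance, so a sensor sitting ten times closer sees a roughly thousandfold stronger signal:

        $|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1$   (superposition of basis states)
        $\Delta E = g\,\mu_B\,B$   (Zeeman shift in a field B; g is the spin’s g-factor, $\mu_B$ the Bohr magneton)
        $B_{\mathrm{dipole}}(r) \propto 1/r^3$   (dipole field decay with distance r)

    The cubic decay in the last relation is what makes the proximity of a nanometer-size molecular sensor so valuable, as Freedman explains next.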
    A molecular quantum sensor “might not be more inherently accurate than a competing quantum sensor,” says Freedman, “but if you can lose an order of magnitude of distance, that can give us a lot of information.” Quantum sensors’ ability to detect electric or magnetic changes at the atomic level and make extraordinarily precise measurements could be useful in many fields, such as environmental monitoring, medical diagnostics, geolocation, and more.

    When designing molecules to serve as quantum sensors, Freedman’s group also factors in the way they can be expected to act in a specific sensing environment. Creating a sensor for water, for example, requires a water-compatible molecule, and a sensor for use at very low temperatures requires molecules that are optimized to perform well in the cold. By custom-engineering molecules for different uses, the Freedman lab aims to make quantum technology more versatile and widely adaptable.

    Embracing interdisciplinarity

    As Freedman and her group focus on the highly specific work of designing custom molecules, she is keenly aware that tapping into the power of quantum science depends on the collective efforts of scientists from different fields. “Quantum is a broad and heterogeneous field,” she says. She believes that attempts to define it narrowly hurt collective research—and that scientists must welcome collaboration when the research leads them beyond their own field. Even in the seemingly straightforward scenario of using a quantum computer to solve a chemistry problem, you would need a physicist to write a quantum algorithm, engineers and materials scientists to build the computer, and chemists to define the problem and identify how the quantum computer might solve it.

    MIT’s collaborative environment has helped Freedman connect with researchers in different disciplines, which she says has been instrumental in advancing her research. She’s recently spoken with neurobiologists who proposed problems that quantum sensing could potentially solve and provided helpful context for building the sensors. Looking ahead, she’s excited about the potential applications of quantum science in many scientific fields. “MIT is such a great place to nucleate a lot of these connections,” she says. “As quantum expands, there are so many of these threads which are inherently interdisciplinary.”

    Inside the lab

    Freedman’s lab in Building 6 is a beehive of creativity and collaboration. Against a backdrop of colorful flasks and beakers, researchers work together to synthesize molecules, analyze their structures, and unlock the secrets hidden within their intricate atomic arrangements. “We are making new molecules and putting them together atom by atom to discover whether they have the properties we want,” says Christian Oswood, a postdoctoral fellow.

    Some sensitive molecules can only be made in the lab’s glove box, a nitrogen-filled transparent container that protects chemicals from oxygen and water in the ambient air. An example is an organometallic solution synthesized by one of Freedman’s graduate students, David Ullery, which takes the form of a vial of purple liquid. (“A lot of molecules have really pretty colors,” he says.)

    Once synthesized, the molecules are taken to a single-crystal x-ray diffractometer a few floors below the Freedman lab.
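    (As background for the structure determination described next: a crystal reflects x-rays strongly only at angles satisfying Bragg’s condition, a standard crystallography relation given here for orientation, where d is the spacing between atomic planes, θ the angle of incidence, λ the x-ray wavelength, and n an integer.)

        $n\,\lambda = 2\,d\,\sin\theta$

    Measuring the angles of many such reflections pins down the plane spacings, from which the three-dimensional arrangement of atoms can be reconstructed.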
    There, x-rays are directed at crystallized samples, and from the diffraction pattern, researchers can deduce their molecular structure—how the atoms connect. Studying the precise geometry of these synthesized molecules reveals how the structure affects their quantum properties, Oswood explains.

    Researchers and students at the lab say Freedman’s cross-disciplinary outlook played a big role in drawing them to it. With a chemistry background and a special interest in physics, for example, Ullery joined because he was excited by the way Freedman’s research bridges those two fields.

    Crystals of an organometallic Cr(IV) complex. Freedman’s lab designed a series of molecules like this one to detect changes in a magnetic field. COURTESY OF DANNA FREEDMAN

    Others echo this sentiment. “The opportunity to be in a field that’s both new and expanding like quantum science, and attacking it from this specific angle, was exciting to me both intellectually and professionally,” says Oswood. Another graduate student, Cindy Serena Ngompe Massado, says she enjoys being part of the lab because she gets to collaborate with scientists in other fields. “It allows you to really approach scientific challenges in a more holistic and productive way,” she says.

    Though the researchers spend most of their time synthesizing and analyzing molecules, fun infuses the lab too. Freedman checks in with everyone frequently, and conversations often drift beyond just science. She’s just as comfortable chatting about Taylor Swift and Travis Kelce as she is discussing research. “Danna is very personable and very herself with us,” Ullery says. “It adds a bit of levity to being in an otherwise stressful grad school environment.”

    Bringing textbook chemistry to life

    In the classroom, Freedman is a passionate educator, dedicated to demystifying the complexities of chemistry for her students. Aware that many of them find the subject daunting, she strives to go beyond textbook equations. For each lecture in her advanced inorganic chemistry classes, she introduces the “molecule of the day,” which is always connected to the lesson plan. When teaching about bimetallic molecules, for example, she showcased the potassium rubidium molecule, citing active research at Harvard aimed at entangling its nuclear spins. For a lecture on superconductors, she brought a sample of the superconducting material yttrium barium copper oxide that students could handle.

    Chemistry students often think “This is painful” or “Why are we learning this?” Freedman says. Making the subject matter more tangible and showing its connection to ongoing research spark students’ interest and underscore the material’s relevance.

    Freedman sees frustrating research as an opportunity to discover new things. “I like students to work on at least one ‘safer’ project along with something more ambitious,” she says. M. SCOTT BRAUER/MIT NEWS OFFICE

    Freedman believes this is an exceptionally exciting time for budding chemists. She emphasizes the importance of curiosity and encourages them to ask questions. “There is a joy to being able to walk into any room and ask any question and extract all the knowledge that you can,” she says.

    In her own research, she embodies this passion for the pursuit of knowledge, framing challenges as stepping stones to discovery. When she was a postdoc, her research on electron spins in synthetic materials hit what seemed to be a dead end that ultimately led to the discovery of a new class of magnetic material.
So she tells her students that even the most difficult aspects of research are rewarding because they often lead to interesting findings.  That’s exactly what happened to Ullery. When he designed a molecule meant to be stable in air and water and emit light, he was surprised that it didn’t—and that threw a wrench into his plan to develop the molecule into a sensor that would emit light only under particular circumstances. So he worked with theoreticians in Giulia Galli’s group at the University of Chicago, developing new insights on what drives emission, and that led to the design of a new molecule that did emit light.  “Frustrating research is almost fun to deal with,” says Freedman, “even if it doesn’t always feel that way.” 
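A brief technical note on the diffraction step described above: the relation that lets researchers work backward from a diffraction pattern to atomic positions is Bragg’s law, a standard crystallography result included here for context rather than anything specific to the Freedman lab.

```latex
% Bragg's law: the condition for x-rays scattered by parallel planes of
% atoms to interfere constructively.
%   n       -- diffraction order (a positive integer)
%   \lambda -- x-ray wavelength
%   d       -- spacing between atomic planes in the crystal
%   \theta  -- angle between the incoming beam and the planes
n\lambda = 2d\sin\theta
```

Measuring the angles at which bright reflections appear, for a known wavelength, yields the plane spacings; software combines many such reflections to reconstruct the full three-dimensional arrangement of atoms.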
  • WWW.TECHNOLOGYREVIEW.COM
    Inside-out learning
When the prison doors first closed behind him more than 50 years ago, Lee Perlman, PhD ’89, felt decidedly unsettled. In his first job out of college, as a researcher for a consulting company working on a project for the US Federal Bureau of Prisons, he had been tasked with interviewing incarcerated participants in a drug rehab program. Once locked inside, he found himself alone in a room with a convicted criminal. “I didn’t know whether I should be scared,” he recalls. Since then, he has spent countless hours in such environments in his role as a teacher of philosophy. He’s had “very, very few experiences” where he felt unsafe in prisons over the years, he says. “But that first time you go in, you do feel unsafe. I think that’s what you should feel. That teaches you something about what it feels like for anybody going into prison.” As a lecturer in MIT’s Experimental Study Group (ESG) for more than 40 years, Perlman has guided numerous MIT students through their own versions of that passage through prison doors. He first began teaching in prisons in the 1980s, when he got the idea of bringing his ESG students studying nonviolence into the Massachusetts Correctional Institution at Norfolk to talk with men serving life sentences. The experience was so compelling that Perlman kept going back, and since the early 2000s he has been offering full courses behind bars. In 2018, Perlman formalized these efforts by cofounding the Educational Justice Institute (TEJI) at MIT with Carole Cafferty, a former corrections professional. Conceived both to provide college-level education and technology access to incarcerated individuals and to foster empathy and offer a window into the criminal justice system for MIT students, TEJI creates opportunities for the two groups to learn side by side. “We believe that there are three fundamental components of education that everybody should have, regardless of their incarceration status: emotional literacy, digital literacy, and financial literacy,” says Cafferty. TEJI offers incarcerated students classes in the humanities, computer science, and business, the credits from which can be applied toward degrees from private universities and community colleges. The emotional literacy component, featuring Perlman’s philosophy courses, is taught in an “inside-out” format, with a mixed group of incarcerated “inside” students and “outside” classmates (from MIT and other universities where TEJI courses are sometimes cross-listed). “I’ve been really torn throughout my life,” Perlman says, “between this part of me that would like to be a monk and sit in a cave and read books all day long and come out and discuss them with other monks, and this other half of me that wants to do some good in the world, really wants to make a difference.” Behind prison walls, the concepts he relishes discussing—love, authenticity, compassion—have become his tools for doing that good. TEJI also serves as a convener of people from academia and the criminal justice system. Within MIT, it works with the Sloan School of Management, the Music and Theater Arts Section, the Priscilla King Gray Public Service Center, and others on courses and special prison-related projects.
And by spearheading broader initiatives like the Massachusetts Prison Education Consortium and the New England Commission on the Future of Higher Education in Prison, TEJI has helped lay the groundwork for significant shifts in how incarcerated people across the region and beyond prepare to rejoin society. “Lee and I both share the belief that education can and should be a transformative force in the lives of incarcerated people,” Cafferty says. “But we also recognize that the current system doesn’t offer a lot of opportunities for that.” Through TEJI, they’re working to create more. Perlman didn’t set out to reform prison education. “There’s never been any plan,” he says. “Before I was an academic I was a political organizer, so I have that political organizer brain. I just look for … where’s the opening you can run through?” Before earning his PhD in political philosophy, Perlman spent eight years making his mark on Maryland’s political scene. At age 28, he came up short by a few hundred votes in a primary for the state senate. In the late 1970s, Perlman says, he was named one of 10 rising stars in Maryland politics by the Baltimore Sun and one of the state’s most feared lobbyists by Baltimore Magazine because he got lawmakers to “do things they’d be perfectly willing to leave alone,” as he puts it, like pass election reform bills. The legislators gave him the nickname Wolfman, “probably just because I had a beard,” he says, “but it kind of grew to mean other things.” Perlman still has the beard. Working in tandem with Cafferty and others, he’s also retained his knack for nudging change forward.

Lee Perlman, PhD ’89, and Philip Hutchful, an incarcerated student, take part in the semester’s final meeting of Perlman’s “inside-out” class Nonviolence as a Way of Life at the Boston Pre-Release Center. JAY DIAS/MASSACHUSETTS DEPARTMENT OF CORRECTION

Cafferty understands, better than most, how difficult that can be in the prison system. She held numerous roles in her 25-year corrections career, ultimately serving as superintendent of the Middlesex Jail and House of Correction, where she oversaw the introduction of the first tablet-based prison literacy program in New England. “I used to say someday when I write a book, it’s going to be called Swimming Against the Tide,” she says. In a correctional environment, “safety and security come first, always,” she explains. “Programming and education are much further down the list of priorities.” TEJI’s work pushes against a current in public opinion that takes a punitive rather than rehabilitative view of incarceration. Some skeptics see educating people in prison as rewarding bad deeds. “Out in the world I’ve had people say to me, ‘Maybe I should commit a crime so I can get a free college education,’” says Perlman. “My general response is, well, you really have one choice here: Do you want more crime or less crime? There’s hard data that there’s nothing that works like education to cut recidivism, to change the atmosphere within a prison so prisons become less violent places. Also, do you want to spend more or do you want to spend less money on this problem? For every dollar we spend on prison education and similar programs, we save five dollars.” The research to which Perlman refers includes a 2018 RAND study, which found that participants in correctional education programs in the US were 28% less likely to reoffend than their counterparts who did not participate.
It’s a powerful number, considering that roughly 500,000 people are released from custody each year. Perlman has such statistics at the ready, as he must. But talk to him for any amount of time and the humanity behind the numbers is what stands out.  “There is a sizable group of people in prison who, if society was doing a better job, would have different lives,” he says, noting that “they’re smart enough and they have character enough” to pull it off: “We can make things happen in prison that will put them on a different path.”  “Most of the people I teach behind bars are people that have had terrible experiences with education and don’t feel themselves to be very capable at all,” he says. So he sometimes opens his class by saying: “Something you probably wouldn’t guess about me is that I failed the 11th grade twice and dropped out of high school. And now I have a PhD from MIT and I’ve been teaching at MIT for 40 years. So you never know where life’s gonna lead you.”  Though Perlman struggled to find his motivation in high school, he “buckled down and learned how much I loved learning,” as he puts it, when his parents sent him to boarding school to finish his diploma. He went on to graduate from St. John’s College in Annapolis, Maryland. Growing up in Michigan in the 1960s, he’d learned about fair housing issues because his mother was involved with the civil rights movement, and he lived for a time with a Black family that ran a halfway house for teenage girls. By the time he took that first job interviewing incarcerated former drug addicts, he was primed to understand their stories within the context of poverty, discrimination, and other systemic factors. He began volunteering for a group helping people reenter society after incarceration, and as part of his training, he spent a night booked into jail.  “I didn’t experience any ill treatment,” he says, “but I did experience the complete powerlessness you have when you’re a prisoner.” Jocelyn Zhu ’25 took a class with Perlman in the fall of 2023 at the Suffolk County House of Correction, and entering the facility gave her a similar sense of powerlessness.  “We had to put our phones away, and whatever we were told to do we would have to do, and that’s not really an experience that you’re in very often as a student at MIT,” says Zhu. “There was definitely that element of surrender: ‘I’m not in charge of my environment.’” On the flip side, she says, “because you’re in that environment, the only thing you’re doing while you’re there is learning—and really focusing in on the discussion you’re having with other students.” “I call them the ‘philosophical life skills’ classes,” says Perlman, “because there are things in our lives that everybody should sit down and think through as well as they can at some point.” He says that while those classes work fine with just MIT students, being able to go into a prison and talk through the same issues with people who have had very different life experiences adds a richness to the discussion that would be hard to replicate in a typical classroom.  He recalls the first time he broached the topic of forgiveness in a prison setting. Someone serving a life sentence for murder put things in a way Perlman had never considered. He remembers the man saying: “What I did was unforgivable. If somebody said ‘I forgive you for taking my child’s life,’ I wouldn’t even understand what that meant. 
For me, forgiveness means trying, at least … to regard me as somebody who’s capable of change … giving me the space to show you that I’m not the person who did that anymore.” Perlman went home and revised his lecture notes. “I completely reformulated my conception of forgiveness based on that,” he says. “And I tell that story every time I teach the class.” The meeting room at the minimum-security Boston Pre-Release Center is simply furnished: clusters of wooden tables and chairs, a whiteboard, some vending machines. December’s bare branches are visible through a row of windows that remain closed even on the warmest of days (“Out of Bounds,” warns a sign taped beside them). This afternoon, the room is hosting one of Perlman’s signature classes, Nonviolence as a Way of Life. To close the fall 2024 semester, he has asked his students to creatively recap four months of Thursdays together. Before long, the students are enmeshed in a good-natured showdown, calling out letters to fill in the blanks in a mystery phrase unfolding on the whiteboard. Someone solves it (“An eye for an eye makes the world go blind”) and scores bonus points for identifying its corresponding unit on the syllabus (Restorative Justice). “It’s still anybody’s game!” announces the presenting student, Jay Ferran, earning guffaws with his spot-on TV host impression. Ferran and the other men in the room wearing jeans are residents of the Pre-Release Center. They have shared this class all semester with undergrad and grad students from MIT and Harvard (who are prohibited from wearing jeans by the visitor dress code). Before they all part ways, they circle up their chairs one last time. “Humor can be a defense mechanism, but it never felt that way in here,” says Isabel Burney, a student at the Harvard Graduate School of Education. “I really had a good time laughing with you guys.” “I appreciate everyone’s vulnerability,” says Jack Horgen ’26. “I think that takes a lot of grace, strength, and honesty.” “I’d like to thank the outside students for coming in and sharing as well,” says Ferran. “It gives a bit of freedom to interact with students who come from the outside. We want to get on the same level. You give us hope.” After the room has emptied out, Ferran reflects further on finding himself a college student at this stage in his life. Now in his late 40s, he dropped out of high school when he became a father. “I always knew I was smart and had the potential, but I was a follower,” he says. As Ferran approaches the end of his sentence, he’s hoping to leverage the college credits he’s earned so far into an occupation in counseling and social work. His classmate Philip Hutchful, 35, is aiming for a career in construction management. Access to education in prison “gives people a second chance at life,” Hutchful says. “It keeps your mind busy, rewires your brain.”

MIT undergrads Denisse Romero Cruz ’25, Jack Horgen ’26, and Alor Sahoo ’26 at the final session of Perlman’s Nonviolence as a Way of Life class at the Boston Pre-Release Center. JAY DIAS/MASSACHUSETTS DEPARTMENT OF CORRECTION

Along with about 45% of the Boston Pre-Release Center’s residents, Ferran and Hutchful are enrolled in the facility’s School of Reentry, which partners with MIT and other local colleges and universities to provide educational opportunities during the final 12 to 18 months of a sentence.
“We have seen a number of culture shifts for our students and their families, such as accountability, flexible thinking, and curiosity,” says the program’s executive director, Lisa Millwood. There are “students who worked hard just so they can proudly be there to support their grandchildren, or students who have made pacts with their teenage children who are struggling in school to stick with it together.” Ferran and Hutchful had previously taken college-level classes through the School of Reentry, but the prospect of studying alongside MIT and Harvard students raised new qualms. “These kids are super smart—how can I compete with them? I’m going to feel so stupid,” Ferran remembers thinking. “In fact, it wasn’t like that at all.” “We all had our own different types of knowledge,” says Hutchful. Both Ferran and Hutchful say they’ve learned skills that they’ll put to use in their post-release lives, from recognizing manipulation to fostering nonviolent communication. Hutchful especially appreciates the principle that “you need to attack the problem, not the person,” saying, “This class teaches you how to deal with all aspects of people—angry people, impatient people. You’re not being triggered to react.” Perlman has taught Nonviolence as a Way of Life nearly every semester since TEJI launched. Samuel Tukua ’25 took the class a few years ago. Like Hutchful, he has applied its lessons. “I wouldn’t be TAing it for the third year now if it didn’t have this incredible impact on my life,” Tukua says. Meeting incarcerated people did not in itself shift Tukua’s outlook; their stories didn’t surprise him, given his own upbringing in a low-income neighborhood near Atlanta. But watching learners from a range of backgrounds find common ground in big philosophical ideas helped convince him of those ideas’ validity. For example, he started to notice undercurrents of violence in everyday actions and speech. “It doesn’t matter whether you came from a highly violent background or if you came from a privileged, less violent background,” he says he realized. “That kind of inner violence or that kind of learned treatment exists inside all of us.” Marisa Gaetz ’20, a fifth-year PhD candidate in math at MIT, has stayed in TEJI’s orbit in the seven years since its founding—first as a student, then as a teaching assistant, and now by helping to run its computer science classes. Limitations on in-person programming imposed by the covid-19 pandemic led Gaetz and fellow MIT grad student Martin Nisser, SM ’19, PhD ’24, to develop remote computer education classes for incarcerated TEJI students. In 2021, she and Nisser (now an assistant professor at the University of Washington) joined with Emily Harburg, a tech access advocate, to launch Brave Behind Bars, which partners closely with TEJI to teach Intro to Python, web development, and game design in both English and Spanish to incarcerated people across the US and formerly incarcerated students in Colombia and Mexico. Since many inside students have laptop access only during class time, the remote computer courses typically begin with a 30-minute lecture followed by Zoom breakouts with teaching assistants. A ratio of one TA for every three or four students ensures that “each student feels supported, especially with coding, which can be frustrating if you’re left alone with a bug for too long,” Gaetz says. Gaetz doesn’t always get to hear how things work out for her students, but she’s learned of encouraging outcomes.
One Brave Behind Bars TA who got his start in their classes is now a software engineer. Another group of alums founded Reentry Sisters, an organization for formerly incarcerated women. “They made their own website using the skills that they learned in our class,” Gaetz says. “That was really amazing to see.” Although the pandemic spurred some prisons to expand use of technology, applying those tools to education in a coordinated way requires the kind of bridge-building TEJI has become known for since forming the Massachusetts Prison Education Consortium (MPEC) in 2018. “I saw there were a bunch of colleges doing various things in prisons and we weren’t really talking to each other,” says Perlman. TEJI secured funding from the Mellon Foundation and quickly expanded MPEC’s membership to more than 80 educational institutions, corrections organizations, and community-based agencies. Millwood says the School of Reentry has doubled its capacity and program offerings thanks to collaborations developed through MPEC. At the regional level, TEJI teamed up with the New England Board of Higher Education in 2022 to create the New England Commission on the Future of Higher Education in Prison. Its formation was prompted in part by the anticipated increase in demand for high-quality prison education programs thanks to the FAFSA Simplification Act, which as of 2023 reversed a nearly three-decade ban on awarding federal Pell grants to incarcerated people. Participants included leaders from academia and correctional departments as well as formerly incarcerated people. One, Daniel Throop, cochaired a working group called “Career, Workforce, and Employer Connections” just a few months after his release. “I lived out a reentry while I was on the commission in a way that was very, very powerful,” Throop says. “I was still processing in real time.” During his incarceration in Massachusetts, Throop had revived the long-defunct Norfolk Prison Debating Society, which went head-to-head with university teams including MIT’s. Credits from his classes, including two with Perlman, culminated in a bachelor’s degree in interdisciplinary studies magna cum laude from Boston University, which he earned before his release. But he still faced big challenges. “Having a criminal record is still a very, very real hurdle,” Throop says. “I was so excited when those doors of prison finally opened after two decades, only to be greatly discouraged that so many doors of the community remained closed to me.” Initially, the only employment he could get was loading UPS trucks by day and unloading FedEx trucks by night. He eventually landed a job with the Massachusetts Bail Fund and realized his goal of launching the National Prison Debate League. “I fortunately had the educational credentials and references and the wherewithal to not give up on myself,” says Throop. “A lot of folks fail with less resources and privilege and ability and support.” The commission’s 2023 report advocates for improved programming and support for incarcerated learners spanning the intake, incarceration, and reentry periods. To help each state implement the recommendations, the New England Prison Education Collaborative (NEPEC) launched in October 2024 with funding from the Ascendium Education Group.
Perlman encouraged TEJI alumna Nicole O’Neal, then working at Tufts University, to apply for the position she now holds as a NEPEC project manager. Like Throop, O’Neal has firsthand experience with the challenges of reentry. Despite the stigma of having served time, having a transcript with credits earned during the period she was incarcerated “proved valuable for both job applications and securing housing,” she says. With the help of a nonprofit called Partakers and “a lot of personal initiative,” she navigated the confusing path to matriculation on Boston University’s campus, taking out student loans so she could finish the bachelor’s degree she’d begun in prison. A master’s followed. “I’ve always known that education was going to be my way out of poverty,” she says. From her vantage point at NEPEC, O’Neal sees how TEJI’s approach can inspire other programs. “What truly sets TEJI apart is the way that it centers students as a whole, as people and not just as learners,” she says. “Having the opportunity to take an MIT course during my incarceration wasn’t just about earning credits—it was about being seen as capable of engaging with the same level of intellectual rigor as students outside. That recognition changed how I saw myself and my future.” On a Zoom call one Wednesday evening in December, Perlman’s inside-out course on Stoicism is wrapping up. Most participants are women incarcerated in Maine. These are among Perlman’s most advanced and long-standing students, thanks to the state’s flexible approach to prison education—Perlman says it’s “maybe the most progressive system in the country,” early to adopt remote learning, experiment with mixed-gender classes, and allow email communication between teachers and students. The mood is convivial, the banter peppered with quotes from the likes of Marcus Aurelius and Epictetus. More than one student is crocheting a Christmas gift, hands working busily at the edges of their respective Zoom rectangles. As the students review what they’ve learned, the conversation turns to the stereotype of Stoicism as a lack of emotion. “I get the feeling the Stoics understood their emotions better than most because they weren’t puppets to their emotions,” says a student named Nicole. “They still feel things—they’re just not governed by it.”

Jay Ferran, an incarcerated student at the Boston Pre-Release Center, presents a game to help recap what the class learned over the semester. JAY DIAS/MASSACHUSETTS DEPARTMENT OF CORRECTION

Jade, who is a year into a 16-month sentence, connects this to her relationship with her 14-month-old son: “I think I would be a bad Stoic in how I love him. That totally governs me.” Perlman, a bit mischievously: “Does anyone want to talk Jade into being a Stoic mother?” Another classmate, Victoria, quips: “I think you’d like it better when he’s a teenager.” When the laughter dies down, she says more seriously, “I think it’s more about not allowing your emotions to carry you away.” But she adds that it’s hard to do that as a parent. “Excessive worry is also a hindrance,” Jade concedes. “So how do I become a middle Stoic?” “A middle Stoic would be an Aristotelian, I think,” muses Perlman. When the conversation comes around to amor fati, the Stoic notion of accepting one’s fate, Perlman asks how successful his students have been at this. The group’s sole participant from a men’s facility, Arthur, confesses that he has struggled with this over more than 20 years in prison. But for the last few years, school has brought him new focus.
He helps run a space where other residents can study. “I hear you saying you can only love your fate if you have a telos, a purpose,” Perlman says. “I was always teaching people things to survive or get ahead by any means necessary,” Arthur says. “Now it’s positive building blocks.” “Education is my telos, and when I couldn’t access it at first, I had to focus on what was in my control,” says Victoria. “I framed my prison experiences as refusing to be harmed by the harmful process of incarceration. I’m going to use this opportunity for myself … so I can be who I want to be when I leave here.” Soon after, the video call—and the course—ends. But if Perlman’s former students’ experience is any indication, the ideas their teacher has introduced will continue to percolate. O’Neal, who took Perlman’s Philosophy of Love, is still mulling over an exploration of loyalty in Tristan and Isolde that brought a classmate to tears. She thinks Perlman’s ability to nurture dialogue on sensitive topics begins with his relaxed demeanor—a remarkable quality in the prison environment. “It’s like you’re coming to our house. A lot of [people] show up as guests. Lee shows up like someone who’s been around—you know, and he’s willing to clean up the dishes with you. He just feels at home,” she says. “So he made us feel at home.” Throop becomes animated when he describes taking Philosophy of the Self and Soul with Perlman and MIT students at MCI-Norfolk in 2016. “Over those days and weeks, we got to meet and discuss the subject matter—walking around the prison yards together, my classmates and I, and then coming back and having these almost indescribable—I’m rarely at a loss for words!—weekly class discussions,” Throop remembers. Perlman “would throw one big question out there, and he would sit back and patiently let us all chop that material up,” he adds. “These discussions were like the highlight of all of our weeks, because we got to have this super-cool exchange of ideas, testing our perspectives … And then these 18-to-20-year-old students who were coming in with a whole different worldview, and being able to have those worldviews collide in a healthy way.” “We all were having such enriching discussions that the semester flew by,” he says. “You didn’t want school to end.”
  • WWW.TECHNOLOGYREVIEW.COM
    The Institute’s greatest ambassadors
After decades of working as a biologist at a Southern school with a Division I football team, I found coming to MIT a bit of a culture shock—in the best possible way. I’ve heard from MIT alumni all about late-night psetting, when to catch MITHenge, and the best way to celebrate Pi Day (with pie, of course). And I’ve also learned that for many of you, the Institute is more than simply your alma mater. As the MIT Alumni Association celebrates its 150th anniversary, I’m reflecting on the extraordinary talent and drive of the people here, and what it is that makes MIT alumni—like MIT itself—just a little bit different. As students, you learned to investigate, question, argue, critique, and refine your ideas with faculty and with each other, managing to be both collaborative and competitive. You hacked the toughest and most interesting problems and came up with the most unconventional solutions. And you developed and nurtured a uniquely entrepreneurial, hands-on MIT spirit that only those who have earned a degree here can fully understand, but that the rest of us can easily identify and admire. An article in this magazine about the history of the MIT Alumni Association notes that when the association was formed, there were 84 alumni in total. By 1888, the number had increased to an impressive 579. And it grew by orders of magnitude; today nearly 149,000 alumni are members. But even as the alumni community has grown and evolved, its culture and character have remained remarkably consistent, represented by men and women known for their rigorous thinking, incisive analysis, mens et manus ethos, and drive to make a real and transformative impact on people and communities everywhere. As MIT alumni, you recognize each other by your Brass Rats. These sturdy, cleverly designed rings not only signify your completion (some might say survival) of an immensely difficult course of study; they also signal to the world that you stand ready to share your expertise, knowledge, and experience in the service of humanity. Alumni have always been the Institute’s greatest ambassadors, and today that role has taken on even greater meaning and importance. We are working intensely, every day, to make the case for the vital importance of MIT to ensuring the nation’s security, prosperity, health, and quality of life. And I’m deeply grateful that we can rely on MIT’s extraordinary family of alumni to help share that message far and wide.
  • WWW.TECHNOLOGYREVIEW.COM
    Bug-size robots that fly and flip could pollinate futuristic farms’ crops
Tiny flying robots could perform such useful tasks as pollinating crops inside multilevel warehouses, boosting yields while mitigating some of agriculture’s harmful impacts on the environment. The latest robo-bug from an MIT lab, inspired by the anatomy of the bee, comes closer to matching nature’s performance than ever before. Led by Kevin Chen, an associate professor in the Department of Electrical Engineering and Computer Science and the senior author of a paper on the work, the team adapted an earlier flying robot composed of four identical two-winged units, combined into a rectangular device about the size of a microcassette. The wings managed to flap like an insect’s, but the bot couldn’t fly for long. One problem was that the wings would blow air into each other when flapping, reducing the lift forces they could generate. In the new design, each of the four units has a single flapping wing pointing away from the robot’s center, stabilizing the wings and boosting their lift forces. The researchers also improved the way the wings are connected to the actuators, or artificial muscles, that flap them. In previous designs, when the actuators’ movements reached the extremely high frequencies needed for flight, the devices often started buckling, which reduced the power and efficiency of the robot. Thanks in part to a new, longer wing hinge, the actuators now experience less mechanical strain and can apply more force, so the bots can fly faster, longer, and in more precise paths.

The robots can track a trajectory precisely enough to spell M-I-T. COURTESY OF THE RESEARCHERS

Weighing less than a paper clip, the new robotic insect can hover for more than 1,000 seconds—almost 17 minutes—without any degradation of flight precision. “When my student Yi-Hsuan Hsiao was performing that flight, he said it was the slowest 1,000 seconds he had spent in his entire life. The experiment was extremely nerve-racking,” Chen says. The new robot also reached an average speed of 35 centimeters per second, the fastest flight researchers have reported, and was able to perform body rolls and double flips. It can even precisely track a trajectory that spells M-I-T. “At the end of the day, we’ve shown flight that is 100 times longer than anyone else in the field has been able to do, so this is an extremely exciting result,” Chen says. From here, he and his students want to see how far they can push this new design, with the goal of achieving flight for longer than 10,000 seconds. They also want to improve the precision of the robots so they could land in and take off from the center of a flower. In the long run, the researchers hope to install tiny batteries and sensors so the robots could fly and navigate outside the lab. The design has more room for those electronics now that they’ve halved the number of wings. The bots still can’t achieve the fine-tuned behavior of a real bee, Chen acknowledges. Still, he says, “with the improved lifespan and precision of this robot, we are getting closer to some very exciting applications, like assisted pollination.”
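For a sense of scale, here is a quick back-of-envelope calculation using only the figures reported above. The range estimate is hypothetical: it assumes the robot could hold its record speed for its full demonstrated endurance, which the article does not claim.

```python
# Back-of-envelope on the reported figures: >1,000 s hover endurance
# and a 35 cm/s record average speed.
hover_time_s = 1_000         # demonstrated hover duration, in seconds
avg_speed_cm_per_s = 35      # reported record average speed

print(hover_time_s / 60)                        # ≈ 16.7 -> "almost 17 minutes"
print(hover_time_s * avg_speed_cm_per_s / 100)  # 350.0 m, if top speed were sustained
```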
  • WWW.TECHNOLOGYREVIEW.COM
    The Download: canceled climate tech projects, and South Korea’s AI web comics
1 Google could be forced to sell Chrome
A new remedies trial has begun, following last year’s ruling that Google illegally abused its search market power. (WP $)
+ The DoJ alleges that Google is using AI to strengthen its monopoly. (Axios)
+ Multiple states also want Google to share data with its rivals. (The Information $)
+ Microsoft and other rivals will be watching the outcome closely. (WSJ $)

2 The FTC is suing Uber
The lawsuit claims the company charged its customers without their consent. (WSJ $)
+ It claimed its customers would save $25 a month thanks to its Uber One service. (Reuters)
+ The Trump administration is really going after Big Tech. (FT $)

3 Inside the fight to prevent DOGE from eradicating rural health care
Community health centers are at the mercy of grant funding. (The Atlantic $)
+ Cuts to sexual healthcare have come amid a rise in syphilis cases. (The Guardian)
+ Here’s a who’s-who of DOGE staff. (NYT $)
+ The ACLU is going after DOGE records. (Wired $)

4 Misleading political content is thriving on Facebook in Canada
And it’s become worse since the country blocked news from users’ feeds. (NYT $)
+ The country is preparing to vote in a federal election, too. (The Guardian)
+ Meta will start using AI tools to detect underage users. (The Verge)

5 How Big Tech conceals its hidden workforce in Africa
They’re training AI models and moderating content behind the scenes. (Rest of World)
+ We are all AI’s free data workers. (MIT Technology Review)

6 A school funded by Priscilla Chan is shutting down
The Primary School is closing at the end of the 2026 academic year. (Bloomberg $)

7 The FBI can’t find records of its hacking tool purchases
Despite spending hundreds of thousands of dollars on them. (404 Media)
+ Cyberattacks by AI agents are coming. (MIT Technology Review)

8 Bluesky is finally getting blue checkmarks
‘Authentic and notable’ accounts will be able to apply. (Engadget)
+ It’s a mixture of Twitter’s old approach and a more decentralized option. (Wired $)

9 The hidden joys of Google Maps
It’s not just for navigation, y’know. (The Guardian)
  • WWW.TECHNOLOGYREVIEW.COM
    The future of AI processing
Artificial intelligence (AI) is making its way into everyday use cases, thanks to advances in foundational models, more powerful chip technology, and abundant data. To become truly embedded and seamless, AI computation must now be distributed—and much of it will take place on device and at the edge. To support this evolution, computation for running AI workloads must be allocated to the right hardware based on a range of factors, including performance, latency, and power efficiency. Heterogeneous compute enables organizations to allocate workloads dynamically across various computing cores like central processing units (CPUs), graphics processing units (GPUs), neural processing units (NPUs), and other AI accelerators. By assigning workloads to the processors best suited to different purposes, organizations can better balance latency, security, and energy usage in their systems (a toy sketch of this placement logic follows at the end of this piece). Key findings from the report are as follows:

• More AI is moving to inference and the edge. As AI technology advances, inference—a model’s ability to make predictions based on its training—can now be run closer to users and not just in the cloud. This has advanced the deployment of AI to a range of different edge devices, including smartphones, cars, and the industrial internet of things (IIoT). Edge processing reduces the reliance on cloud to offer faster response times and enhanced privacy. Going forward, hardware for on-device AI will only improve in areas like memory capacity and energy efficiency.

• To deliver pervasive AI, organizations are adopting heterogeneous compute. To commercialize the full panoply of AI use cases, processing and compute must be performed on the right hardware. A heterogeneous approach unlocks a solid, adaptable foundation for the deployment and advancement of AI use cases for everyday life, work, and play. It also allows organizations to prepare for the future of distributed AI in a way that is reliable, efficient, and secure. But there are many trade-offs between cloud and edge computing that require careful consideration based on industry-specific needs.

• Companies face challenges in managing system complexity and ensuring current architectures can adapt to future needs. Despite progress in microchip architectures, such as the latest high-performance CPU architectures optimized for AI, software and tooling both need to improve to deliver a compute platform that supports pervasive machine learning, generative AI, and new specializations. Experts stress the importance of developing adaptable architectures that cater to current machine learning demands while allowing room for technological shifts. The benefits of distributed compute need to outweigh the downsides in terms of complexity across platforms.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.
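As a concrete, deliberately simplified illustration of the workload-placement idea described above, here is a toy sketch in Python. The device names and the latency and power numbers are invented for illustration; real heterogeneous schedulers weigh many more factors, including thermals, memory bandwidth, and security domains.

```python
# Toy model of heterogeneous workload placement: pick the lowest-power
# processor that still meets a latency budget. All profile numbers are invented.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    latency_ms: float   # typical latency for this workload on this device
    power_w: float      # power draw while running it

def place(devices: list[Device], max_latency_ms: float) -> Device:
    """Return the most power-efficient device that meets the latency budget."""
    eligible = [d for d in devices if d.latency_ms <= max_latency_ms]
    if not eligible:
        # Nothing local meets the budget; fall back to the fastest device.
        # (In practice, this is where a cloud-offload decision might happen.)
        return min(devices, key=lambda d: d.latency_ms)
    return min(eligible, key=lambda d: d.power_w)

profiles = [
    Device("CPU", latency_ms=120.0, power_w=6.0),
    Device("GPU", latency_ms=25.0, power_w=15.0),
    Device("NPU", latency_ms=40.0, power_w=2.5),
]
print(place(profiles, max_latency_ms=50.0).name)  # -> "NPU"
```

With these sample profiles, a 50 ms latency budget selects the NPU: the GPU is faster but draws more power, and the CPU misses the budget entirely.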
  • WWW.TECHNOLOGYREVIEW.COM
    Generative AI is reshaping South Korea’s webcomics industry
“My mind is still sharp and my hands work just fine, so I have no interest in getting help from AI to draw or write stories,” says Lee Hyun-se, a legendary South Korean cartoonist best known for his seminal series A Daunting Team, a 1983 manhwa about the coming-of-age of heroic underdog baseball players. “Still, I’ve joined hands with AI to immortalize my characters Kkachi, Umji, and Ma Dong-tak.” By embracing generative AI, Lee is charting a new creative frontier in South Korea’s web comics industry. Since comics magazines faded at the turn of the century, web comics—serialized comics that read from top to bottom on digital platforms—have gone from niche subculture to global entertainment powerhouse, drawing in hundreds of millions of readers around the world. Lee has long been at its forefront, pushing the boundaries of his craft. Lee drew inspiration for his renegade baseball avengers from the Sammi Superstars, one of South Korea’s first professional baseball teams, whose journey of perseverance captivated a country stifled by military dictatorship. The series gained a cult following among readers seeking a creative escape from political repression, mesmerized by his bold brushstrokes and cinematic compositions that defied the conventions of cartoons. Kkachi, the rebellious protagonist in A Daunting Team, is an alter ego of Lee himself. A scrappy outcast with untamed, spiky hair, he is a fan favorite who challenges the world with unrelenting passion and a brave conscience. He has reappeared throughout Lee’s signature works, painted with a new layer of pathos each time—a supernatural warrior who saves Earth from an alien attack in Armageddon and a rogue police officer battling a powerful criminal syndicate in Karon’s Dawn. Over decades, Kkachi has become a cultural icon in South Korea. But Lee worries about Kkachi’s future. “In South Korea, when an author dies, his characters also get buried in his grave,” he says, drawing contrasts with enduring American comic characters like Superman and Spider-Man. Lee craves artistic immortality. He wants his characters to stay alive not just in the memories of readers, but also on their web comic platforms. “Even after I die, I want my worldviews and characters to communicate and resonate with the people of a new era,” he says. “That’s the kind of immortality I want.” Lee believes that AI can help him realize his vision. In partnership with Jaedam Media, a web comics production company based in Seoul, he developed the “Lee Hyun-se AI model” by fine-tuning the open-source AI art generator Stable Diffusion, created by the UK-based startup Stability AI. Trained on a data set of 5,000 volumes of comics that he has published over 46 years, the model generates comics in his signature style. This year, Lee is preparing to publish his first AI-assisted web comic, a remake of his 1994 manhwa Karon’s Dawn. Writers at Jaedam Media are adapting the story into a modernized crime drama starring Kkachi as a police officer in present-day Seoul and his love interest Umji as a daring prosecutor. Students at Sejong University, where Lee teaches comics, are creating the artwork using his AI model. The creative process unfolds in several stages. First, Lee’s AI model generates illustrations based on text prompts and reference images, like 3D anatomy models and hand-drawn sketches that provide cues for different movements and gestures.
Lee’s students then curate and edit the illustrations, adjusting the characters’ poses, tailoring their facial expressions, and integrating them into cartoonish compositions that AI can’t engineer. After many rounds of refinement and regeneration, Lee steps in to orchestrate the final product, adding his distinct artistic edge (a minimal code sketch of this kind of generation step appears at the end of this piece). “Under my direction, a character might glare with sad eyes even when they’re angry or ferocious eyes when they’re happy,” he says. “It’s a subversive expression, a nuance that AI struggles to capture. Those delicate details I need to direct myself.” Ultimately, Lee wants to build an AI system that embodies his meticulous approach to human expressions. The grand vision of his experimental AI project is to create a “Lee Hyun-se simulation agent”—an advanced generation of his AI model that replicates his creative mind. The model would be trained on digital archives of Lee’s essays, interviews, and texts from his comics—the subject of an exhibit at the National Library of Korea last year—to encode his philosophy, personality, and values. “It’s going to take a long time for AI to learn my myriad worldviews because I’ve published so much work,” he says. The digital clone of Lee would generate new comics with his artistic intuition, perceiving its environment and making creative choices as he would—perhaps even publishing a series far in the future starring Kkachi as a post-human protagonist. “Fifty years from now, what kinds of comics would Lee Hyun-se create if he saw the world then?” Lee asks. “The question fascinates me.” Lee’s quest for a lasting artistic legacy is part of a broader creative evolution driven by technology. In the decades since their emergence, web comics have transformed the art of storytelling, offering an infinite digital canvas that integrates music, animation, and interactive visuals with the effects of new tools like automated coloring programs. The addition of AI is spurring the next wave of innovation. But even as it unlocks new creative possibilities, it is fueling anxieties over artistic agency and authorship. Last year the South Korean startup Onoma AI, named after the Greek word for “name” (a signal of its ambition to redefine creative storytelling), launched an AI-powered web comic generator called TooToon. The software allows users to create synopses, characters, and storyboards with simple text prompts and convert rough sketches into polished illustrations that reflect their personal artistic style. TooToon claims to streamline the labor-intensive creative process by cutting down the production time between concept development and line art from six months to just two weeks. Companies like Onoma AI champion the idea that AI can help anyone be an artist—even if you can’t draw or afford to hire an army of assistants to keep up with the industry’s insane production demands. In their vision, artists would emerge as directors of their own AI-powered solo studios, automating the grunt work of drawing and channeling their creative energy into storytelling and art direction. The productivity breakthrough, they say, would help artists brainstorm more experimental ideas, take on big-scale productions, and disrupt the studio monopolies that dominate the market.
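To make the workflow described above concrete, here is a minimal, hypothetical sketch of the generation step—loading a style-specific fine-tuned Stable Diffusion checkpoint and turning a rough pose sketch plus a text prompt into a draft panel—using the open-source diffusers library. The model ID, file names, and prompt are invented; neither Jaedam Media’s pipeline nor TooToon’s internals are public.

```python
# Hypothetical sketch of one generation step: a Stable Diffusion checkpoint
# fine-tuned on a single artist's back catalogue turns a rough sketch plus a
# text prompt into a draft panel. The model ID and file names are invented.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "studio/artist-style-sd",   # hypothetical fine-tuned checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# A hand-drawn sketch supplies the composition; the prompt supplies content
# and style cues.
sketch = Image.open("rough_pose_sketch.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="spiky-haired detective glaring in neon rain, bold ink lines",
    image=sketch,
    strength=0.6,        # how far the model may depart from the input sketch
    guidance_scale=7.5,  # how strongly the prompt steers generation
).images[0]

result.save("panel_draft.png")  # a draft for human editors, not a finished panel
```

In the process the article describes, output like this would be only a starting point: students adjust poses and expressions by hand, and the artist directs the final composition.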
Oh Hye-seong is the protagonist of “Karon’s Dawn,” an AI-assisted web comic series by the South Korean cartoonist Lee Hyun-se, which will be released later this year. COURTESY OF THE PUBLISHER

“AI would expand the web comic ecosystem,” says Song Min, the founder and CEO of Onoma AI. Song describes the industry in South Korea as a “pyramid”—powerhouse platforms like Naver Webtoon and Kakao Webtoon at the top, followed by big-shot studios, where artists collaborate to mass-produce web comics. “The rest of the artists, those outside the studio system, can’t create alone,” he explains. “AI would empower more artists to emerge as independent artists.” Last year, Onoma AI partnered with a group of young web comic artists to create Tarot: A Tale of Seven Pages, a mystery thriller unraveling the twisted fates of strangers cursed by a hand of tarot cards. Through these collaborations, Song uses the artists’ feedback to refine TooToon. Still, even as a champion of AI-generated art, he questions whether it’s “a good thing for AI to be perfect.” Just as engineers need to keep coding to hone their skills, he wonders if AI should leave room for artists to keep drawing to nurture their craft. “AI is an inevitable tour de force, but for now, the big hurdles lie in artists’ perception and copyright,” he says. Onoma AI built Illustrious, the image-generation model powering TooToon, by fine-tuning Stable Diffusion on the Danbooru2023 data set, a public image bank of anime-style illustrations. But Stable Diffusion, along with other popular image generators built on the model, has come under fire for indiscriminately scraping images from the internet, sparking a barrage of lawsuits over copyright infringement. In turn, web comic generators are facing intense backlash from artists who fear that the programs are being trained on their art without their consent. As companies silo their training data, artists and readers have launched a digital campaign to boycott AI-generated web comics. In May 2023, readers bombarded The Knight King Returns with the Gods on Naver Webtoon with blazingly low ratings after discovering that AI had been used to refine portions of the artwork. The following month, artists flooded the platform with anonymous posts protesting “AI web comics created from theft,” sharply criticizing Naver’s contract policy requiring artists who publish on the platform to consent to having their works used as AI training data. To settle the standoff, the Korea Copyright Commission issued a set of guidelines in December 2023, urging AI developers to obtain permission from copyright holders before using their works as training data; articulate the purpose, scope, and duration of use; and provide fair compensation. A year later, amid growing calls from AI companies for access to more data, the South Korean government proposed carving out an exemption to copyright laws that would allow AI models to be trained on copyrighted works under the doctrine of fair use. But no legislation or regulation has yet established a clear legal framework, leaving artists in limbo. While seasoned artists like Lee embrace the technology as a tool to expand their legacy, wholeheartedly licensing their intellectual property to AI, younger artists see it as a threat. They fear that AI will steal their artwork and, more important, their identity as artists. “Drawing is the most difficult and the most fun part of making comics,” says Park So-won, a young web comic artist based in Seoul.
Park grew up dreaming of becoming a cartoonist, watching her mother, an animator, bring characters to life. After years of juggling gigs as an artist assistant at a web comics studio, interrupted by a brief creative hiatus, she made her breakthrough on the platform Lezhin Comics with Legs That Won’t Walk, a queer romance noir about a boxer who falls in love with a loan shark chasing after him over his alcoholic father’s debt. As an independent artist, Park is constantly at work. She publishes a new episode every 10 days, often pulling all-nighters to produce up to 80 cuts of drawing, even with the help of assistants handling background art and coloring. Occasionally she finds herself in a flow state, working 30 hours straight without a break. Still, Park can’t imagine outsourcing her drawings, which she sees as the heart of her comics, to AI. “The crux of a comic, however important the story, is the drawing. If the story were written in words, people wouldn’t have read it, would they? The story is just a thought—the execution is the drawing,” she says. “The grammar of comics is the drawing.” Handing over her drawing would mean surrendering her artistic agency.

A strip from “A Daunting Team,” a 1983 baseball manhwa made by Lee Hyun-se. COURTESY OF THE PUBLISHER

Park thinks algorithmic art lacks soul—like “objects that exist in a void”—and isn’t worried about whether AI can draw better than she does. Her drawings have evolved over the years, shaped by her shifting outlook on the world and breaking new creative ground over time—an artistic progression that she thinks an algorithm trained to emulate existing works could never make. “I’ll keep charting new territory as an artist, while AI will stay the same,” she says. To Park, art is supreme indulgence: “I’ve come this far because I love to draw. If AI takes away my favorite thing to do in the world, what would I do?” But other comic artists, whose strengths lie in storytelling, welcome the innovation. Bae Jin-soo was an aspiring screenwriter before debuting as an artist on Naver Webtoon’s amateur comics page in 2010. To turn his screenplay into a comic, Bae taught himself to draw by photographing different compositions and tracing them on paper. “I can’t draw, so I’ll bet on my writing,” he thought. After his debut series Friday: Forbidden Tales took off, Bae rose to stardom with his three-part series Money Game, Pie Game, and Funny Game—brainy psychological thrillers packed with plot twists and witty, thought-provoking narratives about a group of contestants playing eccentric games to win a cash prize. They have even inspired a popular Netflix adaptation, The 8 Show. “I still have so many more stories I want to tell,” Bae says. A prolific writer, he keeps a running list of new ideas in a pocket notepad, the genre-bending plots spanning horror, politics, and black comedy. But with his mind racing ahead of his hand, breathing life into all his ideas would require commissioning a studio to execute the illustrations. For Bae, an AI-powered web comic generator could be a game changer. “If AI could handle my artwork, I would create an endless stream of new comics,” he says. Bae is also eager to explore AI as a “backup battery for story ideas,” like a writer’s assistant. Even so, to hold his ground as an artist, he plans to dig deeper into his imagination to generate original and experimental ideas that could be found nowhere else. “That’s the domain of [human] creators,” he says.
Still, Bae wonders if his own creative edge would slowly erode through extensive collaboration with AI: “Would my own colors start to fade?” Meanwhile, comics students at Sejong University in Seoul are learning to integrate AI into their tool kits. The budding artists are being trained as “creative coders,” turning strips of comics into data sets by meticulously annotating their content, and as prompt engineers who can guide AI to produce characters that align with their aesthetic sensibilities. “Creativity takes time—to reflect and contemplate on your work,” says Han Chang-wan, a professor of comics and animation at Sejong University, who teaches a class on AI-generated web comics. Han says that’s what AI will buy for his students: the time to “create more diverse characters, more kaleidoscopic plots, and more eclectic genres” that challenge the formulaic comics mass-produced by studios. Ultimately, he hopes, they’ll “tap into an entirely new readership.” As artists navigate this uncharted future, generative AI is raising profound questions about what powers creativity. “AI could be a technical assistant to artists,” says Shin Il-sook, the president of the Korea Cartoonist Association and the renowned cartoonist behind the historical fantasy romance The Four Daughters of Armian, which follows a brave-hearted princess exiled from a matriarchal kingdom as she embarks on a journey of survival and self-discovery through war, love, and political power battles. Still, she wonders if AI can really be a creative companion.  “Creativity is about making something never seen before, driven by a desire to share it with other people,” Shin says. “It’s deeply intertwined with the human experience and its afflictions. That’s why an artist who has walked through life’s suffering and honed their craft produces remarkable art,” she says. “Can you create without a soul? Who knows?”  Michelle Kim is a freelance journalist and lawyer based in Seoul.
  • WWW.TECHNOLOGYREVIEW.COM
    $8 billion of US climate tech projects have been canceled so far in 2025
This year has been rough for climate technology: Companies have canceled, downsized, or shut down at least 16 large-scale projects worth $8 billion in total in the first quarter of 2025, according to a new report from E2, a nonpartisan policy group. That’s far more cancellations than have typically occurred in recent years (a quick rate comparison follows at the end of this piece). The trend is due to a variety of reasons, including drastically revised federal policies. In recent months, the White House has worked to claw back federal investments, including some of those promised under the Inflation Reduction Act. New tariffs on imported goods, including those from China (which dominates supply chains for batteries and other energy technologies), are also contributing to the precarious environment. And demand for some technologies, like EVs, is lagging behind expectations. E2, which has been tracking new investments in manufacturing and large-scale energy projects, is now expanding its regular reports to include project cancellations, shutdowns, and downsizings as well. From August 2022 to the end of 2024, 18 projects were canceled, closed, or downsized, according to E2’s data. The first three months of 2025 have already seen 16 projects canceled. “I wasn’t sure it was going to be this clear,” says Michael Timberlake, communications director of E2. “What you’re really seeing is that there’s a lot of market uncertainty.” Despite the big number, the tally is not comprehensive. The group tracks only large-scale investments, not smaller announcements that can be more difficult to follow. The list also leaves out projects that companies have paused. “The incredible uncertainty in the clean energy sector is leading to a lot of projects being canceled or downsized, or just slowed down,” says Jay Turner, a professor of environmental studies at Wellesley College. Turner leads a team that also tracks the supply chain for clean energy in the US in a database called the Big Green Machine. Some turnover is normal, and there have been a lot of projects announced since the Inflation Reduction Act was passed in 2022—so there are more in the pipeline to potentially be canceled, Turner says. So many battery and EV projects were announced that supply would have exceeded demand “even in a best-case scenario,” Turner says. So some of the project cancellations are a result of right-sizing, or getting supply and demand in sync. Other projects are still moving forward, with hundreds of manufacturing facilities under construction or operational. But it’s not as many as we’d see in a more stable policy landscape, Turner says. The cancellations include a factory in Georgia from Aspen Aerogels, which received a $670 million loan commitment from the US Department of Energy in October. The facility would have made materials that can help prevent or slow fires in battery packs. In a February earnings call, executives said the company plans to focus on an existing Rhode Island facility and projects in other countries, including China and Mexico. Aspen Aerogels didn’t respond to a request for further comment. Hundreds of projects announced in just the last few years remain under construction or operational despite the wave of cancellations. But the wave is an early sign of growing uncertainty for climate technology. “You’re seeing a business environment that’s just unsure what’s next and is hesitant to commit one way or another,” Timberlake says.
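To see why the first quarter stands out, compare the monthly rates implied by the two E2 figures above; August 2022 through December 2024 spans 29 months. This is simple arithmetic on the article’s numbers, not additional data.

```python
# Monthly cancellation rates implied by the figures in this article.
months_before = 29                  # August 2022 through December 2024
rate_before = 18 / months_before    # ≈ 0.62 projects canceled per month
rate_q1_2025 = 16 / 3               # ≈ 5.33 projects canceled per month

print(round(rate_before, 2), round(rate_q1_2025, 2))
print(round(rate_q1_2025 / rate_before, 1))  # ≈ 8.6x the prior pace
```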
  • WWW.TECHNOLOGYREVIEW.COM
    Yahoo will give millions to a settlement fund for Chinese dissidents, decades after exposing user data
A lawsuit to hold Yahoo responsible for “willfully turning a blind eye” to the mismanagement of a human rights fund for Chinese dissidents was settled for $5.425 million last week, after an eight-year court battle. At least $3 million will go toward a new fund; settlement documents say it will “provide humanitarian assistance to persons in or from the [People’s Republic of China] who have been imprisoned in the PRC for exercising their freedom of speech.”

This ends a long fight for accountability stemming from Yahoo’s decisions, starting in the early 2000s, to turn over information on Chinese internet users to state security, leading to their imprisonment and torture. After those actions were exposed and the company was publicly chastised, Yahoo created the Yahoo Human Rights Fund (YHRF), endowed with $17.3 million, to support individuals imprisoned for exercising free speech rights online.

But in the years that followed, its chosen nonprofit partner, the Laogai Research Foundation, badly mismanaged the fund, spending less than $650,000—or 4%—on direct support for the dissidents. Most of the money was instead spent by the late Harry Wu, the politically connected former Chinese dissident who led Laogai, on his own projects and interests. A group of dissidents sued in 2017, naming not just Laogai and its leadership but also Yahoo and senior members of its leadership team during the time in question; at least one person from Yahoo always sat on YHRF’s board and had oversight of its budget and activities.

The defendants—which, in addition to Yahoo and Laogai, included the Impresa Legal Group, the law firm that worked with Laogai—agreed to pay the six formerly imprisoned Chinese dissidents who filed the suit, with five of them slated to receive $50,000 each and the lead plaintiff receiving $55,000. The remainder, after legal fees and other expense reimbursements, will go toward a new fund to continue YHRF’s original mission of supporting individuals in China imprisoned for their speech.

The fund will be managed by Humanitarian China, a small nonprofit founded in 2004 by three participants in the 1989 Chinese democracy movement. Humanitarian China has given away $2 million in cash assistance to Chinese dissidents and their families, funded primarily by individual donors. This assistance is often vital; political prisoners are frequently released only after years or decades in prison, sometimes with health problems and without the skills to find steady work in the modern job market. They continue to be monitored, visited, and penalized by state security, leaving local employers even more unwilling to hire them.

It’s a “difficult situation,” Xu Wanping, one of the plaintiffs, previously told MIT Technology Review—“the sense of isolation and that kind of helplessness we feel … if this lawsuit can be more effective, if it could help restart this program, it is really meaningful.” As we wrote in our original story, “Xu lives in low-income housing in his hometown of Chongqing, in western China. He Depu, another plaintiff, his wife, and an adult son survive primarily on a small monthly hardship allowance of 1,500 RMB ($210) provided by the local government as collateral to ensure that he keeps his opinions to himself. But he knows that even if he is silent, this money could disappear at any point.”

The terms of the settlement bar the parties from providing more than a cursory statement to the media, but Times Wang, the plaintiffs’ lawyer, previously told MIT Technology Review about the importance of the fund. In addition to the crucial financial support, “it is a source of comfort to them [the dissidents] to know that there are people outside of China who stand with them,” he said.

MIT Technology Review took an in-depth look at the case and the mismanagement at YHRF, which you can read here.
  • WWW.TECHNOLOGYREVIEW.COM
    The quest to build islands with ocean currents in the Maldives
In satellite images, the 20-odd coral atolls of the Maldives look something like skeletal remains or chalk lines at a crime scene. But these landforms, which circle the peaks of a mountain range that has vanished under the Indian Ocean, are far from inert. They’re the products of living processes—places where coral has grown toward the surface over hundreds of thousands of years. Shifting ocean currents have gradually pushed sand—made from broken-up bits of this same coral—into more than 1,000 other islands that poke above the surface.

But these currents can also be remarkably transient, constructing new sandbanks or washing them away in a matter of weeks. In the coming decades, the daily lives of the half-million people who live on this archipelago—the world’s lowest-lying nation—will depend on finding ways to keep a solid foothold amid these shifting sands. More than 90% of the islands have experienced severe erosion, and climate change could make much of the country uninhabitable by the middle of the century.

Off one atoll, just south of the Maldives’ capital, Malé, researchers are testing one way to capture sand in strategic locations—to grow islands, rebuild beaches, and protect coastal communities from sea-level rise. Swim 10 minutes out into the En’boodhoofinolhu Lagoon and you’ll find the Ramp Ring, an unusual structure made up of six tough-skinned geotextile bladders. These submerged bags, part of a recent effort called the Growing Islands project, form a pair of parentheses separated by 90 meters (around 300 feet). The bags, each about two meters tall, were deployed in December 2024, and by February, underwater images showed that sand had climbed about a meter and a half up the surface of each one, demonstrating how passive structures can quickly replenish beaches and, in time, build a solid foundation for new land. “There’s just a ton of sand in there. It’s really looking good,” says Skylar Tibbits, an architect and founder of the MIT Self-Assembly Lab, which is developing the project in partnership with the Malé-based climate tech company Invena.

The Self-Assembly Lab designs material technologies that can be programmed to transform or “self-assemble” in the air or underwater, exploiting natural forces like gravity, wind, waves, and sunlight. Its creations include sheets of wood fiber that form into three-dimensional structures when splashed with water, which the researchers hope could be used for tool-free flat-pack furniture.

Growing Islands is the lab’s largest-scale undertaking yet. Since 2017, the project has deployed 10 experiments in the Maldives, testing different materials, locations, and strategies, including inflatable structures and mesh nets. The Ramp Ring is many times larger than previous deployments and aims to overcome their biggest limitation. In the Maldives, the direction of the currents changes with the seasons. Past experiments have been able to capture only one seasonal flow, meaning they lie dormant for months of the year. By contrast, the Ramp Ring is “omnidirectional,” capturing sand year-round. “It’s basically a big ring, a big loop, and no matter which monsoon season and which wave direction, it accumulates sand in the same area,” Tibbits says.

The approach points to a more sustainable way to protect the archipelago, whose growing population is supported by an economy that caters to 2 million annual tourists drawn by its white beaches and teeming coral reefs. Most of the country’s 187 inhabited islands have already seen some form of human intervention to reclaim land or defend against erosion, such as concrete blocks, jetties, and breakwaters. Since the 1990s, dredging has become by far the most significant strategy. Boats equipped with high-power pumping systems vacuum up sand from one part of the seabed and spray it into a pile somewhere else. The fix is temporary, but it allows resort developers and densely populated islands like Malé to quickly replenish beaches and build limitlessly customizable islands. It also leaves behind dead zones where sand has been extracted—and plumes of sediment that cloud the water with a sort of choking marine smog. Last year, the government placed a temporary ban on dredging to prevent damage to reef ecosystems, which were already struggling amid spiking ocean temperatures.

Holly East, a geographer at the University of Northumbria, says Growing Islands’ structures offer an exciting alternative to dredging. But East, who is not involved in the project, warns that they must be sited carefully to avoid interrupting sand flows that already build up islands’ coastlines. To do this, Tibbits and Invena cofounder Sarah Dole are conducting long-term satellite analysis of the En’boodhoofinolhu Lagoon to understand how sediment flows move around atolls. On the basis of this work, the team is currently spinning out a predictive coastal intelligence platform called Littoral. The aim is for it to be “a global health monitoring system for sediment transport,” Dole says. It’s meant not only to show where beaches are losing sand but to “tell us where erosion is going to happen,” allowing government agencies and developers to know where new structures like Ramp Rings can best be placed.

Growing Islands has been supported by the National Geographic Society, MIT, the Sri Lankan engineering group Sanken, and tourist resort developers. In 2023, it got a big bump from the US Agency for International Development: a $250,000 grant that funded the construction of the Ramp Ring deployment and would have provided opportunities to scale up the approach. But the termination of nearly all USAID contracts following the inauguration of President Trump means the project is looking for new partners.

Matthew Ponsford is a freelance reporter based in London.
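A rough back-of-envelope check of the Ramp Ring’s reported accumulation rate, using only the figures above (two-meter bags deployed in December 2024, about 1.5 meters of sand by February). The ten-week window and the linear extrapolation are simplifying assumptions, not project data:

```python
# Back-of-envelope: how fast is sand climbing the Ramp Ring bladders?
# Heights come from the article; the ~10-week window and the assumption
# of a constant accumulation rate are illustrative simplifications.
bag_height_m = 2.0      # each geotextile bladder is about two meters tall
sand_height_m = 1.5     # sand level observed in underwater images by February
weeks_elapsed = 10      # December 2024 deployment to February imaging (assumed)

rate_m_per_week = sand_height_m / weeks_elapsed
weeks_to_crest = (bag_height_m - sand_height_m) / rate_m_per_week

print(f"~{rate_m_per_week:.2f} m of sand per week")        # ~0.15 m/week
print(f"~{weeks_to_crest:.1f} more weeks to crest the bags")  # ~3.3 weeks, if linear
```

In reality the rate would taper as the bags fill and seasonal currents shift, which is exactly the kind of dynamics the Littoral platform is meant to predict.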
  • WWW.TECHNOLOGYREVIEW.COM
    AI is pushing the limits of the physical world
    Architecture often assumes a binary between built projects and theoretical ones. What physics allows in actual buildings, after all, is vastly different from what architects can imagine and design (often referred to as “paper architecture”). That imagination has long been supported and enabled by design technology, but the latest advancements in artificial intelligence have prompted a surge in the theoretical.  Karl Daubmann, College of Architecture and Design at Lawrence Technological University“Very often the new synthetic image that comes from a tool like Midjourney or Stable Diffusion feels new,” says Daubmann, “infused by each of the multiple tools but rarely completely derived from them.” “Transductions: Artificial Intelligence in Architectural Experimentation,” a recent exhibition at the Pratt Institute in Brooklyn, brought together works from over 30 practitioners exploring the experimental, generative, and collaborative potential of artificial intelligence to open up new areas of architectural inquiry—something they’ve been working on for a decade or more, since long before AI became mainstream. Architects and exhibition co-­curators Jason Vigneri-Beane, Olivia Vien, Stephen Slaughter, and Hart Marlow explain that the works in “Transductions” emerged out of feedback loops among architectural discourses, techniques, formats, and media that range from imagery, text, and animation to mixed-­reality media and fabrication. The aim isn’t to present projects that are going to break ground anytime soon; architects already know how to build things with the tools they have. Instead, the show attempts to capture this very early stage in architecture’s exploratory engagement with AI. Technology has long enabled architecture to push the limits of form and function. As early as 1963, Sketchpad, one of the first architectural software programs, allowed architects and designers to move and change objects on screen. Rapidly, traditional hand drawing gave way to an ever-expanding suite of programs—­Revit, SketchUp, and BIM, among many others—that helped create floor plans and sections, track buildings’ energy usage, enhance sustainable construction, and aid in following building codes, to name just a few uses.  The architects exhibiting in “Trans­ductions” view newly evolving forms of AI “like a new tool rather than a profession-­ending development,” says Vigneri-Beane, despite what some of his peers fear about the technology. He adds, “I do appreciate that it’s a somewhat unnerving thing for people, [but] I feel a familiarity with the rhetoric.” After all, he says, AI doesn’t just do the job. “To get something interesting and worth saving in AI, an enormous amount of time is required,” he says. “My architectural vocabulary has gotten much more precise and my visual sense has gotten an incredible workout, exercising all these muscles which have atrophied a little bit.” Vien agrees: “I think these are extremely powerful tools for an architect and designer. Do I think it’s the entire future of architecture? No, but I think it’s a tool and a medium that can expand the long history of mediums and media that architects can use not just to represent their work but as a generator of ideas.” Andrew Kudless, Hines College of Architecture and DesignThis image, part of the Urban Resolution series, shows how the Stable Diffusion AI model “is unable to focus on constructing a realistic image and instead duplicates features that are prominent in the local latent space,” Kudless says. 
Jason Vigneri-Beane, Pratt Institute “These images are from a larger series on cyborg ecologies that have to do with co-creating with machines to imagine [other] machines,” says Vigneri-Beane. “I might refer to these as cryptomegafauna—infrastructural robots operating at an architectural scale.” Martin Summers, University of Kentucky College of Design“Most AI is racing to emulate reality,” says Summers. “I prefer to revel in the hallucinations and misinterpretations like glitches and the sublogic they reveal present in a mediated reality.” Jason Lee, Pratt InstituteLee typically uses AI “to generate iterations or high-resolution sketches,” he says. “I am also using it to experiment with how much realism one can incorporate with more abstract representation methods.” Olivia Vien, Pratt Institute For the series Imprinting Grounds, Vien created images digitally and fed them into Midjourney. “It riffs on the ideas of damask textile patterns in a more digital realm,” she says.Robert Lee Brackett III, Pratt Institute“While new software raises concerns about the absence of traditional tools like hand drawing and modeling, I view these technologies as collaborators rather than replacements,” Brackett says.
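For readers curious about the mechanics behind images like these, here is a minimal sketch of a text-to-image workflow of the kind the exhibitors describe, assuming the open-source Hugging Face diffusers library and a public Stable Diffusion checkpoint. The model ID, prompt, and settings are illustrative (and a CUDA GPU is assumed); the exhibitors’ actual pipelines are far more iterative:

```python
# Minimal text-to-image sketch with an open Stable Diffusion checkpoint.
# Model ID and prompt are illustrative, not the practitioners' actual setups.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # a publicly available checkpoint
    torch_dtype=torch.float16,
).to("cuda")

prompt = (
    "speculative architecture, cyborg-ecology megastructure, "
    "infrastructural robots at building scale, atmospheric, photorealistic"
)
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("concept.png")  # one iteration; practitioners generate hundreds
```

The point the exhibitors make holds here too: a single generation is cheap, but arriving at something “interesting and worth saving” means refining prompts and curating outputs over many such runs.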
  • WWW.TECHNOLOGYREVIEW.COM
    How creativity became the reigning value of our time
Americans don’t agree on much these days. Yet even at a time when consensus reality seems to be on the verge of collapse, there remains at least one quintessentially modern value we can all still get behind: creativity. We teach it, measure it, envy it, cultivate it, and endlessly worry about its death. And why wouldn’t we? Most of us are taught from a young age that creativity is the key to everything from finding personal fulfillment to achieving career success to solving the world’s thorniest problems. Over the years, we’ve built creative industries, creative spaces, and creative cities and populated them with an entire class of people known simply as “creatives.” We read thousands of books and articles each year that teach us how to unleash, unlock, foster, boost, and hack our own personal creativity. Then we read even more to learn how to manage and protect this precious resource.

Given how much we obsess over it, the concept of creativity can feel like something that has always existed, a thing philosophers and artists have pondered and debated throughout the ages. While that’s a reasonable assumption, it turns out to be very wrong. As Samuel Franklin explains in his recent book, The Cult of Creativity, the first known written use of “creativity” didn’t occur until 1875, “making it an infant as far as words go.” What’s more, he writes, before about 1950, “there were approximately zero articles, books, essays, treatises, odes, classes, encyclopedia entries, or anything of the sort dealing explicitly with the subject of ‘creativity.’”

This raises some obvious questions. How exactly did we go from never talking about creativity to always talking about it? What, if anything, distinguishes creativity from other, older words, like ingenuity, cleverness, imagination, and artistry? Maybe most important: How did everyone from kindergarten teachers to mayors, CEOs, designers, engineers, activists, and starving artists come to believe that creativity isn’t just good—personally, socially, economically—but the answer to all life’s problems?

Thankfully, Franklin offers some potential answers in his book. A historian and design researcher at the Delft University of Technology in the Netherlands, he argues that the concept of creativity as we now know it emerged during the post–World War II era in America as a kind of cultural salve—a way to ease the tensions and anxieties caused by increasing conformity, bureaucracy, and suburbanization. “Typically defined as a kind of trait or process vaguely associated with artists and geniuses but theoretically possessed by anyone and applicable to any field, [creativity] provided a way to unleash individualism within order,” he writes, “and revive the spirit of the lone inventor within the maze of the modern corporation.”

Brainstorming, a new method for encouraging creative thinking, swept corporate America in the 1950s. A response to pressure for new products and new ways of marketing them, as well as a panic over conformity, it inspired passionate debate about whether true creativity should be an individual affair or could be systematized for corporate use. INSTITUTE OF PERSONALITY AND SOCIAL RESEARCH, UNIVERSITY OF CALIFORNIA, BERKELEY/THE MONACELLI PRESS

I spoke to Franklin about why we continue to be so fascinated by creativity, how Silicon Valley became the supposed epicenter of it, and what role, if any, technologies like AI might have in reshaping our relationship with it.
I’m curious what your personal relationship to creativity was growing up. What made you want to write a book about it?

Like a lot of kids, I grew up thinking that creativity was this inherently good thing. For me—and I imagine for a lot of other people who, like me, weren’t particularly athletic or good at math and science—being creative meant you at least had some future in this world, even if it wasn’t clear what that future would entail. By the time I got into college and beyond, the conventional wisdom among the TED Talk register of thinkers—people like Daniel Pink and Richard Florida—was that creativity was actually the most important trait to have for the future. Basically, the creative people were going to inherit the Earth, and society desperately needed them if we were going to solve all of these compounding problems in the world.

On the one hand, as someone who liked to think of himself as creative, it was hard not to be flattered by this. On the other hand, it all seemed overhyped to me. What was being sold as the triumph of the creative class wasn’t actually resulting in a more inclusive or creative world order. What’s more, some of the values embedded in what I call the cult of creativity seemed increasingly problematic—specifically, the focus on self-realization, doing what you love, and following your passion. Don’t get me wrong—it’s a beautiful vision, and I saw it work out for some people. But I also started to feel like it was just a cover for what was, economically speaking, a pretty bad turn of events for many people.

Staff members at the University of California’s Institute of Personality Assessment and Research simulate a situational procedure involving group interaction, called the Bingo Test. Researchers of the 1950s hoped to learn how factors in people’s lives and environments shaped their creative aptitude. INSTITUTE OF PERSONALITY AND SOCIAL RESEARCH, UNIVERSITY OF CALIFORNIA, BERKELEY/THE MONACELLI PRESS

Nowadays, it’s quite common to bash the “follow your passion,” “hustle culture” idea. But back when I started this project, the whole move-fast-and-break-things, disrupter, innovation-economy stuff was very much unquestioned. In a way, the idea for the book came from recognizing that creativity was playing this really interesting role in connecting two worlds: this world of innovation and entrepreneurship and this more soulful, bohemian side of our culture. I wanted to better understand the history of that relationship.

When did you start thinking about creativity as a kind of cult—one that we’re all a part of?

Similar to something like the “cult of domesticity,” it was a way of describing a historical moment in which an idea or value system achieves a kind of broad, uncritical acceptance. I was finding that everyone was selling stuff based on the idea that it boosted your creativity, whether it was a new office layout, a new kind of urban design, or the “Try these five simple tricks” type of thing. You start to realize that nobody is bothering to ask, “Hey, uh, why do we all need to be creative again? What even is this thing, creativity?” It had become this unimpeachable value that no one, regardless of what side of the political spectrum they fell on, would even think to question. That, to me, was really unusual, and I think it signaled that something interesting was happening.

Your book highlights midcentury efforts by psychologists to turn creativity into a quantifiable mental trait and the “creative person” into an identifiable type. How did that play out?

The short answer is: not very well. To study anything, you of course need to agree on what it is you’re looking at. Ultimately, I think these groups of psychologists were frustrated in their attempts to come up with scientific criteria that defined a creative person. One technique was to go find people who were already eminent in fields that were deemed creative—writers like Truman Capote and Norman Mailer, architects like Louis Kahn and Eero Saarinen—and just give them a battery of cognitive and psychoanalytic tests and then write up the results. This was mostly done by an outfit called the Institute of Personality Assessment and Research (IPAR) at Berkeley. Frank Barron and Don MacKinnon were the two biggest researchers in that group.

Another way psychologists went about it was to say, all right, that’s not going to be practical for coming up with a good scientific standard. We need numbers, and lots and lots of people to certify these creative criteria. This group of psychologists theorized that something called “divergent thinking” was a major component of creative accomplishment. You’ve heard of the brick test, where you’re asked to come up with many creative uses for a brick in a given amount of time? They basically gave a version of that test to Army officers, schoolchildren, rank-and-file engineers at General Electric, all kinds of people. It’s tests like those that ultimately became stand-ins for what it means to be “creative.”

Are they still used?

When you see a headline about AI making people more creative, or actually being more creative than humans, the tests they are basing that assertion on are almost always some version of a divergent thinking test. It’s highly problematic for a number of reasons. Chief among them is the fact that these tests have never been shown to have predictive value—that’s to say, a third grader, a 21-year-old, or a 35-year-old who does really well on divergent thinking tests doesn’t seem to have any greater likelihood of being successful in creative pursuits. The whole point of developing these tests in the first place was to both identify and predict creative people. None of them have been shown to do that.

Reading your book, I was struck by how vague and, at times, contradictory the concept of “creativity” was from the beginning. You characterize that as “a feature, not a bug.” How so?

Ask any creativity expert today what they mean by “creativity,” and they’ll tell you it’s the ability to generate something new and useful. That something could be an idea, a product, an academic paper—whatever. The focus on novelty has remained an aspect of creativity from the beginning. It’s also what distinguishes it from other similar words, like imagination or cleverness. But you’re right: Creativity is a flexible enough concept to be used in all sorts of ways and to mean all sorts of things, many of them contradictory. I think I write in the book that the term may not be precise, but that it’s vague in precise and meaningful ways. It can be both playful and practical, artsy and technological, exceptional and pedestrian. That was and remains a big part of its appeal.

Is that emphasis on novelty and utility a part of why Silicon Valley likes to think of itself as the new nexus for creativity?

Absolutely. The two criteria go together. In techno-solutionist, hypercapitalist milieus like Silicon Valley, novelty isn’t any good if it’s not useful (or at least marketable), and utility isn’t any good (or marketable) unless it’s also novel. That’s why they’re often dismissive of boring-but-important things like craft, infrastructure, maintenance, and incremental improvement, and why they support art—which is traditionally defined by its resistance to utility—only insofar as it’s useful as inspiration for practical technologies. At the same time, Silicon Valley loves to wrap itself in “creativity” because of all the artsy and individualist connotations. It has very self-consciously tried to distance itself from the image of the buttoned-down engineer working for a large R&D lab of a brick-and-mortar manufacturing corporation and instead raise up the idea of a rebellious counterculture type tinkering in a garage making weightless products and experiences. That, I think, has saved it from a lot of public scrutiny.

Up until recently, we’ve tended to think of creativity as a human trait, maybe with a few exceptions from the rest of the animal world. Is AI changing that?

When people started defining creativity in the ’50s, the threat of computers automating white-collar work was already underway. They were basically saying, okay, rational and analytical thinking is no longer ours alone. What can we do that the computers can never do? And the assumption was that humans alone could be “truly creative.” For a long time, computers didn’t do much to really press the issue on what that actually meant. Now they’re pressing the issue. Can they do art and poetry? Yes. Can they generate novel products that also make sense or work? Sure. I think that’s by design. The kinds of LLMs that Silicon Valley companies have put forward are meant to appear “creative” in those conventional senses.

Now, whether or not their products are meaningful or wise in a deeper sense, that’s another question. If we’re talking about art, I happen to think embodiment is an important element. Nerve endings, hormones, social instincts, morality, intellectual honesty—those are not things essential to “creativity” necessarily, but they are essential to putting things out into the world that are good, and maybe even beautiful in a certain antiquated sense. That’s why I think the question of “Can machines be ‘truly creative’?” is not that interesting, but the questions of “Can they be wise, honest, caring?” are more important if we’re going to be welcoming them into our lives as advisors and assistants.

This interview is based on two conversations and has been edited and condensed for clarity.

Bryan Gardiner is a writer based in Oakland, California.
  • WWW.TECHNOLOGYREVIEW.COM
    The world’s biggest space-based radar will measure Earth’s forests from orbit
Forests are the second-largest carbon sink on the planet, after the oceans. To understand exactly how much carbon they trap, the European Space Agency and Airbus have built a satellite called Biomass that will use a long-prohibited band of the radio spectrum to see below the treetops around the world. It will lift off from French Guiana toward the end of April and will boast the largest space-based radar in history, though its record will soon be matched in orbit by the US-India NISAR imaging satellite, due to launch later this year.

Roughly half of a tree’s dry mass is made of carbon, so getting a good measure of how much a forest weighs can tell you how much carbon dioxide it’s taken from the atmosphere. But scientists have no way of measuring that mass directly. “To measure biomass, you need to cut the tree down and weigh it, which is why we use indirect measuring systems,” says Klaus Scipal, manager of the Biomass mission.

These indirect systems rely on a combination of field sampling—foresters roaming among the trees to measure their height and diameter—and remote sensing technologies like lidar scanners, which can be flown over the forests on airplanes or drones and used to measure treetop height along lines of flight. This approach has worked well in North America and Europe, which have well-established forest management systems in place. “People know every tree there, take lots of measurements,” Scipal says. But most of the world’s trees are in less-mapped places, like the Amazon jungle, where less than 20% of the forest has been studied in depth on the ground. To get a sense of the biomass in those remote, mostly inaccessible areas, space-based forest sensing is the only feasible option. The problem is, the satellites we currently have in orbit are not equipped for monitoring trees.

Tropical forests seen from space look like green plush carpets, because all we can see are the treetops; from imagery like this, we can’t tell how high or thick the trees are. Radars we have on satellites like Sentinel-1 use short radio wavelengths like those in the C band, which fall between 3.9 and 7.5 centimeters. These bounce off the leaves and smaller branches and can’t penetrate the forest all the way to the ground. This is why, for the Biomass mission, ESA went with P-band radar. P-band radio waves, which are about 10 times longer in wavelength, can see bigger branches and the trunks of trees, where most of their mass is stored.

But fitting a P-band radar system on a satellite isn’t easy. The first problem is the size. “Radar systems scale with wavelengths—the longer the wavelength, the bigger your antennas need to be. You need bigger structures,” says Scipal. To enable it to carry the P-band radar, Airbus engineers had to make the Biomass satellite two meters wide, two meters thick, and four meters tall. The antenna for the radar is 12 meters in diameter. It sits on a long, multi-joint boom, and Airbus engineers had to fold it like a giant umbrella to fit it into the Vega C rocket that will lift it into orbit. The unfolding procedure alone is going to take several days once the satellite gets to space.

Sheer size, though, is just one reason we have generally avoided sending P-band radars to space. Operating such radar systems in space is banned by International Telecommunication Union regulations, and for a good reason: interference.

Workers roll the Biomass satellite out into a cleanroom to be inspected before the launch. ESA-CNES-ARIANESPACE/OPTIQUE VIDÉO DU CSG–S. MARTIN

“The primary frequency allocation in P band is for huge SOTR [single-object-tracking radars] Americans use to detect incoming intercontinental ballistic missiles. That was, of course, a problem for us,” Scipal says. To get an exemption from the ban on space-based P-band radars, ESA had to agree to several limitations, the most painful of which was turning the Biomass radar off over North America and Europe to avoid interfering with SOTR coverage. “This was a pity. It’s a European mission, so we wanted to do observations in Europe,” Scipal says. The rest of the world, though, is fair game.

The Biomass mission is scheduled to last five years. Calibration of the radar and other systems is going to take the first five months. After that, Biomass will enter its tomography phase, gathering data to create detailed biomass maps of the forests in India, Australia, Siberia, South America, Africa—everywhere but North America and Europe. “Tomography will work like a CT scan in a hospital. We will take images of each area from various different positions and create the 3D map of the forests,” Scipal says. Getting full, global coverage is expected to take 18 months. Then, for the rest of the mission, Biomass will switch to a different measurement method, capturing one full global map every nine months to measure how the condition of our forests changes over time.

“The scientific goal here is to really understand the role of forests in the global carbon cycle. The main interest is the tropics because it’s the densest forest which is under the biggest threat of deforestation and the one we know the least about,” Scipal says. Biomass is going to provide hectare-scale-resolution 3D maps of those tropical forests, including everything from the tree heights to ground topography—something we’ve never had before.

But there are limits to what it can do. “One drawback is that we won’t get insights into seasonal deviations in forest throughout the year because of the time it takes for Biomass to do global coverage,” says Irena Hajnsek, a professor of Earth observation at ETH Zurich, who is not involved in the Biomass mission. And Biomass is still going to leave some of our questions about carbon sinks unanswered. “In all our estimations of climate change, we know how much carbon is in the atmosphere, but we do not know so much about how much carbon is stored on land,” says Hajnsek. Biomass will have its limits, she says, since significant amounts of carbon are trapped in the soil in permafrost areas, which the mission won’t be able to measure. “But we’re going to learn how much carbon is stored in the forests and also how much of it is getting released due to disturbances like deforestation or fires,” she says. “And that is going to be a huge contribution.”
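The wavelength arithmetic behind Scipal’s point is simple: wavelength is the speed of light divided by frequency, and antenna size scales with wavelength. A quick sketch; the 435 MHz P-band figure is an assumption (a commonly cited P-band radar frequency, since the article doesn’t give Biomass’s exact value), and 5.4 GHz is in Sentinel-1’s C-band territory:

```python
# Wavelength = speed of light / frequency; longer waves need bigger antennas.
C = 3.0e8  # speed of light, m/s

def wavelength_cm(freq_hz: float) -> float:
    return C / freq_hz * 100

# C-band (Sentinel-1 territory) vs. an assumed 435 MHz P-band radar.
for name, f in [("C band, 5.4 GHz", 5.4e9), ("P band, 435 MHz (assumed)", 435e6)]:
    print(f"{name}: wavelength ≈ {wavelength_cm(f):.1f} cm")

# C band, 5.4 GHz: wavelength ≈ 5.6 cm   (within the 3.9-7.5 cm range above)
# P band, 435 MHz (assumed): ≈ 68.9 cm   -> roughly 10x longer, hence the
# 12-meter reflector and the umbrella-fold needed to fit the Vega C fairing.
```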
  • WWW.TECHNOLOGYREVIEW.COM
    This spa’s water is heated by bitcoin mining
At first glance, the Bathhouse spa in Brooklyn looks not so different from other high-end spas. What sets it apart is out of sight: a closet full of cryptocurrency-mining computers that not only generate bitcoins but also heat the spa’s pools, marble hammams, and showers.

When cofounder Jason Goodman opened Bathhouse’s first location in Williamsburg in 2019, he used conventional pool heaters. But after diving deep into the world of bitcoin, he realized he could fit cryptocurrency mining seamlessly into his business. That’s because the process, where special computers (called miners) make trillions of guesses per second to try to land on the string of numbers that will earn a bitcoin, consumes tremendous amounts of electricity—which in turn produces plenty of heat that usually goes to waste. “I thought, ‘That’s interesting—we need heat,’” Goodman says of Bathhouse. Mining facilities typically use fans or water to cool their computers. And pools of water, of course, are a prominent feature of the spa.

It takes six miners, each roughly the size of an Xbox One console, to maintain a hot tub at 104 °F. At Bathhouse’s Williamsburg location, miners hum away quietly inside two large tanks, tucked in a storage closet among liquor bottles and teas. To keep them cool and quiet, the units are immersed directly in non-conductive oil, which absorbs the heat they give off and is pumped through tubes beneath Bathhouse’s hot tubs and hammams. Mining boilers, which cool the computers by pumping in cold water that comes back out at 170 °F, are now also being used at the site. A thermal battery stores excess heat for future use. Goodman says his spas aren’t saving energy by using bitcoin miners for heat, but they’re also not using any more than they would with conventional water heating. “I’m just inserting miners into that chain,” he says.

Goodman isn’t the only one to see the potential in heating with crypto. In Finland, Marathon Digital Holdings turned fleets of bitcoin miners into a district heating system to warm the homes of 80,000 residents. HeatCore, an integrated energy service provider, has used bitcoin mining to heat a commercial office building in China and to keep pools at a constant temperature for fish farming. This year it will begin a pilot project to heat seawater for desalination. On a smaller scale, bitcoin fans who also want some extra warmth can buy miners that double as space heaters.

Crypto enthusiasts like Goodman think much more of this is coming—especially under the Trump administration, which has announced plans to create a bitcoin reserve. This prospect alarms environmentalists. The energy required for a single bitcoin transaction varies, but as of mid-March it was equivalent to the energy consumed by an average US household over 47.2 days, according to the Bitcoin Energy Consumption Index, run by the economist Alex de Vries. Among the various cryptocurrencies, bitcoin mining gobbles up the most energy by far. De Vries points out that others, like ethereum, have eliminated mining and implemented less energy-intensive algorithms. But bitcoin users resist any change to their currency, so de Vries is doubtful a shift away from mining will happen anytime soon.

One key barrier to using bitcoin miners for heating, de Vries says, is that the heat can only be transported short distances before it dissipates. “I see this as something that is extremely niche,” he says. “It’s just not competitive, and you can’t make it work at a large scale.”

The more renewable sources that are added to electric grids to replace fossil fuels, the cleaner crypto mining will become. But even if bitcoin is powered by renewable energy, “that doesn’t make it sustainable,” says Kaveh Madani, director of the United Nations University Institute for Water, Environment and Health. Mining burns through valuable resources that could otherwise be used to meet existing energy needs, Madani says.

For Goodman, relaxing into bitcoin-heated water is a completely justifiable use of energy. It soothes the muscles, calms the mind, and challenges current economic structures, all at the same time.

Carrie Klein is a freelance journalist based in New York City.
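To put the per-transaction comparison in rough numbers, here is a quick sketch. The annual household consumption figure is an assumed ballpark (about 10,500 kWh per year, in line with typical US EIA estimates), and the 3.5 kW miner rating is likewise an assumption; neither comes from the article:

```python
# Rough check of the "47.2 days of household electricity" comparison.
# Household usage (~10,500 kWh/yr) and miner rating (3.5 kW) are assumed ballparks.
household_kwh_per_year = 10_500
household_kwh_per_day = household_kwh_per_year / 365   # ~28.8 kWh/day

days_per_btc_transaction = 47.2                        # figure cited above
energy_per_tx_kwh = days_per_btc_transaction * household_kwh_per_day
print(f"~{energy_per_tx_kwh:,.0f} kWh per bitcoin transaction")  # ~1,360 kWh

# For scale: six miners at an assumed 3.5 kW each, fully immersed in oil,
# would dissipate about 21 kW of heat continuously while hashing.
print(f"Six miners at 3.5 kW ≈ {6 * 3.5:.0f} kW of usable heat")
```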
  • WWW.TECHNOLOGYREVIEW.COM
    Longevity clinics around the world are selling unproven treatments
The quest for long, healthy life—and even immortality—is probably almost as old as humans are, but it’s never been hotter than it is right now. Today my newsfeed is full of claims about diets, exercise routines, and supplements that will help me live longer. A lot of it is marketing fluff, of course. It should be fairly obvious that a healthy, plant-rich diet and moderate exercise will help keep you in good shape. And no drugs or supplements have yet been proved to extend human lifespan.

The growing field of longevity medicine is apparently aiming for something in between these two ends of the wellness spectrum. By combining the established tools of clinical medicine (think blood tests and scans) with some more experimental ones (tests that measure your biological age), these clinics promise to help their clients improve their health and longevity. But a survey of longevity clinics around the world, carried out by an organization that publishes updates and research on the industry, reveals a messier picture. In reality, these clinics—most of which cater only to the very wealthy—vary wildly in their offerings.

Today, the number of longevity clinics is thought to be somewhere in the hundreds. The proponents of these clinics say they represent the future of medicine. “We can write new rules on how we treat patients,” Eric Verdin, who directs the Buck Institute for Research on Aging, said at a professional meeting last year. Phil Newman, who runs Longevity.Technology, a company that tracks the longevity industry, says he knows of 320 longevity clinics operating around the world. Some operate multiple centers on an international scale, while others involve a single “practitioner” incorporating some element of “longevity” into the treatments offered, he says. To get a better idea of what these offerings might be, Newman and his colleagues conducted a survey of 82 clinics around the world, including in the US, Australia, Brazil, and multiple countries in Europe and Asia.

Some of the results are not all that surprising. Three-quarters of the clinics said that most of their clients were Gen Xers, aged between 44 and 59. This makes sense—anecdotally, it’s around this age that many people start to feel the effects of aging. And research suggests that waves of molecular changes associated with aging hit us in our 40s and again in our 60s. (Longevity influencers Bryan Johnson, Andrew Huberman, and Peter Attia all fall into this age group too.) And I wasn’t surprised to see that plenty of clinics are offering aesthetic treatments, focusing more on how old their clients look. Of the clinics surveyed, 28% said they offered Botox injections, 35% offered hair loss treatments, and 38% offered “facial rejuvenation procedures.”

“The distinction between longevity medicine and aesthetic medicine remains blurred,” Andrea Maier of the National University of Singapore, cofounder of a private longevity clinic, wrote in a commentary on the report. Maier is also former president of the Healthy Longevity Medicine Society, an organization that was set up with the aim of establishing clinical standards and credibility for longevity clinics.

Other results from the survey underline how much of a challenge this will be; many clinics are still offering unproven treatments. Over a third of the clinics said they offered stem-cell treatments, for example. There is no evidence that those treatments will help people live longer—and they are not without risk, either. I was a little surprised to see that most of the clinics are also offering prescription medicines off label. In other words, drugs that have been approved for specific medical issues are apparently being prescribed for aging instead. This is also not without risks—all medicines have side effects. And, again, none of them have been proved to slow or reverse human aging. And these prescriptions are coming from certified medical doctors: more than 80% of clinics reported that their practice was overseen by a medical doctor with more than 10 years of clinical experience.

It was also a little surprising to learn that despite their high fees, most of these clinics are not making a profit. For clients, the annual costs of attending a longevity clinic range between $10,000 and $150,000, according to Fountain Life, a company with clinics in Florida and Prague. But only 39% of the surveyed clinics said they were turning a profit, 30% said they were “approaching breaking even,” and 16% said they were operating at a loss.

Proponents of longevity clinics have high hopes for the field. They see longevity medicine as nothing short of a revolution—a move away from reactive treatments and toward proactive health maintenance. But these survey results show just how far they have to go.

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.
  • WWW.TECHNOLOGYREVIEW.COM
    NASA has made an air traffic control system for drones
On Thanksgiving weekend of 2013, Jeff Bezos, then Amazon’s CEO, took to 60 Minutes to make a stunning announcement: Amazon was a few years away from deploying drones that would deliver packages to homes in less than 30 minutes.

It lent urgency to a problem that Parimal Kopardekar, director of the NASA Aeronautics Research Institute, had begun thinking about earlier that year. “How do you manage and accommodate large-scale drone operations without overloading the air traffic control system?” Kopardekar, who goes by PK, recalls wondering. Busy managing all airplane takeoffs and landings, air traffic controllers clearly wouldn’t have the capacity to oversee the fleets of package-delivering drones Amazon was promising.

The solution PK devised, which subsequently grew into a collaboration between federal agencies, researchers, and industry, is a system called unmanned-aircraft-system traffic management, or UTM. Instead of verbally communicating with air traffic controllers, drone operators using UTM share their intended flight paths with each other via a cloud-based network. This highly scalable approach may finally open the skies to a host of commercial drone applications that have yet to materialize. (Amazon Prime Air launched in 2022, for example, but was put on hold after crashes at a testing facility.) On any given day, only 8,500 or so unmanned aircraft fly in US airspace, the vast majority of them used for recreational purposes rather than for services like search and rescue missions, real estate inspections, video surveillance, or farmland surveys.

One obstacle to wider use has been concern over possible midair drone-to-drone collisions. (Drones are typically restricted to airspace below 400 feet, and their access to airports is limited, which significantly lowers the risk of drone-airplane collisions.) Under Federal Aviation Administration regulations, drones generally cannot fly beyond an operator’s visual line of sight, limiting flights to about a third of a mile. This prevents most collisions but also most use cases, such as delivering medication to a patient’s doorstep or dispatching a police drone to an active crime scene so first responders can better prepare before arriving.

Now, though, drone operators are increasingly incorporating UTM into their flights. The system uses path planning algorithms, like those that run in Google Maps, to chart a course that considers not only weather and obstacles like buildings and trees but also the flight paths of nearby drones. It will automatically reroute a flight before takeoff if another drone has reserved the same volume of airspace at the same time, making the new flight trajectory visible to subsequent pilots. Drones can then fly autonomously to and from their destination, and no air traffic controller is required.

Over the past decade, NASA and industry have demonstrated to the FAA through a series of tests that drones can safely maneuver around each other by adhering to UTM. And last summer, the agency gave the go-ahead for multiple drone delivery companies using UTM to begin flying simultaneously in the same airspace above Dallas—a first in US aviation history. Drone operators without in-house UTM capabilities have also begun licensing UTM services from FAA-approved third-party providers.

UTM works only if all participants abide by the same rules and agree to share data, and it has enabled a level of collaboration unusual for companies competing to gain a foothold in a young, hot field, notes Peter Sachs, head of airspace integration strategy at Zipline, a drone delivery company based in South San Francisco that’s approved to use UTM. “We all agree that we need to collaborate on the practical, behind-the-scenes nuts and bolts to make sure that this preflight deconfliction for drones works really well,” Sachs says. (“Strategic deconfliction” is the technical term for processes that minimize drone-drone collisions.) Zipline and the drone delivery companies Wing, Flytrex, and DroneUp all operate in the Dallas area and are racing to expand to more cities, yet they disclose where they’re flying to one another in the interest of keeping the airspace conflict-free.

Greater adoption of UTM may be on the way. The FAA is expected to soon release a new rule called Part 108 that may allow operators to fly beyond visual line of sight if, among other requirements, they have some UTM capability, eliminating the need for the difficult-to-obtain waiver the agency currently requires for these flights. To safely manage this additional drone traffic, drone companies will have to continue working together to keep their aircraft out of each other’s way.

Yaakov Zinberg is a writer based in Cambridge, Massachusetts.
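A highly simplified sketch of the preflight “strategic deconfliction” idea described above: operators reserve volumes of airspace for time windows, and a new flight plan is accepted only if its reservations overlap no existing ones. Real UTM services exchange far richer data through standardized interfaces; every name and structure here is invented for illustration:

```python
# Toy preflight deconfliction: reject a flight whose reserved 4D volumes
# (bounding box + time window) intersect an already-accepted reservation.
# All names and structures are illustrative, not a real UTM schema.
from dataclasses import dataclass

@dataclass
class Volume4D:
    x_min: float; x_max: float        # meters, arbitrary local frame
    y_min: float; y_max: float
    alt_min: float; alt_max: float    # feet AGL (drones stay below 400 ft)
    t_start: float; t_end: float      # seconds since some shared epoch

    def conflicts(self, other: "Volume4D") -> bool:
        overlap = lambda a0, a1, b0, b1: a0 < b1 and b0 < a1
        return (overlap(self.x_min, self.x_max, other.x_min, other.x_max)
                and overlap(self.y_min, self.y_max, other.y_min, other.y_max)
                and overlap(self.alt_min, self.alt_max, other.alt_min, other.alt_max)
                and overlap(self.t_start, self.t_end, other.t_start, other.t_end))

accepted: list[Volume4D] = []  # stands in for the shared cloud-based network

def request_flight(volumes: list[Volume4D]) -> bool:
    """Accept and record a flight plan only if none of its volumes conflict."""
    if any(v.conflicts(r) for v in volumes for r in accepted):
        return False  # operator must replan: shift the route or departure time
    accepted.extend(volumes)
    return True

# Two operators request the same corridor at overlapping times:
a = Volume4D(0, 100, 0, 50, 0, 200, t_start=0, t_end=300)
b = Volume4D(50, 150, 0, 50, 0, 200, t_start=200, t_end=500)
print(request_flight([a]))  # True  -> airspace reserved
print(request_flight([b]))  # False -> conflict; reroute before takeoff
```

The rerouting the article describes amounts to searching for an alternative set of volumes (a different path or a later slot) until `request_flight` succeeds, all before the drone ever leaves the ground.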
  • WWW.TECHNOLOGYREVIEW.COM
    A Google Gemini model now has a “dial” to adjust how much it reasons
Google DeepMind’s latest update to a top Gemini AI model includes a dial to control how much the system “thinks” through a response. The new feature is ostensibly designed to save money for developers, but it also concedes a problem: reasoning models, the tech world’s new obsession, are prone to overthinking, burning money and energy in the process.

Since 2019, there have been a couple of tried-and-true ways to make an AI model more powerful. One was to make it bigger by using more training data, and the other was to give it better feedback on what constitutes a good answer. But toward the end of last year, Google DeepMind and other AI companies turned to a third method: reasoning. “We’ve been really pushing on ‘thinking,’” says Jack Rae, a principal research scientist at DeepMind. Such models, which are built to work through problems logically and spend more time arriving at an answer, rose to prominence earlier this year with the launch of the DeepSeek R1 model. They’re attractive to AI companies because they can make an existing model better by training it to approach a problem pragmatically. That way, the companies can avoid having to build a new model from scratch.

When an AI model dedicates more time (and energy) to a query, it costs more to run. Leaderboards of reasoning models show that one task can cost upwards of $200 to complete. The promise is that this extra time and money help reasoning models do better at handling challenging tasks, like analyzing code or gathering information from lots of documents. “The more you can iterate over certain hypotheses and thoughts,” says Google DeepMind chief technical officer Koray Kavukcuoglu, the more “it’s going to find the right thing.”

This isn’t true in all cases, though. “The model overthinks,” says Tulsee Doshi, who leads the product team at Gemini, referring specifically to Gemini 2.5 Flash, the model released today that includes a slider for developers to dial back how much it thinks. “For simple prompts, the model does think more than it needs to.” When a model spends longer than necessary on a problem only to arrive at a mediocre answer, it becomes expensive for developers to run and worsens AI’s environmental footprint.

Nathan Habib, an engineer at Hugging Face who has studied the proliferation of such reasoning models, says overthinking is abundant. In the rush to show off smarter AI, companies are reaching for reasoning models like hammers even where there’s no nail in sight, Habib says. Indeed, when OpenAI announced a new model in February, it said it would be the company’s last nonreasoning model. The performance gain is “undeniable” for certain tasks, Habib says, but not for many others where people normally use AI. Even when reasoning is used for the right problem, things can go awry. Habib showed me an example of a leading reasoning model that was asked to work through an organic chemistry problem. It started out okay, but halfway through its reasoning process the model’s responses started resembling a meltdown: it sputtered “Wait, but …” hundreds of times. It ended up taking far longer than a nonreasoning model would spend on one task. Kate Olszewska, who works on evaluating Gemini models at DeepMind, says Google’s models can also get stuck in loops.

Google’s new “reasoning” dial is one attempt to solve that problem. For now, it’s built not for the consumer version of Gemini but for developers who are making apps. Developers can set a budget for how much computing power the model should spend on a certain problem, the idea being to turn down the dial if the task shouldn’t involve much reasoning at all. Outputs from the model are about six times more expensive to generate when reasoning is turned on.

Another reason for this flexibility is that it’s not yet clear when more reasoning is required to get a better answer. “It’s really hard to draw a boundary on, like, what’s the perfect task right now for thinking?” Rae says. Obvious tasks include coding (developers might paste hundreds of lines of code into the model and then ask for help) and generating expert-level research reports. The dial would be turned way up for these, and developers might find the expense worth it. But more testing and feedback from developers will be needed to find out when medium or low settings are good enough.

Habib says the amount of investment in reasoning models is a sign that the old paradigm for how to make models better is changing. “Scaling laws are being replaced,” he says. Instead, companies are betting that the best responses will come from longer thinking times rather than bigger models. It’s been clear for several years that AI companies are spending more money on inferencing—when models are actually “pinged” to generate an answer for something—than on training, and this spending will accelerate as reasoning models take off. Inferencing is also responsible for a growing share of emissions.

(While on the subject of models that “reason” or “think”: an AI model cannot perform these acts in the way we normally use such words when talking about humans. I asked Rae why the company uses anthropomorphic language like this. “It’s allowed us to have a simple name,” he says, “and people have an intuitive sense of what it should mean.” Kavukcuoglu says that Google is not trying to mimic any particular human cognitive process in its models.)

Even if reasoning models continue to dominate, Google DeepMind isn’t the only game in town. When the results from DeepSeek began circulating in December and January, they triggered a nearly $1 trillion dip in the stock market because they promised that powerful reasoning models could be had for cheap. The model is referred to as “open weight”—in other words, its internal settings, called weights, are made publicly available, allowing developers to run it on their own hardware rather than paying to access proprietary models from Google or OpenAI. (The term “open source” is reserved for models that disclose the data they were trained on.)

So why use proprietary models from Google when open ones like DeepSeek are performing so well? Kavukcuoglu says that coding, math, and finance are cases where “there’s high expectation from the model to be very accurate, to be very precise, and to be able to understand really complex situations,” and he expects models that deliver on that, open or not, to win out. In DeepMind’s view, this reasoning will be the foundation of future AI models that act on your behalf and solve problems for you. “Reasoning is the key capability that builds up intelligence,” he says. “The moment the model starts thinking, the agency of the model has started.”
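For developers, the dial is exposed as a “thinking budget” on API calls. Here is a minimal sketch of what that can look like, assuming the google-genai Python SDK; exact field names, defaults, and model IDs may differ across SDK versions, so treat this as illustrative rather than canonical:

```python
# Minimal sketch: capping (or disabling) "thinking" on a Gemini 2.5 Flash call.
# Assumes the google-genai SDK; parameter names may vary by version.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Summarize this bug report in one sentence: ...",
    config=types.GenerateContentConfig(
        # A budget of 0 skips reasoning entirely for simple prompts;
        # raising it (in thinking tokens) buys more deliberation for
        # code analysis or research-style tasks.
        thinking_config=types.ThinkingConfig(thinking_budget=0),
    ),
)
print(response.text)
```

The design choice mirrors the article’s economics: since reasoning-on outputs cost several times more to generate, the budget lets the developer, not the model, decide when deliberation is worth paying for.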
  • WWW.TECHNOLOGYREVIEW.COM
    We need targeted policies, not blunt tariffs, to drive “American energy dominance”
    President Trump and his appointees have repeatedly stressed the need to establish “American energy dominance.”  But the White House’s profusion of executive orders and aggressive tariffs, along with its determined effort to roll back clean-energy policies, are moving the industry in the wrong direction, creating market chaos and economic uncertainty that are making it harder for both legacy players and emerging companies to invest, grow, and compete. Heat Exchange MIT Technology Review’s guest opinion series, offering expert commentary on legal, political and regulatory issues related to climate change and clean energy. You can read the rest of the pieces here. The current 90-day pause on rolling out most of the administration’s so-called “reciprocal” tariffs presents a critical opportunity. Rather than defaulting to broad, blunt tariffs, the administration should use this window to align trade policy with a focused industrial strategy—one aimed at winning the global race to become a manufacturing powerhouse in next-generation energy technologies.  By tightly aligning tariff design with US strengths in R&D and recent government investments in the energy innovation lifecycle, the administration can turn a regressive trade posture into a proactive plan for economic growth and geopolitical advantage. The president is right to point out that America is blessed with world-leading energy resources. Over the past decade, the country has grown from being a net importer to a net exporter of oil and the world’s largest producer of oil and gas. These resources are undeniably crucial to America’s ability to reindustrialize and rebuild a resilient domestic industrial base, while also providing strategic leverage abroad.  But the world is slowly but surely moving beyond the centuries-old model of extracting and burning fossil fuels, a change driven initially by climate risks but increasingly by economic opportunities. America will achieve true energy dominance only by evolving beyond being a mere exporter of raw, greenhouse-gas-emitting energy commodities—and becoming the world’s manufacturing and innovation hub for sophisticated, high-value energy technologies. Notably, the nation took a lead role in developing essential early components of the cleantech sector, including solar photovoltaics and electric vehicles. Yet too often, the fruits of that innovation—especially manufacturing jobs and export opportunities—have ended up overseas, particularly in China. China, which is subject to Trump’s steepest tariffs and wasn’t granted any reprieve in the 90-day pause, has become the world’s dominant producer of lithium-ion batteries, EVs, wind turbines, and other key components of the clean-energy transition. Today, the US is again making exciting strides in next-generation technologies, including fusion energy, clean steel, advanced batteries, industrial heat pumps, and thermal energy storage. These advances can transform industrial processes, cut emissions, improve air quality, and maximize the strategic value of our fossil-fuel resources. That means not simply burning them for their energy content, but instead using them as feedstocks for higher-value materials and chemicals that power the modern economy. The US’s leading role in energy innovation didn’t develop by accident. 
For several decades, legislators on both sides of the political divide supported increasing government investments in energy innovation—from basic research at national labs and universities to applied R&D through ARPA-E and, more recently, to the creation of the Office of Clean Energy Demonstrations, which funds first-of-a-kind technology deployments. These programs have laid the foundation for the technologies we need—not just to meet climate goals, but to achieve global competitiveness.

Early-stage companies in competitive, global industries like energy do need extra support to help them get to the point where they can stand on their own. This is especially true for cleantech companies whose overseas rivals have much lower labor, land, and environmental compliance costs.

That’s why, for starters, the White House shouldn’t work to eliminate federal investments made in these sectors under the Bipartisan Infrastructure Law and the Inflation Reduction Act, as it’s reportedly striving to do as part of the federal budget negotiations. Instead, the administration and its Republican colleagues in Congress should preserve and refine these programs, which have already helped expand America’s ability to produce advanced energy products like batteries and EVs. Success should be measured not only in barrels produced or watts generated, but in dollars of goods exported, jobs created, and manufacturing capacity built.

The Trump administration should back this industrial strategy with smarter trade policy as well. Steep, sweeping tariffs won’t build long-term economic strength. But there are certain instances where reasonable, modern, targeted tariffs can be a useful tool in supporting domestic industries or countering unfair trade practices elsewhere. That’s why we’ve seen leaders of both parties, including Presidents Biden and Obama, apply them in recent years.

Such levies can be used to protect domestic industries where we’re competing directly with geopolitical rivals like China, and where American companies need breathing room to scale and thrive. These aims can be achieved by imposing tariffs on specific strategic technologies, such as EVs and next-generation batteries. But to be clear, targeted tariffs on a few strategic sectors are starkly different from Trump’s tariffs, which now include 145% levies on most Chinese goods, a 10% “universal” tariff on other nations, and 25% fees on steel and aluminum.

Another option is implementing a broader border adjustment policy, like the Foreign Pollution Fee Act recently reintroduced by Senators Cassidy and Graham, which is designed to create a level playing field that would help clean manufacturers in the US compete with heavily polluting businesses overseas.

Just as important, the nation must avoid counterproductive tariffs on critical raw materials like steel, aluminum, and copper, or retaliatory restrictions on critical minerals—all of which are essential inputs for US manufacturing. The nation does not currently produce enough of these materials to meet demand, and it would take years to build up that capacity. Raising input costs through tariffs only slows our ability to keep or bring key industries home.

Finally, we must be strategic in how we deploy the country’s greatest asset: our workforce. Americans are among the most educated and capable workers in the world. Their time, talent, and ingenuity shouldn’t be spent assembling low-cost, low-margin consumer goods like toasters.
Instead, we should focus on building cutting-edge industrial technologies that the world is demanding. These are the high-value products that support strong wages, resilient supply chains, and durable global leadership. The worldwide demand for clean, efficient energy technologies is rising rapidly, and the US cannot afford to be left behind. The energy transition presents not just an environmental imperative but a generational opportunity for American industrial renewal. The Trump administration has a chance to define energy dominance not just in terms of extraction, but in terms of production—of technology, of exports, of jobs, and of strategic influence. Let’s not let that opportunity slip away.

Addison Killean Stark is the chief executive and cofounder of AtmosZero, an industrial steam heat pump startup based in Loveland, Colorado. He was previously a fellow at the Department of Energy’s ARPA-E division, which funds research and development of advanced energy technologies.
  • WWW.TECHNOLOGYREVIEW.COM
    How a 1980s toy robot arm inspired modern robotics
As the child of an electronic engineer, I spent a lot of time in our local Radio Shack. While my dad was locating capacitors and resistors, I was in the toy section. It was there, in 1984, that I discovered the best toy of my childhood: the Armatron robotic arm.

A drawing from the patent application for the Armatron robotic arm. COURTESY OF TAKARA TOMY

Described as a “robot-like arm to aid young masterminds in scientific and laboratory experiments,” it was the rare toy that lived up to the hype printed on the front of the box. This was a legit robotic arm. You could rotate the arm to spin around its base, tilt it up and down, bend it at the “elbow” joint, rotate the “wrist,” and open and close the bright-orange articulated hand in elegant chords of movement, all using only the twistable twin joysticks.

Anyone who played with this toy will also remember the sound it made. Once you slid the power button to the On position, you heard a constant whirring sound of plastic gears turning and twisting. And if you tried to push it past its boundaries, it twitched and protested with a jarring “CLICK … CLICK … CLICK.”

It wasn’t just kids who found the Armatron so special. It was featured on the cover of the November/December 1982 issue of Robotics Age magazine, which noted that the $31.95 toy (about $96 today) had “capabilities usually found only in much more expensive experimental arms.”

A few years ago I found my Armatron, and when I opened the case to get it working again, I was startled to find that other than the compartment for the pair of D-cell batteries, a switch, and a tiny three-volt DC motor, this thing was totally devoid of any electronic components. It was purely mechanical. Later, I found the patent drawings for the Armatron online and saw how incredibly complex the schematics of the gearbox were. This design was the work of a genius—or a madman.

The man behind the arm

I needed to know the story of this toy. I reached out to the manufacturer, Tomy (now known as Takara Tomy), which has been in business in Japan for over 100 years. It put me in touch with Hiroyuki Watanabe, a 69-year-old engineer and toy designer living in Tokyo. He’s retired now, but he worked at Tomy for 49 years, building many classic handheld electronic toys of the ’80s, including Blip, Digital Diamond, Digital Derby, and Missile Strike. Watanabe’s name can be found on 44 patents, and he was involved in bringing between 50 and 60 products to market. Watanabe answered emailed questions via video, and his responses were translated from Japanese.

“I didn’t have a period where I studied engineering professionally. Instead, I enrolled in what Japan would call a technical high school that trains technical engineers, and I actually [entered] the electrical department there,” he told me. Afterward, he worked at Komatsu Manufacturing—because, he said, he liked bulldozers. But in 1974, he saw that Tomy was hiring, and he wanted to make toys. “I was told that it was the No. 1 toy company in Japan, so I decided [it was worth a look],” he said. “I took a night train from Tohoku to Tokyo to take a job exam, and that’s how I ended up joining the company.”

The inspiration for the Armatron came from a newspaper clipping that Watanabe’s boss brought to him one day. “It showed an image of a [mechanical arm] holding an egg with three fingers. I think we started out thinking, ‘This is where things are heading these days, so let’s make this,’” he recalled.
As the lead of a small team, Watanabe briefly turned his attention to another project, and by the time he returned to the robotic arm, the team had a prototype. But it was quite different from the Armatron’s final form. “The hand stuck out from the main body to the side and could only move about 90 degrees. The control panel also had six movement positions, and they were switched using six switches. I personally didn’t like that,” said Watanabe. So he went back to work.

The Armatron’s inventor, Hiroyuki Watanabe, in Tokyo in 2025. COURTESY OF TAKARA TOMY

Watanabe’s breakthrough was inspired by the radio-controlled helicopters he operated as a hobby. Holding up a radio remote controller with dual joystick controls, he told me, “This stick operation allows you to perform four movements with two arms, but I thought that if you twist this part, you can use six movements.”

Watanabe at work at Tomy in Tokyo in 1982. COURTESY OF HIROYUKI WATANABE

“I had always wanted to create a system that could rotate 360 degrees, so I thought about how to make that system work,” he added.

Watanabe stressed that while he is listed as the Armatron’s primary inventor, it was a team effort. A designer created the case, colors, and logo, adding touches to mimic features seen on industrial robots of the time, such as the rubber tubes (which are just for looks).

When the Armatron first came out, in 1981, robotics engineers started contacting Watanabe. “I wasn’t so much hearing from people at toy stores, but rather from researchers at university laboratories, factories, and companies that were making industrial robots,” he said. “They were quite encouraging, and we often talked together.”

The long reach of the robot at Radio Shack

The bold look and function of the Armatron made quite an impression on many young kids who would one day have a career in robotics. One of them was Adam Burrell, a mechanical design engineer who has been building robots for 15 years at Boston Dynamics, including Petman, the YouTube-famous Atlas, and the dog-size quadruped called Spot.

Burrell grew up a few blocks away from a Radio Shack in New York City. “If I was going to the subway station, we would walk right by Radio Shack. I would stop in and play with it and set the timer, do the challenges,” he says. “I know it was a toy, but that was a real robot.” The Armatron was the hook that lured him into Radio Shack and then sparked his lifelong interest in engineering: “I would roll pennies and use them to buy soldering irons and solder at Radio Shack.”

Burrell had a fateful reunion with the toy while in grad school for engineering. “One of my office mates had an Armatron at his desk,” he recalls, “and it was broken. We took it apart together, and that was the first time I had seen the guts of it.

“It had this fantastic mechanical gear train to just engage and disengage this one motor in a bunch of different ways. And it was really fascinating that it had done so much—the one little motor. And that sort of got me back thinking about industrial robot arms again.”

Eric Paulos, a professor of electrical engineering and computer science at the University of California, Berkeley, recalls nagging his parents about what an educational gift the Armatron would make. Ultimately, he succeeded in his lobbying. “It was just endless exploration of picking stuff up and moving it around and even just watching it move.
It was mesmerizing to me. I felt like I really owned my own little robot,” he recalls. “I cherish this thing. I still have it to this day, and it’s still working.”

The Armatron on the cover of the November/December 1982 issue of Robotics Age magazine. PUBLIC DOMAIN

Today, Paulos builds robots and teaches his students how to build their own. He challenges them to solve problems within constraints, such as building with cardboard or Play-Doh; he believes the restrictions facing Watanabe and his team ultimately forced them to be more creative in their engineering.

It’s not very hard to draw connections between the Armatron—an impossibly analog robot—and highly advanced machines that are today learning to move in incredible new ways, powered by AI advancements like computer vision and reinforcement learning. Paulos sees parallels between the problems he tackled as a kid with his Armatron and those that researchers are still trying to deal with today: “What happens when you pick things up and they’re too heavy, but you can sort of pick it up if you approach it from different angles? Or how do you grip things? There’s research to this day using AI to try to figure out optimal ways to grab objects that [a robot] sees in a bin or out in the world.”

While AI may be taking over the world of robotics, the field still requires engineers—builders and tinkerers who can problem-solve in the physical world.

A page from the 1984 Radio Shack catalogue, featuring the Armatron for $31.95. COURTESY OF RADIOSHACKCATALOGS.COM

The Armatron encouraged kids to explore these analog mechanics, a reminder that not all breakthroughs happen on a computer screen. And that hands-on curiosity hasn’t faded. Today, a new generation of fans is rediscovering the Armatron through online communities and DIY modifications. Dozens of Armatron videos are on YouTube, including one where the arm has been modified to run on steam power.

“I’m very happy to see people who love mechanisms are amazed,” Watanabe told me. “I’m really happy that there are still people out there who love our products in this way.”

Jon Keegan writes about technology and AI and publishes Beautiful Public Data, a curated collection of government data sets (beautifulpublicdata.com).
  • WWW.TECHNOLOGYREVIEW.COM
    The Download: the US office that tracks foreign disinformation is being eliminated, and explaining vibe coding
This is today's edition of The Download, our weekday newsletter that provides a daily dose of what's going on in the world of technology.

US office that counters foreign disinformation is being eliminated

The only office within the US State Department that monitors foreign disinformation is to be eliminated, according to US Secretary of State Marco Rubio, confirming reporting by MIT Technology Review. The Counter Foreign Information Manipulation and Interference (R/FIMI) Hub is a small office in the State Department’s Office of Public Diplomacy that tracks and counters foreign disinformation campaigns. The culling of the office leaves the State Department without a way to actively counter the increasingly sophisticated disinformation campaigns from foreign governments like those of Russia, Iran, and China. Read the full story.

—Eileen Guo

What is vibe coding, exactly?

When OpenAI cofounder Andrej Karpathy excitedly took to X back in February to post about his new hobby, he probably had no idea he was about to coin a phrase that encapsulated an entire movement steadily gaining momentum across the world. “There’s a new kind of coding I call ‘vibe coding,’ where you fully give in to the vibes, embrace exponentials, and forget that the code even exists,” he said. “I’m building a project or webapp, but it’s not really coding—I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.”

If this all sounds very different from poring over lines of code, that’s because Karpathy was talking about a particular style of coding with AI assistance. His words struck a chord among software developers and enthusiastic amateurs alike. In the months since, his post has sparked think pieces and impassioned debates across the internet. But what exactly is vibe coding? Who does it benefit, and what’s its likely future? Read the full story.

—Rhiannon Williams

This story is the latest for MIT Technology Review Explains, our series untangling the complex, messy world of technology to help you understand what’s coming next. You can read more from the series here.

These four charts sum up the state of AI and energy

You’ve probably read that AI will drive an increase in electricity demand. But how that fits into the context of the current and future grid can feel less clear from the headlines. A new report from the International Energy Agency digs into the details of energy and AI, and I think it’s worth looking at some of the data to help clear things up. Here are four charts from the report that sum up the crucial points about AI and energy demand.

—Casey Crownhart

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.

We need targeted policies, not blunt tariffs, to drive “American energy dominance”

—Addison Killean Stark

President Trump and his appointees have repeatedly stressed the need to establish “American energy dominance.” But the White House’s profusion of executive orders and aggressive tariffs, along with its determined effort to roll back clean-energy policies, are moving the industry in the wrong direction, creating market chaos and economic uncertainty that are making it harder for both legacy players and emerging companies to invest, grow, and compete. Read the full story.

This story is part of Heat Exchange, MIT Technology Review’s guest opinion series, offering expert commentary on legal, political, and regulatory issues related to climate change and clean energy.
You can read the rest of the pieces here.

MIT Technology Review Narrated: Will we ever trust robots?

If most robots still need remote human operators to be safe and effective, why should we welcome them into our homes? This is our latest story to be turned into a MIT Technology Review Narrated podcast, which we’re publishing each week on Spotify and Apple Podcasts. Just navigate to MIT Technology Review Narrated on either platform, and follow us to get all our new content as it’s released.

The must-reads

I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

1 The Trump administration has cancelled lifesaving aid to foreign children
After Elon Musk previously promised to preserve it. (The Atlantic $)
+ DOGE worker Jeremy Lewin, who dismantled USAID, has a new role. (Fortune $)
+ The department attempted to embed its staff in an independent non-profit. (The Guardian)
+ Elon Musk, DOGE, and the Evil Housekeeper Problem. (MIT Technology Review)

2 Astronomers have detected a possible signature of life on a distant planet
It’s the first time the potential for life has been spotted on a habitable planet. (NYT $)
+ Maybe we should be building observatories on the moon. (Ars Technica)

3 OpenAI’s new AI models can reason with images
They’re capable of integrating images directly into their reasoning process. (VentureBeat)
+ But they’re still vulnerable to making mistakes. (Ars Technica)
+ AI reasoning models can cheat to win chess games. (MIT Technology Review)

4 Trump’s new chip crackdown will cost US firms billions
It’s not just Nvidia that’s set to suffer. (WP $)
+ But Jensen Huang isn’t giving up on China altogether. (WSJ $)
+ He’s said the company follows export laws ‘to the letter.’ (CNBC)

5 Elon Musk reportedly used X to search for potential mothers of his children
Sources suggest he has many more children than is publicly known. (WSJ $)

6 Local US cops are being trained as immigration enforcers
Critics say the rollout is ripe for civil rights abuses. (The Markup)
+ ICE is still bound by constitutional limits—for now. (The Conversation)

7 This electronic weapon can fry drone swarms from a distance
The RapidDestroyer uses a high-power radio frequency to take down multiple drones. (FT $)
+ Meet the radio-obsessed civilian shaping Ukraine’s drone defense. (MIT Technology Review)

8 TikTok is attempting to fight back against misinformation
It’s rolling out an X-style community notes feature. (Bloomberg $)

9 A deceased composer’s brain is still making music
Three years after Alvin Lucier’s death, cerebral organoids made from his white blood cells are making sounds. (Popular Mechanics)
+ AI is coming for music, too. (MIT Technology Review)

10 This AI agent can switch personalities
Depending what you need it to do. (Wired $)

Quote of the day

“Yayy, we get one last meal before getting on the electric chair.”

—Jing Levine, who runs a party goods business with her husband that’s heavily reliant on suppliers in China, reacts to Donald Trump’s plans to pause tariffs except for China, the New York Times reports.

The big story

AI means the end of internet search as we’ve known it

We all know what it means, colloquially, to google something. You pop a few words in a search box and in return get a list of blue links to the most relevant results. Fundamentally, it’s just fetching information that’s already out there on the internet and showing it to you, in a structured way. But all that is up for grabs. We are at a new inflection point.
The biggest change to the way search engines deliver information to us since the 1990s is happening right now. No more keyword searching. Instead, you can ask questions in natural language. And instead of links, you’ll increasingly be met with answers written by generative AI and based on live information from across the internet, delivered the same way.

Not everyone is excited for the change. Publishers are completely freaked out. And people are also worried about what these new LLM-powered results will mean for our fundamental shared reality. Read the full story.

—Mat Honan

We can still have nice things

A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet 'em at me.)

+ Essential viewing: Sweden is broadcasting its beloved moose spring migration for 20 days straight.
+ Fearsome warlord Babur was obsessed with melons, and frankly, I don’t blame him.
+ Great news for squid fans: a colossal squid has been captured on film for the first time! 🦑
+ Who stole my cheese?
  • WWW.TECHNOLOGYREVIEW.COM
    These four charts sum up the state of AI and energy
While it’s rare to look at the news without finding some headline related to AI and energy, a lot of us are stuck waving our hands when it comes to what it all means. Sure, you’ve probably read that AI will drive an increase in electricity demand. But how that fits into the context of the current and future grid can feel less clear from the headlines. That’s true even for people working in the field.

A new report from the International Energy Agency digs into the details of energy and AI, and I think it’s worth looking at some of the data to help clear things up. Here are four charts from the report that sum up the crucial points about AI and energy demand.

1. AI is power hungry, and the world will need to ramp up electricity supply to meet demand.

This point is the most obvious, but it bears repeating: AI is exploding, and it’s going to lead to higher energy demand from data centers. “AI has gone from an academic pursuit to an industry with trillions of dollars at stake,” as the IEA report’s executive summary puts it. Data centers used less than 300 terawatt-hours of electricity in 2020. That could increase to nearly 1,000 terawatt-hours in the next five years, which is more than Japan’s total electricity consumption today. Today, the US has about 45% of the world’s data center capacity, followed by China. Those two countries will continue to represent the overwhelming majority of capacity through 2035.

2. The electricity needed to power data centers will largely come from fossil fuels like coal and natural gas in the near term, but nuclear and renewables could play a key role, especially after 2030.

The IEA report is relatively optimistic on the potential for renewables to power data centers, projecting that nearly half of global growth by 2035 will be met with renewables like wind and solar. (In Europe, the IEA projects, renewables will meet 85% of new demand.) In the near term, though, natural gas and coal will also expand. An additional 175 terawatt-hours from gas will help meet demand in the next decade, largely in the US, according to the IEA’s projections. Another report, published this week by the energy consultancy BloombergNEF, suggests that fossil fuels will play an even larger role than the IEA projects, accounting for two-thirds of additional electricity generation between now and 2035. Nuclear energy, a favorite of big tech companies looking to power operations without generating massive emissions, could start to make a dent after 2030, according to the IEA data.

3. Data centers are just a small piece of expected electricity demand growth this decade.

We should be talking more about appliances, industry, and EVs when we talk about energy! Electricity demand is on the rise from a whole host of sources: Electric vehicles, air-conditioning, and appliances will each drive more electricity demand than data centers between now and the end of the decade. In total, data centers make up a little over 8% of electricity demand expected between now and 2030. There are interesting regional effects here, though. Growing economies will see more demand from the likes of air-conditioning than from data centers. On the other hand, the US has seen relatively flat electricity demand from consumers and industry for years, so newly rising demand from high-performance computing will make up a larger chunk.

4. Data centers tend to be clustered together and close to population centers, making them a unique challenge for the power grid.
The grid is no stranger to facilities that use huge amounts of energy: Cement plants, aluminum smelters, and coal mines all pull a lot of power in one place. However, data centers are a unique sort of beast.

First, they tend to be closely clustered together. Globally, data centers make up about 1.5% of total electricity demand. However, in Ireland, that number is 20%, and in Virginia, it’s 25%. That trend looks likely to continue, too: Half of data centers under development in the US are in preexisting clusters. Data centers also tend to be closer to urban areas than other energy-intensive facilities like factories and mines.

Since data centers are close both to each other and to communities, they could have significant impacts on the regions where they’re situated, whether by bringing on more fossil fuels close to urban centers or by adding strain to the local grid. Or both.

Overall, AI and data centers more broadly are going to be a major driving force for electricity demand. It’s not the whole story, but it’s a unique part of our energy picture to keep watching going forward.

This article is from The Spark, MIT Technology Review’s weekly climate newsletter. To receive it in your inbox every Wednesday, sign up here.
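(A quick aside for readers who like to check the math: the headline figures quoted above can be combined in a few lines of Python. The numbers are the ones cited in this piece, and the result is only a back-of-the-envelope estimate, not a calculation from the IEA report itself.)

```python
# Back-of-the-envelope arithmetic using the figures quoted in the piece above.
# The "implied total" at the end is a rough reader's estimate, not an IEA number.
dc_2020_twh = 300          # data center electricity use in 2020 ("less than 300 TWh")
dc_2030_twh = 1_000        # projected use within five years ("nearly 1,000 TWh")
dc_share_of_growth = 0.08  # "a little over 8%" of expected new demand to 2030

growth_multiple = dc_2030_twh / dc_2020_twh
new_dc_demand = dc_2030_twh - dc_2020_twh
implied_total_new_demand = new_dc_demand / dc_share_of_growth

print(f"Data center demand grows roughly {growth_multiple:.1f}x")            # ~3.3x
print(f"New data center demand: {new_dc_demand} TWh")                        # 700 TWh
print(f"Implied total new electricity demand: ~{implied_total_new_demand:,.0f} TWh")
```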
  • WWW.TECHNOLOGYREVIEW.COM
    AI is coming for music, too
The end of this story includes samples of AI-generated music.

Artificial intelligence was barely a term in 1956, when top scientists from the field of computing arrived at Dartmouth College for a summer conference. The computer scientist John McCarthy had coined the phrase in the funding proposal for the event, a gathering to work through how to build machines that could use language, solve problems like humans, and improve themselves. But it was a good choice, one that captured the organizers’ founding premise: Any feature of human intelligence could “in principle be so precisely described that a machine can be made to simulate it.”

In their proposal, the group had listed several “aspects of the artificial intelligence problem.” The last item on their list, and in hindsight perhaps the most difficult, was building a machine that could exhibit creativity and originality. At the time, psychologists were grappling with how to define and measure creativity in humans. The prevailing theory—that creativity was a product of intelligence and high IQ—was fading, but psychologists weren’t sure what to replace it with. The Dartmouth organizers had one of their own. “The difference between creative thinking and unimaginative competent thinking lies in the injection of some randomness,” they wrote, adding that such randomness “must be guided by intuition to be efficient.”

Nearly 70 years later, following a number of boom-and-bust cycles in the field, we now have AI models that more or less follow that recipe. While large language models that generate text have exploded in the last three years, a different type of AI, based on what are called diffusion models, is having an unprecedented impact on creative domains. By transforming random noise into coherent patterns, diffusion models can generate new images, videos, or speech, guided by text prompts or other input data. The best ones can create outputs indistinguishable from the work of people, as well as bizarre, surreal results that feel distinctly nonhuman.

Now these models are marching into a creative field that is arguably more vulnerable to disruption than any other: music. AI-generated creative works—from orchestra performances to heavy metal—are poised to suffuse our lives more thoroughly than any other product of AI has done yet. The songs are likely to blend into our streaming platforms, party and wedding playlists, soundtracks, and more, whether or not we notice who (or what) made them.

For years, diffusion models have stirred debate in the visual-art world about whether what they produce reflects true creation or mere replication. Now this debate has come for music, an art form that is deeply embedded in our experiences, memories, and social lives. Music models can now create songs capable of eliciting real emotional responses, presenting a stark example of how difficult it’s becoming to define authorship and originality in the age of AI.

The courts are actively grappling with this murky territory. Major record labels are suing the top AI music generators, alleging that diffusion models do little more than replicate human art without compensation to artists. The model makers counter that their tools are made to assist in human creation.

In deciding who is right, we’re forced to think hard about our own human creativity. Is creativity, whether in artificial neural networks or biological ones, merely the result of vast statistical learning and drawn connections, with a sprinkling of randomness?
If so, then authorship is a slippery concept. If not—if there is some distinctly human element to creativity—what is it? What does it mean to be moved by something without a human creator? I had to wrestle with these questions the first time I heard an AI-generated song that was genuinely fantastic—it was unsettling to know that someone merely wrote a prompt and clicked “Generate.” That predicament is coming soon for you, too.

Making connections

After the Dartmouth conference, its participants went off in different research directions to create the foundational technologies of AI. At the same time, cognitive scientists were following a 1950 call from J.P. Guilford, president of the American Psychological Association, to tackle the question of creativity in human beings. They came to a definition, first formalized in 1953 by the psychologist Morris Stein in the Journal of Psychology: Creative works are both novel, meaning they present something new, and useful, meaning they serve some purpose to someone. Some have called for “useful” to be replaced by “satisfying,” and others have pushed for a third criterion: that creative things are also surprising.

Later, in the 1990s, the rise of functional magnetic resonance imaging made it possible to study more of the neural mechanisms underlying creativity in many fields, including music. Computational methods in the past few years have also made it easier to map out the role that memory and associative thinking play in creative decisions.

What has emerged is less a grand unified theory of how a creative idea originates and unfolds in the brain and more an ever-growing list of powerful observations. We can first divide the human creative process into phases, including an ideation or proposal step, followed by a more critical and evaluative step that looks for merit in ideas. A leading theory on what guides these two phases is called the associative theory of creativity, which posits that the most creative people can form novel connections between distant concepts.

“It could be like spreading activation,” says Roger Beaty, a researcher who leads the Cognitive Neuroscience of Creativity Laboratory at Penn State. “You think of one thing; it just kind of activates related concepts to whatever that one concept is.”

These connections often hinge specifically on semantic memory, which stores concepts and facts, as opposed to episodic memory, which stores memories from a particular time and place. Recently, more sophisticated computational models have been used to study how people make connections between concepts across great “semantic distances.” For example, the word apocalypse is more closely related to nuclear power than to celebration. Studies have shown that highly creative people may perceive very semantically distinct concepts as close together. Artists have been found to generate word associations across greater distances than non-artists. Other research has supported the idea that creative people have “leaky” attention—that is, they often notice information that might not be particularly relevant to their immediate task.

Neuroscientific methods for evaluating these processes do not suggest that creativity unfolds in a particular area of the brain. “Nothing in the brain produces creativity like a gland secretes a hormone,” Dean Keith Simonton, a leader in creativity research, wrote in the Cambridge Handbook of the Neuroscience of Creativity.
The evidence instead points to a few dispersed networks of activity during creative thought, Beaty says—one to support the initial generation of ideas through associative thinking, another involved in identifying promising ideas, and another for evaluation and modification. A new study, led by researchers at Harvard Medical School and published in February, suggests that creativity might even involve the suppression of particular brain networks, like ones involved in self-censorship.

So far, machine creativity—if you can call it that—looks quite different. Though at the time of the Dartmouth conference AI researchers were interested in machines inspired by human brains, that focus had shifted by the time diffusion models were invented, about a decade ago.

The best clue to how they work is in the name. If you dip a paintbrush loaded with red ink into a glass jar of water, the ink will diffuse and swirl into the water seemingly at random, eventually yielding a pale pink liquid. Diffusion models simulate this process in reverse, reconstructing legible forms from randomness.

For a sense of how this works for images, picture a photo of an elephant. To train the model, you make a copy of the photo, adding a layer of random black-and-white static on top. Make a second copy and add a bit more, and so on hundreds of times until the last image is pure static, with no elephant in sight. For each image in between, a statistical model predicts how much of the image is noise and how much is really the elephant. It compares its guesses with the right answers and learns from its mistakes. Over millions of these examples, the model gets better at “de-noising” the images and connecting these patterns to descriptions like “male Borneo elephant in an open field.”

Now that it’s been trained, generating a new image means reversing this process. If you give the model a prompt, like “a happy orangutan in a mossy forest,” it generates an image of random white noise and works backward, using its statistical model to remove bits of noise step by step. At first, rough shapes and colors appear. Details come after, and finally (if it works) an orangutan emerges, all without the model “knowing” what an orangutan is.

Musical images

The approach works much the same way for music. A diffusion model does not “compose” a song the way a band might, starting with piano chords and adding vocals and drums. Instead, all the elements are generated at once. The process hinges on the fact that the many complexities of a song can be depicted visually in a single waveform, representing the amplitude of a sound wave plotted against time.

Think of a record player. By traveling along a groove in a piece of vinyl, a needle mirrors the path of the sound waves engraved in the material and transmits it into a signal for the speaker. The speaker simply pushes out air in these patterns, generating sound waves that convey the whole song. From a distance, a waveform might look as if it just follows a song’s volume. But if you were to zoom in closely enough, you could see patterns in the spikes and valleys, like the 49 waves per second for a bass guitar playing a low G. A waveform contains the summation of the frequencies of all different instruments and textures.
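The noising and de-noising loop just described is simple enough to sketch in a few lines. The toy Python below is a structural illustration only: its "denoiser" is a placeholder rather than a trained network, and the schedule constants are made up. But the shape of the computation (corrupt a clean signal step by step during training, then walk backward from pure static while re-injecting a little fresh randomness) mirrors the process the article describes, whether the signal is an image or a waveform.

```python
# Toy sketch of the diffusion loop described above (NumPy only).
# "denoise" stands in for a trained network that predicts the noise in a
# signal; constants and shapes here are illustrative, not a real model.
import numpy as np

rng = np.random.default_rng(0)
STEPS = 100
betas = np.linspace(1e-4, 0.05, STEPS)   # noise added at each forward step
alphas_bar = np.cumprod(1.0 - betas)     # fraction of the signal kept by step t

def forward_noise(x0, t):
    """Training-time corruption: mix the clean signal with Gaussian noise."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1 - alphas_bar[t]) * eps
    return xt, eps                       # the network would learn to predict eps

def denoise(xt, t):
    """Placeholder for the trained noise predictor (returns a fake guess)."""
    return xt * np.sqrt(1 - alphas_bar[t])

def sample(length=1024):
    """Generation: start from pure static and remove a little noise per step."""
    x = rng.standard_normal(length)      # pure noise, no signal in sight
    for t in reversed(range(STEPS)):
        eps_hat = denoise(x, t)          # model's guess at the remaining noise
        x = (x - betas[t] / np.sqrt(1 - alphas_bar[t]) * eps_hat) / np.sqrt(1 - betas[t])
        if t > 0:                        # inject a bit of fresh randomness, the
            x += np.sqrt(betas[t]) * rng.standard_normal(length)  # tactic noted below
    return x                             # a waveform-shaped output

waveform = sample()
```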
“You see certain shapes start taking place,” says David Ding, cofounder of the AI music company Udio, “and that kind of corresponds to the broad melodic sense.”

Since waveforms, or similar charts called spectrograms, can be treated like images, you can create a diffusion model out of them. A model is fed millions of clips of existing songs, each labeled with a description. To generate a new song, it starts with pure random noise and works backward to create a new waveform. The path it takes to do so is shaped by what words someone puts into the prompt.

Ding worked at Google DeepMind for five years as a senior research engineer on diffusion models for images and videos, but he left to found Udio, based in New York, in 2023. The company and its competitor Suno, based in Cambridge, Massachusetts, are now leading the race for music generation models. Both aim to build AI tools that enable nonmusicians to make music. Suno is larger, claiming more than 12 million users, and raised a $125 million funding round in May 2024. The company has partnered with artists including Timbaland. Udio raised a seed funding round of $10 million in April 2024 from prominent investors like Andreessen Horowitz as well as musicians Will.i.am and Common.

The results of Udio and Suno so far suggest there’s a sizable audience of people who may not care whether the music they listen to is made by humans or machines. Suno has artist pages for creators, some with large followings, who generate songs entirely with AI, often accompanied by AI-generated images of the artist. These creators are not musicians in the conventional sense but skilled prompters, creating work that can’t be attributed to a single composer or singer. In this emerging space, our normal definitions of authorship—and our lines between creation and replication—all but dissolve.

The music industry is pushing back. Both companies were sued by major record labels in June 2024, and the lawsuits are ongoing. The labels, including Universal and Sony, allege that the AI models have been trained on copyrighted music “at an almost unimaginable scale” and generate songs that “imitate the qualities of genuine human sound recordings” (the case against Suno cites one ABBA-adjacent song called “Prancing Queen,” for example).

Suno did not respond to requests for comment on the litigation, but in a statement responding to the case posted on Suno’s blog in August, CEO Mikey Shulman said the company trains on music found on the open internet, which “indeed contains copyrighted materials.” But, he argued, “learning is not infringing.” A representative from Udio said the company would not comment on pending litigation. At the time of the lawsuit, Udio released a statement mentioning that its model has filters to ensure that it “does not reproduce copyrighted works or artists’ voices.”

Complicating matters even further is guidance from the US Copyright Office, released in January, that says AI-generated works can be copyrighted if they involve a considerable amount of human input. A month later, an artist in New York received what might be the first copyright for a piece of visual art made with the help of AI. The first song could be next.

Novelty and mimicry

These legal cases wade into a gray area similar to one explored by other court battles unfolding in AI.
At issue here is whether training AI models on copyrighted content is allowed, and whether generated songs unfairly copy a human artist’s style. But AI music is likely to proliferate in some form regardless of these court decisions; YouTube has reportedly been in talks with major labels to license their music for AI training, and Meta’s recent expansion of its agreements with Universal Music Group suggests that licensing for AI-generated music might be on the table.

If AI music is here to stay, will any of it be any good? Consider three factors: the training data, the diffusion model itself, and the prompting. The model can only be as good as the library of music it learns from and the descriptions of that music, which must be complex to capture it well. A model’s architecture then determines how well it can use what’s been learned to generate songs. And the prompt you feed into the model—as well as the extent to which the model “understands” what you mean by “turn down that saxophone,” for example—is pivotal too.

Arguably the most important issue is the first: How extensive and diverse is the training data, and how well is it labeled? Neither Suno nor Udio has disclosed what music has gone into its training set, though these details will likely have to be disclosed during the lawsuits.

Udio says the way those songs are labeled is essential to the model. “An area of active research for us is: How do we get more and more refined descriptions of music?” Ding says. A basic description would identify the genre, but then you could also say whether a song is moody, uplifting, or calm. More technical descriptions might mention a two-five-one chord progression or a specific scale. Udio says it does this through a combination of machine and human labeling. “Since we want to target a broad range of target users, that also means that we need a broad range of music annotators,” he says. “Not just people with music PhDs who can describe the music on a very technical level, but also music enthusiasts who have their own informal vocabulary for describing music.”

Competitive AI music generators must also learn from a constant supply of new songs made by people, or else their outputs will be stuck in time, sounding stale and dated. For this, today’s AI-generated music relies on human-generated art. In the future, though, AI music models may train on their own outputs, an approach being experimented with in other AI domains.

Because models start with a random sampling of noise, they are nondeterministic; giving the same AI model the same prompt will result in a new song each time. That’s also because many makers of diffusion models, including Udio, inject additional randomness through the process—essentially taking the waveform generated at each step and distorting it ever so slightly in hopes of adding imperfections that serve to make the output more interesting or real. The organizers of the Dartmouth conference themselves recommended such a tactic back in 1956.

According to Udio cofounder and chief operating officer Andrew Sanchez, it’s this randomness inherent in generative AI programs that comes as a shock to many people. For the past 70 years, computers have executed deterministic programs: Give the software an input and receive the same response every time. “Many of our artist partners will be like, ‘Well, why does it do this?’” he says.
“We’re like, well, we don’t really know.” The generative era requires a new mindset, even for the companies creating it: that AI programs can be messy and inscrutable.

Is the result creation or simply replication of the training data? Fans of AI music told me we could ask the same question about human creativity. As we listen to music through our youth, neural mechanisms for learning are weighted by these inputs, and memories of these songs influence our creative outputs. In a recent study, Anthony Brandt, a composer and professor of music at Rice University, pointed out that both humans and large language models use past experiences to evaluate possible future scenarios and make better choices. Indeed, much of human art, especially in music, is borrowed. This often results in litigation, with artists alleging that a song was copied or sampled without permission. Some artists suggest that diffusion models should be made more transparent, so we could know that a given song’s inspiration is three parts David Bowie and one part Lou Reed. Udio says there is ongoing research to achieve this, but right now, no one can do it reliably.

For great artists, “there is that combination of novelty and influence that is at play,” Sanchez says. “And I think that that’s something that is also at play in these technologies.”

But there are lots of areas where attempts to equate human neural networks with artificial ones quickly fall apart under scrutiny. Brandt carves out one domain where he sees human creativity clearly soar above its machine-made counterparts: what he calls “amplifying the anomaly.” AI models operate in the realm of statistical sampling. They do not work by emphasizing the exceptional but, rather, by reducing errors and finding probable patterns. Humans, on the other hand, are intrigued by quirks. “Rather than being treated as oddball events or ‘one-offs,’” Brandt writes, the quirk “permeates the creative product.”

He cites Beethoven’s decision to add a jarring off-key note in the last movement of his Symphony No. 8. “Beethoven could have left it at that,” Brandt says. “But rather than treating it as a one-off, Beethoven continues to reference this incongruous event in various ways. In doing so, the composer takes a momentary aberration and magnifies its impact.” One could look to similar anomalies in the backward loop sampling of late Beatles recordings, pitched-up vocals from Frank Ocean, or the incorporation of “found sounds,” like recordings of a crosswalk signal or a door closing, favored by artists like Charlie Puth and by Billie Eilish’s producer Finneas O’Connell.

If a creative output is indeed defined as one that’s both novel and useful, Brandt’s interpretation suggests that the machines may have us matched on the second criterion while humans reign supreme on the first.

To explore whether that is true, I spent a few days playing around with Udio’s model. It takes a minute or two to generate a 30-second sample, but if you have paid versions of the model you can generate whole songs. I decided to pick 12 genres, generate a song sample for each, and then find similar songs made by people. I built a quiz to see if people in our newsroom could spot which songs were made by AI.

The average score was 46%. And for a few genres, especially instrumental ones, listeners were wrong more often than not.
When I watched people do the test in front of me, I noticed that the qualities they confidently flagged as a sign of composition by AI—a fake-sounding instrument, a weird lyric—rarely proved them right. Predictably, people did worse in genres they were less familiar with; some did okay on country or soul, but many stood no chance against jazz, classical piano, or pop. Beaty, the creativity researcher, scored 66%, while Brandt, the composer, finished at 50% (though he answered correctly on the orchestral and piano sonata tests).

Remember that the model doesn’t deserve all the credit here; these outputs could not have been created without the work of human artists whose work was in the training data. But with just a few prompts, the model generated songs that few people would pick out as machine-made. A few could easily have been played at a party without raising objections, and I found two I genuinely loved, even as a lifelong musician and generally picky music person.

But sounding real is not the same thing as sounding original. The songs did not feel driven by oddities or anomalies—certainly not on the level of Beethoven’s “jump scare.” Nor did they seem to bend genres or cover great leaps between themes. In my test, people sometimes struggled to decide whether a song was AI-generated or simply bad.

How much will this matter in the end? The courts will play a role in deciding whether AI music models serve up replications or new creations—and how artists are compensated in the process—but we, as listeners, will decide their cultural value. To appreciate a song, do we need to picture a human artist behind it—someone with experience, ambitions, opinions? Is a great song no longer great if we find out it’s the product of AI?

Sanchez says people may wonder who is behind the music. But “at the end of the day, however much AI component, however much human component, it’s going to be art,” he says. “And people are going to react to it on the quality of its aesthetic merits.”

In my experiment, though, I saw that the question really mattered to people—and some vehemently resisted the idea of enjoying music made by a computer model. When one of my test subjects instinctively started bobbing her head to an electro-pop song on the quiz, her face expressed doubt. It was almost as if she were trying her best to picture a human rather than a machine as the song’s composer.

“Man,” she said, “I really hope this isn’t AI.”

It was.
  • WWW.TECHNOLOGYREVIEW.COM
    Adapting for AI’s reasoning era
Anyone who crammed for exams in college knows that an impressive ability to regurgitate information is not synonymous with critical thinking. The large language models (LLMs) first publicly released in 2022 were impressive but limited—like talented students who excel at multiple-choice exams but stumble when asked to defend their logic. Today’s advanced reasoning models are more akin to seasoned graduate students who can navigate ambiguity and backtrack when necessary, carefully working through problems with a methodical approach.

As AI systems that learn by mimicking the mechanisms of the human brain continue to advance, we’re witnessing an evolution in models from rote regurgitation to genuine reasoning. This capability marks a new chapter in the evolution of AI—and what enterprises can gain from it. But in order to tap into this enormous potential, organizations will need to ensure they have the right infrastructure and computational resources to support the advancing technology.

The reasoning revolution

“Reasoning models are qualitatively different than earlier LLMs,” says Prabhat Ram, partner AI/HPC architect at Microsoft, noting that these models can explore different hypotheses, assess if answers are consistently correct, and adjust their approach accordingly. “They essentially create an internal representation of a decision tree based on the training data they’ve been exposed to, and explore which solution might be the best.”

This adaptive approach to problem-solving isn’t without trade-offs. Earlier LLMs delivered outputs in milliseconds based on statistical pattern-matching and probabilistic analysis. This was—and still is—efficient for many applications, but it doesn’t allow the AI sufficient time to thoroughly evaluate multiple solution paths. In newer models, extended computation time during inference—seconds, minutes, or even longer—allows the AI to employ more sophisticated internal reinforcement learning. This opens the door for multi-step problem-solving and more nuanced decision-making.

To illustrate future use cases for reasoning-capable AI, Ram offers the example of a NASA rover sent to explore the surface of Mars. “Decisions need to be made at every moment around which path to take, what to explore, and there has to be a risk-reward trade-off. The AI has to be able to assess, ‘Am I about to jump off a cliff? Or, if I study this rock and I have a limited amount of time and budget, is this really the one that’s scientifically more worthwhile?’” Making these assessments successfully could result in groundbreaking scientific discoveries at previously unthinkable speed and scale.

Reasoning capabilities are also a milestone in the proliferation of agentic AI systems: autonomous applications that perform tasks on behalf of users, such as scheduling appointments or booking travel itineraries. “Whether you’re asking AI to make a reservation, provide a literature summary, fold a towel, or pick up a piece of rock, it needs to first be able to understand the environment—what we call perception—comprehend the instructions and then move into a planning and decision-making phase,” Ram explains.

Enterprise applications of reasoning-capable AI systems

The enterprise applications for reasoning-capable AI are far-reaching. In health care, reasoning AI systems could analyze patient data, medical literature, and treatment protocols to support diagnostic or treatment decisions.
In scientific research, reasoning models could formulate hypotheses, design experimental protocols, and interpret complex results—potentially accelerating discoveries across fields from materials science to pharmaceuticals. In financial analysis, reasoning AI could help evaluate investment opportunities or market expansion strategies, as well as develop risk profiles or economic forecasts. Armed with these insights, their own experience, and emotional intelligence, human doctors, researchers, and financial analysts could make more informed decisions, faster.

But before setting these systems loose in the wild, safeguards and governance frameworks will need to be ironclad, particularly in high-stakes contexts like health care or autonomous vehicles. “For a self-driving car, there are real-time decisions that need to be made vis-à-vis whether it turns the steering wheel to the left or the right, whether it hits the gas pedal or the brake—you absolutely do not want to hit a pedestrian or get into an accident,” says Ram. “Being able to reason through situations and make an ‘optimal’ decision is something that reasoning models will have to do going forward.”

The infrastructure underpinning AI reasoning

To operate optimally, reasoning models require significantly more computational resources for inference. This creates distinct scaling challenges. Specifically, because the inference durations of reasoning models can vary widely—from just a few seconds to many minutes—load balancing across these diverse tasks can be challenging.

Overcoming these hurdles requires tight collaboration between infrastructure providers and hardware manufacturers, says Ram, speaking of Microsoft’s collaboration with NVIDIA, which brings its accelerated computing platform to Microsoft products, including Azure AI. “When we think about Azure, and when we think about deploying systems for AI training and inference, we really have to think about the entire system as a whole,” Ram explains. “What are you going to do differently in the data center? What are you going to do about multiple data centers? How are you going to connect them?”

These considerations extend into reliability challenges at all scales: from memory errors at the silicon level, to transmission errors within and across servers, thermal anomalies, and even data-center-level issues like power fluctuations—all of which require sophisticated monitoring and rapid response systems. By creating a holistic system architecture designed to handle fluctuating AI demands, Microsoft and NVIDIA’s collaboration allows companies to harness the power of reasoning models without needing to manage the underlying complexity.

In addition to performance benefits, these types of collaborations allow companies to keep pace with a tech landscape evolving at breakneck speed. “Velocity is a unique challenge in this space,” says Ram. “Every three months, there is a new foundation model. The hardware is also evolving very fast—in the last four years, we’ve deployed each generation of NVIDIA GPUs and now NVIDIA GB200 NVL72. Leading the field really does require a very close collaboration between Microsoft and NVIDIA to share roadmaps, timelines, and designs on the hardware engineering side, qualifications and validation suites, issues that arise in production, and so on.”

Advancements in AI infrastructure designed specifically for reasoning and agentic models are critical for bringing reasoning-capable AI to a broader range of organizations.
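To make the load-balancing challenge described above concrete, here is a small scheduling sketch in Python. It is a generic illustration under assumed names and numbers, not Microsoft's or NVIDIA's actual system: requests with wildly different expected "thinking" durations are routed to whichever worker frees up first, rather than handed out round-robin.

```python
# Toy sketch of duration-aware load balancing for reasoning workloads.
# Generic illustration only: the duration estimates, worker count, and
# routing policy are hypothetical, not any vendor's production system.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Worker:
    busy_until: float = 0.0                   # when this worker frees up (seconds)
    wid: int = field(compare=False, default=0)

def route(requests, n_workers=4):
    """Assign each (name, est_seconds) request to the soonest-free worker.

    Reasoning models make est_seconds vary from seconds to minutes, which
    is why naive round-robin can leave some workers idle while others
    queue up behind a single long "thinking" job.
    """
    pool = [Worker(wid=i) for i in range(n_workers)]
    heapq.heapify(pool)
    schedule = []
    for name, est_seconds in requests:
        w = heapq.heappop(pool)               # worker that frees up first
        start = w.busy_until
        w.busy_until = start + est_seconds    # occupy it for the estimate
        heapq.heappush(pool, w)
        schedule.append((name, w.wid, start))
    return schedule

jobs = [("quick lookup", 2), ("code review", 95), ("chat turn", 3),
        ("research report", 240), ("summary", 8)]
for name, wid, start in route(jobs):
    print(f"{name!r} -> worker {wid}, starts at t={start}s")
```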
Without robust, accessible infrastructure, the benefits of reasoning models will remain relegated to companies with massive computing resources.

Looking ahead, the evolution of reasoning-capable AI systems and the infrastructure that supports them promises even greater gains. For Ram, the frontier extends beyond enterprise applications to scientific discovery and breakthroughs that propel humanity forward: “The day when these agentic systems can power scientific research and propose new hypotheses that can lead to a Nobel Prize, I think that’s the day when we can say that this evolution is complete.”

To learn more, please read Microsoft and NVIDIA accelerate AI development and performance, watch the NVIDIA GTC AI Conference sessions on demand, and explore the topic areas of Azure AI solutions and Azure AI infrastructure.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff. This content was researched, designed, and written entirely by human writers, editors, analysts, and illustrators. This includes the writing of surveys and collection of data for surveys. AI tools that may have been used were limited to secondary production processes that passed thorough human review.