How AI is used to surveil workers
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Opaque algorithms meant to analyze worker productivity have been rapidly spreading through our workplaces, as detailed in a new must-read piece by Rebecca Ackermann, published Monday in MIT Technology Review.

Since the pandemic, lots of companies have adopted software to analyze keystrokes or detect how much time workers are spending at their computers. The trend is driven by a suspicion that remote workers are less productive, though that's not broadly supported by economic research. Still, that belief is behind the efforts of Elon Musk, DOGE, and the Office of Personnel Management to roll back remote work for US federal employees.

The focus on remote workers, though, misses another big part of the story: algorithmic decision-making in industries where people don't work at home. Gig workers like ride-share drivers might be kicked off their platforms by an algorithm, with no way to appeal. Productivity systems at Amazon warehouses dictated a pace of work that Amazon's internal teams found would lead to more injuries, but the company implemented them anyway, according to a 2024 congressional report.

Ackermann posits that these algorithmic tools are less about efficiency and more about control, which workers have less and less of. There are few laws requiring companies to offer transparency about what data is going into their productivity models and how decisions are made. "Advocates say that individual efforts to push back against or evade electronic monitoring are not enough," she writes. "The technology is too widespread and the stakes too high."

Productivity tools don't just track work, Ackermann writes. They reshape the relationship between workers and those in power. Labor groups are pushing back against that shift in power by seeking to make the algorithms that fuel management decisions more transparent.

The full piece contains so much that surprised me about the widening scope of productivity tools and the very limited means that workers have to understand what goes into them. As the pursuit of efficiency gains political influence in the US, the attitudes and technologies that transformed the private sector may now be extending to the public sector. Federal workers are already preparing for that shift, according to a new story in Wired. For some clues as to what that might mean, read Rebecca Ackermann's full story.

Now read the rest of The Algorithm

Deeper Learning

Microsoft announced last week that it has made significant progress in its 20-year quest to make topological quantum bits, or qubits, a special approach to building quantum computers that could make them more stable and easier to scale up.

Why it matters: Quantum computers promise to crunch computations faster than any conventional computer humans could ever build, which could mean faster discovery of new drugs and scientific breakthroughs. The problem is that qubits (the unit of information in quantum computing, rather than the typical 1s and 0s) are very, very finicky. Microsoft's new type of qubit is supposed to make fragile quantum states easier to maintain, but scientists outside the project say there's a long way to go before the technology can be proved to work as intended. And on top of that, some experts are asking whether rapid advances in applying AI to scientific problems could negate any real need for quantum computers at all.
Read more from Rachel Courtland.

Bits and Bytes

X's AI model appears to have briefly censored unflattering mentions of Trump and Musk
Elon Musk has long alleged that AI models suppress conservative speech. In response, he promised that his company xAI's AI model, Grok, would be "maximally truth-seeking" (though, as we've pointed out previously, making things up is just what AI does). Over the weekend, users noticed that if you asked Grok about who is the biggest spreader of misinformation, the model reported that it was explicitly instructed not to mention Donald Trump or Elon Musk. An engineering lead at xAI said an unnamed employee had made this change, but it's now been reversed. (TechCrunch)

Figure demoed humanoid robots that can work together to put your groceries away
Humanoid robots aren't typically very good at working with one another. But the robotics company Figure showed off two humanoids helping each other put groceries away, another sign that general AI models for robotics are helping robots learn faster than ever before. However, we've written about how videos featuring humanoid robots can be misleading, so take these developments with a grain of salt. (The Robot Report)

OpenAI is shifting its allegiance from Microsoft to SoftBank
In calls with its investors, OpenAI has signaled that it's weakening its ties to Microsoft, its largest investor, and partnering more closely with SoftBank. The latter is now working on the Stargate project, a $500 billion effort to build data centers that will support the bulk of the computing power needed for OpenAI's ambitious AI plans. (The Information)

Humane is shutting down the AI Pin and selling its remnants to HP
One big debate in AI is whether the technology will require its own piece of hardware. Rather than just conversing with AI on our phones, will we need some sort of dedicated device to talk to? Humane got investments from Sam Altman and others to build just that, in the form of a badge worn on your chest. But after poor reviews and sluggish sales, last week the company announced it would shut down. (The Verge)

Schools are replacing counselors with chatbots
School districts, dealing with a shortage of counselors, are rolling out AI-powered well-being companions for students to text with. But experts have pointed out the dangers of relying on these tools and say the companies that make them often misrepresent their capabilities and effectiveness. (The Wall Street Journal)

What dismantling America's leadership in scientific research will mean
Federal workers spoke to MIT Technology Review about the efforts by DOGE and others to slash funding for scientific research. They say it could lead to long-lasting, perhaps irreparable damage to everything from the quality of health care to the public's access to next-generation consumer technologies. (MIT Technology Review)

Your most important customer may be AI
People are relying more and more on AI models like ChatGPT for recommendations, which means brands are realizing they have to figure out how to rank higher, much as they do with traditional search results. Doing so is a challenge, since AI model makers offer few insights into how they form recommendations. (MIT Technology Review)