• Your iPhone Storage Is Full Again. Do This to Reclaim Space.
    www.wsj.com
    How to deal with photos and software updates that are eating up gigabytes.
  • Getting an all-optical AI to handle non-linear math
    arstechnica.com
    See the light: Getting an all-optical AI to handle non-linear math. Instead of sensing photons and processing the results, why not process the photons? By Jacek Krywko, Jan 12, 2025.
    An optical processor built by researchers at MIT. Credit: MIT
    A standard digital camera used in a car for stuff like emergency braking has a perceptual latency of a hair above 20 milliseconds. That's just the time needed for a camera to transform the photons hitting its aperture into electrical charges using either CMOS or CCD sensors. It doesn't count the further milliseconds needed to send that information to an onboard computer or process it there. A team of MIT researchers figured that if you had a chip that could process photons directly, you could skip the entire digitization step and perform calculations with the photons themselves. It has the potential to be mind-bogglingly faster. "We're focused on a very specific metric here, which is latency. We aim for applications where what matters the most is how fast you can produce a solution. That's why we are interested in systems where we're able to do all the computations optically," says Saumil Bandyopadhyay, an MIT researcher on the team that implemented a complete deep neural network on a photonic chip, achieving a latency of 410 picoseconds. To put that in perspective, Bandyopadhyay's chip could process the entire neural net it had onboard around 58 times within a single tick of the 4 GHz clock on a standard CPU.
    Matrices and nonlinearity
    Neural networks work with multiple layers of computational units that function as neurons. "Each neuron can take an input, and those inputs can be, let's say, numbers," says Bandyopadhyay. Those numbers are then multiplied by a constant called a weight, or parameter, as they are passed on to the next layer of neurons.
Each layer takes a weighted sum of the preceding layer's outputs and sends it forward. This is the equivalent of linear algebra: performing matrix multiplication. However, AI models are often used to find intricate patterns in data where the output is not always proportional to the input. For this, you also need non-linear thresholding functions that transform the signals passing between the layers of neurons. "What makes deep neural networks so powerful is that we're able to map very complicated relationships in data by repeatedly cascading both these linear operations and non-linear thresholding functions between the layers," Bandyopadhyay says. The problem is that this cascading requires massive parallel computations that, when done on standard computers, take tons of energy and time. Bandyopadhyay's team feels this problem can be solved by performing the equivalent operations using photons rather than electrons. In photonic chips, information can be encoded in optical properties like polarization, phase, magnitude, frequency, and wavevector. While this would be extremely fast and energy-efficient, building such chips isn't easy.
Siphoning light
Conveniently, photonics turned out to be particularly good at linear matrix operations, Bandyopadhyay claims. A group at MIT led by Dirk Englund, a professor who is a co-author of Bandyopadhyay's study, demonstrated a photonic chip doing matrix multiplication entirely with light in 2017. What the field struggled with, though, was implementing non-linear functions in photonics. The usual solution, so far, relied on bypassing the problem by doing linear algebra on photonic chips and offloading non-linear operations to external electronics. This, however, increased latency, since the information had to be converted from light to electrical signals, processed on an external processor, and converted back to light.
"And bringing the latency down is the primary reason why we want to build neural networks in photonics," Bandyopadhyay says. To solve this problem, Bandyopadhyay and his colleagues designed and built what is likely the world's first chip that can compute an entire deep neural net, including both linear and non-linear operations, using photons. The process starts with an external laser with a modulator that feeds light into the chip through an optical fiber. "This way we convert electrical inputs to light," Bandyopadhyay explains. The light is then fanned out to six channels and fed into a layer of six neurons that perform linear matrix multiplication using an array of devices called Mach-Zehnder interferometers. "They are essentially programmable beam splitters, taking two optical fields and mixing them coherently to produce two output optical fields. By applying a voltage, you can control how much those two inputs mix," Bandyopadhyay says. What a single Mach-Zehnder interferometer does in this context is a two-by-two matrix operation, performed on a pair of optical signals. With a rectangular array of those interferometers, the team could realize a larger set of matrix operations across all six optical channels. Once matrix multiplication is done in the first layer, the information goes to another layer through a unit that is responsible for nonlinear thresholding. "We did this by co-integrating electronics and optics," Bandyopadhyay says. This works by sending a tiny bit of the optical signal to a photodiode that measures how much optical power is there. The result of this measurement is used to manipulate the rest of the photons passing through the device. "We use that little bit of optical signal siphoned to the diode to modulate the rest of the optical signal," Bandyopadhyay explains. The entire chip had three layers of neurons performing matrix multiplications and two nonlinear function units in between.
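The structure just described (three matrix layers realized by meshes of Mach-Zehnder interferometers, with nonlinear units in between) can be sketched numerically. This is a minimal NumPy illustration under stated assumptions: the six channels, three linear layers, and two interleaved nonlinearities follow the article, but the random orthogonal matrices and the tanh-style nonlinearity are arbitrary stand-ins, not the chip's actual transfer functions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mzi_mesh(n):
    """Stand-in for a programmed mesh of Mach-Zehnder interferometers.

    A rectangular MZI array applies a unitary transformation to the n
    optical channels (each individual MZI contributes a 2x2 mixing
    operation on one pair of channels); here we simply draw a random
    orthogonal matrix as a placeholder for a programmed mesh.
    """
    q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    return q

def optical_nonlinearity(x, tap=0.1):
    """Illustrative nonlinear unit.

    On the chip, a small fraction of the light is tapped off to a
    photodiode, and the measured power modulates the remaining signal.
    The functional form below is an invented stand-in for that process.
    """
    return (1.0 - tap) * x * np.tanh(np.abs(x))

x = rng.normal(size=6)                    # six optical input channels
meshes = [mzi_mesh(6) for _ in range(3)]  # three linear (matrix) layers

# Three matrix multiplications with two nonlinear units in between,
# mirroring the layer structure described in the article.
h = optical_nonlinearity(meshes[0] @ x)
h = optical_nonlinearity(meshes[1] @ h)
y = meshes[2] @ h
print(y)
```

Because the matrices are orthogonal, each linear layer preserves total signal power; all of the "loss" in this toy model happens at the nonlinear taps, loosely echoing the siphoning described above.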
Overall, the network implemented on the chip could work with 132 parameters. This, in a way, highlights some of the limitations optical chips have today. The number of parameters used in the GPT-4 large language model is reportedly 1 trillion. Compared to that trillion, the 132 parameters supported by Bandyopadhyay's chip look modest.
Modest beginnings
"Large language models are basically the biggest models you could have, right? They are the hardest to tackle. We are focused more on sort of applications where you benefit from lower latency, and models like that turn out to be smaller," Bandyopadhyay says. His team is gearing its chip toward powering AIs that work with up to 100,000 parameters. "It's not like we have to go straight to ChatGPT to do something that is commercially useful. We can target these smaller models first," Bandyopadhyay adds. The smaller model Bandyopadhyay implemented on the chip in his study recognized spoken vowels, a task commonly used as a benchmark in research on AI-focused hardware. It scored 92 percent accuracy, on par with neural networks run on standard computers. But there are other, way cooler things small models can do. One of them is keeping self-driving cars from crashing. "The idea is you have an autonomous navigation system where you want to repeatedly classify lidar signals with very fast latency, at speeds that are way faster than human reflexes," Bandyopadhyay says. According to his team, chips like the one they are working on should make it possible to classify lidar data directly, pushing photons straight into photonic chips without converting them to electrical signals. Other things Bandyopadhyay thinks could be powered by photonic chips are automotive vision systems that are entirely different from the camera-based systems we use today. You can essentially replace the camera as we know it.
Instead, you could have a large array of inputs taking optical signals, sampling them, and sending them directly to optical processors for machine learning computations, Bandyopadhyay says. "It's just a question of engineering the system." The team built the chip using standard CMOS processes, which Bandyopadhyay says should make scaling it easier. "You are not limited by just what can fit on a single chip. You can make multi-chip systems to realize bigger networks. This is a promising direction for photonic chip technology; this is something you can already see happening in electronics," Bandyopadhyay claims.
Nature Photonics, 2024. DOI: https://doi.org/10.1038/s41566-024-01567-z
Jacek Krywko, Associate Writer, is a freelance science and technology writer who covers space exploration, artificial intelligence research, computer science, and all sorts of engineering wizardry.
  • Not everyone in the Palisades is wealthy. I'm a 22-year-old renter with multiple jobs who evacuated.
    www.businessinsider.com
    22-year-old Tabitha Snavely evacuated her apartment ahead of the Palisades wildfire.
    Snavely, who works multiple jobs, said many in her building were blue-collar workers or older people.
    She evacuated to her parents' house and told BI she needs to find a new place to live that is closer to work.
    This as-told-to essay is based on interviews with Tabitha Snavely, a 22-year-old living in the Palisades. Her identity and employment have been verified by Business Insider. This story has been edited for length and clarity.
    I'm 22 years old, and I live alone in a rented medium-sized apartment building in the Palisades. I've lived in California my whole life, so this isn't my first time dealing with a fire.
    On Tuesday night, I woke up to an evacuation alert. I had about 45 minutes to pack and feed my dog really quickly because I didn't know when we'd be in a safe area. As I was packing up, I started smelling smoke in my unit. As soon as I started smelling the smoke, I was like, "OK, I need to leave now."
    I threw some clothes in a suitcase, grabbed my dog, loaded up my car, and started driving. I grabbed everything that I had the instinct to get out in an emergency, like my phone, laptop, and some clothes.
    When I pulled out of the parking garage, I could see so much smoke and then hundreds of cars trying to get out. Luckily, I am closer to the Pacific Coast Highway, so I was able to get out pretty quickly.
    I evacuated to San Diego. My family thought it was a better idea for me to evacuate here with them than to go to a friend's in Santa Monica or Hollywood because as the fire keeps growing, more people seem to keep getting evacuated.
    The aftermath
    I saw on the Watch Duty app on Wednesday that my apartment building was in a bright red zone. I planned to check on it over the weekend. Now, I've seen news footage showing my street and buildings that burned, including my apartment building.
    Everything I left behind may now be gone: photos, books, my college diploma.
I had my great-grandmother's quilt that she made for me before she passed away. I also had a set of plates and mugs that my grandma gave to me when she died. A lot of my things are thrifted or are hand-me-downs, so they're not easily replaceable.
At the same time, my parents are selling their house, so the place I could stay for evacuation, although it isn't close to my job, is very temporary. I feel like a floater with nowhere to go because I don't want to impose on my friends too long without a secure plan for where to go.
A lot of my friends live in Santa Monica, Brentwood, or Malibu, and they have been starting to get evacuated now, but some of them have been able to return to their homes.
I'm close with a lot of the older people who have lived in the Palisades for decades. I would hang out with them at the coffee shops for hours. I don't know how any of them are doing because many of them only have home phones and not cellphones.
I have a hard time feeling at home, but this apartment finally felt like home. I loved everything about it.
It's not only wealthy people living in the Palisades
I know a lot of people think that the Palisades is very wealthy, but I don't feel rich. I have multiple jobs. A lot of the people in my building are blue-collar workers, and there are a lot of older people in the Palisades.
I work at a coconut water company, which allowed me to work remotely for the next week before returning in person. I also am a personal assistant for someone who owns a film production company, which also gave me the week off.
Luckily, I have renter's insurance, but I don't know where I will be living.
    Airbnb has offered a free week to people who have lost their homes, but I filled out the form and haven't heard anything back yet.
    I have friends who have offered their couches for me to sleep on while I figure out what to do next, but I still do not have a definitive plan because I've been so scatterbrained since the fire evacuation alert woke me up.
    I think we need to keep more compassion alive at this time because online, I've been seeing a lot of people say that the Palisades is very rich and they can all just afford to rebuild.
    No one deserves to lose their home.
  • China and its military have been making some big moves ahead of Trump's return to the White House
    www.businessinsider.com
    China has been busy lately showcasing its military might and hybrid warfare tactics.
    The moves come ahead of Trump's return to the White House this month.
    Its activities highlight China's ambitions and intentions and the challenge it poses.
    Since the 2024 US presidential election, China has been surprisingly busy with overt and covert displays of might that represent a challenge for the US, both the current and incoming administrations. China, long identified as the Pentagon's "pacing challenge," has flexed new military capabilities, increased pressure on US allies and partners, and engaged in hybrid warfare in cyberspace. Over the past couple of months, suspected next-generation Chinese combat aircraft have appeared, China's navy has launched new warships, the Chinese military simulated a naval blockade of Japan for the first time and carried out massive drills near Taiwan, the US has blamed Chinese hackers for major hacks of the Treasury Department and telecommunications firms, and concerns have been raised about China's involvement in damage to critical undersea infrastructure. Some of Beijing's recent actions might be "part of a long-term strategy to shape a new (or returning) administration's approach to China and deter external support for Taiwan," Matthew Funaiole, a senior fellow with the China Power Project at the Center for Strategic and International Studies, told Business Insider.
    China has been flexing its military muscles in a big way
    China said its "Joint Sword-2024B" exercise successfully tested integrating joint military operations. GREG BAKER/AFP via Getty Images
    China deployed around 90 of its navy and coast guard vessels around Taiwan, as well as southern Japanese islands, for a large-scale exercise, Taiwan said in December.
Beijing didn't announce anything ahead of time and hasn't acknowledged it as a military drill. Taiwan recorded over 60 incursions into its air defense identification zone and said Chinese forces were simulating attacks on foreign ships and disrupting the navigation of others. The exercise was China's largest since the 1996 Taiwan Strait Crisis. It followed Taiwanese President Lai Ching-te's visit to several Pacific partners, which Beijing condemned, and came ahead of Trump's return to the White House. China routinely ramps up demonstrations of military power at symbolically important times for both international and domestic audiences. Giselle Donnelly, a senior fellow in defense and national security policy at the American Enterprise Institute, said that the timing of these exercises "is more than coincidence," not unlike Russian President Vladimir Putin's efforts over the last few months to put pressure on Ukraine before Trump takes office and US aid to Kyiv faces an uncertain future.
Cross-Strait relations are tense as China continues its coercion and intimidation tactics against Taiwan. SAM YEH/AFP via Getty Images
Chinese leadership may see exercises like this as a way to "get an early read" on the incoming Trump administration's approach to US-Chinese relations, she added. In December, China also held military exercises resembling a naval blockade in the Miyako Strait between Japan's main island and Miyako Island, Japanese officials told The Yomiuri Shimbun, which reported the news earlier this month. And just before ringing in 2025, China announced its air and naval forces were conducting combat readiness patrols around the Scarborough Shoal, a disputed area in the South China Sea near the Philippines that was the site of heightened and repeated confrontations between Beijing and Manila last year.
Chinese vessels were accused of harassing Philippine ships, in some cases ramming them and blasting crews with water.
China has unveiled new capabilities
China's new advanced stealth jet, the J-35A. People's Liberation Army News Communication Center
In November, China unveiled advanced aircraft at its Zhuhai Airshow, including the J-35A land-based stealth fighter. The Chinese developer hailed the fifth-gen jet's stealth, informationization, and networking capabilities, calling it a "point guard" for Chinese airpower, similar to how the F-35 stealth fighter has been referred to as a "quarterback" by Lockheed Martin and the US military. The next month, however, China surprised aviation watchers with what observers suspect are prototypes of next-generation aircraft. The Pentagon said in its latest Chinese military power report that Beijing is developing new medium- and long-range stealth bombers "to strike regional and global targets." It's unclear if the aircraft, which were flown in broad daylight, are part of those efforts.
The Type 076 amphibious assault ship is the largest of its kind in the world. VCG/VCG via Getty Images
China also launched the warship CNS Sichuan, China's first Type 076 and the world's largest amphibious assault vessel, an upgrade from China's Type 075 warship. The large ship features an electromagnetic catapult system for launching and retrieving fixed-wing and unmanned aircraft. For several years now, the US Department of Defense has noted China's growing navy, already the world's largest, and its shipbuilding prowess, as the industrial juggernaut churns out new vessels.
China has been called out for dangerous cyber activity
China's telecommunications hack targeted high-level US officials, such as President-elect Trump.
Allison Robbert/Getty Images
Washington has accused Chinese actors of engaging in major hacks lately. Just before the new year, the Treasury revealed that suspected Chinese state-sponsored hackers had breached its systems and were able to "access certain unclassified documents" from department workstations. The department said it was working to "fully characterize the incident and determine its overall impact." The hack followed the discovery of a yearslong breach by China of US telecommunications companies. Top targets of the hack included Trump, his pick for vice president, J.D. Vance, and current VP Kamala Harris. Washington linked this hack to an incident involving Microsoft last summer. Lawmakers have expressed concern that encrypted calls and texts may no longer be secure. White House deputy national security advisor Anne Neuberger said that data belonging to millions of Americans was likely compromised by the hack and noted that the US doesn't believe these hackers have been "fully removed" from targeted systems.
Trump could face a more aggressive China in the Indo-Pacific region. Sun Xiang/Xinhua via Getty Images
Chinese actors are also believed to have accessed the Justice Department's list of wiretapped phone numbers related to potential espionage crimes. There have long been concerns about China's hybrid warfare and its potential for systems-destruction warfare in a crisis. China has also been accused of sabotaging undersea cables near Taiwan and has been linked to an incident in the Baltic Sea. Experts and officials have assessed the efforts are in line with the larger hybrid warfare tactics employed by Beijing. Over the past two months, China has showcased capabilities and engaged in actions that represent potential challenges for the US and its allies and partners. These are issues the incoming Trump administration will continue to face. And it isn't China alone.
US and other Western officials have increasingly expressed alarm at cooperation between China, Russia, Iran, and North Korea, all of which have been stepping up their efforts to confront the US-led world order.
  • I paid $110 for a structured Gel-X manicure. The short set was low-maintenance and long-lasting.
    www.businessinsider.com
    Published 2025-01-12
    The gems were such a fun touch. Gia Yetikyel
    I get manicures every month, and I prefer structured nails when I don't do extensions.
    I spent $110 on a structured Gel-X set, which included soft builder gel, nail art, and cuticle oil.
    I liked the low maintenance of this manicure, and it lasted four weeks.
    For most of 2024, I tested several different types of manicures, from Japanese and Russian to intricately painted press-ons and at-home gel. By the end of the year, I was thrilled to be going back to basics with my go-to Chicago nail tech. For this appointment, I opted for a $110 structured Gel-X manicure with some added artistic flair. A structured manicure is similar to a regular gel set, except there's an extra layer of soft or hard gel to build up your natural nail before applying the polish. This layer creates an apex, making for a stronger foundation that lessens the likelihood of breakage. With nail health and manicure longevity at the top of my priorities list, here's how it went.
    It was nice to return to my usual nail artist.
    I love the pink, maximalist theme of my nail artist's studio. Gia Yetikyel
    After months of experimental manicures, I went back to my usual licensed nail tech, Teresa "Tere" Rodriguez, a Chicago-based artist who specializes in gel extensions and structured manicures. I get manicures once a month, which can really be a gamble for my nail health. I adore long, heavily decorated nails, but those tend to be more damaging, especially when I break one. Whenever I'm looking for a break from extensions, I opt for structured manicures to (hopefully) better protect my nail health. Plus, I like that they typically last three to four weeks as opposed to the standard two to three.
The manicure prep followed a pretty simple process.
My nail artist made sure to start with a solid base. Gia Yetikyel
Based on Rodriguez's website options, I booked a "Tier 3 Extra Intermediate" appointment for 6:30 p.m. That tier included charms, textures, and nail art. Rodriguez started the appointment by prepping my nails and conducting cuticle care like any standard manicure.
Next came the builder gel.
Structured manicures add an extra layer of gel beneath the polish. Gia Yetikyel
Once the nails were prepped, Rodriguez followed up by applying a slip layer, which is a thin layer of builder gel. They then added a thicker layer of soft builder gel, which created a solid apex. After the builder came a base layer of black gel polish.
I opted for a flashy gold design.
I sent a nail inspiration image prior to my appointment. Gia Yetikyel
I sent Rodriguez inspiration images for my manicure a week before my appointment. The main theme was gold glamor with lots of gemstones. Since I was only working with the size of my actual nail beds (as opposed to longer extensions), I had to be picky about the size, shape, and color I wanted the gemstones to be. Rodriguez got to work adding 3D effects and rubbing gold chrome on top of the black gel on each nail. They then hand-placed a gemstone on nearly every open spot and secured them by curing the gel under the lamp. Before curing each nail, the tech made sure I was happy with the design, which I really appreciated.
The set took about two hours in total.
The gems were such a fun touch. Gia Yetikyel
With prep work, builder gel, curing, and designing, I knew I was in for the long haul. Rodriguez and I usually chat to pass the time during the appointment, but they also offer silent appointment options, where clients can listen to a podcast or watch a show instead of talking. I like to be involved in the design process of each nail, which can be time-consuming, so I wasn't surprised when my phone read 8:30 p.m.
as Rodriguez applied cuticle oil to my finished nails.
After four weeks, I thought my nails still looked pretty decent.
The manicure had grown out a bit and I lost a couple of gems, but that's about all the damage. Gia Yetikyel
Overall, I enjoyed how low-maintenance this set was. Although I feel more confident with long nails, I took comfort in the fact that I didn't have to worry much about breakage. After four weeks, my manicure was still holding up with the exception of a few wayward gemstones, which can be chalked up to my hands-on lifestyle. Because of the gold base, the missing gems didn't put much of a dent in the overall aesthetic. Luckily, I didn't think the grow-out was too bad, which ultimately saved me money because I could skip a mid-month appointment. However, I was really interested in the health of my nails under the gel. When I got the set removed, my nails looked a little thin but not as flimsy as I anticipated. I wouldn't use a structured gel manicure as a way to grow out my natural nails, but I think it's great for low-maintenance upkeep.
  • It's getting harder to measure just how good AI is getting
    www.vox.com
    Toward the end of 2024, I offered a take on all the talk about whether AI's scaling laws were hitting a real-life technical wall. I argued that the question matters less than many think: There are existing AI systems powerful enough to profoundly change our world, and the next few years are going to be defined by progress in AI, whether the scaling laws hold or not. It's always a risky business prognosticating about AI, because you can be proven wrong so fast. It's embarrassing enough as a writer when your predictions for the upcoming year don't pan out. When your predictions for the upcoming week are proven false? That's pretty bad. But less than a week after I wrote that piece, OpenAI's end-of-year series of releases included their latest large language model (LLM), o3. o3 does not exactly put the lie to claims that the scaling laws that used to define AI progress don't work quite that well anymore going forward, but it definitively puts the lie to the claim that AI progress is hitting a wall. o3 is really, really impressive. In fact, to appreciate how impressive it is, we're going to have to digress a little into the science of how we measure AI systems.
    Standardized tests for robots
    If you want to compare two language models, you want to measure the performance of each of them on a set of problems that they haven't seen before. That's harder than it sounds: since these models are fed enormous amounts of text as part of training, they've seen most tests before. So what machine learning researchers do is build benchmarks, tests for AI systems that let us compare them directly to one another and to human performance across a range of tasks: math, programming, reading and interpreting texts, you name it. For a while, we tested AIs on the US Math Olympiad, a mathematics championship, and on physics, biology, and chemistry problems. The problem is that AIs have been improving so fast that they keep making benchmarks worthless.
Once an AI performs well enough on a benchmark, we say the benchmark is saturated, meaning it's no longer usefully distinguishing how capable the AIs are, because all of them get near-perfect scores. 2024 was the year in which benchmark after benchmark for AI capabilities became as saturated as the Pacific Ocean. We used to test AIs against a physics, biology, and chemistry benchmark called GPQA that was so difficult that even PhD students in the corresponding fields would generally score less than 70 percent. But the AIs now perform better than humans with relevant PhDs, so it's not a good way to measure further progress. On the Math Olympiad qualifier, too, the models now perform among top humans. A benchmark called the MMLU was meant to measure language understanding with questions across many different domains. The best models have saturated that one, too. A benchmark called ARC-AGI was meant to be really, really difficult and to measure general humanlike intelligence, but o3 (when tuned for the task) achieves a bombshell 88 percent on it. We can always create more benchmarks. (We are doing so: ARC-AGI-2 will be announced soon, and is supposed to be much harder.) But at the rate AIs are progressing, each new benchmark only lasts a few years, at best. And perhaps more importantly for those of us who aren't machine learning researchers, benchmarks increasingly have to measure AI performance on tasks that humans couldn't do themselves in order to describe what they are and aren't capable of. Yes, AIs still make stupid and annoying mistakes. But if it's been six months since you were paying attention, or if you've mostly only been playing around with the free versions of language models available online, which are well behind the frontier, you are overestimating how many stupid and annoying mistakes they make, and underestimating how capable they are on hard, intellectually demanding tasks.
The invisible wall
This week in Time, Garrison Lovely argued that AI progress didn't hit a wall so much as become invisible, primarily improving by leaps and bounds in ways that people don't pay attention to. (I have never tried to get an AI to solve elite programming or biology or mathematics or physics problems, and wouldn't be able to tell if it was right anyway.) Anyone can tell the difference between a 5-year-old learning arithmetic and a high schooler learning calculus, so the progress between those points looks and feels tangible. Most of us can't really tell the difference between a first-year math undergraduate and the world's most brilliant mathematicians, so AI's progress between those points hasn't felt like much. But that progress is in fact a big deal. The way AI is going to truly change our world is by automating an enormous amount of intellectual work that was once done by humans, and three things will drive its ability to do that. One is getting cheaper. o3 gets astonishing results, but it can cost more than $1,000 to think about a hard question and come up with an answer. However, the end-of-year release of China's DeepSeek indicated that it might be possible to get high-quality performance very cheaply. The second is improvements in how we interface with it. Everyone I talk to about AI products is confident there is a ton of innovation to be achieved in how we interact with AIs, how they check their work, and how we set which AI to use for which task. You could imagine a system where normally a mid-tier chatbot does the work but can internally call in a more expensive model when your question needs it. This is all product work versus sheer technical work, and it's what I warned in December would transform our world even if all AI progress halted. And the third is AI systems getting smarter, and for all the declarations about hitting walls, it looks like they are still doing that.
The newest systems are better at reasoning, better at problem solving, and just generally closer to being experts in a wide range of fields. To some extent we don't even know how smart they are, because we're still scrambling to figure out how to measure it once we are no longer really able to use tests against human expertise.

I think that these are the three defining forces of the next few years; that's how important AI is. Like it or not (and I don't really like it, myself; I don't think that this world-changing transition is being handled responsibly at all), none of the three is hitting a wall, and any one of the three would be sufficient to lastingly change the world we live in.

A version of this story originally appeared in the Future Perfect newsletter.
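The "mid-tier chatbot by default, expensive model on demand" idea is a routing pattern. A toy sketch of it, where the model names, costs, and the difficulty heuristic are all placeholder assumptions rather than any real product's API:

```python
# Hypothetical model router: cheap model by default, escalate hard queries.
# Model names and the difficulty heuristic are illustrative assumptions.

CHEAP_MODEL = "mid-tier-chat"          # placeholder name
EXPENSIVE_MODEL = "frontier-reasoner"  # placeholder name

def estimate_difficulty(question: str) -> float:
    """Toy heuristic: long, math/code-flavored questions score as harder."""
    hard_markers = ("prove", "derive", "optimize", "debug")
    score = min(len(question) / 500, 1.0)
    score += 0.5 * sum(marker in question.lower() for marker in hard_markers)
    return score

def route(question: str, threshold: float = 0.5) -> str:
    """Send the query to the expensive model only when it looks hard enough."""
    if estimate_difficulty(question) > threshold:
        return EXPENSIVE_MODEL
    return CHEAP_MODEL

print(route("What's a fun party game?"))                          # cheap model
print(route("Prove this bound and derive the optimizer step."))   # escalates
```

A production router would use a learned classifier or the cheap model's own confidence rather than keyword matching, but the control flow is the same: one cheap default path plus an internal escalation branch.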
  • The Best Obesity Drugs Aren't Even Here Yet
    gizmodo.com
By Ed Cara | Published January 12, 2025

Ozempic is just the beginning of a new era of obesity treatment, according to new research. A review published this week previews the emergence of similar experimental drugs that will likely be even more effective at helping people lose weight. Researchers at McGill University conducted the study, a review of the clinical trial data surrounding GLP-1 drugs like semaglutide (the active ingredient in Ozempic and Wegovy). The researchers reaffirmed the safety and effectiveness of today's drugs. But they also highlighted the potential superiority of newer compounds currently under development, such as retatrutide, which has helped people lose more than 20% of their original body weight in trials so far.

Semaglutide is a synthetic and longer-lasting version of the hormone GLP-1, which regulates hunger and insulin production, among other things. Developed by Novo Nordisk, semaglutide was first approved for type 2 diabetes in 2017 as Ozempic, then for obesity in 2021 as Wegovy. It's far from the first GLP-1 drug to reach the public, but semaglutide has been a game-changer for obesity treatment. It's been shown to help people lose somewhere between 10% and 15% of their weight in studies, well above the typical success seen with diet and exercise alone, and even surpassing the typical results of older GLP-1 drugs.

Semaglutide isn't the only new kid on the block, though. Eli Lilly's tirzepatide mimics both GLP-1 and another hunger-related hormone called GIP, a potent combination that has allowed it to dethrone semaglutide. In clinical trials, people on tirzepatide have lost as much as 20% of their baseline weight.
There are dozens of other related obesity treatments in the pipeline as well, some of which have made it to human testing and are poised to overshadow even tirzepatide.

The McGill researchers analyzed data from 26 randomized clinical trials of single-agent GLP-1 drugs, double agonists like tirzepatide, and even triple-agonist drugs like retatrutide, which combines synthetic versions of three hunger-related hormones: GLP-1, GIP, and glucagon. These trials involved people living with obesity but who did not have type 2 diabetes. As expected, they found that today's approved drugs were generally safe and effective, with tirzepatide faring the best currently (participants lost up to 17% of their body weight after 72 weeks of therapy). But they also singled out retatrutide as performing even better in a shorter period of time, with participants losing up to 22% of their body weight after only 48 weeks of therapy.

"We found that, of the 12 GLP-1 [drugs] identified by our search, the greatest mean body weight reduction was reported in randomized controlled trials of retatrutide, tirzepatide, and semaglutide," the researchers wrote in their paper, published Tuesday in the Annals of Internal Medicine.

Retatrutide is being developed by Eli Lilly and is now being tested in phase 3 trials, which will reach their conclusion by 2026. And it won't be the only newcomer arriving in the near future that could outslug today's existing drugs. Last year, for instance, early trial results of the drug amycretin (developed by Novo Nordisk) suggested that it could provide greater weight loss than semaglutide and tirzepatide. Other drug companies are working on their own competitors to Ozempic, such as Boehringer Ingelheim and Zealand Pharma's dual agonist survodutide.
Expectations have gotten so high that Novo Nordisk's stock actually dropped when the company announced that its drug candidate CagriSema (a mix of semaglutide with the experimental drug cagrilintide) only helped people lose 22% of their weight in a recent trial, rather than the expected 25%.

These drugs aren't free of negatives, of course. They commonly cause gastrointestinal symptoms such as diarrhea and vomiting, and have been tied to rare but serious complications like gastroparesis (stomach paralysis). Another major concern is their price, with semaglutide and tirzepatide often costing around $1,000 per month without insurance coverage (which often isn't provided by private and public insurers). That cost and surging demand have fueled a grey and black market for these drugs, with people turning to cheaper but less safe compounded and counterfeit versions.

Some experts hope that the arrival of more GLP-1 related drugs will help curtail some of these issues, particularly cost and insurance coverage. Whether that actually happens, we'll have to see. But it's almost certain that there will be plenty of drugs coming for semaglutide and tirzepatide's current crown as the best obesity treatments around.
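The headline trial figures above are all percent reductions from baseline body weight, and the arithmetic behind them is simple. A minimal sketch, using a round 100 kg baseline as an illustrative assumption rather than actual trial data:

```python
def percent_weight_loss(baseline_kg: float, final_kg: float) -> float:
    """Percent of baseline body weight lost: (baseline - final) / baseline * 100."""
    return (baseline_kg - final_kg) / baseline_kg * 100

# A hypothetical 100 kg participant losing 22 kg corresponds to the ~22%
# reduction reported for retatrutide at 48 weeks.
print(round(percent_weight_loss(100.0, 78.0), 1))  # 22.0
```

Note that trials report mean reductions across participants, so a single-person calculation like this only illustrates the metric, not the statistics behind it.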
  • Technology Is Supposed to Decrease Teacher Burnout. It Can Sometimes Make It Worse
    gizmodo.com
When we set out to study pandemic-related changes in schools, we thought we'd find that learning management systems, which rely on technology to improve teaching, would make educators' jobs easier. Instead, we found that teachers whose schools were using learning management systems had higher rates of burnout.

Our findings were based on a survey of 779 U.S. teachers conducted in May 2022, along with subsequent focus groups that took place in the fall of that year. Our study was peer-reviewed and published in April 2024.

During the COVID-19 pandemic, when schools across the country were under lockdown orders, schools adopted new technologies to facilitate remote learning during the crisis. These technologies included learning management systems, which are online platforms that help educators organize and keep track of their coursework.

We were puzzled to find that teachers who used a learning management system such as Canvas or Schoology reported higher levels of burnout. Ideally, these tools should have simplified their jobs. We also thought these systems would improve teachers' ability to organize documents and assignments, mainly because they would house everything digitally and thus reduce the need to print documents or bring piles of student work home to grade. But in the follow-up focus groups we conducted, the data told a different story. Instead of replacing old ways of completing tasks, the learning management systems were simply another thing on teachers' plates.

A telling example was seen in lesson planning. Before the pandemic, teachers typically submitted hard copies of lesson plans to administrators. However, once school systems introduced learning management systems, some teachers were expected not only to continue submitting paper plans but also to upload digital versions to the learning management system in a completely different format. Asking teachers to adopt new tools without removing old requirements is a recipe for burnout.
Teachers who taught early elementary grades had the most complaints about learning management systems, because the systems did not align with where their students were developmentally. A kindergarten teacher from Las Vegas shared: "Now granted my kids cannot really count to 10 when they first come in, but they have to learn a six digit student number to access Canvas. I definitely agree that it does lead to burnout."

In addition to technology-related concerns, teachers identified other factors, such as administrative support, teacher autonomy, and mental health, as predictors of burnout.

Why it matters

Teacher burnout has been a persistent issue in education, and one that became especially pronounced during and after the COVID-19 pandemic. If new technology is being adopted to help teachers do their jobs, then school leaders need to make sure it will not add extra work for them. If it adds to or increases teachers' workloads, then adding technology increases the likelihood that a teacher will burn out, which likely compels more teachers to leave the field.

Schools that implement new technologies should make sure that they are streamlining the job of being a teacher by offsetting other tasks, not simply adding more work. The broader lesson from this study is that teacher well-being should be a primary focus of any schoolwide change.

What's next

We believe our research is relevant not only for learning management systems but also for other new technologies, including emerging artificial intelligence tools. We believe future research should identify schools and districts that effectively integrate new technologies and learn from their successes.

The Research Brief is a short take on interesting academic work.

David T. Marshall, Associate Professor of Educational Research, Auburn University; Teanna Moore, Associate Researcher at Accessible Teaching, Learning and Assessment Systems, University of Kansas; and Timothy Pressley, Associate Professor of Psychology, Christopher Newport University.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
  • How to use Google Deep Research to save hours of time
    www.popsci.com
Google Deep Research is powered by Gemini AI, not little cartoons. Image: DepositPhotos

Google hasn't been shy about pushing out new AI tools and features in recent months, from Gemini in Gmail to AI-hosted podcasts. One of the latest innovations unveiled is Google Deep Research, which essentially lets Google's Gemini AI loose on the web, with a mission to thoroughly research a topic of your choice.

Imagine there's something you need to do that would normally require a lot of Googling: It could be finding the best phone to upgrade to, for example, or trying to understand how a self-driving car is put together, or charting out the history of Scotland in the 17th century. Deep Research can take on any kind of challenge like this.

It's what's known as an agentic feature, a trending term in AI that basically means these bots get more agency and control over what they're doing. The idea is there's less hand-holding and more of the AI working independently, which (in theory) should mean less work for you to do.

"Under your supervision, Deep Research does the hard work for you," Google explains. "After you enter your question, it creates a multi-step research plan for you to either revise or approve. Once you approve, it begins deeply analyzing relevant information from across the web on your behalf."

One caveat: You need to be signed up to the $20-per-month Gemini Advanced plan to try this out at the moment (which also gives you 2TB of Google One storage). Head to the Google One plans page, and you may be able to access a free trial of Gemini Advanced if you haven't previously signed up.

Getting started with Google Deep Research

You'll see a research plan, then the web searching will start.
Screenshot: Google

There are three main stages to the Google Deep Research workflow. To get to the first, head to Gemini on the web, choose the 1.5 Pro with Deep Research option from the drop-down menu in the top-left corner, then tell the AI what you want to know about (you should see some suggestions appear if you're stuck for inspiration).

What you search for is completely up to you, but this is a tool for comprehensive research: It needs to be above and beyond something you'd get a standard AI chatbot answer for (like "what's a good party game for 5-year-olds" or "how do you get wine stains out of a fabric couch"). You could think about something you need help with for work or a side hustle, or just something you're interested in as a hobby.

Enter your prompt into the box (it doesn't have to be long, but the more detail the better, if possible) and the first stage of Google Deep Research gets underway. The AI will think about the prompt you've given, mull over how best to approach the topic, and then present you with an overview of how it's going to tackle the challenge.

This research plan is presented to you, and you can make changes by clicking Edit plan: You're then dropped back into a chat with the AI, so you can tell it to focus more on certain areas, or add in extra areas of research you want it to cover. Each time you make a suggestion, you'll be given a revised plan.

When you see a research plan on screen you're happy with, click Start research to move on to the next stage. Gemini gets to work, and will show you on screen the websites it's looking at and the information it's parsing. You can sit and watch, or you can get on with something else. You can even close down the Gemini app completely if you need to, and come back later to get your answers.

Answers and reports from Google Deep Research

You're able to keep asking questions about the finished report.
Screenshot: Google

You shouldn't have to wait too long for your answers (many queries can be researched in just a few minutes), and you're then presented with a detailed report, which should cover everything in your original prompt. If you asked any specific questions about a topic, they should be specifically addressed somewhere.

This is the third stage of Google Deep Research: working your way through the report. It doesn't have to be the end of the research, though: You can ask follow-up questions about anything in the report, just as you would in a normal conversation with Gemini. If you carry on the chat, the report itself gets minimized, but you can open it up again if needed.

At the end of the responses, and at the end of the report itself, you get references to the web: These can be useful for checking Gemini's workings and for looking up further information yourself. Even in Deep Research mode, the "Gemini can make mistakes" message is displayed, and it's important not to rely too heavily on AI reporting until you've made sure it's all accurate.

Another handy feature of Google Deep Research: You can convert your report into a Google Docs document by clicking the Open in Docs button in the top right corner. This will present you with a rather plainly formatted document that you can edit or save as usual. If there are tables in your report, they'll be transferred over to Google Docs, and you still get all your web references at the bottom too.

From testing out Google Deep Research on a few tech topics, it seems to do a decent job of collating information and working out what's important and what isn't. Having access to the web resources the AI has used really helps in digging deeper and checking facts. Your mileage may of course vary, depending on the topic and the information available, but the tool definitely has the potential to save you a substantial amount of research time.
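The article describes a user-facing workflow with no public API, but the three stages (plan, approve or revise, research and report) form a generic agentic loop that can be sketched abstractly. Every function below is a hypothetical placeholder, not a Gemini call:

```python
# Abstract sketch of a plan -> approve -> research -> report loop.
# All functions here are hypothetical stand-ins, not real Gemini APIs.

def make_plan(prompt: str) -> list[str]:
    """Stand-in for the model drafting a multi-step research plan."""
    return [f"Search the web for: {prompt}",
            "Collect and cross-check sources",
            "Synthesize findings into a report"]

def deep_research(prompt: str, approve=lambda plan: plan) -> str:
    """Run the loop; `approve` is where the user revises or accepts the plan."""
    plan = approve(make_plan(prompt))
    # Stand-in for the browsing stage: mark each plan step as executed.
    findings = [f"[done] {step}" for step in plan]
    return "\n".join(findings)  # stand-in for the final cited report

print(deep_research("history of Scotland in the 17th century"))
```

The `approve` hook mirrors the Edit plan step: the user can transform the plan before any research starts, which is the "under your supervision" part of Google's description.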
  • Last chance to get this VPN with no subscription fees
    www.popsci.com
Stack Commerce

We may earn revenue from the products available on this page and participate in affiliate programs. Learn more.

Here's some unsettling news for your Sunday: your VPN is overcharging you. It doesn't matter how much or how little you're paying, because you could be getting a better deal (no fees at all) with this VPN router. Built on the same concept as your digital VPN, the router's decentralized servers are, some argue, even more private and secure. But you might care most about never having to pay for it ever again. Today (January 12) is the last day it's on sale for $149.97 with free shipping (reg. $219).

How does it work? And what can it do?

The setup is seamless, maybe more so than your existing one. Plug the router's USB-C plug into your laptop, where it gets power. Choose from over 150,000 servers, then wirelessly pair up to five devices; all are compatible, including smart TVs. Each connected device is protected with military-grade encryption, whether you're at home, working from a cafe, or traveling the world.

The VPN router is also an excellent choice for unblocking content on streaming services. Simply choose a server in a location where the content is available, connect the device you want to stream on, and enjoy. This VPN also has built-in ad blocking at the server level. Imagine watching YouTube or reading articles without the usual distractions or annoyances plaguing the internet.

If you want a VPN without fees, today is your last chance to order this router for $149.97 (reg. $219). The deal ends January 12 at 11:59 p.m. PT, and no coupon is needed. StackSocial prices subject to change.

Deeper Connect Air Portable VPN Travel Router, $149.97. See Deal.