• DIY pomodoro desk robot is the cutest way to boost your productivity
    www.yankodesign.com
    There is no shortage of timer designs that help you keep track of time, especially when it comes to the work you're doing. There are actual physical kitchen timers, some shaped like a tomato, and there's also the standard clock app pre-installed on most phones. Useful as these might be, few of them could be considered inspiring on their own, at least not unless you design your own timer to fit your tastes.

    If your tastes lean toward gadgets and tech, you might want a robot that does exactly that for you, just like this cute box that clearly expresses no judgment on your procrastination. But why stop there when you can have that same robot warn you if the air around it is unclean, potentially affecting your productivity as well? If nothing else, just looking at the bot's cute face could give you some warm fuzzy feelings and help carry you through your work day.

    Designer: Coders Cafe

    Like many electronics projects these days, a computer-based design such as this is unsurprisingly powered by a Raspberry Pi single-board computer, or SBC. What makes this project a bit different is that it's relatively simple and uses only a few components you can buy off the shelf. Yes, you'll need to 3D print the robot's body and you'll still need to solder the sensors together, but the guide is pretty thorough on what you need to do.

    In a nutshell, the Pomodoro Bot is a Raspberry Pi with an attached display, an air quality sensor, and an ambient light sensor, all hooked up together in a robot-like 3D-printed case. The primary function of the device is to count down from 25 minutes, the recommended block of time in the Pomodoro Technique, once you press the cute button on its head. But since it's running a full-blown computer inside, it actually doesn't take much to make it do other things that may or may not be related to your productivity.

    For one, it can alert you to upcoming appointments by visually showing the details on its face. This works by hooking up a Google Calendar account to the Pomodoro Bot via a remote service. Its face also turns red when its sensor detects that the air quality around it is low, silently alerting you to the health hazard. Finally, the light sensor can detect when it has gotten darker around it and switch to a dark mode to protect your eyes.

    The one catch to this fun little project is that it requires signing up for an online platform that may or may not suit your taste, even if it's free. Fortunately, we are talking about the Raspberry Pi here, along with some common sensors. It wouldn't be outside the realm of possibility for someone to take the same design and use completely different software that wouldn't even require connecting to the cloud, for privacy and efficiency.
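    For readers who want to tinker before printing anything, here is a rough, hypothetical sketch of what the bot's core loop could look like in Python on a Raspberry Pi. The sensor and display helpers below are placeholders rather than the actual Coders Cafe code, and the thresholds are made-up values you would tune to your own hardware.

```python
# Illustrative sketch only -- not the Coders Cafe firmware. The sensor and
# display helpers (read_air_quality, read_ambient_light, show_face) are
# hypothetical stand-ins for real drivers.
import time

POMODORO_SECONDS = 25 * 60      # standard Pomodoro work block
AIR_QUALITY_THRESHOLD = 100     # assumed index above which the face turns red
DARK_LUX_THRESHOLD = 20         # assumed light level below which dark mode kicks in

def read_air_quality() -> int:
    """Placeholder for the real air-quality sensor driver."""
    return 42

def read_ambient_light() -> int:
    """Placeholder for the real ambient-light sensor driver."""
    return 300

def show_face(remaining: int, alert: bool, dark_mode: bool) -> None:
    """Placeholder for drawing the robot's face on the attached display."""
    mins, secs = divmod(remaining, 60)
    mood = "red/alert" if alert else "happy"
    print(f"{mins:02d}:{secs:02d}  face={mood}  dark_mode={dark_mode}")

def run_pomodoro() -> None:
    """Count down 25 minutes, checking the sensors once per second."""
    for remaining in range(POMODORO_SECONDS, 0, -1):
        bad_air = read_air_quality() > AIR_QUALITY_THRESHOLD
        dark = read_ambient_light() < DARK_LUX_THRESHOLD
        show_face(remaining, alert=bad_air, dark_mode=dark)
        time.sleep(1)
    show_face(0, alert=False, dark_mode=False)  # break time

if __name__ == "__main__":
    run_pomodoro()
```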
  • Hatch Restore 3 Review: A Worthy Upgrade
    www.wired.com
    Hatch's latest sunrise alarm clock and sound machine is bigger and brighter and has more physical controls, making it easier to use without grabbing your phone.
  • Inside the Bust That Took Down Pavel Durov and Upended Telegram
    www.wired.com
    The Russian-born CEO styles himself as a free-speech crusader and a scourge of the surveillance state. Here's the real story behind Pavel Durov's arrest and what happened next.
  • Apple killed the wrong Vision Pro project
    www.macworld.com
    After Apple kickstarted the spatial computing era with the high-priced Vision Pro mixed-reality headset last year, it seemed to be a first step toward a future all-day wearable device. However, it turns out that might not be the case. A report by Mark Gurman at Bloomberg claims that Apple is moving in the other direction, canceling its most promising augmented reality glasses project after hitting several roadblocks and focusing instead on the next-gen Vision Pro headset.

    Apple's got it completely backward: Apple should kill Vision Pro 2 and pour more resources into its smart glasses project.

    More is less

    I'm not undermining the Vision Pro's advanced technology and capabilities. It's objectively one of the highest-end consumer headsets on the market, featuring sharp displays, a dozen sensors, a slew of well-designed apps, and seamless integrations with the Apple ecosystem. Its $3,499 price tag, however, acts as the first hurdle barring mass adoption.

    Beyond its outrageous price tag, the Vision Pro is the first in its product line, and, naturally, it's filled with imperfections and limitations. Reviewers almost unanimously agree that it is too heavy and causes neck discomfort with extended use. That's not to mention its relatively short battery life and lack of outdoor use.

    visionOS is essentially an immersive iPadOS/macOS hybrid that runs in users' fields of view. But what if we don't want all of these overpriced complexities?

    A pair of smart glasses would be a fantastic addition to the Apple ecosystem. (Image: Foundry)

    Ray of light

    While Meta has long catered to gamers and VR enthusiasts with its Quest headset (which costs one-tenth of a Vision Pro), its $299 Ray-Ban glasses are a different animal. For one, they're not much more expensive than a Meta-less pair of Ray-Bans, but more importantly, they're not aimed at a niche, tech-first category of customers.

    First, to make its device appealing to wear, Meta collaborated with Ray-Ban, one of the most popular sunglasses brands. Sometimes people are embarrassed to wear nerdy accessories in public, so having the Ray-Ban branding instantly takes away that stigma.

    Also, Meta didn't shoot for the moon like Apple did. Instead, Meta built in just a few useful perks to keep its glasses simple and cheap. The temple tips house discreet open-ear speakers for music streaming on the go, so you don't need a separate pair of earbuds. It has a set of AirPods-like controls built right into the arms.

    But most importantly, the glasses feature a forward-facing camera instead of a screen, which offers a window into the world for Meta's AI bot to analyze what it sees and report back. And it also lets wearers take quick photos and videos for direct posting to Instagram Stories.

    Unlike the Vision Pro, the IPX4-certified Ray-Ban Meta glasses are meant to be used away from home. So users can put them on like any regular pair of sunglasses, which is great for family picnics, concerts, and influencers.

    Help wanted

    It's clear that Apple is currently focused on the flagship headset line, as the Vision Pro's successor could launch as soon as next year. In my opinion, Apple shouldn't kill the glasses project but should leverage its own ecosystem advantage to create lightweight spectacles that rely on other devices to do the heavy lifting.

    A pair of Apple smart glasses could take some cues from the AirPods. (Image: Foundry)

    The AirPods, for example, can announce notifications. Similarly, the Apple Watch packs exclusive perks unavailable to rival smartwatch brands, such as automatic Mac unlock.

    So Apple is in a position to create the best smart glasses for iOS users, as no other manufacturer has access to the underlying ecosystem infrastructure. Plus, there are plenty of people who just don't trust Meta.

    A pair of Apple glasses doesn't need to have a Vision Pro-like interface. Like the early Apple Watches, they could piggyback on a paired iPhone's processor and internet connection to offer some handy shortcuts. Like the AirPods, they could handle Siri requests, announce notifications, and accept calls. And with an embedded camera, Apple's new Visual Intelligence feature could come alive. The smart glasses would transmit what they see to the connected iPhone, which would then analyze the content with ChatGPT and send the response back to the shades.

    Other potential features could include snapping quick shots and clips that users can view in the iPhone's Photos app. The glasses could also integrate with FaceTime, letting the caller enjoy the scenic route you're taking while you talk. The possibilities are endless even without incorporating any of the current Vision Pro features.

    Vision Pro is too advanced to shrink down to a pair of glasses. (Image: Thiago Trevisan/Foundry)

    Short-sighted Vision

    By putting the Apple Glasses on the back burner, the company is missing the smart glasses train. Meta is actively developing more advanced iterations of the Ray-Bans, while Apple seemingly has no plans to announce a competitor anytime soon. By the time the Apple glasses potentially debut, they're going to have tremendous competition, and with Apple insisting on high-end features, it could be years before anything comes to market.

    Even if the Vision Pro 2 addresses most of its predecessor's shortcomings, which is unlikely, it's clear that the general public isn't interested in this form factor. So maybe Apple should study what's actually working instead of trying to repair what's inherently broken.
  • Apple rolls out mysterious iOS 18.3 update for iPhone 11 only
    www.macworld.com
    While iOS 18.3 was not as momentous as some of Apple's previous x.3 iPhone updates, it brought a number of significant changes, mostly affecting Apple Intelligence. Now, however, the company has released another version of iOS 18.3 focused on a group of devices that don't support its AI platform.

    As reported by MacRumors, an update with build number 22D64 (as distinct from 22D63, which rolled out last week) has been released for the iPhone 11, 11 Pro, and 11 Pro Max only. Those aren't the oldest devices that can run iOS 18 (that title is held by the iPhone XS and XR), but they're close to the bottom of the list and could miss out when iOS 19 is announced this summer. With their A13 chips they aren't remotely close to meeting Apple Intelligence's hardware requirements: an A17 Pro or later, found in the iPhone 15 Pro, Pro Max, and the iPhone 16 range. Apple hasn't released a special version of iOS 18.3 for iPhone 12, 13, 14, or 15 handsets, or for that matter the XS and XR, suggesting that this update is addressing specific flaws in the iPhone 11.

    It seems counterintuitive that the iPhone 11 would need iOS 18.3 at all, but the update does contain some new features that aren't related to Apple Intelligence. Of the five features we picked out as noteworthy, for example, there is one for the Calculator app, restoring a pre-iOS 18 ability to continually tap the equals sign in order to repeat the last operation. And the Home app gains support for HomeKit- and Matter-compatible robot vacuums.

    The focus of the update, however, is more likely to be bug fixes. We already know of two fixes in the general iOS 18.3 release: one tackles an Apple Music problem that caused songs to continue playing even after you closed the app, while another deals with the keyboard disappearing when typing a Siri request. But 22D64 is likely to contain a patch for a bug specific to the late-2019 handsets.

    To install the update, which we recommend, open the Settings app on your iPhone, go to General > Software Update, and follow the instructions.

    Read our iOS 18 superguide for more information about the latest iPhone software.
  • CIOs grapple with subpar global genAI models
    www.computerworld.com
    With the number of generative AI trials soaring in the enterprise, it is typical for the CIO to purchase numerous large language models from various model makers, tweaked for different geographies and languages. But CIOs are discovering that non-English models are faring far more poorly than English ones, even when purchased from the same vendor.

    There is nothing nefarious about that fact. It is simply because there is a lot less data available to train non-English models.

    "It is almost guaranteed that all LLM implementations in languages other than English will perform with less accuracy and less relevance than implementations in English because of the vast disparity in training sample size," said Akhil Seth, head of AI business development at consulting firm UST.

    Less data delivers less comprehensiveness, less accuracy, and much more frequent hallucinations. (Hallucinations typically happen when the model has no information to answer the query, so it makes something up. Proud algorithms, these LLMs can be.)

    Nefarious or not, IT leaders at global companies need to deal with this situation or suffer subpar results for customers and employees who speak languages other than English.

    The major model makers (OpenAI, Microsoft, Amazon/AWS, IBM, Google, Anthropic, and Perplexity, among others) do not typically divulge the volume of data each model is trained on, and certainly not the quality or nature of that data. Enterprises usually deal with this lack of transparency about training data via extensive testing, but that testing is often focused on the English-language model, not those in other languages.

    "There are concerns that this [imbalance of training data] would put applications leveraging non-English languages at an informational and computational disadvantage," said Flavio Villanustre, global chief information security officer of LexisNexis Risk Solutions. "The volume, richness, and variability in the underlying training data is key to obtaining high-quality runtime performance of the model. Inquiries in languages that are underrepresented in the training data are likely to yield poor performance," he said.

    The size difference can be extreme

    How much smaller are the datasets used in non-English models? That varies widely depending on the language. It's not so much a matter of the number of people who speak that language as it is the volume of data in that language available for training.

    Vasi Philomin, the VP and general manager for generative AI at Amazon Web Services (AWS), one of the leading AI-as-a-service vendors, estimated that the training datasets for non-English models are roughly 10 to 100 times smaller than their English counterparts.

    Although there is no precise way to predetermine how much data is available for training in a given language, Hans Florian, a distinguished research scientist for multilingual natural language processing at IBM, has a trick: "You can look at the number of Wikipedia pages in that language. That correlates quite well with the amount of data available in that language," he said.

    Training data availability also varies by industry, topic, and use case.

    "If you want your language model to be multilingual, the best thing you can do is have parallel data in the languages you want to support," said Mary Osborne, the senior product manager of AI and natural language processing at SAS. That's an easy proposition in places like Quebec, for example, where all the government's data is created in both English and French.

    "If you wanted to have an LLM that did a great job of answering questions about the Canadian government in both English and French, you'd have a good supply of data to pull that off," Osborne said. "But if you wanted to add an obscure indigenous language like Cree or Micmac, those languages would be vastly underrepresented in the sample. They would yield poor results compared to English and French, because the model wouldn't have seen enough data in those indigenous languages to do well," she said.

    Although dataset size is extremely important in a genAI model, data quality is also critical. Even though there are no objective benchmarks for assessing data quality, experts in various topics have a rough sense of what good and bad content looks like. In healthcare, for example, it might be the difference between using the New England Journal of Medicine or The Lancet versus scraping the personal website of a chiropractor in Milwaukee.

    Like dataset size, data quality often varies by geography, according to Jürgen Bross, a senior research scientist and manager for multilingual natural language processing at IBM. In Japan, for example, IBM needed to apply its own quality filtering, partly because so many quality websites in Japan are behind strict paywalls. That meant that, on average, the available Japanese data was of lower quality. "Fewer newspapers and more product pages," Bross said.

    Quick fixes bring limited success

    UST's Seth said the dataset challenges with non-English genAI models are not going to be easy to overcome. Some of the more obvious mechanisms to address the smaller training datasets for non-English models, including automated translation and more aggressive fine-tuning, come with their own negatives.

    "Putting a [software] translator somewhere in the inference pipeline is an obvious quick fix, but it will no doubt introduce idiomatic inconsistencies in the generated output and potentially even in the interpretation of the input. Even multilingual models suffer from this," Seth said.

    Another popular countermeasure for non-English genAI models is using synthetic data to supplement the actual data. Synthetic data is typically generated by machine learning, which extrapolates patterns from real data to create likely data. The problem is that if the original data has even a hint of bias, which is common, synthetic data is likely to perpetuate and magnify that bias. Forgive the cliché, but it's the genAI version of three steps forward, two steps back.

    Indeed, LexisNexis's Villanustre worries that this problem could get worse, hurting the accuracy and credibility of genAI-produced global analysis. "There is an increasing portion of unstructured content on the internet that is currently created by generative AI models. If not careful, future models could be increasingly trained on output from other models, potentially amplifying biases and inaccuracies," Villanustre said.

    Practical (and sometimes expensive) approaches

    So how can tech leaders better address the problem? It starts during the procurement process. Although IT operations folks typically ask excellent questions about LLMs before they purchase, they tend to be overwhelmingly focused on the English version. It doesn't occur to them that the quality delivered in the non-English models may be dramatically lower.

    Jason Andersen, a VP and principal analyst with Moor Insights & Strategy, said CIOs need to do everything they can to get model makers to share more information about training data for every model being purchased or licensed. "There has to be much more transparency of data provenance," he said.

    Alternatively, CIOs can consider sourcing their non-English models from regional or local genAI firms that are native to that language. Although that approach might solve the problem for many geographies, it is going to meet strong resistance from many enterprise CIOs, said Rowan Curren, a senior analyst for genAI strategies at Forrester. Most enterprises are far more interested in sourcing their foundation models from their trusted providers, which are generally the major hyperscalers, Curren said. "Enterprises really want to acquire those [model training] capabilities via their deployments on AWS, Google, or Microsoft. That gives [CIOs] a higher comfort level. They are hesitant to work with a startup."

    AWS's Philomin said his team is trying to split the difference for IT customers by using a genAI marketplace approach, borrowing the technique from the AWS Marketplace, which in turn had borrowed the concept from its Amazon parent company. Amazon's retail approach allows users to purchase from small merchants through Amazon, with Amazon taking a cut. Amazon's genAI marketplace, called Bedrock, does something similar, providing access to a large number of genAI model makers globally. Although it certainly doesn't mitigate all of the downsides of using a little-known provider in various geographies, Philomin argues that it addresses some of them.

    "We are removing some of the risks, [such as] the resilience of the service and the support," Philomin said. But he also stressed that those smaller players are the seller of record, not AWS. That caveat raises the question of how much help the AWS reseller role will be if something later blows up.

    Another approach to address the training data disparity? Bypass the non-English models (for now) by employing bilingual humans who can comfortably interact with the English model.

    "As a German native who works primarily in English, I've found that while LLMs are competent in German, they don't quite reach native-level proficiency," said Vincent Schmalbach, an independent AI engineer in Munich. "For critical German-language content, I've developed a practical workflow. I interact with the LLM in English to get the highest-quality output, then translate the final result to German. This approach consistently produces better results than working directly in German."
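    Schmalbach's workflow is straightforward to automate. The sketch below is a minimal, provider-agnostic illustration of that pattern: call_llm() is a hypothetical stand-in for whichever chat-completion client you already use, and the optional first step (translating the incoming German question into English) is a common variation rather than part of his described process.

```python
# Minimal sketch of the "work in English, translate at the edges" pattern.
# call_llm() is a hypothetical stand-in for your chat-completion client.
def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around whichever LLM provider you use."""
    raise NotImplementedError("wire this up to your model of choice")

def answer_in_german(question_de: str) -> str:
    # 1. (Optional) Translate the German question into English first.
    question_en = call_llm(f"Translate this German question into English:\n{question_de}")
    # 2. Do the substantive work in English, where training data is richest.
    answer_en = call_llm(question_en)
    # 3. Translate the final answer back into natural German for the reader.
    return call_llm(f"Translate this answer into natural, idiomatic German:\n{answer_en}")
```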
    The tactic that most genAI specialists agree on is that CIOs need to budget more money to test and fine-tune every non-English model they want to use. That money also needs to cover the additional processing and verification needed for non-English models.

    That said, fine-tuning can only help so much. The training data is the heart of the genAI brain. If that is inadequate, more fine-tuning can be akin to trying to save a salad made with rotting spinach by pouring on more dressing.

    And allocating additional budget to fine-tuning models can be difficult, because the number of variables, such as the specific languages, topics, and industries in question, is too numerous to offer any realistic guidance. But IBM's Florian does offer a tiny bit of optimism: "You don't need a permanent budget increase. It's just a one-time budget increase, a one-time expense that you take." In other words, once the non-English model is fully integrated and supplemented, little to no funding is needed beyond whatever the English model needs.

    Looking ahead

    There's reason to hope that the disparity in the quality of output from models in various languages may be lessened or even negated in the coming years. That's because a model based on a smaller dataset may not suffer from lower accuracy if the underlying data is of a higher quality.

    One factor now coming into play lies in the difference between public and private data. An executive at one of the largest model makers, who asked not to be identified by name or employer, said the major LLM makers have pretty much captured as much of the data on the public internet as they can. They are continuing to harvest new data from the internet every day, of course, but those firms are shifting much of their data-gathering efforts to private sources such as corporations and universities.

    "We have found a lot of super high-quality data, but we cannot get access to it because it's not on the internet. We need to get agreements with the owners of this data to get access," he said.

    Tapping into private sources of information, including those in various countries around the world, will potentially improve the data quality for some topics and industries, and at the same time increase the amount of good training data available for non-English models. As the total universe of training data expands, the imbalance in the amount of training data across languages may matter less and less. However, this shift is also likely to raise prices as the model makers cut deals with third parties to license their private information.

    Another factor that could minimize the dataset-size problem in the next few years is an anticipated increase in unstructured data. Indeed, highly unstructured data, such as that collected by video drones watching businesses and their customers, could potentially sidestep language issues entirely, as the video analysis could be captured directly and saved in many different languages.

    Until the volume of high-quality data for non-English languages gets much stronger, something that might slowly happen with more unstructured, private, and language-agnostic data in the next few years, CIOs need to demand better answers from model vendors on the training data for all non-English models.

    Let's say a global CIO is buying 118 models from an LLM vendor, in a wide range of languages, and pays maybe $2 billion for the package. The vendor doesn't tell the CIO how little training was done on all of those non-English models, and certainly not where that training data came from. If the vendors were fully transparent on both of those points, CIOs would push back on pricing for everything other than the English model. In response, the model makers would likely not charge CIOs less for the non-English models but instead ramp up their efforts to find more training data to improve the accuracy of those models.

    Given the massive amount of money enterprises are spending on genAI, the carrot is obvious. The stick? Maybe CIOs need to get out of their comfort zone and start buying their non-English models from regional vendors in every language they need. If that starts to happen on a large scale, the major model makers may suddenly see the value of data-training transparency.
  • How would a potential ban on DeepSeek impact enterprises?
    www.cio.com
    Chinese AI startup DeepSeek has been facing scrutiny from governments and private entities worldwide, but that hasn't stopped enterprises from investing in this OpenAI competitor. European regulators joined Microsoft, OpenAI, and the US government last week in independent efforts to determine if DeepSeek infringed on any copyrighted data from any US technology vendor. The investigations could potentially lead to a ban on DeepSeek in the US and EU, impacting the millions of dollars that enterprises are already pouring into deploying DeepSeek AI models.
  • How the Rubin Observatory will help us understand dark matter and dark energy
    www.technologyreview.com
    MIT Technology Review Explains: Let our writers untangle the complex, messy world of technology to help you understand what's coming next. You can read more from the series here.

    We can put a good figure on how much we know about the universe: 5%. That's how much of what's floating about in the cosmos is ordinary matter: planets and stars and galaxies and the dust and gas between them. The other 95% is dark matter and dark energy, two mysterious entities aptly named for our inability to shed light on their true nature.

    Cosmologists have cast dark matter as the hidden glue binding galaxies together. Dark energy plays an opposite role, ripping the fabric of space apart. Neither emits, absorbs, or reflects light, rendering them effectively invisible. So rather than directly observing either of them, astronomers must carefully trace the imprint they leave behind. Previous work has begun pulling apart these dueling forces, but dark matter and dark energy remain shrouded in a blanket of questions, most critically: what exactly are they?

    Enter the Vera C. Rubin Observatory, one of our 10 breakthrough technologies for 2025. Boasting the largest digital camera ever created, Rubin is expected to study the cosmos in the highest resolution yet once it begins observations later this year. And with a better window on the cosmic battle between dark matter and dark energy, Rubin might narrow down existing theories on what they are made of. Here's a look at how.

    Untangling dark matter's web

    In the 1930s, the Swiss astronomer Fritz Zwicky proposed the existence of an unseen force named dunkle Materie (in English, dark matter) after studying a group of galaxies called the Coma Cluster. Zwicky found that the galaxies were traveling too quickly to be contained by their joint gravity and decided there must be a missing, unobservable mass holding the cluster together.

    Zwicky's theory was initially met with much skepticism. But in the 1970s an American astronomer, Vera Rubin, obtained evidence that significantly strengthened the idea. Rubin studied the rotation rates of 60 individual galaxies and found that if a galaxy had only the mass we're able to observe, that wouldn't be enough to contain its structure; its spinning motion would send it ripping apart and sailing into space.

    Rubin's results helped sell the idea of dark matter to the scientific community, since an unseen force seemed to be the only explanation for these spiraling galaxies' breakneck spin speeds. "It wasn't necessarily a smoking-gun discovery," says Marc Kamionkowski, a theoretical physicist at Johns Hopkins University. "But she saw a need for dark matter. And other people began seeing it too."

    Evidence for dark matter only grew stronger in the ensuing decades. But sorting out what might be behind its effects proved tricky. Various subatomic particles were proposed. Some scientists posited that the phenomena supposedly generated by dark matter could also be explained by modifications to our theory of gravity. But so far the hunt, which has employed telescopes, particle colliders, and underground detectors, has failed to identify the culprit.

    The Rubin observatory's main tool for investigating dark matter will be gravitational lensing, an observational technique that's been used since the late '70s. As light from distant galaxies travels to Earth, intervening dark matter distorts its image, like a cosmic magnifying glass. By measuring how the light is bent, astronomers can reverse-engineer a map of dark matter's distribution.

    Other observatories, like the Hubble Space Telescope and the James Webb Space Telescope, have already begun stitching together this map from their images of galaxies. But Rubin plans to do so with exceptional precision and scale, analyzing the shapes of billions of galaxies rather than the hundreds of millions that current telescopes observe, according to Andrés Alejandro Plazas Malagón, Rubin operations scientist at SLAC National Laboratory. "We're going to have the widest galaxy survey so far," Plazas Malagón says.

    Capturing the cosmos in such high definition requires Rubin's 3.2-billion-pixel Large Synoptic Survey Telescope (LSST). The LSST boasts the largest focal plane ever built for astronomy, granting it access to large patches of the sky. The telescope is also designed to reorient its gaze every 34 seconds, meaning astronomers will be able to scan the entire sky every three nights. The LSST will revisit each galaxy about 800 times throughout its tenure, says Steven Ritz, a Rubin project scientist at the University of California, Santa Cruz. The repeat exposures will let Rubin team members more precisely measure how the galaxies are distorted, refining their map of dark matter's web. "We're going to see these galaxies deeply and frequently," Ritz says. "That's the power of Rubin: the sheer grasp of being able to see the universe in detail and on repeat."

    The ultimate goal is to overlay this map on different models of dark matter and examine the results. The leading idea, the cold dark matter model, suggests that dark matter moves slowly compared to the speed of light and interacts with ordinary matter only through gravity. Other models suggest different behavior. Each comes with its own picture of how dark matter should clump in halos surrounding galaxies. By plotting its chart of dark matter against what those models predict, Rubin might exclude some theories and favor others.

    A cosmic tug of war

    If dark matter lies on one side of a magnet, pulling matter together, then you'll flip it over to find dark energy, pushing it apart. "You can think of it as a cosmic tug of war," Plazas Malagón says.

    Dark energy was discovered in the late 1990s, when astronomers found that the universe was not only expanding, but doing so at an accelerating rate, with galaxies moving away from one another at higher and higher speeds. "The expectation was that the relative velocity between any two galaxies should have been decreasing," Kamionkowski says. "This cosmological expansion requires something that acts like antigravity." Astronomers quickly decided there must be another unseen factor inflating the fabric of space and pegged it as dark matter's cosmic foil.

    So far, dark energy has been observed primarily through Type Ia supernovas, a special breed of explosion that occurs when a white dwarf star accumulates too much mass. Because these supernovas all tend to have the same peak luminosity, astronomers can gauge how far away they are by measuring how bright they appear from Earth. Paired with a measure of how fast they are moving, this data clues astronomers in on the universe's expansion rate.
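    The "standard candle" logic here is just the inverse-square law: if every Type Ia supernova peaks at roughly the same intrinsic luminosity L, then the flux F measured on Earth pins down the distance d, and astronomers usually quote the same relation as a distance modulus. These are textbook relations rather than formulas from the article.

```latex
% Inverse-square law for a standard candle of known peak luminosity L:
\[
  F = \frac{L}{4\pi d^{2}}
  \quad\Longrightarrow\quad
  d = \sqrt{\frac{L}{4\pi F}}
\]
% Equivalently, in magnitudes (apparent m, absolute M, distance in parsecs):
\[
  m - M = 5\,\log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right)
\]
```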
    Rubin will continue studying dark energy with high-resolution glimpses of Type Ia supernovas. But it also plans to retell dark energy's cosmic history through gravitational lensing. Because light doesn't travel instantaneously, when we peer into distant galaxies, we're really looking at relics from millions to billions of years ago, however long it takes for their light to make the lengthy trek to Earth. Astronomers can effectively use Rubin as a makeshift time machine to see how dark energy has carved out the shape of the universe.

    "These are the types of questions that we want to ask: Is dark energy a constant? If not, is it evolving with time? How is it changing the distribution of dark matter in the universe?" Plazas Malagón says.

    If dark energy was weaker in the past, astronomers expect to see galaxies grouped even more densely into galaxy clusters. "It's like urban sprawl: these huge conglomerates of matter," Ritz says. Meanwhile, if dark energy was stronger, it would have pushed galaxies away from one another, creating a more rural landscape.

    Researchers will be able to use Rubin's maps of dark matter and the 3D distribution of galaxies to plot out how the structure of the universe changed over time, unveiling the role of dark energy and, they hope, helping scientists evaluate the different theories that account for its behavior.

    Of course, Rubin has a lengthier list of goals to check off. Some top items entail tracing the structure of the Milky Way, cataloguing cosmic explosions, and observing asteroids and comets. But since the observatory was first conceptualized in the early '90s, its core goal has been to explore this hidden branch of the universe. After all, before a 2019 act of Congress dedicated the observatory to Vera Rubin, it was simply called the Dark Matter Telescope.

    Rubin isn't alone in the hunt, though. In 2023, the European Space Agency launched the Euclid telescope into space to study how dark matter and dark energy have shaped the structure of the cosmos. And NASA's Nancy Grace Roman Space Telescope, which is scheduled to launch in 2027, has similar plans to measure the universe's expansion rate and chart large-scale distributions of dark matter. Both also aim to tackle that looming question: What makes up this invisible empire?

    Rubin will test its systems throughout most of 2025 and plans to begin the LSST survey late this year or in early 2026. Twelve to 14 months later, the team expects to reveal its first data set. Then we might finally begin to know exactly how Rubin will light up the dark universe.
  • Three things to know as the dust settles from DeepSeek
    www.technologyreview.com
    This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

    The launch of a single new AI model does not normally cause much of a stir outside tech circles, nor does it typically spook investors enough to wipe out $1 trillion in the stock market. Now, a couple of weeks since DeepSeek's big moment, the dust has settled a bit. The news cycle has moved on to calmer things, like the dismantling of long-standing US federal programs, the purging of research and data sets to comply with recent executive orders, and the possible fallout from President Trump's new tariffs on Canada, Mexico, and China.

    Within AI, though, what impact is DeepSeek likely to have in the longer term? Here are three seeds DeepSeek has planted that will grow even as the initial hype fades.

    First, it's forcing a debate about how much energy AI models should be allowed to use up in pursuit of better answers.

    You may have heard (including from me) that DeepSeek is energy efficient. That's true for its training phase, but for inference, which is when you actually ask the model something and it produces an answer, it's complicated. It uses a chain-of-thought technique, which breaks down complex questions (like whether it's ever okay to lie to protect someone's feelings) into chunks, and then logically answers each one. The method allows models like DeepSeek to do better at math, logic, coding, and more.

    The problem, at least to some, is that this way of thinking uses up a lot more electricity than the AI we've been used to. Though AI is responsible for a small slice of total global emissions right now, there is increasing political support to radically increase the amount of energy going toward AI. Whether or not the energy intensity of chain-of-thought models is worth it, of course, depends on what we're using the AI for. Scientific research to cure the world's worst diseases seems worthy. Generating AI slop? Less so.

    Some experts worry that the impressiveness of DeepSeek will lead companies to incorporate it into lots of apps and devices, and that users will ping it for scenarios that don't call for it. (Asking DeepSeek to explain Einstein's theory of relativity is a waste, for example, since it doesn't require logical reasoning steps, and any typical AI chat model can do it with less time and energy.) Read more from me here.

    Second, DeepSeek made some creative advancements in how it trains, and other companies are likely to follow its lead.

    Advanced AI models don't just learn from lots of text, images, and video. They rely heavily on humans to clean that data, annotate it, and help the AI pick better responses, often for paltry wages. One way human workers are involved is through a technique called reinforcement learning with human feedback. The model generates an answer, human evaluators score that answer, and those scores are used to improve the model. OpenAI pioneered this technique, though it's now used widely by the industry.

    As my colleague Will Douglas Heaven reports, DeepSeek did something different: it figured out a way to automate this process of scoring and reinforcement learning. "Skipping or cutting down on human feedback, that's a big thing," Itamar Friedman, a former research director at Alibaba and now cofounder and CEO of Qodo, an AI coding startup based in Israel, told him. "You're almost completely training models without humans needing to do the labor." It works particularly well for subjects like math and coding, but not so well for others, so human workers are still relied upon.
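    To make the idea concrete, here is a toy illustration, emphatically not DeepSeek's actual training code, of what "automating the scoring" can mean for verifiable tasks: sample several candidate answers, score each one with a rule-based checker instead of a human rater, and treat those scores as the reward signal a reinforcement-learning update would then consume.

```python
# Toy sketch only: rule-based scoring standing in for human feedback on
# verifiable problems (here, trivial arithmetic). Not DeepSeek's real pipeline.
import random

def toy_model(prompt: str) -> str:
    """Stand-in for a language model: sometimes right, sometimes off by one."""
    truth = eval(prompt)  # prompts are simple arithmetic expressions
    return str(truth if random.random() < 0.7 else truth + 1)

def automated_reward(prompt: str, answer: str) -> float:
    """Rule-based verifier: replaces the human evaluator in classic RLHF."""
    return 1.0 if answer.strip() == str(eval(prompt)) else 0.0

prompts = ["12 * 7", "345 + 678", "2 ** 10"]
for p in prompts:
    candidates = [toy_model(p) for _ in range(4)]
    rewards = [automated_reward(p, c) for c in candidates]
    # In a real system these rewards would drive a policy-gradient update;
    # here we just print which sampled answers would be reinforced.
    print(p, list(zip(candidates, rewards)))
```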
    Still, DeepSeek then went one step further and used techniques reminiscent of how Google DeepMind trained its AI model back in 2016 to excel at the game Go, essentially having it map out possible moves and evaluate their outcomes. These steps forward, especially since they are outlined broadly in DeepSeek's open-source documentation, are sure to be followed by other companies. Read more from Will Douglas Heaven here.

    Third, its success will fuel a key debate: Can you push for AI research to be open for all to see and push for US competitiveness against China at the same time?

    Long before DeepSeek released its model for free, certain AI companies were arguing that the industry needs to be an open book. If researchers subscribed to certain open-source principles and showed their work, they argued, the global race to develop superintelligent AI could be treated like a scientific effort for the public good, and the power of any one actor would be checked by other participants.

    It's a nice idea. Meta has largely spoken in support of that vision, and venture capitalist Marc Andreessen has said that open-source approaches can be more effective at keeping AI safe than government regulation. OpenAI has been on the opposite side of that argument, keeping its models closed off on the grounds that doing so can help keep them out of the hands of bad actors.

    DeepSeek has made those narratives a bit messier. "We have been on the wrong side of history here and need to figure out a different open-source strategy," OpenAI's Sam Altman said in a Reddit AMA on Friday, which is surprising given OpenAI's past stance. Others, including President Trump, doubled down on the need to make the US more competitive on AI, seeing DeepSeek's success as a wake-up call. Dario Amodei, a founder of Anthropic, said it's a reminder that the US needs to tightly control which types of advanced chips make their way to China in the coming years, and some lawmakers are pushing the same point.

    The coming months, and future launches from DeepSeek and others, will stress-test every single one of these arguments.

    Now read the rest of The Algorithm

    Deeper Learning

    OpenAI launches a research tool

    On Sunday, OpenAI launched a tool called Deep Research. You can give it a complex question to look into, and it will spend up to 30 minutes reading sources, compiling information, and writing a report for you. It's brand new, and we haven't tested the quality of its outputs yet. Since its computations take so much time (and therefore energy), right now it's only available to users with OpenAI's paid Pro tier ($200 per month), and it limits the number of queries they can make per month.

    Why it matters: AI companies have been competing to build useful agents that can do things on your behalf. On January 23, OpenAI launched an agent called Operator that could use your computer for you to do things like book restaurants or check out flight options. The new research tool signals that OpenAI is not just trying to make these mundane online tasks slightly easier; it wants to position AI as able to handle professional research tasks. It claims that Deep Research accomplishes in tens of minutes what would take a human many hours. Time will tell if users will find it worth the high costs and the risk of including wrong information. Read more from Rhiannon Williams.

    Bits and Bytes

    Déjà vu: Elon Musk takes his Twitter takeover tactics to Washington
    Federal agencies have offered exits to millions of employees and tested the prowess of engineers, just like when Elon Musk bought Twitter. The similarities have been uncanny. (The New York Times)

    AI's use in art and movies gets a boost from the Copyright Office
    The US Copyright Office finds that art produced with the help of AI should be eligible for copyright protection under existing law in most cases, but wholly AI-generated works probably are not. What will that mean? (The Washington Post)

    OpenAI releases its new o3-mini reasoning model for free
    OpenAI just released a reasoning model that's faster, cheaper, and more accurate than its predecessor. (MIT Technology Review)

    Anthropic has a new way to protect large language models against jailbreaks
    This line of defense could be the strongest yet. But no shield is perfect. (MIT Technology Review)