• The 2024 Google Nest Learning Thermostat is $40 off right now
    www.engadget.com
    Many of us in the northern hemisphere are contending with the harsh realities of winter, and while the weather outside is often awful, at least we can try to be more comfortable at home. A smart thermostat can prove useful on that front, as it can optimize the conditions in your home whether you're there or away. The Google Nest Learning Thermostat is a popular model, and the latest iteration is on sale: it can now be yours for $240, which is $40 off the regular price. The thermostat actually dropped to a slightly lower price of $230 during the holidays. Still, a 14 percent discount is nothing to sniff at, especially if the device can help significantly reduce your energy bills.
    Google claims that the latest Nest Learning Thermostat delivers more accurate readings with the help of AI. The device can offer suggestions on how to lower your energy usage and, as you might expect, it can automatically adjust settings in your home based on factors like the ambient temperature. To help measure that, the thermostat comes with a wireless temperature sensor that is said to run for up to three years before a battery replacement is needed. Extra sensors are available to purchase separately: three for $100, or $36.45 for one (usually $40, but that's on sale too). You can connect as many as six to a single Nest Learning Thermostat, placing them around your home.
    The latest thermostat is more customizable than its predecessors, as it has several smartwatch-style faces: you might change the colors or make it appear more like a digital clock. The display is 60 percent larger this time too. In addition, the Nest Thermostat uses integrated Soli radar sensors to determine how close you are to it and automatically adjust the user interface. For instance, as you move back from the display, the thermostat will increase the font size to make text more legible.
Follow @EngadgetDeals on Twitter and subscribe to the Engadget Deals newsletter for the latest tech deals and buying advice. This article originally appeared on Engadget at https://www.engadget.com/deals/the-2024-google-nest-learning-thermostat-is-40-off-right-now-181024491.html?src=rss
  • DeepSeek just insisted it's ChatGPT, and I think that's all the proof I need
    www.techradar.com
    We asked DeepSeek if it's smarter than Gemini, and it gave us a surprising answer.
  • Civ 7 requirements for PC, Steam Deck, Linux, and Mac
    www.techradar.com
    Here are the Civ 7 requirements for PC, Mac, and Steam Deck. This includes minimum and recommended PC specs for the game so that you can see whether you can run it.
  • After a week of DeepSeek freakout, doubts and mysteries remain
    www.fastcompany.com
    Welcome to AI Decoded, Fast Company's weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.
    After a week of DeepSeek freakout, doubts and mysteries remain
    The Chinese company DeepSeek sent shockwaves through the AI and investment communities this week as people learned that it created state-of-the-art AI models using far less computing power and capital than anyone thought possible. The company then showed its work in published research papers and by making its models available to other developers. This raised two burning questions: Has the U.S. lost its edge in the AI race? And will we really need as many expensive AI chips as we've been told?
    How much computing power did DeepSeek really use?
    DeepSeek claimed it trained its most recent model for about $5.6 million, and without the most powerful AI chips (the U.S. barred Nvidia from selling its powerful H100 graphics processing units in China, so DeepSeek made do with 2,048 H800s). But the information it provided in research papers about its costs and methods is incomplete. "The $5 million refers to the final training run of the system," points out Oregon State University AI/robotics professor Alan Fern in a statement to Fast Company. "In order to experiment with and identify a system configuration and mix of tricks that would result in a $5M training run, they very likely spent orders of magnitude more." He adds that based on the available information it's impossible to replicate DeepSeek's $5.6 million training run.
    How exactly did DeepSeek do so much with so little?
    DeepSeek appears to have pulled off some legitimate engineering innovations to make its models less expensive to train and run.
But the techniques it used, such as mixture-of-experts architecture and chain-of-thought reasoning, are well known in the AI world and generally used by all the major AI research labs. The innovations are described only at a high level in the research papers, so it's not easy to see how DeepSeek put its own spin on them. "Maybe there was one main trick or maybe there were lots of things that were just very well engineered all over," says Robert Nishihara, cofounder of the AI run-time platform Anyscale. Many of DeepSeek's innovations grew from having to use less powerful GPUs (Nvidia H800s instead of H100s) because of the Biden administration's chip bans. "Being resource limited forces you to come up with new innovative efficient methods," Nishihara says. "That's why grad students come up with a lot of interesting stuff with far less resources; it's just a different mindset."
    What innovation is likely to influence other AI labs the most?
    As Anthropic's Jack Clark points out in a recent blog post, DeepSeek was able to use a large model, DeepSeek-V3 (~700B parameters), to teach a smaller R1 model to be a reasoning model (like OpenAI's o1) with a surprisingly small amount of training data and no human supervision. "V3 generated 800,000 annotated text samples showing questions and the chains of thought it followed to answer them," Clark writes. DeepSeek showed that after processing the samples for a time the smaller R1 model spontaneously began to "think" about its answers, explains Andrew Jardine, head of go-to-market at Adaptive ML. "You just say here's my problem, create some answers to that problem, and then based on the answers that are correct or incorrect, you give it a reward [a binary signal that means 'good'] and say try again, and eventually it starts going 'I'm not sure; let me try this new angle or approach' or 'that approach wasn't the right one, let me try this other one' and it just starts happening on its own. There's some real magic there."
DeepSeek's researchers called it an "aha moment."
    Why haven't U.S. AI companies already been doing what DeepSeek did?
    "How do you know they haven't?" asks Jardine. "We don't have visibility into exactly the techniques that are being used by Google and OpenAI; we don't know exactly how efficient the training approaches are." That's because those U.S. AI labs don't describe their techniques in research papers or release the weights of their models, as DeepSeek did. "There's a lot of reason to believe they do have at least some of these efficiency methods already." It should come as no surprise if OpenAI's next reasoning model, o3, is less compute-intensive, more cost-effective, and faster than DeepSeek's models.
    Is Nvidia stock still worth 50X earnings?
    Nvidia provides up to 95 percent of the advanced AI chips used to research, train, and run frontier AI models. The company's stock lost 17% of its value on Monday when investors interpreted DeepSeek's research results as a signal that fewer expensive Nvidia chips would be needed in the future than previously anticipated. Meta's Yann LeCun says Monday's sell-off grew from a major misunderstanding about AI infrastructure investments. The Turing Award winner says that while DeepSeek showed that frontier models could be trained with fewer GPUs, the main job of the chips in the future will be during inference: the reasoning work the model does when it's responding to a user's question or problem. (Actually, DeepSeek did find a novel way of compressing context window data so that less compute is needed during inference.) He says that as AI systems process more data, and more kinds of data, during inference, the computing costs will continue to increase. As of Wednesday night, the stock has not recovered.
    Did DeepSeek use OpenAI models to help train its own models?
    Nobody knows for sure, and disagreement remains among AI experts on the question.
The Financial Times reported Wednesday that OpenAI believes it has seen evidence that DeepSeek did use content generated by OpenAI models to train its own models, which would violate OpenAI's terms. That technique, distillation, refers to saving time and money by feeding the outputs of larger, smarter models into smaller models to teach them how to handle specific tasks.
    We've just experienced a moment when the open-source world produced some models that equaled the current closed-source offerings in performance. The real cost of developing the DeepSeek models remains an open question. But in the long run the AI companies that can marshal the most cutting-edge chips and infrastructure will very likely have the advantage, as fewer performance gains can be wrung from pretraining and more computing power is applied at inference, when the AI must reason toward its answers. So the answers to the two burning questions raised above are probably "not" and likely "yes."
    The DeepSeek breakthroughs could be good news for Apple
    The problem of finding truly useful ways of using AI in real life is becoming more pressing as the cost of developing models and building infrastructure mounts. One big hope is that powerful AI models will become so small and efficient that they can run on devices like smartphones and AR glasses. DeepSeek's engineering breakthroughs to create cheaper and less compute-hungry models may breathe new life into research on small models that live on edge devices. "Dramatically decreased memory requirements for inference make edge inference much more viable, and Apple has the best hardware for exactly that," says tech analyst Ben Thompson in a recent Stratechery newsletter.
Apple Silicon uses unified memory, which means that the CPU, GPU, and NPU (neural processing unit) have access to a shared pool of memory; this means that Apple's high-end hardware actually has the best consumer chip for inference. Stability AI founder Emad Mostaque says that reasoning models like OpenAI's o1 and DeepSeek's R1 will run on smartphones by next year, performing PhD-level tasks with only 20 watts of electricity, equivalent to the human brain.
    OpenAI releases an AI agent for government workers
    OpenAI this week announced a new AI tool called ChatGPT Gov that's designed specifically for use by U.S. government agencies. Since sending sensitive government data out through an API to an OpenAI server presents obvious privacy and security problems, ChatGPT Gov can be hosted within an agency's own private cloud environment. "[W]e see enormous potential for these tools to support the public sector in tackling complex challenges, from improving public health and infrastructure to strengthening national security," OpenAI writes in a blog post. The Biden administration in 2023 directed government agencies to find productive and safe ways to use new generative AI technology (Trump recently revoked the executive order). The Department of Homeland Security, for example, built its own AI chatbot, which is now used by thousands of DHS workers. OpenAI says 90,000 users within federal, state, and local government offices have already used the company's ChatGPT Enterprise product.
    More AI coverage from Fast Company:
    • Microsoft posts 10% growth for Q4 as it plans to spend $80 billion on AI
    • AI assistants for lawyers are a booming business, with big risks
    • Why we need to leverage AI to address global food insecurity
    • Alibaba rolls out AI model, claiming it's better than DeepSeek-V3
    Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.
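    The reward-and-retry recipe Jardine describes earlier in this piece (sample answers from a capable model, score them with a binary reward that only checks the final answer, and keep the correct traces as training data for a smaller model) can be sketched at toy scale. Everything below (the arithmetic "teacher," the reward function, the sampling counts) is an illustrative stand-in, not DeepSeek's actual pipeline:

    ```python
    import random

    def teacher_generate(problem, rng):
        """Emit a (chain_of_thought, answer) pair; wrong ~30% of the time on purpose."""
        a, b = problem
        answer = a + b if rng.random() < 0.7 else a + b + 1
        cot = f"To add {a} and {b}, combine them: {a} + {b} = {answer}."
        return cot, answer

    def binary_reward(problem, answer):
        """Reward is 1 only if the final answer is exactly right, 0 otherwise."""
        a, b = problem
        return 1 if answer == a + b else 0

    def build_distillation_set(problems, samples_per_problem=8, seed=0):
        """Rejection-sample the teacher and keep only reward-1 traces."""
        rng = random.Random(seed)
        dataset = []
        for problem in problems:
            for _ in range(samples_per_problem):
                cot, answer = teacher_generate(problem, rng)
                if binary_reward(problem, answer):
                    dataset.append({"problem": problem, "cot": cot, "answer": answer})
        return dataset

    problems = [(2, 3), (10, 7), (40, 2)]
    data = build_distillation_set(problems)
    # Every surviving trace has a verified-correct final answer by construction.
    assert all(ex["answer"] == sum(ex["problem"]) for ex in data)
    ```

    The key property is that the filter needs no human supervision: the reward only verifies outcomes, yet the surviving chains of thought become supervision for the smaller model.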
  • Maui wildfire victims spared from testifying in court over $4 billion settlement
    www.fastcompany.com
    Lawyers representing victims of a deadly Hawaii wildfire reached a last-minute deal averting a trial that was scheduled to begin Wednesday over how to split a $4 billion settlement.
    The agreement means victims and survivors will not have to testify, reliving in court details of the massive inferno in Lahaina that killed more than 100 people, destroyed thousands of properties and caused an estimated $5.5 billion worth of damage.
    Before the trial was scheduled to begin Wednesday morning, lawyers met in private with Judge Peter Cahill, who later announced that a deal had been reached. The lawyers, who reached the deal late Tuesday, are expected to file court documents detailing the agreement in a week.
    Some victims had been ready to take the witness stand, while others submitted pre-recorded testimony, describing pain made all the more fresh by the recent destruction in Los Angeles.
    "Some folks I'm sure will be disappointed, because in their minds this was their time to share their story," Jacob Lowenthal, one of the attorneys representing individual plaintiffs, said Wednesday. "Other folks are going to be relieved because they don't have to go in and testify."
    One of the individual plaintiffs is Kevin Baclig, whose wife, father-in-law, mother-in-law and brother-in-law were among the 102 people known to have died. Baclig said in a declaration that if called to testify he would describe how for three agonizing days he searched for them, from hotel to hotel, shelter to shelter. "I clung to the fragile hope that maybe they had made it off the island, that they were safe," he said.
    A month and a half went by and the grim reality set in. He went to the Philippines to gather DNA samples from his wife's close relatives there. The samples matched remains found in the fire. He eventually carried urns holding their remains back to the Philippines.
    "The loss has left me in profound, unrelenting pain," he said. "There are no words to describe the emptiness I feel or the weight I carry every day."
    Hawaii Gov. Josh Green announced the $4 billion settlement, agreed by the state, power utility Hawaiian Electric, large landowners and others, about a year after the deadliest U.S. wildfire in a century devastated Lahaina in 2023. At the time, he touted the speed of the deal to avoid protracted and painful lawsuits.
    The trial was supposed to determine a percentage split between two groups of plaintiffs: some who filed individual lawsuits after losing family members, homes or businesses, and other victims covered by class-action lawsuits, including tourists who canceled trips to Maui because of the blaze.
    Only a nominal portion of the settlement should go to tourists whose trips were delayed or canceled, Lowenthal said previously. "The categories of losses that the class is claiming are just grossly insignificant compared to our losses," he said.
    Attorneys for the class have not responded to an email from The Associated Press seeking comment on the averted trial. In their trial brief, they challenged the idea that everyone who has a claim worth suing over had already done so. Many people held off hiring attorneys, the brief said, because of the fire's disruption to life, distrust in heavy attorney advertising, and a desire to see how the process plays out first.
    Separately, the state Supreme Court is considering whether insurers can sue the defendants for reimbursement for the $2 billion-plus they have paid out in fire claims, or whether their share must come from the $4 billion settlement. Oral arguments in that case are scheduled for Feb. 6. "That is the last big piece that needs to be decided before the global settlement can move forward," Lowenthal said.
    Jennifer Sinco Kelleher, Associated Press
  • The easiest way to grow your mind this year: StackSkills
    www.macworld.com
    When a new year rolls around, that typically means you're feeling inspired to learn a new skill or two. Whether you want to cook like Bobby Flay or switch careers, studying with EDU Unlimited by StackSkills might just get you there.
    StackSkills is your one-way ticket to lifetime access to over 1,000 courses on almost any topic you might be interested in. Get it while it's available for only $19.97 (reg. $600) through February 2.
    2025 could be the year you finally chef up a tasty Beef Wellington, or finally land your dream job in IT. You could even master the art of drawing or painting, or learn to invest wisely with StackSkills, as the platform has stock trading courses that are taught by real experts.
    With lifetime access to this e-learning resource, you'll have complete freedom to learn at your own pace. You'll also never run out of new skills to master, since new courses are added regularly.
    Get a new skill or two under your belt with EDU Unlimited by StackSkills, now just $19.97 while supplies last. This offer ends February 2 at 11:59 p.m. PT!
    EDU Unlimited by StackSkills: Lifetime Access, only $19.97 at Macworld. StackSocial prices subject to change.
  • Nintendo Switch Online + Expansion Pack Adds Ridge Racer 64
    gamingbolt.com
    Nintendo doesn't always grab attention with the games that it adds to the Nintendo Switch Online catalog, but every so often, we do get notable new additions. Another one of those has been announced with Ridge Racer 64.
    Developed by Nintendo under license from Namco and released for the N64 in 2000, Ridge Racer 64 was the first game in the series for a non-PlayStation console. A remake of the racing title was released for the Nintendo DS in 2004, though it was much less well received. The original, however, still has its fans, who should be thrilled about its re-release.
    Nintendo Switch Online + Expansion Pack subscribers are able to play the game at no additional cost. It features four-player splitscreen multiplayer, a first for the series, and touts multiple game modes, nine courses, 25 cars, and more. Check out the trailer below for a glimpse of what the game has in store.
    "Ridge Racer~ Race with up to four players and experience fast-paced arcade action in Ridge Racer 64, available now for #NintendoSwitchOnline + Expansion Pack members! #N64 pic.twitter.com/pvybSDyCS5" Nintendo of America (@NintendoAmerica), January 31, 2025