• RFK Jr.'s Senate Testimony Is Haunted by His Track Record
    www.wired.com
    During his confirmation hearings this week, Robert F. Kennedy Jr. said he would promote vaccines as HHS secretary, despite a long history of promoting anti-vax positions.
  • 6 Best Dash Appliances (2025), Tested and Reviewed
    www.wired.com
    From a mini waffle maker to a rapid egg cooker, these tiny kitchen appliances are easy on the wallet and don't hog space.
  • DeepSeek's Answers Include Chinese Propaganda, Researchers Say
    www.nytimes.com
    Since the Chinese company's chatbot surged in popularity, researchers have documented how its answers reflect China's view of the world. Some of its responses amplify propaganda Beijing uses to discredit critics.
  • Apple shares a secret MacBook tip that power users will love
    www.macworld.com
    Apple M-series MacBooks turn on automatically when you open the laptop or plug in the power adapter. It sort of makes sense, right? If you're opening your MacBook, you're probably going to use it, so it should turn on. (As for plugging it in, I'm unsure of Apple's thinking in having the MacBook turn on automatically.) However, what if you want to be able to open it or plug it in without it turning on? With macOS Sequoia, you can now adjust your MacBook so it does not turn on when you open it or plug in the adapter.

    To change this setting, you need to enter commands through the Terminal app, which is located in Applications > Utilities. After launching Terminal, enter one of these commands:

    To stop the MacBook from turning on both when opening it and when connecting to power, type: sudo nvram BootPreference=%00

    To stop the MacBook from turning on only when opening it, type: sudo nvram BootPreference=%01

    To stop the MacBook from turning on only when connecting to power, type: sudo nvram BootPreference=%02

    After typing the command, press the Return key. You'll need to enter your password; as you type it, nothing will appear on the screen. Press Return, and you're set.

    If you want to reset the feature so that your MacBook turns on when you open it or plug it in, launch Terminal, enter sudo nvram -d BootPreference, and press the Return key. Your Mac will then return to the default behavior of turning on when you open the lid.

    For more on using your MacBook with an external display, here's how to stop your MacBook from sleeping when the lid is closed.
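    One way to confirm which mode is currently set is to read the firmware variable back. Here's a minimal sketch of that check (my addition, not part of the Macworld article), shelling out to nvram from Python on macOS:

        import subprocess

        # `nvram <name>` prints the variable's current value and exits
        # nonzero if the variable has not been set.
        result = subprocess.run(
            ["nvram", "BootPreference"], capture_output=True, text=True
        )
        if result.returncode == 0:
            print(result.stdout.strip())  # e.g. "BootPreference\t%01"
        else:
            print("BootPreference not set; default startup behavior applies")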
  • Forget the Studio Display and get this 27-inch 4K Alogic Clarity for half the price
    www.macworld.com
    We love Apple's Studio Display, but it's way too expensive. If you want a great alternative that won't cost you nearly as much, the Alogic Clarity 4K monitor is 20% off, so you can get it for $800 right now.

    This is a 27-inch IPS monitor with a solid 4K resolution that looks gorgeous. That, of course, is important, because if you're going to bring a monitor into your Apple-friendly home, it needs to be at least as stylish as the Studio Display. We reviewed the Alogic Clarity monitor and were impressed enough to give it a 4.5-star rating and our Editors' Choice award. "It will appeal to Mac users with its Apple looks and is even, in some ways, a superior monitor to Apple's own Studio Display, although its 4K resolution isn't quite as sharp as Apple's 5K screen," we wrote.

    This monitor excels at color accuracy, brightness, and 3,000:1 contrast. The built-in webcam makes it a one-stop shop for everyone who sometimes needs to join FaceTime or Zoom calls, and there's also a super handy hub that features HDMI, DisplayPort, and various USB-A ports. Furthermore, you can adjust the monitor's height, tilt it, and swivel it however you see fit, features that cost hundreds more on the standard Studio Display.

    So rather than paying $1,600 or more for the Apple Studio Display, save a bundle and get one of these Clarity Pro 27 monitors for $800 over at Alogic instead.

    Buy now at Alogic
  • Apple Q2: Services buys time, what next?
    www.computerworld.com
    Continued stress between the US and China and the slow transition to Apple Intelligence may be limiting Apple's business growth, but there's no legitimate way to deny the strategic success of CEO Tim Cook's decision to build Apple's services business (a decision he likely had in mind during the Beats purchase in 2014). The money it is making with services gives the company strength with which to weather these storms.

    Think about it like this. Yes, Apple's iPhone sales in China fell, and yes, regions in which Apple Intelligence is available saw iPhone sales outpace those in which it is not. But services increased 14% year-on-year, generating $26.3 billion in revenue, around 21% of Apple's total revenue during the most recently revealed quarter. That's why it means so much that Cook said, "In services, we achieved an all-time revenue record, and in the past year we've seen nearly $100 billion in revenue from our services business." That's double Cook's original ambition for services.

    The cost of doing business

    What makes those dollars even more valuable to Apple is the number of them it gets to keep: while the company generates a 39.31% margin on hardware revenues after costs, it books an astonishing 75% margin on services. In other words, for every $10 of services income Apple creates, it keeps around $7.50.

    Other details from Apple's most recent financial results:

    Revenue: $124.3 billion (+4% YoY)
    EPS: $2.40 (+10% YoY)
    Gross margin: 46.9% (but much higher for services)
    Net income: $36.3 billion
    Product revenue: $98 billion (+2% YoY)
    Services revenue: $26.3 billion (+14% YoY)

    Morgan Stanley analyst Erik Woodring today shared his estimate that the average revenue per user Apple generates with services has now reached around $72, up $5 on the last quarter.

    Apple's management also confirmed that the iPhone 16 is outperforming the iPhone 15 range. The company said there's been a record increase in iPhone upgrades during the quarter, presumably as its customers ensure they have the correct devices to run Apple Intelligence.

    Services, services, services

    Apple has managed its services pivot across the last few years, and that pivot is a huge lesson to any business in the value of diversification. While Apple's attempt to diversify its own business with services benefited hugely from the company's incredibly positive customer satisfaction levels, any business should seek out related opportunities if it hopes to maintain growth in challenging circumstances.

    "Services continues to see strong momentum, and the growth of our installed base of active devices gives us great opportunities for the future," said Apple CFO Kevan Parekh. "We also see increased customer engagement with our services offerings. Both transacting and paid accounts reached new all-time highs, with paid accounts growing double digits year over year. Paid subscriptions also grew double digits."

    Services income also requires hardware sales, and not every Apple service will be generating anything like these numbers. The accretive nature of this part of the business is a little like the small fish that lives on a larger whale: you can't have one without the other. But Parekh's revelation that the company has over 1,000,000,000 paid subscriptions "across services on our platforms" shows there are plenty of fish in Apple's ocean. Even as competition authorities force more competition into those waters, it's a solid bet that Apple will continue to generate good business from the services segment.
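    As a rough sanity check on that margin arithmetic (and on the "Not playing games" math below), here's a minimal sketch pairing the quarter's reported figures with the article's own margin estimates; Apple doesn't officially break income out this way:

        # Reported figures for the quarter, in $B; the per-segment margins
        # are the article's estimates, not an official Apple breakdown.
        product_revenue = 98.0
        services_revenue = 26.3
        product_margin = 0.391   # hardware margin after costs
        services_margin = 0.75   # services margin after costs
        net_income = 36.3

        product_gross = product_revenue * product_margin     # ~ $38.3B
        services_gross = services_revenue * services_margin  # ~ $19.7B

        print(f"Services profit after direct costs: ${services_gross:.1f}B")
        print(f"...relative to the quarter's net income: {services_gross / net_income:.0%}")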
    Not playing games

    Kicking around the raw data Apple provided on its consolidated balance sheet, you'll see that services revenue after direct sales-related costs delivered almost half of Apple's overall net income during the quarter. And if hardware revenue tracks overall hardware margins at 39.1%, then services, at 75%, is generating more actual net income than any Apple product other than the iPhone. Apple Fitness, indeed. Apple Arcade is not just playing games.

    Ultimately, however, Apple's services income is doing the job it should be doing: generating a lucrative slice of high-margin income that protects the company against product sales-driven challenges. It is also acting as a bulwark as the company engages in the transition to Apple Intelligence.

    But while the company has done an excellent job crafting business resilience and bought itself time with the initial introduction of its own system-wide AI, it still needs a follow-up punch to consolidate its gains. Is Apple really going to rely on international language rollouts of Apple Intelligence, or does it plan new models for WWDC? How does it intend to augment services with additional offers its customers can't resist?

    You can follow me on social media! Join me on BlueSky, LinkedIn, Mastodon, and MeWe.
  • Italy blocks DeepSeek due to unclear data protection
    www.computerworld.com
    Italy's data protection authority, the Garante, has decided to block the app for the much-hyped Chinese AI model DeepSeek in the country. The decision comes after the Chinese companies providing the chatbot service failed to give the authority sufficient information about how users' personal data is used.

    Reuters writes that the Garante wants to know, among other things, what personal data DeepSeek collects, from what sources, for what purposes, on what legal basis, and whether the data is stored in China.

    As a result, DeepSeek is no longer available through Apple's or Google's app stores in Italy. The Garante has also launched an investigation. DeepSeek has not commented on the matter.
  • How DeepSeek ripped up the AI playbook, and why everyone's going to follow its lead
    www.technologyreview.com
    When the Chinese firm DeepSeek dropped a large language model called R1 last week, it sent shock waves through the US tech industry. Not only did R1 match the best of the homegrown competition, it was built for a fraction of the cost, and given away for free.

    The US stock market lost $1 trillion, President Trump called it a wake-up call, and the hype was dialed up yet again. "DeepSeek R1 is one of the most amazing and impressive breakthroughs I've ever seen, and as open source, a profound gift to the world," Silicon Valley's kingpin investor Marc Andreessen posted on X.

    But DeepSeek's innovations are not the only takeaway here. By publishing details about how R1 and a previous model called V3 were built and releasing the models for free, DeepSeek has pulled back the curtain to reveal that reasoning models are a lot easier to build than people thought. The company has closed the gap on the world's very top labs.

    The news kicked competitors everywhere into gear. This week, the Chinese tech giant Alibaba announced a new version of its large language model Qwen, and the Allen Institute for AI (AI2), a top US nonprofit lab, announced an update to its large language model Tulu. Both claim that their latest models beat DeepSeek's equivalent.

    Sam Altman, cofounder and CEO of OpenAI, called R1 impressive, for the price, but hit back with a bullish promise: "We will obviously deliver much better models." OpenAI then pushed out ChatGPT Gov, a version of its chatbot tailored to the security needs of US government agencies, in an apparent nod to concerns that DeepSeek's app was sending data to China. There's more to come.

    DeepSeek has suddenly become the company to beat. What exactly did it do to rattle the tech world so fully? Is the hype justified? And what can we learn from the buzz about what's coming next? Here's what you need to know.

    Training steps

    Let's start by unpacking how large language models are trained. There are two main stages, known as pretraining and post-training. Pretraining is the stage most people talk about. In this process, billions of documents (huge numbers of websites, books, code repositories, and more) are fed into a neural network over and over again until it learns to generate text that looks like its source material, one word at a time. What you end up with is known as a base model.

    Pretraining is where most of the work happens, and it can cost huge amounts of money. But as Andrej Karpathy, a cofounder of OpenAI and former head of AI at Tesla, noted in a talk at Microsoft Build last year: "Base models are not assistants. They just want to complete internet documents."

    Turning a large language model into a useful tool takes a number of extra steps. This is the post-training stage, where the model learns to do specific tasks like answer questions (or answer questions step by step, as with OpenAI's o3 and DeepSeek's R1). The way this has been done for the last few years is to take a base model and train it to mimic examples of question-answer pairs provided by armies of human testers. This step is known as supervised fine-tuning.

    OpenAI then pioneered yet another step, in which sample answers from the model are scored, again by human testers, and those scores are used to train the model to produce future answers more like those that score well and less like those that don't. This technique, known as reinforcement learning with human feedback (RLHF), is what makes chatbots like ChatGPT so slick. RLHF is now used across the industry.

    But those post-training steps take time.
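    As a toy illustration of those two human-driven steps, here's a minimal sketch; it shows the idea only and is not any lab's actual pipeline:

        # 1) Supervised fine-tuning: the base model is trained to mimic
        #    human-written question-answer pairs like these.
        sft_pairs = [
            {"prompt": "What is the capital of France?",
             "response": "The capital of France is Paris."},
        ]

        # 2) RLHF: human testers score sampled answers, and those scores
        #    steer the model toward answers that score well. The lambda
        #    below stands in for a human rater.
        def prefer(candidates, human_score):
            return max(candidates, key=human_score)

        answers = ["The capital of France is Paris.", "France's capital is Lyon."]
        print(prefer(answers, human_score=lambda a: 1.0 if "Paris" in a else 0.0))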
    What DeepSeek has shown is that you can get the same results without using people at all, at least most of the time. DeepSeek replaces supervised fine-tuning and RLHF with a reinforcement-learning step that is fully automated. Instead of using human feedback to steer its models, the firm uses feedback scores produced by a computer.

    "Skipping or cutting down on human feedback: that's a big thing," says Itamar Friedman, a former research director at Alibaba and now cofounder and CEO of Qodo, an AI coding startup based in Israel. "You're almost completely training models without humans needing to do the labor."

    Cheap labor

    The downside of this approach is that computers are good at scoring answers to questions about math and code but not very good at scoring answers to open-ended or more subjective questions. That's why R1 performs especially well on math and code tests. To train its models to answer a wider range of non-math questions or perform creative tasks, DeepSeek still has to ask people to provide the feedback.

    But even that is cheaper in China. "Relative to Western markets, the cost to create high-quality data is lower in China, and there is a larger talent pool with university qualifications in math, programming, or engineering fields," says Si Chen, a vice president at the Australian AI firm Appen and a former head of strategy at both Amazon Web Services China and the Chinese tech giant Tencent.

    DeepSeek used this approach to build a base model, called V3, that rivals OpenAI's flagship model GPT-4o. The firm released V3 a month ago. Last week's R1, the new model that matches OpenAI's o1, was built on top of V3.

    To build R1, DeepSeek took V3 and ran its reinforcement-learning loop over and over again. In 2016, Google DeepMind showed that this kind of automated trial-and-error approach, with no human input, could take a board-game-playing model that made random moves and train it to beat grand masters. DeepSeek does something similar with large language models: potential answers are treated as possible moves in a game.

    To start with, the model did not produce answers that worked through a question step by step, as DeepSeek wanted. But by scoring the model's sample answers automatically, the training process nudged it bit by bit toward the desired behavior.

    Eventually, DeepSeek produced a model that performed well on a number of benchmarks. But this model, called R1-Zero, gave answers that were hard to read and were written in a mix of multiple languages. To give it one last tweak, DeepSeek seeded the reinforcement-learning process with a small data set of example responses provided by people. Training R1-Zero on those produced the model that DeepSeek named R1.

    There's more. To make its use of reinforcement learning as efficient as possible, DeepSeek has also developed a new algorithm called Group Relative Policy Optimization (GRPO). It first used GRPO a year ago, to build a model called DeepSeekMath.

    We'll skip the details; you just need to know that reinforcement learning involves calculating a score to determine whether a potential move is good or bad. Many existing reinforcement-learning techniques require a whole separate model to make this calculation. In the case of large language models, that means a second model that could be as expensive to build and run as the first. Instead of using a second model to predict a score, GRPO just makes an educated guess. It's cheap, but still accurate enough to work.
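    Here's a minimal sketch of those two ideas, based on the description above rather than DeepSeek's published training code; the answer checker and the group of four samples are illustrative assumptions:

        import numpy as np

        def auto_reward(answer: str, expected: str) -> float:
            """Computer-generated feedback: did the final answer check out?"""
            return 1.0 if answer.strip().endswith(expected) else 0.0

        def group_relative_advantages(rewards: np.ndarray) -> np.ndarray:
            """GRPO-style scoring: instead of asking a separate value model,
            estimate each answer's advantage relative to the other answers
            sampled for the same question."""
            return (rewards - rewards.mean()) / (rewards.std() + 1e-8)

        # Four sampled answers to "What is 12 * 7?", scored automatically.
        samples = ["12 * 7 = 84", "It's 84", "12 * 7 = 74", "84"]
        rewards = np.array([auto_reward(s, "84") for s in samples])
        print(group_relative_advantages(rewards))  # correct answers score positive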
    A common approach

    DeepSeek's use of reinforcement learning is the main innovation that the company describes in its R1 paper. But DeepSeek is not the only firm experimenting with this technique. Two weeks before R1 dropped, a team at Microsoft Asia announced a model called rStar-Math, which was trained in a similar way. "It has similarly huge leaps in performance," says Matt Zeiler, founder and CEO of the AI firm Clarifai.

    AI2's Tulu was also built using efficient reinforcement-learning techniques (but on top of, not instead of, human-led steps like supervised fine-tuning and RLHF). And the US firm Hugging Face is racing to replicate R1 with OpenR1, a clone of DeepSeek's model that Hugging Face hopes will expose even more of the ingredients in R1's special sauce.

    What's more, it's an open secret that top firms like OpenAI, Google DeepMind, and Anthropic may already be using their own versions of DeepSeek's approach to train their new generation of models. "I'm sure they're doing almost the exact same thing, but they'll have their own flavor of it," says Zeiler.

    But DeepSeek has more than one trick up its sleeve. It trained its base model V3 to do something called multi-token prediction, where the model learns to predict a string of words at once instead of one at a time. This training is cheaper and turns out to boost accuracy as well. "If you think about how you speak, when you're halfway through a sentence, you know what the rest of the sentence is going to be," says Zeiler. "These models should be capable of that too."

    It has also found cheaper ways to create large data sets. To train last year's model, DeepSeekMath, it took a free data set called Common Crawl (a huge number of documents scraped from the internet) and used an automated process to extract just the documents that included math problems. This was far cheaper than building a new data set of math problems by hand. It was also more effective: Common Crawl includes a lot more math than any other specialist math data set that's available.
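    The article doesn't say how that extraction worked; here's a minimal sketch of the general idea, with keyword patterns that are purely illustrative assumptions, not DeepSeek's real classifier:

        import re

        # Keep only scraped documents that look like they contain math
        # problems (hypothetical heuristics for illustration).
        MATH_HINTS = re.compile(
            r"(\\frac|\\sum|\d+\s*[+\-*/^=]\s*\d+|\btheorem\b|\bsolve for\b)",
            re.IGNORECASE,
        )

        def looks_like_math(document: str) -> bool:
            return bool(MATH_HINTS.search(document))

        corpus = [
            "Solve for x: 3x + 5 = 20.",
            "The mayor opened the new bridge on Tuesday.",
        ]
        print([d for d in corpus if looks_like_math(d)])  # keeps only the first doc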
    And on the hardware side, DeepSeek has found new ways to juice old chips, allowing it to train top-tier models without coughing up for the latest hardware on the market. "Half their innovation comes from straight engineering," says Zeiler. "They definitely have some really, really good GPU engineers on that team."

    Nvidia provides software called CUDA that engineers use to tweak the settings of their chips. But DeepSeek bypassed this code using assembler, a programming language that talks to the hardware itself, to go far beyond what Nvidia offers out of the box. "That's as hardcore as it gets in optimizing these things," says Zeiler. "You can do it, but basically it's so difficult that nobody does."

    DeepSeek's string of innovations on multiple models is impressive. But it also shows that the firm's claim to have spent less than $6 million to train V3 is not the whole story. R1 and V3 were built on a stack of existing tech. "Maybe the very last step, the last click of the button, cost them $6 million, but the research that led up to that probably cost 10 times as much, if not more," says Friedman. And in a blog post that cut through a lot of the hype, Anthropic cofounder and CEO Dario Amodei pointed out that DeepSeek probably has around $1 billion worth of chips, an estimate based on reports that the firm in fact used 50,000 Nvidia H100 GPUs.

    A new paradigm

    But why now? There are hundreds of startups around the world trying to build the next big thing. Why have we seen a string of reasoning models like OpenAI's o1 and o3, Google DeepMind's Gemini 2.0 Flash Thinking, and now R1 appear within weeks of each other?

    The answer is that the base models (GPT-4o, Gemini 2.0, V3) are all now good enough to have reasoning-like behavior coaxed out of them. "What R1 shows is that with a strong enough base model, reinforcement learning is sufficient to elicit reasoning from a language model without any human supervision," says Lewis Tunstall, a scientist at Hugging Face.

    In other words, top US firms may have figured out how to do it but were keeping quiet. "It seems that there's a clever way of taking your base model, your pretrained model, and turning it into a much more capable reasoning model," says Zeiler. "And up to this point, the procedure that was required for converting a pretrained model into a reasoning model wasn't well known. It wasn't public."

    What's different about R1 is that DeepSeek published how they did it. "And it turns out that it's not that expensive a process," says Zeiler. "The hard part is getting that pretrained model in the first place." As Karpathy revealed at Microsoft Build last year, pretraining a model represents 99% of the work and most of the cost.

    If building reasoning models is not as hard as people thought, we can expect a proliferation of free models that are far more capable than we've yet seen. With the know-how out in the open, Friedman thinks, there will be more collaboration between small companies, blunting the edge that the biggest companies have enjoyed. "I think this could be a monumental moment," he says.
  • Apple hits back at judge with demand for Google search dominance trial delay
    appleinsider.com
    Having been denied full inclusion at the forthcoming trial to determine Google's future, Apple has filed a motion to delay the whole proceedings.

    (Image caption: In 2022, Alphabet paid Apple $20 billion)

    It was formally and legally decided in August 2024 that Google and its Alphabet parent company represent a search and advertising monopoly. What Apple wants, and has previously been denied, is a seat at the trial determining what steps Google must take next.

    Apple had asked to be a participant in this remedy trial specifically because of its annual $20 billion contract with Google. But it asked by filing a motion on December 23, 2024, and Judge Amit Mehta has ruled that Apple's motion was simply too late.
  • B&H is blowing out M3 MacBook Pros at up to $1,200 off
    appleinsider.com
    B&H's blowout MacBook Pro sale features a variety of deals on M3 Pro and M3 Max models, with prices as low as $1,599.

    (Image caption: Save up to $1,200 on MacBooks. Image credit: Apple)

    Many of the deals end today, and B&H's online checkout closes at sundown Eastern Time tonight. So if you see a discounted config that catches your eye, you may want to snap it up right away. You can also check out our Mac Price Guide to compare prices on these closeout configs against Apple's current M4 models.

    Shop the blowout deals