• WWW.DIGITALTRENDS.COM
    The worst health care data breach in history just got worse
    Change Healthcare, a subsidiary of UnitedHealth, initially reported a data breach in October last year that was considered the worst in the industry. The breach, which was thought to affect up to 100 million people, has now grown to an alarming 190 million, according to TechCrunch. Cybercriminals reportedly exploited an employee system that lacked multi-factor authentication.

    UnitedHealth confirmed the new numbers for the ransomware attack on Friday. "Change Healthcare has determined the estimated total number of individuals impacted by the Change Healthcare cyberattack is approximately 190 million," Tyler Mason, a spokesperson for UnitedHealth Group, wrote in an email to TechCrunch.

    The vast majority of affected people have already received an individual or substitute notice, and UnitedHealth says the final number will be confirmed and filed with the Office for Civil Rights at a later date. The spokesperson said the company was not aware of any misuse of individuals' information as a result of the incident and had not seen electronic medical record databases appear in the stolen data during its analysis.

    In the meantime, those affected can only worry about who has access to data such as their Social Security number, driver's license number, passport number, diagnoses, test results, medications, and health insurance information. Furthermore, when this breach started, those affected also had to deal with widespread disruption of the healthcare system: pharmacies and doctors could not file claims or accept discount prescription cards, leaving patients to pay full price.

    If there is any consolation, those responsible for the breach were found: the BlackCat ransomware gang carried out the attack, which took 6TB of data.
This incident highlights the importance of taking cybersecurity seriously and the precautions necessary to keep your data safe; our roundup of the best antivirus software can help you on that front.
  • WWW.DIGITALTRENDS.COM
    Samsung may be about to score a double victory over Apple: here's how
    The latest Power On newsletter from Bloomberg journalist Mark Gurman asserts that Samsung is ahead of Apple both in artificial intelligence (AI) and in its ultra-thin phone efforts. Does that actually matter, though?

Well, yes and no. While I think being first is an empty accolade if it's not backed up by quality, Apple can't afford to be too late to the party, particularly in the case of AI.

Let's start with the slimline phones. It's been widely reported that Apple is developing a super-thin phone dubbed the iPhone 17 Air, which is due to launch this September. Gurman states that Samsung will get there first with its Galaxy S25 Edge, a device that's due to arrive in the first half of the year.

Yet this tells us nothing about which phone you'll actually want to buy. The idea of being first is massively overblown in the tech press, with manufacturers frequently gunning for that coveted "world's first" title no matter what. But I'm willing to bet good money that customers don't care if one device launches slightly before the other; no, they care about which one gives the better experience. That is where Apple chooses to compete.

For instance, there are rumors that the iPhone 17 Air will be slightly thinner than the Samsung Galaxy S25 Edge. Does that matter? Only insofar as it makes the phone better or worse for users. Quality should be the key consideration, not raw timing. We need to stop valorizing the "world's first" title and put more emphasis on what makes something the world's best.

So no, I don't care whether Samsung or Apple comes first. Realistically, I don't think customers do either.

Nirave Gondhia / Digital Trends

There are times, however, when timing is important, and that's emphasized elsewhere in Power On.
Gurman also says that Samsung's AI offering (which is based on Google Gemini) is several years ahead of Apple Intelligence.

Unlike the ultra-thin phones debate, here I think the timing is a bit more important. Not for the empty title of "world's first," but for what that head start means in terms of features and user experience. Because, as Gurman says, Apple risks missing out on the biggest technology revolution in 30 years.

Apple Intelligence is decent, but it's clearly lacking compared to its rivals. As Gurman notes, Google Gemini has pulled ahead, and it's not even the best AI on offer. That title belongs to ChatGPT, and while Apple has integrated OpenAI's offering into Apple Intelligence, that just serves to illustrate how far behind Apple's own efforts are.

Yet there's no reason why Apple can't catch up. Many Apple products, from the iPhone to the iMac, were condemned as being behind their rivals when they launched, yet Apple quickly made amends. The original iPhone came without 3G, the iMac lacked a floppy drive, and it took Apple a few years to help the Apple Watch find its feet. All that is history now. Either Apple pivoted the device to meet demand, or the product's inherent qualities overrode concerns about missing features.

The stakes are high when it comes to AI, but they were high with those other products too, especially the iPhone. As long as Apple doesn't wait around too long, there's still everything to play for.

Thai Nguyen / Unsplash

And that's the crux of the matter, really. AI is moving at a phenomenal pace, and Apple can't afford to drag its heels. But it should move fast in the name of innovation quality, not in the name of simply being first. That would be a hollow victory.

Ultimately, timing does matter, but not for the reasons you might think. "World's first" is a meaningless accolade; there's no point being first with an awful product that no one wants.
That's why I don't care, and I'm sure Apple doesn't care either, that the Samsung Galaxy S25 Edge will launch six months before the iPhone 17 Air.

Apple's philosophy of being the best is the right one, and consumers are smart enough to recognize a good product when it comes around. But Apple can't take too long to catch up on AI or its efforts will be for nothing. That's the fine line the company has to tread.
  • WWW.WSJ.COM
    What to Know About China's DeepSeek AI
    The Chinese upstart says it has trained high-performing AI models cheaply, without using the most advanced chips.
  • WWW.WSJ.COM
    DeepSeek Won't Sink U.S. AI Titans
    Panic fueling the selloff of Nvidia, Broadcom and other tech giants is overblown.
  • ARSTECHNICA.COM
    Mazda celebrates 35 years of the MX-5 with anniversary model
    Now I feel old: The MX-5 is the perfect antidote to all those big SUVs. By Jonathan M. Gitlin, Jan 27, 2025 12:19 pm. Credit: Mazda

DAYTONA, Florida: This might make you feel old, but the Mazda MX-5 Miata just turned 35. Still in its fourth generation (better known to Miata nerds as the "ND"), this small, affordable, lightweight sports car is the perfect antidote to, well, just about everything else on the roads. And to celebrate this latest milestone, Mazda has created a new special anniversary edition, which it unveiled at this past weekend's Rolex 24 at Daytona.

When the Miata debuted in 1989, it was something of a game-changer. Inspired by classic European roadsters like those built by MG and Alfa Romeo, it was small, lithe, and, most importantly, reliable. It didn't hurt that it looked nice and was great to drive.

It's also been something of a hit among amateur racers; Mazda is proud that each weekend, more Miatas are on track than any other make of car. That goes some way to explaining why Mazda chose this year's Rolex 24 at Daytona to reveal the new 35th Anniversary Edition: the MX-5 Cup series is probably IMSA's most exciting support series.

The 35th Anniversary Edition is the latest in a long line of special edition Miatas, including anniversary cars for the 10th, 20th, 25th, and 30th editions. The focus here was on "classic elegance," with Artisan Red paint that's almost burgundy, plus a tan Nappa leather interior that will remind some of the tan leather interiors that Mazda used on some NAs.

IMSA posts the MX-5 Cup races on YouTube, and they're highly entertaining. This year, one race was marked by a huge pileup right at the start. (Credit: Mazda)

The 35th Anniversary Edition even paints the air vent surrounds the same Artisan Red as the bodywork. (Credit: Mazda)

The 35th Anniversary Edition is similar to the Grand Touring trim, which means features like heated seats, and Mazda says it has added a limited-slip differential, additional bracing, and some newly tuned Bilstein dampers. There's also a beige convertible roof and some shiny 17-inch alloy wheels.

It's also a bit more expensive than other Miatas, with an MSRP of $36,250. That's $1,620 more than the next-most-expensive six-speed Miata (the Grand Touring), but it does come with the aforementioned extra equipment. Getting hold of one might be a bit tricky, though: Mazda will only import 300 into the US.

Jonathan M. Gitlin, Automotive Editor: Jonathan is the Automotive Editor at Ars Technica. He has a BSc and PhD in Pharmacology. In 2014 he decided to indulge his lifelong passion for the car by leaving the National Human Genome Research Institute and launching Ars Technica's automotive coverage. He lives in Washington, DC.
  • ARSTECHNICA.COM
    DeepSeek panic triggers tech stock sell-off as Chinese AI tops App Store
    War of the weights: A new Chinese AI app is sparking existential panic in American AI companies and investors. By Benj Edwards, Jan 27, 2025 11:29 am. Credit: Luis Diaz Devesa via Getty Images

On Monday morning, Nvidia stock dove 11 percent amid worries over the rise of Chinese AI company DeepSeek, whose R1 reasoning model stunned industry observers last week by challenging American AI supremacy with a low-cost, freely available AI model, and whose AI assistant app jumped to the top of the iPhone App Store's "Free Apps" category over the weekend, overtaking ChatGPT.

What's the big deal about DeepSeek?

The drama started around January 20, when Chinese AI startup DeepSeek announced R1, a new simulated reasoning (SR) model that it claimed could match OpenAI's o1 in reasoning benchmarks. Like o1, R1 is trained to work through a simulated chain-of-thought process before providing an answer, which can potentially improve the accuracy or usefulness of the AI model's outputs for some types of questions posed by the user.

That first part wasn't too surprising, since other AI companies like Google are hot on the heels of OpenAI with their own simulated reasoning models. In addition, OpenAI itself has announced an upcoming SR model (dubbed "o3") that can surpass o1 in performance.

There are three elements of DeepSeek R1 that really shocked experts. First, the Chinese startup appears to have trained the model for only $6 million as a so-called "side project" while using less powerful Nvidia H800 AI-acceleration chips due to US export restrictions on cutting-edge GPUs. Second, it appeared just four months after OpenAI announced o1 in September 2024.
Finally, and perhaps most importantly, DeepSeek released the model weights for free under an open MIT license, meaning anyone can download the model, run it, and fine-tune (modify) it.

It suddenly seemed to many observers on social media that American tech companies like OpenAI and Google, which have so far thrived on proprietary, closed models, have "no moat," as tech insiders often say, which means that those companies' technological lead, access to cutting-edge hardware, and impressive bankrolls do not necessarily protect them from upstart market challengers.

On Friday, venture capitalist Marc Andreessen wrote on X that DeepSeek R1 is "one of the most amazing and impressive breakthroughs I've ever seen" and a "profound gift to the world." The endorsement from the Andreessen Horowitz cofounder added fuel to the growing buzz around DeepSeek.

On top of that, over the weekend, DeepSeek's app, which allows users to experiment with both the R1 model and the company's V3 conventional large language model (LLM) for free, shot to the top of the US iPhone App Store. Multiple AI-related Reddit threads have suddenly been plastered with DeepSeek-related posts, leading to so-far unfounded accusations that someone in China is astroturfing (pretending to be ordinary users while actually posting with an agenda) to artificially drum up support for the Chinese AI company.

Over the past weekend, social media has been overtaken by a sort of "sky is falling" mentality about AI, coupled with geopolitical angst about US economic rival China catching up with America, which perhaps inspired a measure of panic in big tech investors and led to the Nvidia stock sell-off, despite the fact that DeepSeek used Nvidia chips for training.

As tempting as it is to frame this as a geopolitical tech battle, the "US versus China" framing has been overblown, according to some experts.
On LinkedIn, Meta Chief AI Scientist Yann LeCun, who frequently champions open-weights AI models and open source AI research, wrote, "To people who see the performance of DeepSeek and think: 'China is surpassing the US in AI.' You are reading this wrong. The correct reading is: 'Open source models are surpassing proprietary ones.'"

But is DeepSeek R1 any good?

From the start, DeepSeek has claimed that R1 can match OpenAI's o1 model in AI benchmarks, but benchmarks have historically been easy to game and do not necessarily tell you much about how the models might perform in everyday scenarios.

Over the past week, we have experimented with both DeepSeek-V3 (which is roughly the counterpart to OpenAI's GPT-4o) and DeepSeek-R1, and from informal testing, both seem roughly equivalent to OpenAI's ChatGPT models, although that can vary dramatically based on how they are used and prompted. DeepSeek's AI assistant, which you can try at chat.deepseek.com, can even search the web like ChatGPT. We will likely evaluate R1 more formally in a future article.

Ultimately, a cheaply trained open-weights AI model that can match America's best commercial models is genuinely a threat to closed-source AI companies, but it should not be a surprise to anyone who has been watching the rapid rate of progress in AI. The history of computing is replete with examples of information technology getting cheaper and smaller, becoming a commodity, and eventually being absorbed as a component into larger products.

Many software components of modern operating systems (including built-in apps, features, codecs, and utilities) were once separate products that retailed for thousands of dollars when they were first invented. Microprocessors supplanted massive, expensive computer systems and eventually became embedded into everything.
We suspect that AI models, and software that processes data with simulated reasoning, even hypothetical human-level AI or beyond (if it is ever achieved), will be no different. Tech companies come and go, the next new thing is created, and the cycle repeats itself.

Benj Edwards, Senior AI Reporter: Benj Edwards is Ars Technica's Senior AI Reporter and founder of the site's dedicated AI beat in 2022. He's also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.
  • WWW.NEWSCIENTIST.COM
    European cities face millions more deaths from extreme temperatures
    Tourists try to cool off in Rome, where a large increase in heat deaths is expected by 2099. (Massimo Valicchia/NurPhoto via Getty Images)

There will be an extra 2.3 million temperature-related deaths in Europe's main cities by 2099 without more action to limit warming and adapt to it, researchers predict. However, cities in colder northern countries such as the UK will see fewer temperature-related deaths over this period, because the decline in deaths from cold will be greater than the increase in deaths from heat.

"We estimate a slight net decrease, but it's very small compared to the big increase we could see in the Mediterranean region," says Pierre Masselot at the London School of Hygiene & Tropical Medicine.

Masselot's team started by looking at epidemiological studies on how deaths increase during periods of extreme heat or extreme cold. The team then used these statistical links to estimate how the number of excess deaths would change over the next century under various warming scenarios.

The study looks at 850 cities, home to 40 per cent of Europe's population, but not any rural areas. This is because the statistical links are stronger where lots of people live in a small area and are exposed to roughly the same conditions.

If cities don't adapt, the net effect of climate change increases exponentially with greater warming. In a scenario similar to our current course, the number of excess deaths related to temperature would increase by 50 per cent, from 91 per 100,000 people per year in recent years to 136 per 100,000 people per year by 2099.

Adaptive measures such as the wider use of air conditioning and planting more trees in inner cities would bring these numbers down, says Masselot, but significantly reducing a population's vulnerability to heat requires substantial adaptive measures.
This is much more than what we have already observed in many countries across the world.

The team's estimates are based on the average daily temperatures in each warming scenario, and they don't include the possibility of much more extreme heatwaves. "We have found that usually this is good enough to be able to relate deaths to temperature," says Masselot.

This is the most comprehensive study of its kind so far, he says. It includes more countries and suggests for the first time that even France and Germany will see more temperature-related deaths as the continent warms.

Rising temperatures will have a wide range of effects on people, from their health to their productivity, he says. "Mortality is just one part of the story."

Journal reference: Nature Medicine, DOI: 10.1038/s41591-024-03452-2

Topics: climate change
  • WWW.NEWSCIENTIST.COM
    The psychologist exposing the mental gymnastics that conceal racism
    Despite widespread studies revealing the prevalence of racism, its impact is often overlooked. But there are ways to tackle hidden biases and systemic discrimination, says Keon West. 27 January 2025. (Image: Becki Gill)

Keon West could reel off anecdotes about the everyday racism he experiences, but he won't. Personal accounts rarely convince anyone, he says, and, all too often, they are dismissed or put down to some other, less offensive, cause. Instead of the feelings that racist behaviour and accusations of racism provoke, he prefers to focus on facts.

A social psychologist at Goldsmiths, University of London, West has consolidated hundreds of rigorous empirical studies on racism conducted over decades in his new book, The Science of Racism. By exploring how experiments can detect racism and measure its impact across societies, he builds a scientifically accurate picture of what contemporary racism is and the complexities that surround it.

While it is clear that society's attempts to combat racism remain inadequate, there is plenty that can be done about it. The same studies that prove the existence of racism can also help us unpack the psychological gymnastics that nearly everyone performs to conceal their racist behaviours from themselves. The idea is that, by becoming aware of these personal biases, many racist behaviours can gradually be dissolved.

In this interview, West sheds light on ideas like reverse racism and systemic racism and lays out the science-backed methods of spotting racism in its various guises. Doing so, he hopes, will steer public discourse away from debating whether racism exists and towards confronting it head-on.

Amarachi Orie: What is racism?

Keon West: There are two definitions that I think are useful. There's one that's useful for running the scientific experiments: racism is any
  • WWW.BUSINESSINSIDER.COM
    DeepSeek just dropped an updated AI model called Janus-Pro. It says the image generator is better than OpenAI's DALL-E.
    DeepSeek releases updated AI model Janus-Pro, outperforming rivals in benchmarks. The latest versions of Janus, a text-to-image generator, came out after DeepSeek's R1 last week. DeepSeek said R1 showed advanced reasoning and was trained at a fraction of the cost of US rivals.

DeepSeek just released an updated open-source multimodal AI model. The Chinese startup on Monday shared a research paper and released updated versions of the model, called Janus-Pro-1B and Janus-Pro-7B.

According to its paper, DeepSeek says Janus-Pro outperforms OpenAI's DALL-E 3 and Stable Diffusion 3 Medium text-to-image generators across multiple benchmarks.

The latest release comes after DeepSeek unveiled its model R1 last week, which showed new "reasoning" capabilities. R1's release sent shockwaves across the tech industry and spooked investors. That's because DeepSeek demonstrated a breakthrough AI model that it says could outperform US rivals at just a fraction of the cost.

DeepSeek is pricing access to its models way under what OpenAI charges, potentially undermining AI business models and assumptions across the sector.

Contact the reporter Jyoti Mann via email at or via Signal at jyotimann.11. Reach out via a nonwork device.
  • WWW.BUSINESSINSIDER.COM
    DeepSeek's cheaper models and weaker chips call into question trillions in AI infrastructure spending
    China's DeepSeek model challenges US AI firms with cost-effective, efficient performance. DeepSeek's model is 20 to 40 times cheaper than OpenAI's, using modest hardware. DeepSeek's efficiency raises questions about US investments in AI infrastructure.

The bombshell that is China's DeepSeek model has set the AI ecosystem alight. The models are high-performing, relatively cheap, and compute-efficient, which has led many to posit that they pose an existential threat to American companies like OpenAI and Meta, and to the trillions of dollars going into building, improving, and scaling US AI infrastructure.

The price of DeepSeek's open-source model is competitive: 20 to 40 times cheaper than OpenAI's. But the potentially more nerve-racking element in the DeepSeek equation for US-built models is the relatively modest hardware stack used to build them.

The DeepSeek-V3 model, which is most comparable to OpenAI's ChatGPT, was trained on a cluster of 2,048 Nvidia H800 GPUs, according to the technical report published by the company. H800s are the first version of the company's defeatured chip for the Chinese market. After the export regulations were amended, the company made another defeatured chip, the H20, to comply with the changes.

Though this may not always be the case, the chip is the most substantial cost in the large language model training equation. Being forced to use less powerful, cheaper chips creates a constraint that the DeepSeek team has ostensibly overcome.

"Innovation under constraints takes genius," Sri Ambati, CEO of open-source AI platform H2O.ai, told Business Insider.

Even on subpar hardware, training DeepSeek-V3 took less than two months, according to the report.

The efficiency advantage

DeepSeek-V3's smaller size comes in part from a different architecture to ChatGPT, called a "mixture of experts." The model has pockets of expertise built in, which go into action when called upon and sit dormant when irrelevant to the query.
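The routing idea behind a mixture of experts can be sketched in a few lines. This is a deliberately toy model (scalar "experts" with invented names and sizes, not DeepSeek's actual architecture): a router scores all experts for each input, but only the top-scoring few do any work, which is why compute per query stays low even when the total model is large.

```python
import math
import random

random.seed(0)

# Toy mixture-of-experts layer: 8 scalar experts, but only the top-2 run per input.
N_EXPERTS, TOP_K = 8, 2
expert_weights = [random.uniform(-1, 1) for _ in range(N_EXPERTS)]
router_weights = [random.uniform(-1, 1) for _ in range(N_EXPERTS)]

def moe_forward(x: float):
    """Route input x to its TOP_K highest-scoring experts; the rest stay dormant."""
    logits = [w * x for w in router_weights]
    chosen = sorted(range(N_EXPERTS), key=lambda i: logits[i])[-TOP_K:]
    # Softmax gate computed over only the chosen experts.
    exps = {i: math.exp(logits[i]) for i in chosen}
    total = sum(exps.values())
    output = sum((exps[i] / total) * expert_weights[i] * x for i in chosen)
    return output, chosen

y, active = moe_forward(0.5)
print(len(active))  # only 2 of the 8 experts did any work for this input
```

In a real model the experts are large feed-forward networks and the router operates per token, but the principle is the same: most parameters sit idle on any given query.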
This type of model is growing in popularity, and DeepSeek's advantage is that it built an extremely efficient version of an inherently efficient architecture.

"Someone made this analogy: It's almost as if someone released a $20 iPhone," Foundry CEO Jared Quincy Davis told BI. The Chinese model used a fraction of the time, a fraction of the number of chips, and a less capable, less expensive chip cluster. Essentially, it's a drastically cheaper, competitively capable model that the firm is virtually giving away for free.

The model that is even more concerning from a competitive perspective, according to Bernstein, is DeepSeek-R1, which is a reasoning model more comparable to OpenAI's o1 or o3. This model uses reasoning techniques to interrogate its own responses and thinking. The result is competitive with OpenAI's latest reasoning models.

R1 was built on top of V3, and the research paper released alongside the more advanced model doesn't include information about the hardware stack behind it. But DeepSeek used strategies like generating its own training data to train R1, which requires more compute than using data scraped from the internet or generated by humans.

This technique is often referred to as "distillation" and is becoming standard practice, Ambati said. Distillation brings with it another layer of controversy, though. A company using its own models to distill a smarter, smaller model is one thing. But the legality of using other companies' models to distill new ones depends on licensing. Still, DeepSeek's techniques are more iterative and likely to be taken up by the AI industry immediately.

For years, model developers and startups have focused on smaller models, since their size makes them cheaper to build and operate. The thinking was that small models would serve specific tasks.
But what DeepSeek, and potentially OpenAI's o3-mini, demonstrate is that small models can also be generalists.

It's not game over

A coalition of players including Oracle and OpenAI, with cooperation from the White House, announced Stargate, a $500 billion data center project in Texas, the latest in a long, quick procession of large-scale conversion to accelerated computing. The shock from DeepSeek has called that investment into question, and the largest beneficiary, Nvidia, is on a roller coaster as a result. The company's stock plummeted more than 13% on Monday.

But Bernstein said the response is out of step with reality. "DeepSeek DID NOT 'build OpenAI for $5M'," Bernstein analysts wrote in a Monday investor note. The panic, especially on X, is blown out of proportion, the analysts wrote.

DeepSeek's own research paper on V3 explains: "the aforementioned costs include only the official training of DeepSeek-V3, excluding the costs associated with prior research and ablation experiments on architectures, algorithms, or data." So the $5 million figure is only part of the equation.

"The models look fantastic but we don't think they are miracles," Bernstein continued. Last week China also announced a roughly $140 billion investment in data centers, a sign that infrastructure is still needed despite DeepSeek's achievements.

The competition for model supremacy is fierce, and OpenAI's moat may indeed be in question. But demand for chips shows no signs of slowing, according to Bernstein. Tech leaders are circling back to a centuries-old economic adage to explain the moment.

Jevons paradox is the idea that innovation begets demand. As technology gets cheaper or more efficient, demand increases much faster than prices drop. That's what providers of computing power like Davis have been espousing for years. This week, Bernstein and Microsoft CEO Satya Nadella picked up the mantle too. "Jevons paradox strikes again!" Nadella posted on X Monday morning.
"As AI gets more efficient and accessible, we will see its use skyrocket, turning it into a commodity we just can't get enough of," he continued.