• Here's how PC gamers are trolling GPU scalper bots
    www.digitaltrends.com
    In response to the recent launch of Nvidia's latest RTX 5090 and RTX 5080 graphics cards, eBay users have taken matters into their own hands to combat scalpers and automated purchasing bots. By listing fake products, such as photographs or drawings of the RTX 5090 (sometimes humorously framed), sellers aim to deceive bots programmed to acquire these high-demand items for resale at inflated prices.

    These deceptive listings often mimic the appearance of genuine product offers, complete with pricing around the manufacturer's suggested retail price (MSRP) or lower. Subtle warnings like "read description" are included to alert human buyers. Upon reviewing the description, it becomes evident that the listing is for an image or drawing of the graphics card, not the actual hardware. Some sellers even specify that purchasers will receive a digital image via email or a random item from a donation store, clearly indicating the non-physical nature of the product. For instance, one listing asks users to read the description, where it says: "THIS IS JUST A PICTURE OF THE RTX 5090 GRAPHICS CARD PRINTED IN BLACK AND WHITE FROM MY PRINTER. HUMANS DO NOT BUY! ZERO RETURNS! SALES ARE FINAL!"

    This tactic has proven effective. A search for "RTX 5090" on eBay reveals numerous such listings, with some even recording sales. While it's unclear whether these purchases were made by bots or unsuspecting individuals, the prevalence of these fake listings has made it challenging for scalpers to identify genuine products.

    The motivation behind these actions stems from widespread frustration over the limited availability of Nvidia's latest RTX 50-series desktop GPUs, which went on sale just yesterday, January 30. Despite the company's efforts, the RTX 5090 and RTX 5080 have been in short supply since their release, leading to significant demand and prompting enthusiasts to camp outside retailers in hopes of securing a unit.

    Gamers Nexus has done an excellent deep dive on the ongoing RTX 50-series GPU shortages. The video suggests that major retailers such as Micro Center, Best Buy, and Newegg had extremely limited stock that was depleted within hours, leaving many enthusiasts empty-handed.

    In Japan, the situation was equally chaotic. Eager customers were seen hopping the fences of neighboring properties to secure a GPU. To manage overwhelming demand and prevent scalping, a Japanese retailer implemented a lottery system just to buy one GPU. However, the lottery concluded before many attendees arrived, leading to further frustration among consumers.

    Notably, Nvidia acknowledged that its new RTX 5090 and 5080 graphics cards would face significant demand, potentially leading to stock shortages. Tim Adams, Nvidia's head of GeForce community, stated in the company's forums, "We expect significant demand for the GeForce RTX 5090 and 5080 and believe stock-outs may happen." He emphasized that the company and its partners are continuously shipping more stock to retailers to meet consumer demand.

    Reports suggest that these supply challenges may persist for up to three months, exacerbating the situation and giving scalpers more opportunities to exploit the market. While system integrators might offer a way to get an RTX 50-series GPU, expect to pay a premium, and likely end up with extra parts you don't need if you already own a PC.
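    To see why the trick works on automated buyers, consider a hypothetical sketch, in Python, of a naive scalper bot's listing filter. The field names, filter logic, and price threshold are all invented for illustration; real bots are not public, and eBay's actual APIs are not modeled here.

        MSRP = 1999.00  # Nvidia's US list price for the RTX 5090

        def naive_bot_wants(listing: dict) -> bool:
            """A bot that matches on title keywords and price alone."""
            title_hit = "rtx 5090" in listing["title"].lower()
            price_ok = listing["price"] <= MSRP * 1.5  # acceptable scalp margin
            return title_hit and price_ok  # note: the description is never read

        decoy = {
            "title": "NVIDIA GeForce RTX 5090 - READ DESCRIPTION",
            "price": 1899.00,
            "description": "THIS IS JUST A PICTURE OF THE RTX 5090 ...",
        }
        print(naive_bot_wants(decoy))  # True: by title and price, the decoy looks real

    Only a buyer (human or bot) that actually parses the description catches the warning, which is exactly the asymmetry these sellers are exploiting.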
    0 Comments ·0 Shares ·34 Views
  • EU's Big Tech Antitrust Fines Not a Tax, Competition Chief Ribera Says
    www.wsj.com
    Ribera said that regulators are bound by the rule of law and that it is the EU executive's duty to protect consumers. Trump recently lashed out at the EU for its antitrust enforcement against U.S. businesses.
    0 Comments ·0 Shares ·52 Views
  • 10 Books to Read: The Best Reviews of January
    www.wsj.com
    The story of world music, the mystery of what the universe is made of, the tale of a scientific fraud and more.
    0 Comments ·0 Shares ·39 Views
  • Plato and Phocion: Athens Beyond Democracy
    www.wsj.com
    In the complex and often violent political arena of ancient Greece, ideals of civic engagement and self-determination mingled uneasily with the realities of power.
    0 Comments ·0 Shares ·40 Views
  • Buoy meets satellite soulmate in Love Me
    arstechnica.com
    A postapocalyptic love story about transformation: Ars chats with directors Andy and Sam Zuchero and props department head Roberts Cifersons. Kristen Stewart and Steven Yeun star in Love Me. Credit: Bleecker Street

    There have been a lot of films and television series exploring sentient AI, consciousness, and identity, but there's rarely been quite such a unique take on those themes as that provided by Love Me, the first feature film from directors Andy and Sam Zuchero. The film premiered at Sundance last year, where it won the prestigious Alfred P. Sloan Feature Film Prize, and is now getting a theatrical release. (Some spoilers below.)

    The film is set long after humans and all other life forms have disappeared from the Earth, leaving just remnants of our global civilization behind. Kristen Stewart plays one of those remnants: a little yellow SMART buoy we first see trapped in ice in a desolate landscape. The buoy has achieved a rudimentary sentience, sufficient to respond to the recorded message being beamed out by an orbiting satellite (Steven Yeun) overhead to detect any new lifeforms that might appear. Eager to have a friend, even one that's basically a sophisticated space chatbot, the buoy studies the vast online database of information about humanity on Earth the satellite provides. It homes in on YouTube influencers Deja and Liam (also played by Stewart and Yeun), presenting itself to the satellite as a lifeform named Me.

    Over time (a LOT of time), the buoy and satellite (now going by Iam) "meet" in virtual space and take on humanoid avatars. They become increasingly advanced in their consciousness, exchanging eccentric inspirational memes, re-enacting the YouTubers' "date night," and eventually falling in love. But the course of true love doesn't always run smooth, even for the last sentient beings on Earth, especially since Me has not been honest with Iam about her true nature.

    At its core, Love Me is less pure sci-fi and more a postapocalyptic love story about transformation. "We really wanted to make a movie that made everyone feel big and small at the same time," Sam Zuchero told Ars. "So the timescale is gigantic, 13 billion years of the universe. But we wanted to make the love story at its core feel fleeting and explosive, as first love feels so often."

    The film adopts an unusual narrative structure. It's split into three distinct visual styles: practical animatronics, classical animation augmented with motion capture, and live action, each representing the development of the main characters as they discover themselves and each other, becoming more and more human as the eons pass. At the time, the couple had been watching a lot of Miyazaki films with their young son. "We were really inspired by how he would take his characters through so many different forms," Andy Zuchero told Ars. "It's a different feeling than a lot of Western films. It was exciting to change the medium of the movie as the characters progressed. The medium grows until it's finally live action."
    The 1959 film Pillow Talk was another source of inspiration, since a good chunk of that film simply features stars Rock Hudson and Doris Day chatting in a split screen over their shared party line, what Andy calls "the early 20th century's version of an open Zoom meeting."

    Building the buoy

    The sequences with the SMART buoy were shot on location using animatronic robots, while the satellite scenes were shot on a soundstage. Credit: Bleecker Street

    One can't help but see shades of WALL-E in the plucky little space buoy's design, but the basic concept of what Me should look like came from actual nautical buoys, per props department head Roberts Cifersons of Laird FX, who created the animatronic robots for the film. "As far as the general shape and style of both the buoy and our satellite, most of it came from our production designer," he told Ars. "We just walked around the shop and looked at 1,000 different materials and samples, imagining what could be believable in the future, but still rooted somewhat in reality. What it would look like if it had been floating there for tens of thousands of years, and if it were actually stuck in ice, what parts would be damaged or not working?"

    Cifersons and his team also had to figure out how to bring character and life to their robotic buoy. "We knew the eye or the iris would be the key aspect of it, so that was something we started fooling around with well before we even had the whole design: colors, textures, motion," he said. They ended up building four different versions: the floating "hero buoy," a dummy version with lighting but limited animatronics, a bisected buoy for scenes where it is sitting in ice, and a "skeleton" buoy for later in the film.

    "All of those had a brain system that we could control whatever axes and motors and lights and stuff were in each, and we could just flip between them," said Cifersons. "There were nine or 10 separate motor controllers. So the waist could rotate in the water, because it would have to be able to be positioned to camera. We could rotate the head, we could tilt the head up and down, or at least the center eye would tilt up and down. The iris would open and close." They could also control the rotation of the antenna to ensure it was always facing the same way.

    It's always a challenge designing for film because of time and budget constraints. In the case of Love Me, Cifersons and his team only had two months to make their four buoys. In such a case, "We know we can't get too deep down the custom rabbit hole; we have to stick with materials that we know on some level and just balance it out," he said. "Because at the end of the day it has to look like an old rusted buoy floating in the ocean."

    It helped that Cifersons had a long Hollywood history of animatronics to build upon. "That's the only way it's possible to do that in the crazy film timelines that we have," he said. "We can't start from scratch every single time; we have to build on what we have." His company had timeline-based software to program the robots' motions according to the directors' instructions and play it back in real time.
    His team also developed hardware to give them the ability to completely pre-record a set of motions and play it back. "Joysticks and RC remotes are really the bread and butter of current animatronics, for film at least," he said. "So we were able to blend more theme park animatronic software with on-the-day filming style."

    On location

    Once the robots had been completed, the directors and crew spent several days shooting on location in February on a frozen Lake Abraham in Alberta, Canada, or rather, several nights, when the temperatures dipped to -20° F. "Some of the crew were refusing to come onto the ice because it was so intense," Sam Zuchero recalled. They also shot scenes with the buoy floating on water in the Salish Sea off the coast of Vancouver, which Andy Zuchero described as "a queasy experience. Looking at the monitor when you're on a boat is nauseating."

    Later sequences were shot amid the sand dunes of Death Valley, with the robot surrounded by bentonite clay strewn with 65 million-year-old fossilized sea creatures. The footage of the satellite was shot on a soundstage, using NASA imagery on a black screen.

    YouTube influencers Deja and Liam become role models for the buoy and satellite; Me and Iam meet in virtual space and attempt to recreate the influencers' "Date Night." Credit: Bleecker Street

    Cifersons had his own challenges with the robot buoys, such as getting batteries to last more than 10 seconds in the cold and withstanding high temperatures for the desert shoot. "We had to figure out a fast way to change batteries that would last long enough to get a decent wide shot," he said. "We ended up giving each buoy their own power regulators so we could put in any type of battery if we had to get it going. We could hardwire some of them if we had to. And then in the desert, electronics hate hot weather, and there's little microcontrollers and all sorts of hardware that doesn't want to play well in the hot sun. You have to design around it knowing that those are the situations it's going into."

    The animated sequences presented a different challenge. The Zucheros decided to put their stars into motion-capture suits to film those scenes, using video game engines to render avatars similar to what one might find in The Sims. However, "I think we were drinking a little bit of the AI technological Kool-Aid when we started," Andy Zuchero admitted. That approach produced animated versions of Stewart and Yeun that "felt stilted, robotic, a bit dead," he said. "The subtlety that Kristen and Steven often bring ended up feeling, in this form, almost lifeless." So they relied upon human animators to "artfully interpret" the actors' performances into what we end up seeing onscreen.

    This approach "also allowed us to base the characters off their choices," said Sam Zuchero. "Usually an animated character is the animator. It's very connected to who the animator is and how the animator moves and thinks. There's a language of animation that we've developed over the past 100 years, things like anticipation. If you're going to run forward, you have to pull back first.
    These little signals that we've all come to understand as the language of animation have to be built into a lot of choices. But when you have the motion capture data of the actors and their intentions, you can truly create a character that is them. It's not just an animator's body in motion and an actor's voice with some tics of the actor. It is truly the actors."

    Love Me opens in select theaters today.
    0 Comments ·0 Shares ·39 Views
  • Dell risks employee retention by forcing all teams back into offices full-time
    arstechnica.com
    Dell is calling much of its workforce back into the office five days a week starting on March 3. The technology giant is framing the mandate as a business strategy, but there's reason to believe the policy may drive employee turnover.

    Business Insider detailed an internal memo today from CEO and Chairman Michael Dell informing workers that if they live within an hour of a Dell office, they'll have to go in five days a week. "What we're finding is that for all the technology in the world, nothing is faster than the speed of human interaction," Dell wrote, per Business Insider. "A thirty-second conversation can replace an email back-and-forth that goes on for hours or even days."

    The publication said that those living further from an office can continue to work remotely. It's possible that employees could try to move to avoid the mandate, but it's unclear how that might work after it takes effect. Business Insider reported that Dell's note asked workers to hold questions for now because the company is "still working through details, and additional information will be available soon."

    Adding further complication, Dell is said to have previously made fully remote workers ineligible for promotion. This has implications for the type of talent that will be able to rise within the company. Deena Merlen, partner at Reavis Page Jump LLP, an employment, dispute resolution, and media law firm, explained: "If only those who are willing and able to come into the office will get promoted, Dell will be left with a workforce where this qualification impacts the talent pool. Dell may be missing out on some great talent because of this added requirement."

    Merlen also noted that Dell's policies must account for workers whose disabilities necessitate remote work. If it failed to comply with workplace anti-discrimination laws, Dell would face "exposure to liability to the extent remote workers who are otherwise qualified are passed over for promotion because their disability requires them to work remotely as a reasonable accommodation," she told Ars.

    Similarly to Amazon, which issued a five-day return-to-office (RTO) mandate for corporate employees effective this month, Dell's RTO mandate may face real estate obstacles, which it would address in February, according to Business Insider.

    In a statement to Ars, Dell's PR team said: "We continually evolve our business so we're set up to deliver the best innovation, value, and service to our customers and partners. That includes more in-person connections to drive market leadership."

    The road to full RTO

    After Dell allowed employees to work from home two days per week, Dell's sales team in March became the first department to order employees back into offices full-time. At the time, Dell said it had data showing that salespeople are more productive on site.
    Dell corporate strategy SVP Vivek Mohindra said last month that sales' RTO brought huge benefits in "learning from each other, training, and mentorship." The company's manufacturing teams, engineers in the labs, onsite team members, and leaders had also previously been called into offices full-time, Business Insider reported today.

    Since February, Dell has been among the organizations pushing for more in-person work, with reported efforts including VPN and badge tracking.

    Risking personnel

    Like other organizations, Dell risks losing employees by implementing a divisive mandate. For Dell specifically, internal tracking data reportedly found that nearly half of workers already opted for remote work over being eligible for promotions or new roles, according to a September Business Insider report.

    Research has suggested that companies that issue RTO mandates subsequently lose some of their best talent. A November research paper (PDF) from researchers at the University of Pittsburgh, Baylor University, The Chinese University of Hong Kong, and Cheung Kong Graduate School of Business, citing LinkedIn data, found this to be particularly true for high-tech and financial firms. The researchers concluded that turnover rates increased by 14 percent on average after companies issued RTO policies. This research, in addition to other studies, has also found that companies with in-office work mandates are especially at risk of losing senior-level employees.

    Some analysts don't believe Dell is in danger of a mass exodus, though. Bob O'Donnell, president and chief analyst at Technalysis Research, told Business Insider in December, "It's not like I think Dell's going to lose a whole bunch of people to HP or Lenovo." Patrick Moorhead, CEO and chief analyst at Moor Insights & Strategy, said he believes RTO would be particularly beneficial to Dell's product development.

    Still, some workers have accused Dell of using RTO policies to try to reduce headcount. There's no proof of this, but broader research, including commentary from various company executives outside of Dell, has shown that some companies have used RTO policies to try to get people to quit.

    Dell declined to comment to Ars Technica about potential employee blowback.
    0 Comments ·0 Shares ·40 Views
  • How DeepSeek ripped up the AI playbook, and why everyone's going to follow its lead
    www.technologyreview.com
    When the Chinese firm DeepSeek dropped a large language model called R1 last week, it sent shock waves through the US tech industry. Not only did R1 match the best of the homegrown competition, it was built for a fraction of the cost, and given away for free. The US stock market lost $1 trillion, President Trump called it a "wake-up call," and the hype was dialed up yet again. "DeepSeek R1 is one of the most amazing and impressive breakthroughs I've ever seen, and as open source, a profound gift to the world," Silicon Valley's kingpin investor Marc Andreessen posted on X.

    But DeepSeek's innovations are not the only takeaway here. By publishing details about how R1 and a previous model called V3 were built and releasing the models for free, DeepSeek has pulled back the curtain to reveal that reasoning models are a lot easier to build than people thought. The company has closed the gap on the world's very top labs.

    The news kicked competitors everywhere into gear. This week, the Chinese tech giant Alibaba announced a new version of its large language model Qwen, and the Allen Institute for AI (AI2), a top US nonprofit lab, announced an update to its large language model Tulu. Both claim that their latest models beat DeepSeek's equivalent. Sam Altman, cofounder and CEO of OpenAI, called R1 "impressive," for the price, but hit back with a bullish promise: "We will obviously deliver much better models." OpenAI then pushed out ChatGPT Gov, a version of its chatbot tailored to the security needs of US government agencies, in an apparent nod to concerns that DeepSeek's app was sending data to China. There's more to come.

    DeepSeek has suddenly become the company to beat. What exactly did it do to rattle the tech world so fully? Is the hype justified? And what can we learn from the buzz about what's coming next? Here's what you need to know.

    Training steps

    Let's start by unpacking how large language models are trained. There are two main stages, known as pretraining and post-training.

    Pretraining is the stage most people talk about. In this process, billions of documents (huge numbers of websites, books, code repositories, and more) are fed into a neural network over and over again until it learns to generate text that looks like its source material, one word at a time. What you end up with is known as a base model. Pretraining is where most of the work happens, and it can cost huge amounts of money. But as Andrej Karpathy, a cofounder of OpenAI and former head of AI at Tesla, noted in a talk at Microsoft Build last year: "Base models are not assistants. They just want to complete internet documents."

    Turning a large language model into a useful tool takes a number of extra steps. This is the post-training stage, where the model learns to do specific tasks like answer questions (or answer questions step by step, as with OpenAI's o3 and DeepSeek's R1). The way this has been done for the last few years is to take a base model and train it to mimic examples of question-answer pairs provided by armies of human testers. This step is known as supervised fine-tuning. OpenAI then pioneered yet another step, in which sample answers from the model are scored, again by human testers, and those scores used to train the model to produce future answers more like those that score well and less like those that don't. This technique, known as reinforcement learning with human feedback (RLHF), is what makes chatbots like ChatGPT so slick. RLHF is now used across the industry. But those post-training steps take time.
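    As a rough mental model of those stages, here is a toy sketch in Python. Every function is a stand-in (real pipelines operate on token sequences and gradient updates, not strings), and all the names are ours, not any lab's.

        def pretrain(corpus: list[str]) -> str:
            # Billions of documents in, a base model out: a next-word predictor.
            return f"base model ({len(corpus)} docs)"

        def supervised_fine_tune(model: str, qa_pairs: list[tuple[str, str]]) -> str:
            # Human-written question-answer pairs teach the base model to act
            # like an assistant instead of just completing internet text.
            return f"{model} + SFT ({len(qa_pairs)} Q/A pairs)"

        def rlhf(model: str, scored_samples: dict[str, float]) -> str:
            # Humans score sampled answers; the model is nudged toward
            # producing answers that resemble the high scorers.
            return f"{model} + RLHF ({len(scored_samples)} scored samples)"

        model = pretrain(["doc"] * 1_000_000)   # most of the cost lives here
        model = supervised_fine_tune(model, [("What is 2+2?", "4")])
        model = rlhf(model, {"sample answer": 0.9})
        print(model)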
    What DeepSeek has shown is that you can get the same results without using people at all, at least most of the time. DeepSeek replaces supervised fine-tuning and RLHF with a reinforcement-learning step that is fully automated. Instead of using human feedback to steer its models, the firm uses feedback scores produced by a computer. "Skipping or cutting down on human feedback, that's a big thing," says Itamar Friedman, a former research director at Alibaba and now cofounder and CEO of Qodo, an AI coding startup based in Israel. "You're almost completely training models without humans needing to do the labor."

    Cheap labor

    The downside of this approach is that computers are good at scoring answers to questions about math and code but not very good at scoring answers to open-ended or more subjective questions. That's why R1 performs especially well on math and code tests. To train its models to answer a wider range of non-math questions or perform creative tasks, DeepSeek still has to ask people to provide the feedback. But even that is cheaper in China. "Relative to Western markets, the cost to create high-quality data is lower in China and there is a larger talent pool with university qualifications in math, programming, or engineering fields," says Si Chen, a vice president at the Australian AI firm Appen and a former head of strategy at both Amazon Web Services China and the Chinese tech giant Tencent.

    DeepSeek used this approach to build a base model, called V3, that rivals OpenAI's flagship model GPT-4o. The firm released V3 a month ago. Last week's R1, the new model that matches OpenAI's o1, was built on top of V3.

    To build R1, DeepSeek took V3 and ran its reinforcement-learning loop over and over again. In 2016 Google DeepMind showed that this kind of automated trial-and-error approach, with no human input, could take a board-game-playing model that made random moves and train it to beat grand masters. DeepSeek does something similar with large language models: potential answers are treated as possible moves in a game. To start with, the model did not produce answers that worked through a question step by step, as DeepSeek wanted. But by scoring the model's sample answers automatically, the training process nudged it bit by bit toward the desired behavior.

    Eventually, DeepSeek produced a model that performed well on a number of benchmarks. But this model, called R1-Zero, gave answers that were hard to read and were written in a mix of multiple languages. To give it one last tweak, DeepSeek seeded the reinforcement-learning process with a small data set of example responses provided by people. Training R1-Zero on those produced the model that DeepSeek named R1.

    There's more. To make its use of reinforcement learning as efficient as possible, DeepSeek has also developed a new algorithm called Group Relative Policy Optimization (GRPO). It first used GRPO a year ago, to build a model called DeepSeekMath. We'll skip the details; you just need to know that reinforcement learning involves calculating a score to determine whether a potential move is good or bad. Many existing reinforcement-learning techniques require a whole separate model to make this calculation. In the case of large language models, that means a second model that could be as expensive to build and run as the first. Instead of using a second model to predict a score, GRPO just makes an educated guess. It's cheap, but still accurate enough to work.
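    The published GRPO objective has more machinery (clipping and a KL penalty against a reference model), but its core idea, scoring each sampled answer relative to its own group so that no separate critic model is needed, fits in a few lines. Here is a minimal sketch assuming a toy rule-based reward; the function names are ours.

        import math

        def rule_based_reward(answer: str, ground_truth: str) -> float:
            # Automated scoring, no human in the loop. This works well for
            # math and code, which is why R1 shines on those tasks.
            return 1.0 if answer.strip() == ground_truth else 0.0

        def group_relative_advantages(rewards: list[float]) -> list[float]:
            # GRPO's trick: normalize each reward against the group's mean and
            # standard deviation instead of querying a learned value model.
            mean = sum(rewards) / len(rewards)
            std = math.sqrt(sum((r - mean) ** 2 for r in rewards) / len(rewards))
            std = std if std > 0 else 1.0  # guard against identical rewards
            return [(r - mean) / std for r in rewards]

        # The policy samples a group of answers to one prompt: "What is 17 + 25?"
        group = ["42", "41", "42", "17"]
        rewards = [rule_based_reward(a, "42") for a in group]
        print(group_relative_advantages(rewards))  # [1.0, -1.0, 1.0, -1.0]

    On the next policy update, answers with positive advantages are reinforced and those with negative advantages are discouraged, with no second model anywhere in the loop.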
    A common approach

    DeepSeek's use of reinforcement learning is the main innovation that the company describes in its R1 paper. But DeepSeek is not the only firm experimenting with this technique. Two weeks before R1 dropped, a team at Microsoft Asia announced a model called rStar-Math, which was trained in a similar way. "It has similarly huge leaps in performance," says Matt Zeiler, founder and CEO of the AI firm Clarifai.

    AI2's Tulu was also built using efficient reinforcement-learning techniques (but on top of, not instead of, human-led steps like supervised fine-tuning and RLHF). And the US firm Hugging Face is racing to replicate R1 with OpenR1, a clone of DeepSeek's model that Hugging Face hopes will expose even more of the ingredients in R1's special sauce.

    What's more, it's an open secret that top firms like OpenAI, Google DeepMind, and Anthropic may already be using their own versions of DeepSeek's approach to train their new generation of models. "I'm sure they're doing almost the exact same thing, but they'll have their own flavor of it," says Zeiler.

    But DeepSeek has more than one trick up its sleeve. It trained its base model V3 to do something called multi-token prediction, where the model learns to predict a string of words at once instead of one at a time. This training is cheaper and turns out to boost accuracy as well. "If you think about how you speak, when you're halfway through a sentence, you know what the rest of the sentence is going to be," says Zeiler. "These models should be capable of that too."

    It has also found cheaper ways to create large data sets. To train last year's model, DeepSeekMath, it took a free data set called Common Crawl (a huge number of documents scraped from the internet) and used an automated process to extract just the documents that included math problems. This was far cheaper than building a new data set of math problems by hand. It was also more effective: Common Crawl includes a lot more math than any other specialist math data set that's available.
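    DeepSeek has not published its filter in this form, but the approach is easy to picture. Here is a minimal sketch of that kind of automated extraction, using an invented keyword-and-symbol heuristic in place of whatever classifier the firm actually used.

        import re

        # Crude signals that a web document contains math: LaTeX commands,
        # arithmetic expressions, and common proof vocabulary. Illustrative only.
        MATH_SIGNALS = re.compile(
            r"(\\frac|\\sum|\\int|\d+\s*[+\-*/=]\s*\d+|theorem|lemma|solve for)",
            re.IGNORECASE,
        )

        def looks_like_math(document: str, threshold: int = 3) -> bool:
            """Keep a document if it contains enough math-like patterns."""
            return len(MATH_SIGNALS.findall(document)) >= threshold

        corpus = [
            "Solve for x: 2x + 3 = 11. By the theorem above, x = 4 since 8 + 3 = 11.",
            "Top ten travel destinations for summer...",
        ]
        math_docs = [doc for doc in corpus if looks_like_math(doc)]
        print(len(math_docs))  # 1: only the math-heavy document survives

    Run over billions of Common Crawl documents, even a cheap filter like this yields a large math corpus without anyone writing problems by hand.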
    And on the hardware side, DeepSeek has found new ways to juice old chips, allowing it to train top-tier models without coughing up for the latest hardware on the market. "Half their innovation comes from straight engineering," says Zeiler. "They definitely have some really, really good GPU engineers on that team."

    Nvidia provides software called CUDA that engineers use to tweak the settings of their chips. But DeepSeek bypassed this code using assembler, a programming language that talks to the hardware itself, to go far beyond what Nvidia offers out of the box. "That's as hardcore as it gets in optimizing these things," says Zeiler. "You can do it, but basically it's so difficult that nobody does."

    DeepSeek's string of innovations on multiple models is impressive. But it also shows that the firm's claim to have spent less than $6 million to train V3 is not the whole story. R1 and V3 were built on a stack of existing tech. "Maybe the very last step, the last click of the button, cost them $6 million, but the research that led up to that probably cost 10 times as much, if not more," says Friedman. And in a blog post that cut through a lot of the hype, Anthropic cofounder and CEO Dario Amodei pointed out that DeepSeek probably has around $1 billion worth of chips, an estimate based on reports that the firm in fact used 50,000 Nvidia H100 GPUs.

    A new paradigm

    But why now? There are hundreds of startups around the world trying to build the next big thing. Why have we seen a string of reasoning models like OpenAI's o1 and o3, Google DeepMind's Gemini 2.0 Flash Thinking, and now R1 appear within weeks of each other? The answer is that the base models (GPT-4o, Gemini 2.0, V3) are all now good enough to have reasoning-like behavior coaxed out of them. "What R1 shows is that with a strong enough base model, reinforcement learning is sufficient to elicit reasoning from a language model without any human supervision," says Lewis Tunstall, a scientist at Hugging Face.

    In other words, top US firms may have figured out how to do it but were keeping quiet. "It seems that there's a clever way of taking your base model, your pretrained model, and turning it into a much more capable reasoning model," says Zeiler. "And up to this point, the procedure that was required for converting a pretrained model into a reasoning model wasn't well known. It wasn't public."

    What's different about R1 is that DeepSeek published how they did it. "And it turns out that it's not that expensive a process," says Zeiler. "The hard part is getting that pretrained model in the first place." As Karpathy revealed at Microsoft Build last year, pretraining a model represents 99% of the work and most of the cost.

    If building reasoning models is not as hard as people thought, we can expect a proliferation of free models that are far more capable than we've yet seen. With the know-how out in the open, Friedman thinks, there will be more collaboration between small companies, blunting the edge that the biggest companies have enjoyed. "I think this could be a monumental moment," he says.
    0 Comments ·0 Shares ·40 Views
  • Photos of Melania Trump's Slovenia hometown show her humble beginnings
    www.businessinsider.com
    Melania Trump grew up in Sevnica, Slovenia.
    (Sevnica, Slovenia. JURE MAKOVEC/AFP via Getty Images)

    Trump was born in Novo Mesto, Slovenia, on April 26, 1970. She spent her childhood in Sevnica, a small town 30 miles away. Sevnica is located along the Sava River in central Slovenia. It has a population of 17,611, according to data collected by the Statistical Office of the Republic of Slovenia in 2022.

    When Melania Trump was born, Slovenia was part of the communist country of Yugoslavia, ruled by President Josip Broz Tito. Slovenia became independent in 1991.

    Before Trump's rise to fame as a model and then FLOTUS, Sevnica was known for its furniture and clothing factories, as well as its annual salami festival. Sevnica produces over 150 kinds of salami, a feat celebrated at its annual Salamiada festival. Now, Sevnica produces a salami named after the first lady.
    (A salami named "First Lady" in Sevnica, Slovenia. Srdjan Zivulovic/Reuters)

    Sevnica's tourism doubled in the year before Donald Trump took office as interest in Melania Trump grew, a tour guide told Reuters in January 2017. For 2017 as a whole, the number of foreigners visiting Slovenia jumped 17% compared to the previous year, with a total of 3.4 million visitors, Reuters reported in January 2018. The small town capitalized on its claim to fame as the former FLOTUS' hometown, offering tours, foods, and souvenirs named after her.

    As a child, Trump, then known as Melanija Knavs, lived in a block of Communist-era apartments. Her father, Viktor Knavs, worked as a car salesman. Trump has one sister, Ines Knauss, and a half-brother, Denis Cigelnjak.

    Trump attended Sevnica's Savo Kladnik Elementary School. Mirjana Jelancic, a friend of Trump's who went on to become principal of the school, told ABC News in 2016 that the young Trump was "an angel" and "a very good student." Her family later moved to a modest house on Ribniki Street. When Trump and her sister, Ines, were in high school, the Knavs family moved to Ljubljana, Slovenia's capital. There, Trump was scouted by photographer Stane Jerko and signed with a modeling agency when she was 18.

    Trump remained connected to her hometown over the years, donating $25,000 to a medical center there in 2005. Trump made the donation after her wedding in 2005, The New York Times reported.

    Residents of Sevnica celebrated President Donald Trump's victory in the 2016 election. Trump is the second first lady born outside the US; the first was John Quincy Adams' wife, Louisa Catherine Adams, who was born in London. Sevnica locals also gathered to watch both of Donald Trump's inaugurations. The Rotary Klub Sevnica held events in honor of Donald Trump's inauguration in 2017 and 2025.
    Proceeds from the 2025 event went toward a youth scholarship fund, the Slovenian outlet Dolenjski List reported.

    American artist Brad Downey commissioned a monument of Trump from Slovenian sculptor Ales "Maxi" Zupevc in 2019 that was erected in a field outside Sevnica. The wooden statue, modeled after Trump's blue Ralph Lauren inauguration dress, garnered mixed reviews. A bronze statue replaced the original wooden one after it was vandalized and burned in 2020. A plaque at the site says the new bronze statue is "dedicated to the eternal memory of a monument to Melania which stood in this location from 2019-2020."
    0 Comments ·0 Shares ·39 Views
  • This startup is bringing AI agents to banks and money managers including UBS, Blue Owl Capital, and T. Rowe Price. Here's the deck it used to raise $8 million.
    www.businessinsider.com
    Auquan was founded in 2018 by Chandini Jain, a former analyst and derivatives trader. It launched an AI product in late 2023 that can automate research work usually done by analysts. It has since inked deals with UBS, Blue Owl Capital, and T. Rowe Price.

    The wave of fintechs building autonomous agents for finance firms, designed to do the work of a junior analyst or banker, is swelling. One such startup, Auquan, uses generative AI to automate the time-consuming but ubiquitous task of gathering and processing data and putting that information into a written template, like a due diligence report, an investment committee memo, or a pitch book.

    Demand for this kind of technology is heating up in the finance industry, which is often bogged down with manual processes around data management, processing, and analysis. Since launching in October 2023, Auquan has brought in close to $2 million in annual recurring revenue and secured UBS, T. Rowe Price, and Blue Owl Capital as customers, according to CEO and cofounder Chandini Jain.

    From banking to software engineering to research, finance firms are keen to implement AI assistants that can carry out multi-step processes. While it's still too early to tell just how much the technology will impact adopters' bottom lines, venture-capital investors are writing checks to get in on the ground floor of what some industry leaders are calling a revolutionary technology.

    In addition to scoring big-name clients and its revenue stream last year, Auquan also closed $8 million in seed funding. The round was led by Peak XV and included Neotribe Ventures.

    Auquan has made inroads with various divisions at financial firms, from private-market investing to investment banking, as well as risk and compliance, and investor relations, Jain said. Across those functions, it's most heavily used by analysts or associates to produce documents or templates for their MDs, partners, or division heads. Due diligence reports are a big use case for Auquan. The startup automates the creation of 3,300 due diligence reports for 20 different clients, saving them a cumulative 55,000 hours of work, according to customer estimates.

    How Auquan works behind the scenes

    Auquan is built to try to mimic the humans whose jobs it is doing, Jain said. The first step is accessing the raw data. Auquan pulls data from providers like FactSet, CapIQ, and PitchBook, as well as public data sets from government agencies and news sites. It can also plug into a client's internal file systems.

    The second part involves the user stating an intent with an example, such as "I want to create an investment committee memo and I want it to come out looking like this template document," Jain said. Under the hood, the tech relies on an "agent super orchestrator" that breaks down the specific jobs to be done and organizes several "mini agents" to take on each of those jobs, Jain said.

    In the investment committee memo example, there might be an agent that identifies the fields that need to be filled, another to run searches on underlying vendor data, another that scans public data, and a writing agent that takes all of the info and puts it into a company-specific format based on the template, like whether a section should be presented in bullet points or a table. It's exported in the desired interface, such as a PowerPoint presentation or a Google Doc with the proper corporate branding. All of this happens automatically without human intervention, Jain said. The first draft is presented to the user as a starting point.
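    Auquan has not published its implementation, so the following Python sketch only illustrates the shape of that orchestrator-plus-mini-agents workflow described above; every name in it is hypothetical.

        def find_template_fields(template: str) -> list[str]:
            # Mini agent 1: identify which sections the template needs.
            return ["overview", "risks", "financials"]

        def search_vendor_data(field: str) -> str:
            # Mini agent 2: query underlying data providers for one field.
            return f"(vendor data for {field})"

        def write_section(field: str, data: str, style: str) -> str:
            # Mini agent 3: render the gathered data in the house format.
            return f"{field.upper()} [{style}]: {data}"

        def super_orchestrator(intent: str, template: str) -> str:
            # Break the user's stated intent into jobs and hand each to a mini
            # agent; in a real system these would be LLM calls, not plain functions.
            sections = [
                write_section(field, search_vendor_data(field), style="bullet points")
                for field in find_template_fields(template)
            ]
            return f"Draft for: {intent}\n" + "\n".join(sections)

        draft = super_orchestrator(
            intent="create an investment committee memo",
            template="icm_template.docx",  # hypothetical template file
        )
        print(draft)  # the first draft, handed to the user as a starting point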
    The user can make edits and tweaks for future documents, she said. The agent super orchestrator will assign new mini agents as needed, she added.

    Pricing for Auquan is based on clients' estimated desired outcomes, Jain said. Examples of outcomes are producing one slide deck, one report, or one compliance check. Once the client chooses what workflow to automate, Auquan charges a dollar amount for that outcome and multiplies it by how many times that process is expected to run, she said.

    Too much data, not enough people

    Jain knows firsthand how labor-intensive it is to extract insights from data. Before Auquan, she worked as an analyst at Deutsche Bank and a derivatives trader at the Dutch market maker and proprietary trading firm Optiver, where she was drowning in information with not enough time or help to distill it. "If I or anyone on my team could make the case for why we thought any data set would help us make better decisions, we could buy it, no questions asked," Jain said. "What we didn't have a lot of was resources or time to go through that information," she said.

    She would learn from conversations with financial clients that she wasn't alone in that problem. The broad applicability has won over investors. Here's the pitch deck Auquan used to raise $8 million.
    0 Comments ·0 Shares ·40 Views
  • The Sims and The Sims 2 are available now in new legacy collections
    metro.co.uk
    Torment your sims like it's 2004 (EA)

    EA has re-released the first two games in The Sims series on PC, with almost every slice of DLC included. After several hints on social media, EA has officially announced the return of The Sims and The Sims 2. To mark the series' 25th anniversary (which is technically next week, on February 4), both games have been re-released today on PC in new Legacy Collection bundles across the EA app, Epic Games Store, and Steam.

    Both collections, which contain basically all their respective DLC, have been updated to run on Windows 10 and 11, but they're essentially the same games, so don't expect any big quality-of-life improvements. The Sims: Legacy Collection costs £17.99 and The Sims 2: Legacy Collection is priced at £24.99. If you want both games, a 25th Birthday Bundle is priced at £34.99, saving you £8.

    The Sims: Legacy Collection contains all the DLC for the original game, but there's one DLC pack missing from The Sims 2: Legacy Collection, the IKEA Home Stuff DLC, presumably for licensing reasons. You can check out a list of all the confirmed DLC packs below.

    For those who spent hours in the original games, you can copy over old save files into these new versions, along with original mods. While these re-released collections aren't the remasters some might have hoped for, this is the first time the original Sims has been released digitally, so it's nice to have a version which is easily accessible at least.

    EA has other The Sims projects in the works. The MySims: Cozy Bundle, which is already available on Switch, is set to come to PC on March 18, 2025, and while there won't be The Sims 5, developer Maxis is working on a multiplayer spin-off codenamed Project Rene. These Legacy Collections are only the beginning of the 25th anniversary celebrations, with a special livestream set to air on February 4, which promises updates for The Sims 4.

    The Sims and The Sims 2 Legacy Collection DLC list

    The Sims: Legacy Collection
    The Sims: Livin' Large
    The Sims: House Party
    The Sims: Hot Date
    The Sims: Vacation
    The Sims: Unleashed
    The Sims: Superstar
    The Sims: Makin' Magic
    The Sims 4: Throwback Fit Kit

    The Sims 2: Legacy Collection
    The Sims 2: University
    The Sims 2: Nightlife
    The Sims 2: Open for Business
    The Sims 2: Pets
    The Sims 2: Bon Voyage
    The Sims 2: Seasons
    The Sims 2: FreeTime
    The Sims 2: Apartment Life
    The Sims 2: Holiday Party Pack
    The Sims 2: Family Fun Stuff
    The Sims 2: Glamour Life Stuff
    The Sims 2: Happy Holiday Stuff
    The Sims 2: Celebration! Stuff
    The Sims 2: H&M Fashion Stuff
    The Sims 2: Teen Style Stuff
    The Sims 2: Kitchen & Bath Interior Design Stuff
    The Sims 2: Mansion & Garden Stuff
    The Sims 4: Grunge Revival Kit
    0 Comments ·0 Shares ·53 Views