Latest Updates
-
Nintendo Switch 2 teardown confirms Nvidia Tegra T239 chip, SK Hynix memory, and other details

TL;DR: A teardown of the newly announced Nintendo Switch 2 has seemingly confirmed key hardware specs that have yet to be officially announced. According to screenshots published by a reliable hardware modder, the device uses an Nvidia Tegra processor and SK Hynix memory.

The teardown was performed by YouTuber and X user @KurnalSalts, known for his deep dive videos on Arm chips used in smartphones, laptops, AR headsets, and other gadgets. According to his since-deleted post, the Switch 2 is powered by Nvidia's Tegra T239 SoC, which combines one Arm Cortex-X1 high-performance core, three Cortex-A78 performance cores, and four Cortex-A55 efficiency cores, paired with a custom Ampere-based GPU with 12 SMs and 1,536 CUDA cores.

The tipster also revealed that the Switch 2 uses memory modules from SK Hynix, though the exact memory configuration remains unconfirmed. Earlier rumors suggested that it uses 12GB of LPDDR5 RAM in dual-channel mode with a 128-bit memory interface. The teardown also revealed a 256GB UFS 3.1 flash storage module from SK Hynix and what looks like a Wi-Fi chip from MediaTek.

The modder says he will publish his trademark deep dive into the Tegra chip and the rest of the hardware in the near future. Hardware enthusiasts are hoping the promised video will reveal more details about the Switch 2, including the process node used to manufacture the Nvidia SoC. The processor is believed to be made by Samsung, but there has been significant speculation about whether it's based on the company's 8nm DUV foundry node or the newer 5nm EUV process. While the Digital Foundry YouTube channel is doubling down on the 8nm rumors, some Nintendo communities on Reddit and X believe that Nvidia switched to the 5nm technology for its new SoC.

Nintendo announced the Switch 2 in January before sharing more details earlier this month. The new console features multiple hardware and software upgrades over its predecessor, including a bigger display, improved controls, enhanced audio, and 4K output for TV. Priced at $449.99, it goes on pre-order today at Best Buy, Target, Walmart, and GameStop, and is slated to hit store shelves in North America on June 5.
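The leaked GPU figures are at least internally consistent: Nvidia's Ampere architecture allocates 128 CUDA cores per streaming multiprocessor, so 12 SMs work out to exactly the 1,536 cores the post claims. A quick illustrative check:

```ts
// Sanity check on the leaked Tegra T239 GPU figures: Ampere packs
// 128 CUDA cores into each streaming multiprocessor (SM).
const CUDA_CORES_PER_AMPERE_SM = 128;
const reportedSMs = 12;

const derivedCores = reportedSMs * CUDA_CORES_PER_AMPERE_SM;
console.log(derivedCores);          // 1536
console.log(derivedCores === 1536); // true -- matches the leak
```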
-
Nintendo files DMCA subpoena ordering Discord to identify "Teraleak" Pokémon leaker

What just happened? Nintendo, probably the most litigious games company in the world, has requested a DMCA subpoena ordering Discord to reveal the identity of the person behind last year's Pokémon "Teraleak." The leaker allegedly hacked Pokémon developer Game Freak and posted a slew of data covering not only unreleased and upcoming work, but also personal information about employees.

In October 2024, Discord user GameFreakOUT posted 1 terabyte of data to a Discord server called FreakLeak that included so much unseen Pokémon-related material that the community nicknamed it the Teraleak. The leak included information about upcoming projects such as Pokémon Legends: Z-A, minutes from a Pokémon Company meeting, concept art from the original 1997 anime, source code for the DS games Pokémon HeartGold and SoulSilver, and much more. The source of all this was Game Freak's internal servers, which meant it also included employee details. Game Freak said data on more than 2,000 current and former members of staff had been breached in August 2024.

Six months after the Teraleak incident, Polygon reports that Nintendo filed a request for a subpoena on April 18 in the US District Court for the Northern District of California. The subpoena would order Discord to reveal GameFreakOUT's name, address, phone number, and email address. Nintendo issued DMCA requests at the time of the leak to try to remove the data, but it can still be found online.

Nintendo hasn't said what it would do if the subpoena is granted and the leaker's identity is revealed, though it's easy to guess its plans. The company is famous for coming down hard on those it calls copyright infringers. The most famous case is that of Gary Bowser, who was sentenced to 40 months in federal prison (he was released after a year) for selling modchips and jailbreaks used to circumvent Nintendo's security measures. He also has to pay Nintendo a total of $14.5 million.

While the leaker in this instance never sold anything, Nintendo in 2021 took two Pokémon Sword and Shield leakers to court. They had to pay The Pokémon Company $150,000 each in damages and attorneys' fees. The size of the Teraleak, the fact that it included employee details, and the alleged hacking mean GameFreakOUT would likely receive a harsher punishment.

Nintendo owns around 33% of Pokémon. Game Freak and The Pokémon Company own the other two-thirds.

Masthead credit: Michael Rivera
-
4chan has been offline for over a week, and it's probably not coming back

TL;DR: 4chan has long been one of the internet's most infamous communities, playing a central role in the rise of various memes and controversial incidents. Following a cyberattack that abruptly took the site offline, comments from an anonymous former moderator suggest its chances of returning are slim.

Access to 4chan was disrupted on April 14, and troves of internal data appeared online shortly after. Members of Soyjak, a rival forum, claimed responsibility, declaring victory in a feud that has spanned several years. The attackers claimed they had infiltrated 4chan's systems for over a year before executing the attack. At least one admin account was compromised, a previously deleted board was revived and defaced, and visiting 4chan now results in a 503 error message.

Information from the leaks indicates that the site's source code was extremely outdated and vulnerable to numerous exploits. The hack also exposed administrators' email addresses and revealed that they could see the IP addresses of everyone who posted on the ostensibly anonymous forum. Additionally, the attackers obtained the personal information of paid subscribers.

Speaking on condition of anonymity, a 4chan moderator told TechCrunch that the damage likely extends beyond what has been publicly revealed. Given that the attackers appeared to gain complete control over the forum – and access has not been restored more than a week later – its return seems unlikely.

4chan was founded in 2003 by Christopher "moot" Poole, then a high school student, as an alternative space for discussing anime culture. Poole based the site's design and code on Japan's 2chan, one of the world's largest online communities. Poole's English-language version quickly gained notoriety for its irreverent and often offensive content. In 2015, he sold 4chan to Hiroyuki Nishimura, the founder of 2channel (not to be confused with 2chan).

The site's cultural influence and infamy only grew throughout the 2010s. Widely known trends like Pepe the Frog, wojaks, rage comics, and trolling were popularized on 4chan, but the forum also gained a reputation as one of the internet's darkest corners. 4chan played a central role in the rise of movements like QAnon, the incel community, GamerGate, and the alt-right. The site was also linked to multiple mass shootings, the 2014 celebrity photo leak scandal, and other serious incidents.

Fittingly, a reference to the viral "Chicken Jockey" meme from the Minecraft movie is likely 4chan's final post. The site's influence on 21st-century culture is undeniable, but if it really is gone, many won't miss it.
-
Nintendo apologizes as Switch 2 demand overwhelms supply in Japan

Facepalm: You knew it would happen. Despite producing and sitting on inventory for a year and locking pre-orders behind a Nintendo Switch Online subscription, there will not be enough Switch 2s to go around in Japan – and just wait, it will happen in the US too. The company will hold a second lottery sometime after launch but still won't have enough to cover initial orders. It's the original Switch launch all over again.

On Monday, the My Nintendo Store in Japan opened its first round of pre-orders for the long-awaited Switch 2 – and closed them just as fast. Within hours, the store marked the new system's availability as "sold out," with pre-order applications closed until at least May 6. Nintendo President Shuntaro Furukawa said pre-orders in Japan alone amounted to more than 2.2 million, far exceeding the company's ability to deliver. The number even exceeds what Nintendo expects to supply during a second lottery round.

"In order to avoid the trouble of those who were not selected in the first lottery sale having to reapply, My Nintendo Store will automatically carry over those who were not selected in the first lottery sale to the second lottery sale," Furukawa explained via X. "However, even including the quantity for the second lottery sale, we cannot fulfill all of the applications we received. We deeply apologize for not being able to meet your expectations despite our prior preparations."

Unlike typical online pre-orders, Nintendo structured this release as a lottery. Users had to log in with a verified Nintendo account and register interest during a limited window. Winners will be randomly selected and notified after the application period ends. However, it's not entirely random: the company said it would give higher priority to users who have paid for at least one year of Nintendo Switch Online and logged at least 50 hours of gameplay by April 2. It's a harsh restriction designed to curb scalping bots and mass purchases, similar to strategies employed during the PlayStation 5 shortage – but did it work?

Although Nintendo's lottery made it harder for automated bots and bulk buyers, anecdotal reports suggest that some scalpers have adapted. Japanese resellers on platforms like Mercari and Yahoo Auctions have already listed Switch 2 "pre-order reservations" at inflated prices despite the lottery still being open outside Japan. Mind you, Nintendo has not even announced winners yet. These listings don't guarantee a console, only an entry into the lottery – yet some buyers who missed the application window are willing to take the risk.

These early gray-market listings make clear the pre-order system isn't airtight. However, it's a far cry from the chaos of earlier console launches. In past cycles, scalpers openly boasted about automated systems that could buy dozens of units in seconds. The Switch 2's limited, application-based rollout has at least forced them to work harder.

Nintendo hasn't revealed how many units it allocated for this first wave, so it's difficult to gauge how much of the sellout reflects genuine demand versus opportunistic flipping. What is clear, though, is that interest in the Switch 2 is high – and the company's efforts to rein in scalpers, while imperfect, have shifted the landscape.

Image credit: The Shortcut
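Nintendo hasn't published how its selection actually works beyond the priority criteria above, but the described mechanism maps onto a simple two-tier draw. A purely illustrative sketch – the entrant fields and thresholds are taken from the article's description, not from Nintendo's real system:

```ts
// Purely illustrative two-tier lottery; not Nintendo's actual code.
// Entrants meeting the stated criteria (12+ paid months of Nintendo
// Switch Online, 50+ hours of play) are drawn before everyone else.
interface Entrant {
  id: string;
  paidMonths: number;
  playHours: number;
}

// Unbiased Fisher-Yates shuffle.
function shuffle<T>(xs: T[]): T[] {
  const a = [...xs];
  for (let i = a.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}

function drawWinners(entrants: Entrant[], unitsAvailable: number): Entrant[] {
  const hasPriority = (e: Entrant) => e.paidMonths >= 12 && e.playHours >= 50;
  const priority = shuffle(entrants.filter(hasPriority));
  const others = shuffle(entrants.filter((e) => !hasPriority(e)));
  // Fill the allocation from the priority pool first, then the rest.
  return [...priority, ...others].slice(0, unitsAvailable);
}
```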
-
Meta expands Ray-Ban smart glasses with live translation, visual AI, and new frames

In brief: Meta is broadening the reach and capabilities of its Ray-Ban smart glasses, unveiling a suite of new features and style options that signal a significant step forward in wearable technology.

The company announced that its live translation tool, previously limited to select early adopters, is now rolling out to all markets where Ray-Ban Meta glasses are available. This update enables real-time translation in English, French, Italian, and Spanish, allowing users to hold conversations across language barriers and hear instant translations through their glasses. For travelers and users without reliable internet access, the feature can also function offline, provided the necessary language packs are downloaded in advance.

Meta is also introducing new color and lens combinations for the Skyler frame, including a shiny chalky gray paired with sapphire Transitions lenses, and a shiny black option that can be fitted with either clear or green-tinted lenses.

Meta is also pushing the boundaries of what smart glasses can do with the company's AI assistant, Meta AI. In the United States and Canada, users will soon be able to engage in more natural, free-flowing conversations with Meta AI, which can continuously process visual information from the glasses' camera. This "see what you see" capability allows the assistant to provide context-aware responses – whether identifying landmarks, offering cooking advice, or translating a foreign menu in real time. The feature, previously in beta, is now poised for general release.

Communication features are also expanding. The glasses will soon support sending and receiving direct messages, photos, and both audio and video calls via Instagram, complementing existing integrations with WhatsApp, Messenger, and native phone messaging apps.

Music lovers will find new reasons to embrace the update, as Meta is extending support for popular streaming services such as Spotify, Amazon Music, Apple Music, and Shazam beyond North America. Users in more regions can now control music playback and access information about the songs they're listening to, provided their default language is set to English.

The international rollout continues, with Meta confirming plans to launch the Ray-Ban Meta smart glasses in Mexico, India, and the United Arab Emirates, though specific release dates remain under wraps. Meanwhile, users in the European Union will soon gain access to Meta AI and its visual search capabilities, further bridging the gap in feature availability across regions.
-
Google Chrome abandons plans to phase out third-party cookies

What just happened? In a significant reversal that will send ripples through the advertising industry, Google has announced that it will no longer introduce a standalone prompt for third-party cookies in its Chrome browser. The decision marks a dramatic departure from the company's long-standing plan to phase out cookies entirely – a plan that had been in the works for several years and was closely monitored by regulators, advertisers, and privacy advocates alike.

The announcement, delivered by Anthony Chavez, VP of Privacy Sandbox at Google, confirmed that Chrome users will continue to manage their third-party cookie preferences through existing privacy and security settings, rather than being presented with a new, explicit prompt. "We've made the decision to maintain our current approach to offering users third-party cookie choice in Chrome, and will not be rolling out a new standalone prompt for third-party cookies," Chavez wrote in a blog post on April 22. He emphasized that users can still choose the best option for themselves within Chrome's settings.

This policy shift effectively halts Google's multi-year campaign to eliminate third-party cookies from Chrome, a browser that commands over 60 percent of the global market. The original plan, announced in 2020, aimed to bring Chrome in line with competitors like Firefox and Safari, which had already blocked third-party cookies by default. Google's approach, however, was more cautious, citing the need to balance user privacy with the economic realities of the ad-supported web.

The company's Privacy Sandbox initiative was intended to develop alternative technologies that would enable targeted advertising while preserving user privacy. These included tools such as the Topics API and various new APIs for ad measurement and fraud prevention. Despite these efforts, industry feedback revealed deep concerns. Many in ad tech argued that the proposed replacements couldn't match the scalability or real-time processing capabilities of third-party cookies, while publishers worried about revenue loss and the technical complexity of implementing new systems.

Regulatory scrutiny also played a decisive role in Google's change of course. In April 2024, the UK's Competition and Markets Authority (CMA) intervened, requesting a pause in the rollout over concerns that Google's dominance in both browsers and digital advertising could be further entrenched by the proposed changes. The CMA demanded assurances that any new system would not unfairly advantage Google's own ad products.

Meanwhile, privacy advocates and organizations such as the Electronic Frontier Foundation continued to criticize Google's alternatives, arguing they still enabled user tracking and introduced new privacy concerns. Chavez acknowledged these divergent perspectives in his post, noting ongoing engagement with both industry stakeholders and regulators. While the complete removal of third-party cookies is now off the table, he said the Privacy Sandbox project will continue in a modified form. Google plans to keep developing privacy features – such as IP Protection for Incognito users – and will gather additional feedback before updating its roadmap for future technologies.

Critics responded swiftly. The Movement for an Open Web, a group that had previously challenged Google's plans before the CMA, described the announcement to The Verge as "an admission of defeat." They argued that Google's attempt to reshape the digital advertising ecosystem in its own favor was ultimately stymied by regulatory and industry resistance. For now, third-party cookies will remain a fixture in Chrome, leaving the digital advertising industry grappling with the implications.
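For context on what the Topics API replacement looks like in practice: Chrome exposes it to embedded scripts as `document.browsingTopics()`, which returns coarse interest categories rather than a cross-site identifier. A minimal sketch, assuming a Chrome build with the Privacy Sandbox enabled (the exact shape of the returned entries may vary):

```ts
// Minimal sketch of querying Chrome's Topics API from an ad script.
// Assumes Chrome with the Privacy Sandbox enabled; other browsers
// lack the call, hence the feature check before using it.
async function fetchAdTopics(): Promise<void> {
  const doc = document as Document & {
    browsingTopics?: () => Promise<Array<Record<string, unknown>>>;
  };
  if (typeof doc.browsingTopics !== "function") {
    console.log("Topics API unavailable in this browser");
    return;
  }
  // Each entry carries a numeric taxonomy ID plus version metadata,
  // describing broad interests observed over recent weeks.
  const topics = await doc.browsingTopics();
  console.log("Interest topics available for ad selection:", topics);
}

fetchAdTopics();
```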
-
Running on empty: California is about to run out of license plate combinations

TL;DR: California is on pace to run out of license plate character combinations by the end of the year. Since 1980, the state has used a sequence consisting of one number, followed by three letters, followed by three more numbers. Fortunately, the state already has a solution, and it's relatively simple: they're just going to reverse the current order.

The current sequence for non-commercial vehicles started with 1AAA000 and will end with 9ZZZ999. Try as I might, I was unable to determine exactly why California went with the one number / three letters / three numbers sequence to begin with. Perhaps it has something to do with regional registrations or sheer consistency?

Why not just allow each of the seven character slots to be either a number or a letter? If my math is correct, that'd result in more than 78 billion possible combinations – plenty for the foreseeable future and beyond. Other options could include adding an eighth character or reusing retired sequences, although that would probably get cumbersome from a legal standpoint.

As mentioned, the state's fix is simply to reverse the current order. Once 9ZZZ999 has been issued, the next plate will be in the format of three digits, three letters, and one number – so 000AAA1 or 001AAA1, depending on how they decide to do it. 100AAA1 could also be an option if the "no leading zeroes" rule stands.

License plate collectors will no doubt be keeping an eye on the transition, and it is expected that the last old plate and the first new plate could become quite valuable. Most residents, on the other hand, likely won't even notice the change.

It's worth reiterating that this change only applies to standard-issue, non-commercial plates. Like many other states, California also offers special interest license plates for those interested in supporting specific causes or organizations such as breast cancer awareness, environmental causes, pets, or colleges. You'll usually pay extra for these, and depending on what you choose, a portion of the fee could go to support said organization.
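The math checks out: the current one-number / three-letter / three-number scheme tops out at just over 158 million plates, while opening all seven slots to any of 36 characters yields the 78-billion-plus figure quoted above. A quick illustrative check:

```ts
// How many plates does each scheme allow?
const DIGITS = 10;  // 0-9
const LETTERS = 26; // A-Z

// Current California scheme: 1AAA000 through 9ZZZ999
// (leading digit 1-9, then three letters, then three digits).
const currentScheme = 9 * LETTERS ** 3 * DIGITS ** 3;
console.log(currentScheme.toLocaleString()); // 158,184,000

// Hypothetical: any of 36 characters in all seven slots.
const anySlot = (DIGITS + LETTERS) ** 7;
console.log(anySlot.toLocaleString()); // 78,364,164,096 -> "more than 78 billion"
```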
-
OpenAI wants to buy Chrome if Google is forced to sell

The big picture: If Google really is forced to sell Chrome – as proposed by the DOJ after the company was ruled a monopolist in its antitrust trial – OpenAI could emerge as a potential buyer. The ChatGPT maker has admitted it's interested in acquiring the world's most popular browser and turning it into an "AI-first" experience.

Following Judge Amit Mehta's ruling last year that Google holds a monopoly in online search, the Justice Department pushed for the immediate sale of Chrome. The remedies phase of the trial began this week in Washington. Nick Turley, head of product for ChatGPT at OpenAI, was one of the DOJ's witnesses on Tuesday. He testified that OpenAI had contacted Google last year about a partnership that would improve ChatGPT. The chatbot already uses Bing's search data, but Turley mentioned there had been "significant quality issues" with a company referred to as "Provider No. 1," likely a reference to Microsoft.

"We believe having multiple partners, and in particular Google's API, would enable us to provide a better product to users," OpenAI told Google in an email that was revealed during the trial. Google turned down the offer, believing the deal could harm its lead in search. Turley added that OpenAI doesn't have any partnership with Google today.

Turley was also asked if OpenAI would be interested in purchasing Chrome if Google is forced to sell its browser. "Yes, we would, as would many other parties," he replied.

In November 2024, reports claimed OpenAI was considering releasing a Chromium-based web browser with ChatGPT integration that could compete with Chrome. The company hired two key Chrome developers last year: Ben Goodger and Darin Fisher, founding members of the Chrome team.

Chrome has dominated the global browser market since its user share passed Internet Explorer's in 2012. Today, it has a 66% overall share and 4 billion users; second-place Safari has an 18% share. If OpenAI were to buy Chrome, Turley predicted that it would become an "AI-first" experience. That means tight integration with ChatGPT and other OpenAI products, while data from those billions of users could be used to train its AI systems.

Another judge this month ruled that Google built and maintained an illegal monopoly in key segments of the online display advertising industry. The decision could lead to the government breaking up Google's advertising operations.
-
Intel's overclocking tool offers 7.5% performance gains without voiding warranty

What just happened? Intel has released a new overclocking tool called "200S Boost" that can increase performance on select Intel systems without voiding the warranty. The utility is specifically designed for Core Ultra 200S series processors, which received a mixed response from reviewers following their launch late last year.

The new feature offers a simple overclocking option for systems powered by unlocked Arrow Lake chips when paired with compatible Z890 motherboards and supported memory modules. It will be available via a BIOS update on motherboards from ASRock, Asus, MSI, Gigabyte, and other major vendors.

200S Boost will reportedly allow users to achieve higher fabric, die-to-die, and memory frequencies, resulting in performance gains for latency-sensitive workloads such as gaming. Intel says the feature can raise inter-die fabric frequencies on 200S-series processors from 2.6GHz to 3.2GHz and die-to-die frequencies from 2.1GHz to 3.2GHz (both at VccSA ≤ 1.2V). It can also help overclock DDR5 memory from 6,400 MT/s to 8,000 MT/s.

The best part about 200S Boost is that using it to tune your PC won't void your warranty if something goes wrong with the CPU. Intel continues to offer its standard three-year limited warranty on these processors, regardless of whether they've been overclocked using this feature or run at default settings without any modifications.

Supported CPUs include:
- Intel Core Ultra 9 285K
- Intel Core Ultra 7 265K
- Intel Core Ultra 7 265KF
- Intel Core Ultra 5 245K
- Intel Core Ultra 5 245KF

Tom's Hardware tested the new tool and found that it delivers an average performance boost of around 7.5%. Their test system featured a Core Ultra 9 285K CPU, an MSI MEG Z890 ACE motherboard, and an RTX 5090 Founders Edition graphics card.

On the memory side, six configurations and three memory speeds were tested across 16 games at 1080p. Moving from 6,400 MT/s to 8,000 MT/s, Baldur's Gate 3 saw the largest performance gain at 11.6 percent, while A Plague Tale: Requiem showed a more modest 3.7 percent improvement. Tom's also noted that relatively affordable DDR5-7200 memory kits delivered nearly the same performance in most games and applications, potentially making them a better choice for improved system stability.

The publication additionally benchmarked several productivity applications to evaluate the impact of the new overclocking tool. It found that software known to benefit from overclocked hardware saw slight performance gains, while other applications remained largely unaffected.
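To put those frequency bumps in perspective, here is a quick, illustrative calculation of the relative uplifts, plus the peak memory bandwidth implied by the DDR5 speeds – the bandwidth figures assume Arrow Lake's standard dual-channel, 128-bit memory bus:

```ts
// Relative uplifts from Intel's published 200S Boost figures.
const pct = (before: number, after: number): string =>
  ((after / before - 1) * 100).toFixed(1) + "%";

console.log("Fabric:    ", pct(2.6, 3.2));   // 23.1%
console.log("Die-to-die:", pct(2.1, 3.2));   // 52.4%
console.log("DDR5:      ", pct(6400, 8000)); // 25.0%

// Peak DRAM bandwidth = transfer rate x bus width in bytes,
// assuming a dual-channel 128-bit (16-byte) DDR5 interface.
const BUS_BYTES = 16;
console.log(`6400 MT/s -> ${(6400 * BUS_BYTES) / 1000} GB/s`); // 102.4 GB/s
console.log(`8000 MT/s -> ${(8000 * BUS_BYTES) / 1000} GB/s`); // 128 GB/s
```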
-
Intel to adopt TSMC's next-gen 2nm process for upcoming Nova Lake CPUs

Something to look forward to: Upcoming chips from Intel, Apple, and AMD will utilize next-gen semiconductors featuring gate-all-around (GAA) transistors. While Intel is set to debut its own 18A process incorporating GAA later this year, reports suggest that the company's CPUs slated for 2026 will actually be among the first to adopt TSMC's version of the technology.

Intel plans to build its upcoming Nova Lake CPUs on TSMC's forthcoming 2nm semiconductor process node, according to Economic Daily News. If accurate, next year's desktop PCs could be among the first devices to feature 2nm technology, alongside the iPhone 18 Pro.

Both TSMC and Intel declined to comment on the report; however, the Taiwanese chipmaker is expected to begin 2nm trial production at its Hsinchu plant soon, aiming to improve yield rates ahead of mass production in the second half of the year. TSMC's 2nm node, which will utilize GAA transistors to reduce power leakage and enhance performance, is also expected to power some AMD chips and the flagship iPhone SoC scheduled for release in 2026.

Intel has already used TSMC's 3nm process for the compute tiles on its Arrow Lake Core Ultra 200 chips, and may upgrade to the foundry's 2nm node for the same section in Nova Lake. Nova Lake is expected to succeed Arrow Lake in desktop and possibly high-end laptop CPUs next year. According to Tom's Hardware, Nova Lake will also require a new motherboard socket, LGA 1954, which could feature over 2,000 pins. Reliable leaker Olrak recently shared shipping manifests from NBD.ltd referencing voltage regulator testing tools and various jig models that mention LGA 1954.

Meanwhile, Intel is preparing to integrate GAA technology into its 18A node, which is currently in risk production and is expected to enter mass production in time for the launch of the company's Panther Lake processors later this year. A brief for the upcoming 2025 VLSI Symposium notes that GAA and backside power delivery will improve 18A's density scaling and performance by over 30 percent compared to Intel 3.

Panther Lake, scheduled for release in the second half of 2025, will follow Intel's low-power Core Ultra 200V laptop processors. The company's Clearwater Forest server processors, expected in 2026, will also use 18A. With 18A, Intel aims to regain a competitive edge by bringing GAA and backside power delivery to market ahead of TSMC. TSMC, in turn, plans to introduce its version of backside power delivery a year later with its A16 node.
-
Apple and Meta hit with combined $797 million in fines for violating EU's DMA antitrust rules

What just happened? The European Commission has just hit Apple and Meta with combined fines of almost $800 million. These are the first fines handed out by the Commission under its Digital Markets Act (DMA), and they arrive just after President Trump threatened to levy tariffs against any countries that penalize US companies.

Apple was handed the larger fine of 500 million euros ($570 million), while Meta has to pay 200 million euros ($228 million), for a combined total of 700 million euros, or $797 million. In addition to its fine, Apple has been slapped with a cease-and-desist order requiring it to make further product changes by June. If it fails to comply, the Commission can fine it for every additional day it refuses to cooperate.

The penalties come after a year-long investigation in which the Commission found that Meta forced Facebook and Instagram users to either pay a subscription fee to avoid ads or consent to their personal data being used for targeted advertising. In response to the Commission's findings, Meta has modified its ad approach in the EU, now offering unpaid users a version of the platforms with fewer unskippable, full-screen personalized ads. However, in a compliance report published on March 6, the company argued that it has "continued to receive additional demands that go beyond what is written in the law," despite taking steps to align with the DMA. The Commission is currently examining this model to determine if it complies with the rules.

Apple, meanwhile, broke the DMA's steering rule. This requires gatekeepers – including Apple, Meta, Alphabet, Amazon, ByteDance, and Microsoft – to allow business users (like app developers or online sellers) to steer customers to offers or alternative distribution channels outside the gatekeeper's platform, without penalties or restrictions.

There was some good news for the companies. The Commission closed an investigation into Apple's compliance with the DMA's rules on browsers and default apps following changes the company introduced. Moreover, Facebook Marketplace will no longer be designated as a regulated service, so it no longer falls under the DMA's remit.

An Apple representative said it will appeal the decision, which it called "yet another example of the European Commission unfairly targeting" the company and forcing it to "give away (its) technology for free." "We have spent hundreds of thousands of engineering hours and made dozens of changes to comply with this law, none of which our users have asked for. Despite countless meetings, the Commission continues to move the goal posts every step of the way," the representative said.

Meta said it also plans to appeal the ruling. "The European Commission is attempting to handicap successful American businesses while allowing Chinese and European companies to operate under different standards," said Joel Kaplan, Meta's chief global affairs officer. "This isn't just about a fine; the Commission forcing us to change our business model effectively imposes a multi-billion-dollar tariff on Meta while requiring us to offer an inferior service. And by unfairly restricting personalized advertising the European Commission is also hurting European businesses and economies."

Apple and Meta must pay the fines within 60 days or risk further financial penalties. Under its rules, the Commission could have fined Meta up to $16 billion and Apple $39 billion based on their earnings last year.
-
Nvidia GeForce RTX 5060 Ti 8GB Review: Instantly Obsolete

Nvidia succeeded in delaying reviews of their 8GB RTX 5060 Ti, but they couldn't hide them forever. After about a week's delay, we can show you just how underwhelming this product really is.

In case you missed it, Nvidia launched the GeForce RTX 5060 Ti last week, available in both 8GB and 16GB configurations. Leading up to this release, we learned that Nvidia was actively holding back the 8GB model from reviews. This meant it wouldn't appear in initial coverage but would still be available for purchase when reviews went live – or shortly thereafter – which is exactly what happened.

We picked up the Asus Prime model for $720 AUD, currently the most affordable RTX 5060 Ti in stock. The most affordable 16GB model at the time was the MSI Ventus 2X for $880, a 22% premium.

The issue is that both the 8GB and 16GB versions are branded simply as RTX 5060 Ti. While the GPU configuration is identical, the difference in memory capacity makes them fundamentally different products. 8GB of VRAM in 2025 for a GPU as fast as the RTX 5060 Ti – which, to be fair, isn't exactly impressive in terms of performance per dollar – is still too little memory for what this class of GPU is expected to handle. There are countless real-world scenarios where the RTX 5060 Ti will suffer severely, or even become unusable, due to the limited 8GB frame buffer. We're going to explore a few of those cases today.

The core problem with a product like the RTX 5060 Ti is that many buyers will base their expectations on the performance of the 16GB model, only to choose the cheapest version available – which will often be the 8GB model. That results in people unknowingly buying a product that is arguably already obsolete.

To be clear, 8GB of VRAM is still enough for the majority of games today, and in many cases where it's not, lowering the visual settings can still provide a playable experience. However, it's no longer sufficient for an optimal experience in many of the latest titles – and this situation is only going to deteriorate over the next few years. Most people buying a GeForce 50-series graphics card right now, especially something like the RTX 5060 Ti, are likely planning to use it for at least the next three years, if not longer. We don't even want to imagine how poorly 8GB cards will perform by then. It'll likely mirror what we're currently seeing with 4GB GPUs.

And let's be honest: when spending over $400, do you really want to be constantly worrying about VRAM? Tweaking settings just to squeeze under the memory cap shouldn't be necessary at this price point. That's a compromised and frustrating experience.

The 8GB model is supposed to have an MSRP of $380, but the lowest listing we've found on Newegg is $420, with most models priced at $440 or higher. Meanwhile, 16GB models start at $430, though many go for $480 or more. Even in the worst-case scenario, you're looking at only a 14% premium for double the memory. That makes the 8GB model a serious trap for buyers unaware that two distinct versions of the same product exist.

The point is, there should never have been an 8GB version of the RTX 5060 Ti, and Nvidia knows it. They know this is a weak product. They know that 8GB of VRAM in 2025 is far from adequate. And they also know that many of you know this. But they also know they can make a lot of money from it, because many gamers aren't tech-savvy and will just buy the cheapest option. They're also counting on the pre-built PC market.
By selling people a product that's already outdated, Nvidia ensures they'll return sooner than they otherwise would – whether for the next generation or even a mid-cycle refresh. We've been saying it for years: this is planned obsolescence. And if you can't see it now, well, you're Nvidia's favorite type of customer.

Test Notes and System Specs

This review takes a different approach from our usual lineup of blue bar graphs, performance summaries, and cost-per-frame evaluations. To highlight just how poorly the 8GB RTX 5060 Ti performs in 2025, we purchased one and spent several days running extensive side-by-side tests. In the graphs, you'll find data such as average frame rate, 1% low FPS, VRAM usage, and frametime graphs that highlight stuttering and frame pacing issues. These issues are easier to showcase in the video version of this review if you want to check that out.

For all of this testing, we're using a PCIe 5.0-enabled AM5 system with the 9800X3D and 32GB of DDR5-6000 CL30 memory. This setup is essentially a best-case scenario for running over the VRAM buffer with an RTX 5060 Ti. Performance will be worse on PCIe 4.0 systems, and drastically worse on PCIe 3.0.

Gaming Benchmarks

The Last of Us Part II

4K DLSS Quality, Very High Preset

We'll start with The Last of Us Part II, running at 4K. Now, you might think 4K is a questionable choice for a product like this, but we'd argue otherwise for a few reasons. Firstly, the RTX 5060 Ti is capable of 4K gaming – especially with upscaling like DLSS set to Quality mode. Secondly, high-quality 4K high refresh rate monitors now cost less than the RTX 5060 Ti itself and can deliver a truly stunning visual experience.

The key takeaway here is that the 16GB card averaged 68 fps, while the 8GB model barely surpassed 30 fps, with frequent frametime spikes. By the end of our test, the 16GB version delivered 120% better 1% low performance – a massive difference. We know some will dismiss 4K results, so let's move to 1440p.

1440p DLSS Quality, Very High Preset

Performance does improve for the 8GB model at 1440p, but it still suffers from severe frametime issues. Meanwhile, the 16GB card delivers frame rates around 30% higher. We're using the Asus Prime model for both versions, and while the 16GB card clocks about 1% higher, that minor difference doesn't explain the performance gap. By the end of testing, the 16GB version was 34% faster on average and delivered 215% better 1% lows.

1440p DLSS Quality, Very High Preset + Frame Generation

Frame generation has been a major selling point for both the GeForce 40 and 50 series. But when enabled, the additional VRAM requirements severely affect the 8GB model, dropping 1% lows into the single digits – making for a truly poor experience.

1440p DLSS Quality, High Preset

What if we drop down to the High preset? At 1440p with DLSS enabled, we encounter occasional stutters on the 8GB card, but overall performance is acceptable. Still, the 16GB card maintains a clear lead – 18% faster on average, with 35% better 1% lows.

1440p DLSS Quality, Medium Preset

Switching to the Medium preset cleans up frame pacing for the 8GB model, making the experience smoother. Even then, the 16GB model still delivers 12% higher average performance and 11% better 1% lows.

1440p DLSS Quality, Low Preset

Even with the Low preset, the 16GB version remains 8% faster on average. It's not a huge difference, but still surprising to see any gain considering we're nearing the absolute limit of the 8GB card's capability.
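A note on how we express these comparisons, here and throughout the rest of the review: "X% faster" always measures the uplift relative to the slower card. A small illustrative helper – the sample numbers below are the 4K averages from The Last of Us Part II above:

```ts
// Convention for the "X% faster" figures used throughout this review:
// the uplift is measured relative to the slower result.
function uplift(slowerFps: number, fasterFps: number): string {
  return ((fasterFps / slowerFps - 1) * 100).toFixed(0) + "% faster";
}

// 16GB card at 68 fps vs. the 8GB card at roughly 31 fps (4K, TLOU2):
console.log(uplift(31, 68)); // "119% faster"

// Note that a 100% uplift means exactly double the frame rate:
console.log(uplift(30, 60)); // "100% faster"
```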
1080p Native, Very High Preset

At native 1080p using the Very High preset, the game demands over 9GB of VRAM. This causes the 8GB model to struggle significantly, with terrible frametime performance – even at this lower resolution. As a result, the 16GB card delivered 25% better average frame rates and an astounding 320% improvement in 1% lows. Let's move on to another title.

Final Fantasy XVI

1080p Native, Ultra Preset

Here we're testing Final Fantasy XVI at native 1080p using the Ultra preset. While the 8GB model offers a playable experience, the 16GB version still comes in 14% faster. That said, 1080p is quite low by 2025 standards – so let's jump up to 1440p.

1440p Native, Ultra Preset

At native 1440p, the 16GB card pulls ahead by a significant margin – 58% higher average frame rate and 218% better 1% lows. While we personally wouldn't play this game at around 50 fps, we know some prioritize visual fidelity over frame rates.

1440p DLSS Quality, Ultra Preset

Using DLSS for upscaling delivers what we'd consider a more optimal setup. The 16GB card averaged 74 fps, making it 80% faster than the 8GB model, which only managed 41 fps.

1440p DLSS Quality + Frame Generation, Ultra Preset

Out of curiosity, we enabled frame generation. It helped the 8GB model somewhat, but the 16GB card was still 24% faster on average, with 30% better 1% lows. Despite the seemingly decent frame rates, the experience on the 8GB model felt noticeably worse.

1440p Native, High Preset

Finally, running the game at native 1440p with the High preset allowed the 8GB card to roughly match the 16GB version, trailing by just a few percent.

Indiana Jones and the Great Circle

1080p Native, Medium Preset

Getting the 8GB RTX 5060 Ti to run Indiana Jones and the Great Circle wasn't easy. Only the Low and Medium presets worked, and even then, we couldn't test 4K with DLSS enabled using the Medium preset, as the game would immediately crash to desktop. At native 1080p using just the Medium preset, the 8GB card averaged 114 fps, while the 16GB version was 7% faster. Overall, the 8GB card handled this scenario fairly well.

1440p Native, Medium Preset

However, moving to 1440p, the 8GB card begins to struggle. While it maintains decent frame time consistency, the 1% lows were 90% higher on the 16GB model, and average frame rates improved by 82%. Under these conditions, the 16GB card delivered a far superior experience.

1080p Native, Ultra Preset

The 16GB version also performed well at 1080p using the Ultra preset – and even at 1440p with upscaling. In contrast, the 8GB model crashed to desktop whenever we attempted anything above the Medium preset. In this case, the difference was night and day.

Hogwarts Legacy

1440p Native, Ultra Preset

When playing Hogwarts Legacy at native 1440p with the Ultra preset, both GPUs performed similarly in terms of average frame rate. However, the 8GB card suffered from texture pop-in issues – something we've previously seen with the 8GB RTX 4060 Ti.

1440p Native, High Preset, Ray Tracing High

Performance issues for the 8GB card became much more severe once ray tracing was enabled. Using the High preset with High ray tracing at 1440p, the 16GB model delivered 62% higher average frame rates and a massive 483% improvement in 1% lows. The 8GB version, by comparison, was an unplayable, stuttering mess.

1440p Native, Medium Preset, Ray Tracing Medium

We then tried the Medium preset with Medium ray tracing, and the 8GB model still exhibited frequent frame time spikes and poor 1% lows.
The 16GB card was up to 96% faster in this scenario.

Horizon Forbidden West

4K DLSS Performance, Very High Preset

Next up is Horizon Forbidden West at 4K using DLSS in Performance mode with the Very High preset. These settings were completely unplayable on the 8GB card, which delivered 1% lows of just 9 fps. In contrast, the 16GB card averaged 72 fps and maintained 1% lows above 60 fps, making it 350% faster on average and 589% faster in 1% lows.

1440p DLSS Quality, Very High Preset

Dropping to 1440p with DLSS set to Quality helped the 8GB card somewhat, but the game was still not playable, with 1% lows of only 18 fps. The 16GB model, meanwhile, was even faster than it was at 4K, averaging nearly 100 fps – 234% faster overall, or 344% faster based on 1% lows.

1080p Native, Very High Preset

At 1080p native resolution with the Very High preset, the 8GB model was finally playable, although frame pacing issues remained. The 16GB card, on the other hand, delivered a flawless experience, averaging 100 fps with 1% lows above 80 fps – making it nearly 80% faster on average, with 131% better 1% lows.

1440p DLSS Quality, High Preset

At 1440p with DLSS set to Quality and using the High preset (not even the highest), the 16GB card still pulled significantly ahead. A quick note: we mistakenly scaled the overlay stats in RivaTuner during testing, as we had to revisit the 16GB model to ensure both GPUs were tested under the same lighting and time-of-day conditions. Despite this minor discrepancy, both cards were tested under identical settings. Under these conditions, the 16GB version was 148% faster in average frame rate and 193% faster in 1% lows. The 8GB model also struggled with frame pacing, resulting in a noticeably jittery experience.

1440p DLSS Quality, Medium Preset

Finally, at 1440p with DLSS Quality and the Medium preset, we found a configuration where the 8GB model was usable – not great, but functional. However, frame time issues persisted, and the 16GB card still delivered 47% higher 1% lows.

Space Marine 2

4K DLSS Quality, Ultra Preset, 4K Textures

Space Marine 2 offers an interesting look at the 8GB vs. 16GB comparison, and a key reminder of why benchmark graphs alone don't always tell the full story. Using the Ultra preset with the 4K texture pack enabled at 4K with DLSS set to Quality, both GPUs delivered frame rates comfortably above 60 fps. Oddly enough, the 8GB card appeared slightly faster – until you realize it's not fully rendering the game.

Looking at static image comparisons, it's immediately obvious how poor the 8GB presentation is compared to the 16GB model. The differences are striking. The low-resolution, muddy textures on the 8GB card are worsened by frequent pop-in as the game attempts to load higher-resolution assets – resulting in a flickery mess. Under these settings, very few textures render correctly with the 8GB card, while the 16GB model delivers both excellent visuals and performance. We've seen this issue before – games like Halo Infinite and Forspoken come to mind – where performance may look fine on paper, but the actual presentation is severely degraded due to limited VRAM.

A Plague Tale: Requiem

4K DLSS Quality, Ultra Preset

Next, in A Plague Tale: Requiem, we tested at 4K using DLSS Quality with the Ultra preset. While the 16GB model didn't break performance records, it still averaged 52 fps – 33% faster than the 8GB card – and delivered 51% better 1% lows.
4K DLSS Performance, Ultra Preset

Using DLSS Performance mode at 4K yields more realistic results for this title. Here, both cards averaged over 60 fps, but the 16GB model still managed 23% better 1% lows.

4K DLSS Performance, High Preset + Ray Tracing On

Targeting over 80 fps, we enabled ray tracing at 4K with DLSS Performance and the High preset. The 16GB card averaged 86 fps with 1% lows of 72 fps. The 8GB card, however, was unable to maintain playable performance and actually crashed during this test.

Assassin's Creed Shadows

1440p DLSS Balanced, Very High Preset

Testing Assassin's Creed Shadows at 1440p with DLSS set to Balanced and the Very High preset, the 8GB card struggled with poor frame pacing. The 16GB card was 52% faster in 1% lows and 50% faster on average. Most importantly, the game was enjoyable at 66 fps – but not at 44 fps with erratic frame times on the 8GB model.

1440p DLSS Balanced + Frame Generation, Very High Preset

Enabling frame generation didn't fix the stuttering issues on the 8GB model. In this configuration, the 16GB version was 62% faster in average frame rate.

Cyberpunk 2077

1440p DLSS Quality, Ray Tracing: Medium

At 1440p with DLSS Quality and ray tracing set to Medium, the 8GB card struggled badly. While the 16GB model averaged 54 fps – a 26% uplift – the real difference was in 1% lows, which were 73% higher, making gameplay on the 8GB card noticeably worse.

4K DLSS Performance, Ray Tracing: Low

If you're hoping to enjoy ray tracing at 4K with DLSS Performance and the Low RT preset, the 8GB card falls apart – delivering just 29 fps on average with 1% lows of 16 fps. The 16GB card was up to 206% faster.

Marvel Rivals

4K DLSS Performance, Ultra Preset

In Marvel Rivals at 4K with DLSS Performance and the Ultra preset, the 16GB model averaged 69 fps with 1% lows of 53 fps. While not ideal for a competitive shooter, it's at least playable. Compared to the 8GB version, the 16GB card delivered 30% better average frame rates and an 89% improvement in 1% lows.

Spider-Man 2

4K DLSS Performance, Very High Preset

At 4K with DLSS Performance and the Very High preset, the game was unplayable on the 8GB card, with terrible frame pacing and an average frame rate below 20 fps. In contrast, the 16GB version averaged just over 60 fps, with a 533% improvement in 1% lows.

1440p DLSS Quality, Very High Preset

Dropping to 1440p with DLSS Quality, the 8GB model remained broken and unusable, while the 16GB card averaged over 80 fps with 1% lows of 47 fps – resulting in smooth and enjoyable gameplay.

1440p DLSS Quality, High Preset

Switching to the High preset at 1440p, the 8GB card was still inadequate. The 16GB version averaged over 90 fps, offering a 221% increase in average frame rate and a 358% boost in 1% lows.

1080p Native, Very High Preset

We then dropped to native 1080p, where upscaling isn't ideal, using the Very High preset. Even here, the 8GB card averaged just 49 fps. The 16GB card was nearly 60% faster, averaging 78 fps.

1080p Native, High Preset

Finally, using the High preset at 1080p, the 8GB model still performed poorly, with erratic frame times and subpar overall performance. The 16GB version was 35% faster in average frame rate and 58% faster in 1% lows. Simply put, 8GB of VRAM is a terrible choice for Spider-Man 2.

Star Wars Jedi: Survivor

4K DLSS Performance, Epic Preset

Moving on to Star Wars Jedi: Survivor, we didn't even enable ray tracing here. At 4K using DLSS Performance with the Epic preset, the 16GB model averaged 68 fps, while the 8GB version was limited to just 32 fps.
That's a 113% performance advantage in favor of the 16GB card, and a 104% improvement in 1% lows. Additionally, the 8GB card rendered the game incorrectly, with missing textures and visual artifacts.

Alan Wake 2

1440p DLSS Quality, High Preset

A game where the 8GB model performed well was Alan Wake 2. Here, we observed very similar performance between both versions of the RTX 5060 Ti. As we mentioned at the start of this review, this is what you'll see in the majority of games.

God of War Ragnarök

4K DLSS Performance, Ultra Preset

God of War Ragnarök is another example where 8GB of VRAM was mostly adequate. There was a small performance hit, but at 4K with DLSS set to Performance and the Ultra preset enabled, the 8GB card managed fine – at least in our test area. It's possible other sections of the game could expose limitations.

Black Myth: Wukong

1440p DLSS Quality, Very High Preset

Black Myth: Wukong also ran reasonably well on the 8GB model, though the 16GB card did deliver slightly better performance. This could change in more demanding areas or during longer gameplay sessions. The important point is that even in games where performance seems fine, we're operating right on the edge of the 8GB VRAM buffer.

What We Learned

We just looked at 15 games, each with numerous examples where the 8GB version of the RTX 5060 Ti is clearly held back by its memory buffer. In each case, the 16GB model delivered a consistently superior experience under the same conditions. It's painfully obvious that an 8GB frame buffer is no longer satisfactory for PC gaming beyond the most entry-level products, which shouldn't cost more than about $150. Since such products no longer exist in the current market, neither should 8GB VRAM configurations. Realistically, 12GB should be the new minimum. But if you're aiming for an uncompromised experience, especially over the typical lifespan of a GPU, then 16GB is what you need – and we're confident this will become increasingly obvious in the coming years.

With that in mind, don't let anyone convince you that the RTX 5060 Ti isn't suitable for 4K gaming. As we've shown, with enough VRAM, it absolutely is. And 1440p gaming with upscaling is exactly what a $400 GPU should be capable of – and it is, provided it comes with 16GB of VRAM. We've seen people argue that the RTX 5060 Ti is meant for 1080p gaming, but that's simply not true. That line of reasoning does a disservice to consumers. As clearly demonstrated here, the RTX 5060 Ti is very capable at both 1440p and 4K with upscaling – provided it has 16GB of VRAM.

Our take is straightforward: if you're buying a new graphics card in 2025 and spending around $300, it should come with at least 12GB of VRAM. If you're spending $400 or more, it needs to have 16GB. And anything beyond that – like a future RTX 5080 – should offer at least 24GB, especially with next-gen consoles expected to launch within the lifespan of these GPUs. Unfortunately, we suspect this will turn into another RTX 3080 10GB situation – except at a much higher price. And once again, this benefits Nvidia, as their planned obsolescence strategy will push RTX 5080 owners to upgrade sooner than necessary.

As for the GeForce RTX 5060 Ti, our biggest issue is that the 8GB and 16GB versions share the same name. Realistically, there should only be one configuration. The performance gap between them is too substantial to ignore. Ideally, the RTX 5060 Ti should come with 16GB, and a non-Ti version could offer 12GB – or better yet, 16GB as well.
There's no justification for an 8GB model in this product tier.

Another factor worth considering is resale value. Based on sold listings on eBay, the 16GB version of the RTX 4060 Ti is fetching 42% more than the 8GB version, despite only costing up to 25% more at launch. Expect that disparity to grow even wider for the RTX 5060 Ti. Two years from now, the 8GB versions will likely be nearly worthless in comparison.

To wrap things up: the 8GB version of the RTX 5060 Ti is possibly the worst trap we've ever seen laid for mainstream gamers. Nvidia has really outdone themselves here. Their efforts to suppress early reviews and exploit uninformed buyers are unacceptable. And they're planning to do the same with the RTX 5060 – but rest assured, we'll be there to cover that as well. This is yet another reason why the GeForce 50 series may go down as Nvidia's worst generation ever. And the most frustrating part? It didn't have to be this way.

For now, we're done with the 8GB RTX 5060 Ti. But on the bright side, Nvidia has inadvertently provided us with the perfect case study for examining VRAM limitations moving forward.

Shopping Shortcuts:
- Nvidia RTX 5060 Ti 16GB on Amazon
- Nvidia RTX 5060 Ti 8GB on Amazon
- Nvidia GeForce RTX 5070 on Amazon
- AMD Radeon RX 9070 on Amazon
- AMD Radeon RX 9070 XT on Amazon
- Nvidia GeForce RTX 5080 on Amazon
- Nvidia GeForce RTX 5090 on Amazon
-
WWW.TECHSPOT.COMCATL's new battery tech promises 800-km range and five-minute chargingForward-looking: CATL has announced a series of breakthroughs that could reshape the EV industry, promising batteries that are cheaper, lighter, faster to recharge, and more resilient in extreme temperatures, all while extending driving range. The company, which supplies a third of the world's EV batteries to major automakers including GM and Tesla's Shanghai plant, unveiled these just ahead of the Shanghai Auto Show. At a press event reminiscent of a high-profile car launch, China's leading battery maker CATL detailed innovations that could bring electric cars closer to price and performance parity with their gasoline-powered counterparts within the next few years. Batteries account for at least a third of an EV's cost, making CATL's progress particularly significant for automakers worldwide. One of the most notable developments is CATL's new approach to auxiliary batteries. Traditionally, EVs have relied on a single large battery pack, but CATL's design introduces a secondary battery that shares space in the vehicle's underbody. This auxiliary battery is the first commercially available EV battery to eliminate graphite from one of its poles, which could eventually reduce costs and increase energy density by 60 percent per cubic inch. According to Gao Huan, CATL's chief technology officer for EVs in China, this innovation could either extend a car's range or allow for smaller battery packs, freeing up more passenger space. The auxiliary battery also serves as a backup, an increasingly important feature as more vehicles adopt self-driving technologies that demand uninterrupted power supplies. // Related Stories CATL's co-president for research and development, Ouyang Chuying, indicated that these graphite-free batteries could appear in production vehicles within two to three years, though he declined to name specific automakers. However, the company acknowledged that removing graphite comes with trade-offs, namely that such batteries recharge more slowly and have a shorter lifespan. CATL has also made strides in charging speed for its main batteries. The latest iteration of its flagship Shenxing battery cell can add 520 kilometers (about 320 miles) of range with just five minutes of charging, surpassing even the recent advancements announced by rival BYD and placing CATL ahead of Western competitors like Tesla and Mercedes-Benz. The second-generation Shenxing battery offers an 800-kilometer range on a single charge, achieving a peak charging speed of 2.5 kilometers per second. CATL's Gao emphasized that the new batteries do not compromise on energy density and are slated to be installed in more than 67 electric vehicle models this year. In addition to lithium-based innovations, CATL is pushing forward with sodium-ion battery technology. The company's new Naxtra brand of sodium-ion batteries, set to enter mass production in December, promises over 90 percent charge retention even at temperatures as low as minus 40 degrees Celsius. This makes them especially attractive for vehicles operating in the frigid climates of northern China, where traditional lead-acid batteries often fail. The first customer for these batteries will be freight trucks from First Auto Works, based in Changchun, a region known for its harsh winters. Sodium-ion batteries are considered a safer and more affordable alternative to lithium-based cells, largely because sodium is abundant and inexpensive. 
The new Naxtra battery boasts an energy density of 175 watt-hours per kilogram, nearly matching the widely used lithium iron phosphate batteries. CATL's founder, Robin Zeng, has suggested that sodium-ion batteries could eventually replace up to half of the market for lithium iron phosphate batteries, which the company currently dominates.

Beyond technical specifications, CATL has demonstrated the safety of its sodium-ion batteries through rigorous stress tests, including puncturing and cutting the cells without causing fires or explosions – a notable shift from the company's stance just five years ago. These batteries are also being positioned as a solution for internal combustion vehicles, offering compatibility with existing electrical systems, though some models may require modifications to accommodate the new battery size.

CATL's rapid pace of innovation comes even as the company faces increased competition and market pressures. Last month, the company reported a 15 percent growth in net profit for 2024, its slowest rate in six years, amid a prolonged price war in China's EV market. Still, with over 18 million cars equipped with its batteries operating in more than 66 countries, CATL's influence on the future of electric mobility remains formidable.
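As a quick sanity check on the headline charging numbers: 520 kilometers added in five minutes works out to roughly 1.7 km of range per second on average, which squares with the quoted 2.5 km/s peak once you account for charging power tapering as the cell fills. A back-of-the-envelope sketch, using round numbers only:

```typescript
// Back-of-the-envelope check on CATL's claimed Shenxing charging figures.
const rangeAddedKm = 520;     // range gained in one five-minute session
const chargeTimeSec = 5 * 60; // five minutes, in seconds

const avgKmPerSec = rangeAddedKm / chargeTimeSec;
console.log(avgKmPerSec.toFixed(2)); // ~1.73 km/s average

// A 2.5 km/s *peak* rate is consistent with a ~1.73 km/s average,
// since charging power tapers off as the cell approaches full.
```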
-
Doom can now run in a self-contained QR code. Sort of
In context: QR codes were originally designed to efficiently track the types and quantities of automobile parts. Today, thanks to smartphones and mobile apps, their use has expanded far beyond that. If you really know your trade, you could even try packing a functional program into a single QR code – and maybe run Doom on it, because why not?

A resourceful developer named Kuber Mehta has taken the "Can it run Doom?" meme to new heights with a wild new project that pushes the boundaries of extremely limited execution environments. While the Backdooms project doesn't technically run the original Doom engine inside a QR code, Mehta says he was directly inspired by id Software's legendary shooter – as well as the viral "Backrooms" creepypasta – to develop his concept.

Backdooms is a compressed, self-extracting program encoded entirely within a single QR code. When scanned, it launches an infinitely generated HTML environment resembling Doom-style corridors, which players can navigate and interact with. The game runs entirely in modern web browsers and doesn't require an internet connection – the entire game is stored in the URL itself.

Mehta, a computer science and artificial intelligence student in New Delhi, spent a week exploring how to maximize QR code storage and compression. He ultimately chose a Doom-like interactive experience to demonstrate his progress, but the same technique could, in theory, be used to encode lightweight web apps within QR codes, unlocking new possibilities for ultra-portable software delivery.

The developer chronicled his journey on the MindDump blog, where he explained the absurd premise – running code within a 3KB QR code – alongside the origin of the idea and the detailed process behind creating Backdooms. Notably, Mehta had to rely on a technique called minification – or in this case, extremely aggressive minification – to squeeze a functional HTML program into such a tiny space. This compressed code generates graphics, Doom-like corridors, enemies to shoot at, and even music.

A breakthrough came when Mehta received a helpful hint from a chatbot, which suggested using DecompressionStream – a little-known Web API available in all modern browsers. Thanks to this component, the Backdooms code can be dynamically decompressed and executed directly in the browser. The game can be played on desktops, smartphones, and potentially other devices via a link or by scanning the QR code available on the project's GitHub page.

Though only loosely related to Doom, Backdooms keeps the "Can it run Doom?" tradition alive. Developers continue to push the boundaries of where the open-source FPS engine can run. Recent feats include running Doom on a Collector's Edition game box, inside TypeScript's type system, within a Microsoft Word document, and even directly on a GPU.
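For the curious, the DecompressionStream API Mehta leaned on is part of the standard Compression Streams spec. The sketch below is a hypothetical illustration of the self-extracting trick – gzip-compressed HTML carried in the URL fragment, inflated in the browser, then written into the document – not the actual Backdooms source:

```typescript
// Hypothetical self-extracting page using the standard DecompressionStream
// Web API. Assumes the payload is gzip-compressed HTML, base64-encoded
// into the URL fragment; illustrative only, not Mehta's actual code.
async function bootstrap(): Promise<void> {
  // The compressed program travels in the URL itself, after the '#'.
  const b64 = location.hash.slice(1);
  const compressed = Uint8Array.from(atob(b64), (c) => c.charCodeAt(0));

  // Stream the bytes through the browser's built-in gzip inflater.
  const inflated = new Blob([compressed])
    .stream()
    .pipeThrough(new DecompressionStream("gzip"));

  // Collect the decompressed bytes and decode them as UTF-8 HTML.
  const html = await new Response(inflated).text();

  // Replace the current document with the extracted program.
  document.open();
  document.write(html);
  document.close();
}

bootstrap();
```

The appeal of DecompressionStream in this context is that the inflater ships with the browser, so none of the QR code's scarce bytes need to be spent on decompression code.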
-
New Nvidia GeForce hotfix driver addresses crashes and black screen issues
GeForce Hotfix Display Driver version 576.15 is based on our latest Game Ready Driver 576.02.

A GeForce driver is an incredibly complex piece of software. We have an army of software engineers constantly adding features and fixing bugs. These changes are checked into the main driver branches, which are eventually run through a massive QA process and released. Since we have so many changes being checked in, we usually try to align driver releases with significant game or product releases. This process has served us pretty well over the years, but it has one significant weakness: sometimes a change that is important to many users might end up sitting and waiting until we are able to release the driver.

The GeForce Hotfix driver is our way of trying to get some of these fixes out to you more quickly. These drivers are basically the same as the previously released version, with a small number of additional targeted fixes. The fixes that make it in are based partly on your feedback in the Driver Feedback threads and partly on how realistic it is for us to quickly address them. These fixes (and many more) will be incorporated into the next official driver release, at which time the Hotfix driver will be taken down.

To be sure, these Hotfix drivers are beta, optional, and provided as-is. They are run through a much abbreviated QA process. The sole reason they exist is to get fixes out to you more quickly. The safest option is to wait for the next WHQL-certified driver. But we know that many of you are willing to try these out. These hotfix drivers represent a lot of additional work by our engineering teams; I hope they provide value for you. We'll try it out and see if people like the idea and want us to continue.

What's New: This hotfix addresses the following issues:
[RTX 50 series] Some games may display shadow flicker/corruption after updating to GRD 576.02 [5231537]
Lumion 2024 crashes on GeForce RTX 50 series graphics cards when entering render mode [5232345]
GPU monitoring utilities may stop reporting the GPU temperature after PC wakes from sleep [5231307]
[RTX 50 series] Some games may crash while compiling shaders after updating to GRD 576.02 [5230492]
[GeForce RTX 50 series notebook] Resume from Modern Standby can result in black screen [5204385]
[RTX 50 series] SteamVR may display random V-SYNC micro-stutters when using multiple displays [5152246]
[RTX 50 series] Lower idle GPU clock speeds after updating to GRD 576.02 [5232414]

For questions, please visit the FAQ below: Nvidia DCH/Standard Display Drivers for Windows FAQ
-
The Oblivion remake is real, it's gorgeous, and it's out now
At last: Get ready to close the gates of Oblivion. Bethesda just made it official: The Elder Scrolls IV: Oblivion Remastered launched today. Much of the game has been rebuilt from scratch, so it's not just a cosmetic refresh. It's got a modernized UI, streamlined leveling, and much more.

A massive leak last week revealed almost everything fans wanted to know about the long-rumored Oblivion remake. The cache included screenshots and side-by-side comparisons. An Xbox Support representative even let it slip that the game would launch on April 21. Well, it's a day late, but Bethesda greeted us this morning with a live feed officially revealing the reboot, and from the looks of it, all the rumors were true aside from the release date. Even some of the side-by-side comparisons appear to have come straight from Bethesda's presentation (masthead).

However, Bethesda managed to throw us a few surprises. The most pleasant is that TES: Oblivion Remastered is available as of "right now." The April release is somewhat surprising. May seemed more likely, but maybe that was just me being pessimistic.

What's more surprising is that it is available on most platforms, including PlayStation 5! Considering Microsoft has kept most of Bethesda's newer titles away from its biggest rival, it is remarkable that it didn't at least make it a timed exclusive. Xbox may view the Oblivion remake differently than new releases like Starfield and TESVI, which are exclusives (for now). Whatever the case, it's a smart move – millions of PS5 owners will snap this title up, substantially boosting sales.

Also read: 26 Years of The Elder Scrolls

On appearances alone, there is little reason for fans not to pick up this carefully done remaster unless it turns out to be buggy, a real possibility given Bethesda's track record. It is one of the finer makeovers I have seen recently. It looks gorgeous. I've included screenshots throughout, but please do check out the live footage in the masthead – stills just don't do it justice.

The game didn't just get a new coat of paint. Design studio Virtuos rebuilt all models and environments from scratch. Virtuos said it used the Oblivion game engine as the heart of the game while Unreal Engine 5 produced the stunning visual aesthetic and special effects. "We've leveraged nearly every major feature from the latest version of Unreal 5," said Virtuos Executive Producer Alex Murphy. Utilizing Unreal Engine clearly paid off in spades.

The only real question is: did the developers give the old Oblivion engine any love? I recall revisiting Oblivion on the PS3 a few years ago and had to give up because the control scheme felt too clunky and outdated. Virtuos said that it updated a lot of gameplay elements, like the user interface and experience. Leveling is not as janky anymore – no more hopping around like a crazy rabbit just to level up that agility stat. However, Murphy failed to mention anything about the game controls. Overlooking control modernization would be a rookie misstep, so here's hoping that Virtuos remembered something so simple yet fundamental to the player experience.

Some fans in the forums questioned whether Bethesda would include the two Oblivion DLCs, Knights of the Nine and Shivering Isles, or if it would split them off to sell separately and push a deluxe bundle. Good news: Oblivion Remastered includes all original DLC.
Bad news (depending on how you view it): There is a deluxe version, which offers two weapon and armor skins, a digital artbook, and the soundtrack. The Standard Edition is $50, while the Deluxe costs $60. If you don't want to commit to the deluxe bundle, you can always upgrade later for $10.

The Elder Scrolls IV: Oblivion Remastered is available on PC through Steam, Xbox Series X|S, and PlayStation 5.

I'm anxious to hear early reviews, especially from our readers. It will also be interesting to see if this opens the doors for other TES remasters. Morrowind, anyone?
-
New SD Express 8.0 cards double the speed of today's fastest microSDs
Forward-looking: Most highly recommended microSD cards offer maximum read speeds of around 250 MB/s, but the impending release of the Nintendo Switch 2 has increased demand for significantly faster memory cards. Just as microSD Express technology begins to gain mainstream acceptance, one vendor has introduced a new standard that doubles the theoretical performance.

Adata has unveiled a new performance standard for SD Express cards, boasting a 1.6 GB/s maximum read speed and a 1.2 GB/s write speed – roughly double the fastest models currently available. Although the company hasn't disclosed release details, users shopping for Nintendo Switch 2 memory cards will have another, faster (and likely more expensive) option.

The SD Express standard was introduced in 2018 with version 7.0, offering read speeds of up to 1 GB/s by leveraging NVMe SSD technology. However, because few portable devices required such high transfer rates, SD Express languished in obscurity for years, while most users and manufacturers stuck with more affordable and established options.

Also read: microSD and SD Card Buying Guide

Nintendo's upcoming Switch 2 handheld could change that when it launches on June 5. It is expected to be the first mass-market device to require microSD Express cards, and stores across Japan have already reported selling out of them. Games purchased on physical Switch 2 game cards or installed on memory cards will benefit from significantly faster load times compared to the original Switch. For example, Nintendo recently demonstrated that The Legend of Zelda titles can load new areas more than twice as quickly on the Switch 2, taking just a few seconds.

Image: Lexar microSD Express card

Online retail listings show that only SanDisk and Lexar currently offer microSD Express cards, featuring read speeds of around 900 MB/s. Adata has raised the bar with the new SD 8.0 standard, although it remains unclear how quickly other vendors will follow suit.

Adata also has yet to reveal pricing details, but these next-gen memory cards are unlikely to be cheap. SanDisk's 256 GB SD 7.0 cards start at $60, while Lexar's 512 GB models retail for around $100.

In addition, Adata recently announced several new flash memory and SSD products. The UE720 is a USB 3.2 Gen2 flash drive with read and write speeds of 500 MB/s and 450 MB/s, respectively, available in capacities up to 256 GB. The company's new EC680 M.2 SSD enclosure uses a USB 3.2 Gen2x1 interface and a Type-C connector to achieve read/write speeds of approximately 1,050/1,000 MB/s. It supports 2230, 2242, and 2280 form factors.
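For a sense of what those speeds mean in practice, here is a rough copy-time comparison for a hypothetical 20 GB game install (illustrative sequential-read figures only; real-world performance varies):

```typescript
// Rough copy-time comparison for a hypothetical 20 GB game install.
// Sequential-read figures only; real-world speeds will vary.
const gameSizeMB = 20 * 1000;

const tiers: Record<string, number> = {
  "UHS-I microSD (~250 MB/s)": 250,
  "microSD Express 7.0-era (~900 MB/s)": 900,
  "SD Express 8.0 (~1,600 MB/s)": 1600,
};

for (const [name, mbPerSec] of Object.entries(tiers)) {
  console.log(`${name}: ~${(gameSizeMB / mbPerSec).toFixed(0)} s`);
}
// ~80 s vs ~22 s vs ~13 s – the 8.0-tier cards roughly halve the
// transfer time of today's fastest microSD Express models.
```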
-
New study reveals cybersecurity threats in next-generation DNA sequencing
A hot potato: Next-generation DNA sequencing (NGS) faces mounting scrutiny over its cyber vulnerabilities. While NGS has revolutionized fields ranging from cancer diagnostics to infectious disease tracking, a new study warns that the systems enabling these advances could also be exploited as a gateway for hackers and malicious actors.

The research, published in IEEE Access and led by Dr. Nasreen Anjum of the University of Portsmouth's School of Computing, is the first to systematically map cyber-biosecurity threats across the entire NGS workflow.

NGS technology, which allows for rapid and cost-effective sequencing of DNA and RNA, underpins not only cancer research and drug development but also agricultural innovation and forensic science. Its ability to process millions to billions of DNA fragments simultaneously has dramatically lowered the cost and increased the speed of genome analysis, making it a staple in labs worldwide.

However, the study highlights a less-discussed side of this technological leap: the growing number of vulnerabilities at each stage of the NGS pipeline. From sample preparation to sequencing and data analysis, each step relies on specialized instruments, complex software, and networked systems. According to Dr. Anjum, these interconnected processes create multiple points where security can be breached.

As vast genomic datasets are increasingly stored and shared online, the risk of cybercriminals accessing and misusing this sensitive information grows. The study warns that such breaches could enable not only privacy violations or identity tracing but also more sinister possibilities, such as data manipulation or the creation of synthetic DNA-encoded malware. "Protecting genomic data isn't just about encryption – it's about anticipating attacks that don't yet exist," Dr. Anjum said, calling for a fundamental rethink in how the field approaches security.

The research was conducted with experts from Anglia Ruskin University, the University of Gloucestershire, Najran University, and Shaheed Benazir Bhutto Women's University. The team identified several emerging threats, including AI-driven manipulation of genomic data and advanced re-identification techniques that could compromise individual privacy. These risks, they argue, extend beyond the individual to threaten scientific integrity and even national security. Despite these dangers, Dr. Anjum notes that cyber-biosecurity remains a neglected area, with fragmented protections and little collaboration between the disciplines of computer science, bioinformatics, biotechnology, and security.

To address these challenges, the study recommends a suite of practical solutions: secure sequencing protocols, encrypted data storage, and AI-powered anomaly detection systems. The authors urge governments, regulatory bodies, and academic institutions to prioritize investment in research, education, and policy development to close the current gaps in biosecurity.

The urgency of these recommendations is heightened by the rapid drop in sequencing costs and the proliferation of NGS applications. Where sequencing a human genome once cost tens of thousands of dollars, some companies now offer the service for as little as $200, with prices expected to fall further. This affordability has democratized access to genomic data and expanded the attack surface for potential cyber threats.
-
Microsoft warns AI is making it faster and easier to create online scams
In brief: It seems one profession that really loves generative AI is that of the cybercriminal. Microsoft warns that the technology has evolved to the point where creating an online scam can now take minutes rather than days or weeks and requires little technical knowledge.

In its latest edition of the Cyber Signals report, Microsoft writes that AI has started to lower the technical bar for fraud and cybercrime actors looking for their own productivity tools.

The range of cyber scams AI can be used for is extensive. The tools can, for example, help create social engineering lures by scanning and scraping the web to build detailed profiles of employees or other targets. There are also cases of complex fraud schemes that use AI-enhanced product reviews and AI-generated storefronts, with scammers creating entire sham websites and fake e-commerce brands, complete with fabricated business histories and customer testimonials. Scammers can even use AI for customer service chatbots that can lie about unexplained charges and other anomalies.

It's long been reported that advancing deepfake technology is making this a popular tool for scammers. We've seen it used to create fake celebrity endorsements, impersonate friends and family members, and, as Microsoft notes, for job interviews – both hiring and applying – conducted via video calls. The company notes that lip-syncing delays, robotic speech, or odd facial expressions are giveaway signs that the person on the other end of a video call might be a deepfake.

Microsoft recommends that consumers be wary of limited-time deals, countdown timers, and suspicious reviews. They should also cross-check domain names and reviews before making purchases, and avoid using payment methods that lack fraud protections, such as direct bank transfers and cryptocurrency payments.

Tech support scams are also on the rise. While AI doesn't always play a part in these incidents, tech support scammers often pretend to be legitimate IT support from well-known companies and use social engineering tactics to gain the trust of their targets. The Windows Quick Assist tool, which lets someone use a remote connection to view a screen or take it over to fix problems, is regularly used in these scams. As such, Microsoft is adding warnings to Quick Assist and requires users to check a box acknowledging the security implications of sharing their screen. Microsoft also recommends using Remote Help instead of Quick Assist for internal tech support.

While the post focuses on the dangers of AI scams, it also notes that Microsoft continues to protect its platforms and customers from cybercriminals. Between April 2024 and April 2025, Microsoft stopped $4 billion worth of fraud attempts, rejected 49,000 fraudulent partnership enrollments, and blocked about 1.6 million bot signup attempts per hour.
-
FTC sues Uber over deceptive subscription billing and cancellation practices
What just happened? The FTC has filed a lawsuit against Uber over allegations that it engaged in deceptive billing and cancellation practices related to its Uber One subscription service. According to the agency, the ride-hailing giant made the process of cancelling needlessly difficult, charged some people during their free trial, and even signed up customers without their consent.

Uber launched Uber One in 2021 with the lure of free delivery on eligible Uber Eats orders, discounts, priority service, and exclusive offers. The subscription costs $10 per month or $96 per year.

The FTC's complaint alleges that Uber made ending Uber One subscriptions intentionally difficult, despite promising customers they could "cancel anytime." It's claimed that some customers who signed up for a free trial were charged before the trial ended, even though Uber said they could cancel freely during this period. The complaint adds that for some people, ending an active subscription involved navigating through up to 23 screens and taking 32 actions.

Image: Uber One perks

The agency alleges that Uber would remove the option to cancel from its app if a customer was within 48 hours of their billing date. In these cases, users were told to contact customer support without being told how to reach them. There are also cases of customers who did reach customer support and were promised a return call but were billed for another cycle while waiting to hear back.

Some customers complained that they were signed up for Uber One without giving their consent. One person said they were charged despite not having an Uber account. The FTC also disputes Uber's claim that Uber One saves customers $25 per month due to its benefits (the website now claims it is $27 per month). The agency says the figure is inaccurate and doesn't account for the subscription's monthly cost when calculating savings.

Uber said it was "disappointed" that the FTC had chosen to move forward with the lawsuit. It said that canceling Uber One can now be done anytime in-app and takes most people less than 20 seconds. It added that it does not sign up or charge customers without their consent.

The FTC alleges that Uber's practices violate the FTC Act and the Restore Online Shoppers' Confidence Act (ROSCA). "Americans are tired of getting signed up for unwanted subscriptions that seem impossible to cancel," said FTC Chairman Andrew Ferguson. "The Trump-Vance FTC is fighting back on behalf of the American people. Today, we're alleging that Uber not only deceived consumers about their subscriptions, but also made it unreasonably difficult for customers to cancel."
-
There is a solitary black hole wandering near the center of our galaxy, astronomers confirm
Invisible dark: Lone black holes passing through the Milky Way should be a pretty common occurrence, but they are notoriously hard to spot. According to recently published research, astronomers have now confirmed the existence of the first-ever lone black hole – and it's essentially in our neighborhood.

A team of US astronomers led by Kailash Sahu said they have finally discovered the first isolated stellar-mass black hole traveling through space by itself. The researchers initially spotted this dark object in 2022, in the Sagittarius constellation, but their claim was disputed by a different team. However, the two groups are now in agreement: this particular black spot in the vastness of space really is a black hole.

Supermassive black holes are typically located at the center of large galaxies, like the well-known Sagittarius A* lying at the center of the Milky Way. Potential candidates for "wandering" supermassive black holes, moving through space after being ejected from their original location, have been considered as well.

The black hole described in the recently published research was discovered thanks to precise stellar observations made through the Hubble Space Telescope. The researchers made their original discovery by analyzing Hubble measurements recorded between 2011 and 2017, while their latest work relies on more Hubble data taken between 2021 and 2022. Additional observations by the orbiting Gaia telescope were also used.

The wandering black hole was discovered thanks to the object's influence on surrounding stars. The black hole has no "companion" star, but it made itself known while passing in front of a dim background star. The "gravitational lens" effect magnified that star's light and shifted its apparent position in space as well. The black hole passed the star in 2011, the researchers explain, but the star's position is still changing to this day. "It takes a long time to do the observations," Sahu stated, adding that "everything is improved if you have a longer baseline and more observations."

The latest data confirms that the wandering black hole is around seven times the mass of our Sun. Based on the new observations, the second team of researchers revised their original hypothesis about the dark object, which they thought could be a neutron star. They now estimate the object has around six times the mass of the Sun, which is consistent with the new research by Sahu's team.

The first wandering black hole ever discovered lies 5,000 light-years from Earth, making it far closer to our planet than Sagittarius A* (27,000 light-years). New solitary black holes could be discovered thanks to the Nancy Grace Roman Space Telescope, which is expected to launch in 2027 – if the current US administration does not cut all "unnecessary" funds from space exploration projects and NASA before then.
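For readers wondering how a mass estimate falls out of an event like this: the angular Einstein radius sets the scale of the background star's apparent shift, and it depends on the lens mass and the two distances involved. A rough order-of-magnitude check, assuming a background source near the galactic bulge at ~26,000 light-years (an illustrative guess; the article doesn't give the source distance):

```typescript
// Order-of-magnitude Einstein radius for a ~7 solar-mass lens.
// theta_E = sqrt( (4GM/c^2) * (D_S - D_L) / (D_L * D_S) )
// The source distance below is an assumption for illustration only.
const G = 6.674e-11;         // gravitational constant, m^3 kg^-1 s^-2
const c = 2.998e8;           // speed of light, m/s
const M = 7 * 1.989e30;      // seven solar masses, in kg
const LY = 9.461e15;         // metres per light-year

const dLens = 5_000 * LY;    // reported lens distance
const dSource = 26_000 * LY; // assumed bulge source distance

const thetaE = Math.sqrt(
  ((4 * G * M) / c ** 2) * ((dSource - dLens) / (dLens * dSource))
);
console.log((thetaE * 206_265e3).toFixed(1)); // ~5.5 milliarcseconds
```

A shift of a few milliarcseconds is minuscule, which is why pinning down the mass took more than a decade of Hubble astrometry.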
-
New Windows 11 setting lets users kill stubborn apps instantly from taskbar
In a nutshell: Microsoft has quietly introduced a powerful new feature to Windows 11, allowing users to deal with unresponsive applications faster. The "End Task" button, now available directly from the taskbar, streamlines a process that previously required several steps and a trip into the depths of Task Manager.

For years, the standard response to a frozen app was either to reboot the system or summon Task Manager – often by pressing Ctrl + Alt + Delete – and hunt through the list of running processes to find and terminate the problematic program. While effective, this approach was cumbersome. The new feature spotted by Windows Latest makes that process faster and more convenient. By enabling the "End Task" option, users can right-click any open application on the taskbar and immediately force it to close.

To activate the tool, go to Settings > System > For Developers and toggle on the "End Task" setting. Once enabled, the option appears in the context menu whenever you right-click an app's icon on the taskbar.

Its effectiveness sets "End Task" apart from the familiar "Close Window" option. While "Close Window" merely requests that an application shut down – sometimes leaving background processes running or failing to close unresponsive apps – "End Task" forcefully terminates the entire process. This mirrors the functionality of Task Manager's "End Task" command, but with the added convenience of being accessible from the taskbar.

Windows first attempts a standard shutdown when the button is pressed, like clicking the "X" in an app's title bar. If the application fails to respond, Windows escalates by identifying the main process and any related processes and terminating them all, ensuring that even stubborn, unresponsive programs are closed. This is particularly useful for apps that hang or freeze, bypassing the need to track down every process in Task Manager manually.

However, its power is limited. The "End Task" button cannot terminate system processes such as File Explorer; Task Manager remains indispensable for these. Additionally, users should be aware that using "End Task" is akin to pulling the plug: any unsaved data in the forcibly closed application will be lost, as the app is not given a chance to save its state or perform cleanup routines.

This feature is tucked away in the For Developers section of Settings and does not require enabling Developer Mode. It is available to all users running supported builds of Windows 11.

Image credit: Windows Latest
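The escalation Windows applies – ask politely first, then terminate forcefully – is a common pattern in process management. Here is a hedged sketch of the same idea using Node.js APIs with POSIX-style signals; it is an analogy to the behavior described above, not Microsoft's actual End Task implementation:

```typescript
// Generic "graceful close, then force kill" escalation, sketched with
// Node.js child_process APIs. An analogy to the behavior described
// above – not Windows' actual End Task code.
import { ChildProcess } from "node:child_process";

function endTask(proc: ChildProcess, graceMs = 3000): void {
  proc.kill("SIGTERM"); // polite request, like clicking the "X"

  const timer = setTimeout(() => {
    // Still running after the grace period: terminate forcefully.
    // Any unsaved state in the target process is lost at this point.
    proc.kill("SIGKILL");
  }, graceMs);

  proc.once("exit", () => clearTimeout(timer)); // closed gracefully in time
}
```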
-
Gen Z "digital natives" to be taught empathy, time management, and phone etiquette in soft skills program
A hot potato: It's been said that those born at the dawn of the internet age have grown up lacking the life skills essential for many jobs. In an attempt to address this problem, Gen Z students in Manchester, England, are to learn "soft skills" that include empathy, time management, and speaking to people in person and on the phone.

Generation Z, typically defined as those born between 1997 and 2012 (give or take), is widely considered to consist of digital natives – people who grew up surrounded by digital technology in the information age, making them comfortable and fluent in all things tech. But not typing, apparently.

Being a digital native might mean that Gen Z tends to be more tech savvy, but being raised during a time when most interactions moved online and the world experienced turbulent periods has left many of this generation with few social skills. One employer said digital natives struggled to find work as they were too afraid to speak on the phone or do face-to-face job interviews.

The Guardian reports that the Unesco-partnered non-profit Higher Health launched Skills 4 Living in Greater Manchester this week. It hopes to reach 10,000 young people in the city and has partnered with higher education providers, including the University of Manchester. While the curriculum will be delivered online, students will be expected to complete assessments by interacting in person with others.

In addition to learning empathy and time management, there will be seminars on spotting fake news, staying safe on the internet, how to challenge racism, sexism, and homophobia, gambling awareness, and avoiding scams.

It's believed that growing up with the internet, social media, and texting has left Gen Z with fewer "everyday but essential" communication skills than older generations.

Image courtesy of Cake.com

There are also more cases of mental health issues among young people than in the past. Prof Sandeep Ranote, a leading child psychiatrist, said: "When I started in my career as a consultant in 2005, one in 10 young people had a diagnosable mental health condition. We're now [at] one in five. That's not okay. Could it have been prevented? Yes is the answer. This is a toolkit to prepare young people for, even in the space of 25 years, a very different global world."

In December, a survey found that over a quarter of executives wouldn't consider hiring a recent college graduate today due to a lack of soft skills that included communication, problem solving, adaptability, and conflict resolution. Worldwide, about a fifth of those aged between 15 and 24 were not in employment, education, or training in 2023. While the lack of soft skills is a factor – and some put it down to laziness and selectiveness – others blame the rise of useless university degrees.
-
Logitech quietly raises prices on popular PC accessories by up to 25% after tariffs
What just happened? Industry watchers have been closely monitoring signs of rising prices in consumer technology. Thanks to research by YouTuber Cameron Dougherty, we now have clear evidence of price increases in popular PC accessories. Dougherty has done the legwork by analyzing a broad range of Logitech products, reporting price hikes of up to 25 percent on some of the company's most sought-after keyboards and mice, among other items.

In his video, Dougherty raises questions about the impact of ongoing tariffs and the future affordability of tech gear in the United States. Flagship products such as the Logitech MX Master 3S mouse and the K400 Plus Wireless Touch Keyboard were among those affected. The latter increased in price from $27.99 to $34.99 – a modest $7 jump that nonetheless represents a significant 25 percent rise. Dougherty's findings also note that while some products have become more expensive, others have remained stable or even dropped in price. For instance, the G Pro X Superlight mouse dropped from $159.99 to $149.99.

To verify these claims, Tom's Hardware conducted its own investigation and corroborated several of Dougherty's observations. For example, the MX Keys S keyboard is now listed at $130 on Logitech's official website, reflecting an 18 percent increase. The MX Master 3S mouse has climbed 20 percent, from $100 to $120. The K400 Plus Wireless Touch keyboard's price hike, though smaller in absolute terms, stands out for its percentage jump.

Notably, these increases have not been accompanied by any public announcement from Logitech. Some items have appeared on sale at major retailers like Amazon, but the discounted prices are still higher than historical norms, suggesting a new baseline has been established.

The reasons behind these changes are complex but appear to be closely tied to the turbulent tariff environment. The Trump administration's tariffs on imported goods, especially those from China, have sent ripples through the tech industry. Many manufacturers, including Logitech, rely heavily on Chinese production, leaving them particularly vulnerable to these policy shifts. Earlier this month, Logitech withdrew its financial forecast for the upcoming fiscal year, explicitly citing ongoing uncertainty around tariffs as a driving factor. While some tariffs have been temporarily paused, those on Chinese imports remain steep, forcing companies to navigate a landscape of unpredictable costs and supply chain disruptions.

Logitech is not alone in adjusting its pricing. Other brands, such as accessory maker Anker, which is based in China, have also raised prices on products like chargers, with reported increases of around 18 percent. Industry experts caution that these adjustments may not be the last, as manufacturers continue to adapt to evolving trade policies and the potential for further escalation in the U.S.-China trade dispute.
-
Sam Altman says polite ChatGPT users are burning millions of OpenAI dollars
Manners are not ruining the environment: The costs of training and running artificial intelligence models are massive. Even excluding everything but electricity, AI data centers burn through over $100 million a year to process user prompts and model outputs. So, does saying "please" and "thank you" to ChatGPT really cost OpenAI millions? Short answer: probably not.

Some shocking headlines involving the costs of being polite to AI chatbots like ChatGPT have circulated over the past few days. A few examples include:

Your politeness could be costly for OpenAI – TechCrunch
Saying 'please' and 'thank you' to ChatGPT costs OpenAI millions, Sam Altman says – Quartz
Being nice to ChatGPT might be bad for the environment. Here's why – Laptop

The news stems from an offhand comment Sam Altman made on X. It began with a simple question: How much money has OpenAI lost in electricity costs from people saying "please" and "thank you" to its language models? Altman replied, "Tens of millions of dollars well spent – you never know."

That one-liner was enough to send outlets like the New York Post and Futurism down a rabbit hole of speculation, trying to estimate the computing cost of civility. The logic goes like this: every extra word adds tokens to a prompt, and those extra tokens require more computational resources. Given the scale of ChatGPT's user base, these seemingly trivial additions can add up.

However, several factors complicate the math behind Altman's comment. First is the actual cost per token. ChatGPT says GPT-3.5 Turbo costs roughly $0.0015 per 1,000 input tokens and $0.002 per 1,000 output tokens. "Please" and "thank you" typically add between two and four tokens in total. So the cost per use amounts to tiny fractions of a cent – a few millionths of a dollar per exchange. Based on rough estimates, that translates to about $400 a day, or $146,000 a year. That's several orders of magnitude lower than "tens of millions."

As for real energy costs, the Electric Power Research Institute estimates OpenAI's monthly electricity bill at around $12 million, or $140 million a year. That figure includes every interaction – not just polite ones. So while it's theoretically possible that courteous prompts account for more than $10 million annually, we simply don't have the data to break that down. Only OpenAI's internal metrics can say for sure.

Furthermore, Altman's phrasing wasn't literal. The follow-up – "you never know" – suggests the remark was tongue-in-cheek. It reads more like a wry endorsement of politeness than a real financial estimate. He likely meant that in an era when courtesy feels increasingly rare, maybe it's worth the negligible cost, whether $400 or $40 million.

Sure, bots don't have feelings – but if humanity ends up answering to a superintelligent AI someday, it might just remember who was polite – "you never know."

Image credit: Abaca Press
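The arithmetic above is easy to reproduce. Treat the quoted GPT-3.5 Turbo input price and a three-token politeness overhead as given, and the only real unknown is daily message volume – the figure below is purely an assumption chosen to land near the article's estimate:

```typescript
// Reproducing the back-of-the-envelope politeness cost.
// Token price is the article's quoted figure; message volume is an
// assumption for illustration.
const inputPricePerToken = 0.0015 / 1000; // USD per GPT-3.5 Turbo input token
const extraTokens = 3;                    // "please" + "thank you", roughly

const costPerExchange = extraTokens * inputPricePerToken;
console.log(costPerExchange); // 4.5e-6 USD – a few millionths of a dollar

// Reaching ~$400/day requires on the order of 100 million polite
// messages daily:
const politeMessagesPerDay = 90_000_000;  // assumed volume
console.log((costPerExchange * politeMessagesPerDay).toFixed(0)); // ~405
```

Under these API-price assumptions the annual total lands around $150,000; as the article notes, only OpenAI's internal energy metrics could support a figure anywhere near "tens of millions."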
-
Open source AI is the new Linux, only faster
Why it matters: When Liang Wenfeng launched his advanced AI model DeepSeek on Hugging Face, it marked a turning point for artificial intelligence and the global open-source movement. Its debut shifted the focus from a Chinese national achievement to a broader story about how open collaboration can cross borders and reshape innovation.

MongoDB Developer Relations head and open-source advocate Matt Asay argues that DeepSeek represents more than just Chinese innovation – it shows how open source reshapes ownership, collaboration, and the pace of technological progress. "It stopped being Chinese the minute it was released on Hugging Face and no one can put the open source genie back in the bottle – not even the U.S. government," Asay wrote in InfoWorld, where he moonlights as a contributing writer.

DeepSeek's release sparked a wave of global developer activity, including a high-profile effort from the Beijing Academy of Artificial Intelligence (BAAI), which launched a rival project called OpenSeek. The initiative aims to outperform DeepSeek while bringing together the global open-source community to advance algorithms, data, and infrastructure. Policymakers, particularly in the United States, responded swiftly and harshly – adding the BAAI to a government blacklist.

To Asay, attempts to rein in open-source artificial intelligence are futile and reflect a profound misunderstanding of the movement. "DeepSeek didn't just have a moment. It's now very much a movement," he observes. "The open source AI ecosystem surrounding it has rapidly evolved from a brief snapshot of technological brilliance into something much bigger – and much harder to stop."

The scale of the movement is staggering. Thousands of developers – from academic researchers to hobbyists – are working to refine and expand open-source artificial intelligence models like DeepSeek. Platforms like Hugging Face now serve as global collaboration hubs, driving innovation faster than even the most nimble corporate labs. While Hugging Face may be a single company, the communities it fosters are far more durable – and beyond the reach of centralized control.

This democratization of artificial intelligence is already reshaping the real world. Companies like Perplexity are incorporating open-source models into consumer products, proving that advanced AI is no longer the sole domain of tech giants or state-funded labs. Asay envisions a future where powerful AI tools are within reach of everyone – designed to be modified, improved, and expanded by a global network of developers.

To him, the parallels to the early rise of Linux are unmistakable. "It's Linux all over again. One passionate start becomes a movement, then infrastructure, then a global standard," he explains. "The key difference is that this time it's happening in months, not decades." Linux thrived not due to government or corporate backing but because it sparked a wave of developer contributions and innovation. This same dynamic is now driving the rapid advancement of open-source AI.

In contrast, organizations clinging to proprietary models, like OpenAI, are fighting a losing battle. As Asay puts it, "They're attempting to dam an ocean," underscoring the futility of trying to contain a movement defined by decentralization and collaboration. While some companies nod to open-source ideals, few have matched the transparency and openness shown by efforts like DeepSeek and OpenSeek.
Asay is clear-eyed about the challenges facing policymakers. "Open source isn't subject to export controls or trade embargoes. It's a pull request away, all day every day," he notes. Attempts to slow or block the spread of open-source AI will only backfire, harming domestic innovation and pushing leadership elsewhere. The lesson from recent technology history is that open ecosystems, driven by global collaboration, adapt and evolve far faster than closed, centralized projects.

The rise of DeepSeek and its open-source successors marks a fundamental shift in how technology is developed and distributed. Governments, corporations, and developers now face a choice: engage with the open-source AI movement or watch others pull ahead. Open-source artificial intelligence is not a distant trend – it is already transforming the landscape. "No one can own this wave, no one can stop it, and no one can contain it," Asay concludes.
-
Synology to require branded hard drives for future NAS models
Certified Overspending: Synology is known for its NAS (network-attached storage) appliances and other related products. While the company doesn't manufacture its own disk drives, it is now selling a "certified" line of HDDs for "maximum" reliability and compatibility.

The next high-end NAS line from Synology will require the use of the company's branded hard disk drives. The manufacturer announced the change in a recent press release, stating it will increasingly rely on a proprietary ecosystem for upcoming storage products. This new requirement will affect NAS models in the Plus series launching in 2025 and beyond, Synology said.

NAS appliances using Synology-branded hard drives will reportedly offer customers several benefits, including higher performance, improved reliability, and more efficient support. However, the Plus series line of 3.5-inch HDDs is essentially a set of standard drives sourced from established manufacturers like Toshiba and Seagate. These drives use conventional magnetic recording technology to ensure consistent performance during I/O operations.

Plus series NAS models released before 2025 will remain compatible with traditional, non-certified hard drives – though this does not apply to XS Plus or rack-mounted models. Even hard drives already in use with older Plus NAS appliances should continue to function "without restrictions," but they may lose access to certain features in the future. Some of these restrictions include the inability to create storage pools and the loss of access to official support. Synology has indicated it will not assist customers using "incompatible" storage media. Additional features that will soon be limited to Synology-branded drives include volume-wide deduplication, lifespan analysis, and automated firmware updates.

Also see: QNAP and Synology buying recommendations in our Best Storage 2025 list

Synology has confirmed the need for branded drives, stating that its "Product Compatibility List" will be updated with additional hard drive models. These drives have been "thoroughly vetted" through extensive testing and a rigorous validation process designed to minimize failures and compatibility issues over time. Customers will also be able to submit third-party drives for testing, offering a chance for those units to meet Synology's "stringent" standards and be validated for use. This, the company believes, provides a flexible enough ecosystem for users unwilling to pay a premium for Synology-branded drives, though the company presents this new proprietary approach as a significant improvement for the sake of reliability.
-
New Star Wars movie starring Ryan Gosling set for 2027, Lucasfilm confirms
Something to look forward to: Lucasfilm just gave Star Wars fans a huge reason to celebrate with the announcement of a new installment in the blockbuster franchise. Provisionally titled Star Wars: Starfighter, the film is set to begin production this fall, with a major theatrical release planned for Memorial Day 2027.

The film will star Oscar-nominated actor Ryan Gosling, known for roles in the 2023 fantasy comedy Barbie and the 2004 romantic drama The Notebook. It will be directed by Shawn Levy, who helmed Marvel's 2024 blockbuster Deadpool & Wolverine. Jonathan Tropper will serve as one of the lead writers.

Starfighter will take place five years after Star Wars: Episode IX – The Rise of Skywalker, though it won't serve as a direct sequel to the 2019 film. Instead, it will be a standalone story featuring "an entirely new adventure" with a cast of all-new characters.

It remains unclear what role Gosling will play, as neither Lucasfilm nor the actor has revealed any character details. However, speaking at a fan event in Japan, Gosling expressed excitement about the project, saying it has "so much adventure, so much heart and original character." He also claimed that the movie will offer an opportunity to showcase "a side of the universe that we may not have seen."

Speaking about his personal connection to the "galaxy far, far away," Gosling shared that he used to dream about Star Wars even before seeing any of the movies. He even showed an image of his childhood Star Wars-branded bedding, saying it helped shape his early understanding of what movies could be.

Director Shawn Levy added that the new film will blend the "fun" and spirit of Star Wars with a fresh, original storyline. He admitted to feeling "scared, nervous, and intimidated" by the responsibility of working within such a beloved universe, but expressed confidence that the story and cast would strike the right chord with die-hard fans.

Before Starfighter arrives in theaters, Disney is set to release another film in the expanded Star Wars franchise next year. Titled The Mandalorian & Grogu, it will continue the story from the hit 2019 series The Mandalorian and will be directed by Jon Favreau.

Disney is also developing additional Star Wars films, including one featuring Daisy Ridley reprising her role as Rey Skywalker in a follow-up to The Rise of Skywalker. However, since its announcement in 2023, the project has faced multiple delays, with rumors suggesting it may have been quietly shelved. Lucasfilm maintains that the film is still in development, though no official updates have been shared in years.
-
You can now generate AI videos on your gaming laptop with just 6GB of VRAM
In brief: AI video generation may soon no longer be limited to expensive subscriptions or high-powered servers. Thanks to a recent breakthrough, even a gaming laptop could generate full-length AI videos.

The breakthrough comes from Lvmin Zhang of GitHub and Maneesh Agrawala of Stanford University. The duo developed FramePack, a neural network architecture that enables high-quality video diffusion with as little as 6GB of VRAM. This is a significant achievement, especially given the model's size – 13 billion parameters – which allows it to generate full 60-second clips at 30 FPS using only a mid-range GPU.

The key lies in how FramePack operates. Traditional video diffusion models rely on previously generated frames to predict the next one. As the video length increases, so does the "temporal context" – the number of past frames the model must consider – resulting in higher memory demands. This is why most models require 12GB of VRAM or more to run efficiently.

FramePack flips that on its head. Instead of letting memory usage balloon with longer clips, it compresses input frames based on importance into a fixed-length context, keeping the memory footprint compact and consistent regardless of video duration. This innovation allows the model to process thousands of frames, even with large architectures, on laptop-grade GPUs. It also enables training with batch sizes comparable to those used in image diffusion models.

But FramePack doesn't just reduce memory demands – it also addresses drifting, a common issue where video quality degrades over time. By using intelligent compression patterns and scheduling techniques, FramePack helps maintain visual consistency from beginning to end.

To top it off, the model includes a user-friendly GUI. Users can upload images, enter text prompts, and view a live preview as frames are generated. On an RTX 4090, optimized generation speeds reach up to 0.6 frames per second. Naturally, performance is lower on less powerful GPUs, but even an RTX 3060 can handle it.

Currently, FramePack supports Nvidia's RTX 30, 40, and the new 50 series GPUs, provided they support FP16 or BF16 data formats. There's no confirmed support yet for AMD or Intel GPUs, but the model works across multiple operating systems, including Linux. You can find full model details and source code on GitHub.
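FramePack's core idea – compress older frames more aggressively so the total context stays fixed – can be sketched in the abstract. The toy scheduler below allocates a geometrically decaying token budget to past frames; it is a conceptual illustration of a fixed-length context, not the paper's actual architecture:

```typescript
// Conceptual sketch of a fixed-budget frame context: recent frames get
// the most tokens, older frames progressively fewer, so the total stays
// bounded no matter how long the video grows. Illustrative only – not
// FramePack's actual architecture.
interface Frame { id: number; tokens: number } // tokens = uncompressed cost

function packContext(history: Frame[], budget: number): Map<number, number> {
  const allocation = new Map<number, number>();
  let perFrame = budget / 2; // the newest frame gets half the budget

  // Walk backwards from the most recent frame, halving the allowance
  // each step; the geometric series keeps the sum under `budget`.
  for (let i = history.length - 1; i >= 0 && perFrame >= 1; i--) {
    allocation.set(
      history[i].id,
      Math.min(history[i].tokens, Math.floor(perFrame))
    );
    perFrame /= 2;
  }
  return allocation; // frames absent from the map are dropped entirely
}

// 1,000 frames of history, a 512-token budget: only about nine frames
// survive, each more heavily compressed the older it is.
const history = Array.from({ length: 1000 }, (_, id) => ({ id, tokens: 256 }));
console.log(packContext(history, 512));
```

However the compression is actually scheduled, the payoff is the same: the model's memory footprint stops scaling with clip length, which is what makes 6GB of VRAM workable.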
-
ChatGPT gets scarily good at guessing photo locations, sparking doxxing concerns
A hot potato: Now that people have mostly stopped using ChatGPT to turn themselves into action figures, it seems the next trend involving the AI is using it to guess locations based on photos. While some are finding this reverse location search functionality fun, it raises several privacy concerns, especially when it comes to doxxing.

OpenAI released its latest o3 and o4-mini models last week, which can "reason" through uploaded images. This means it can crop, rotate, and zoom in on photos, even if they're of poor quality. Combined with the models' other abilities, people have found that they are particularly good at identifying locations in uploaded photos.

Users are feeding o3 images of everything from restaurant menus to selfies and telling the model to imagine it is playing the online guessing game GeoGuessr, which tasks players with guessing locations based on Google Street View images.

It's easy to see this as all fun and games, but there's a potentially darker side. This reverse image search could easily allow someone to be doxxed – the public revealing of where they live or are located – based on minute details in an image that most humans would not notice. A simple selfie with few background items, or a story on social media, could be fed into ChatGPT to learn where it was taken.

While users have praised the o3 model's ability to identify locations from images, it isn't something that arrived with the latest releases. TechCrunch notes that GPT-4o, which was released without image reasoning, was able to come up with the same answers as o3 more often than not, and it did so in less time. However, there was one instance in the publication's testing where o3 was able to correctly guess that a picture of a purple rhino head mounted in a bar was from a Williamsburg speakeasy – GPT-4o thought it was from a UK pub.

It's important to note that even o3 doesn't get its guesses right every time, and sometimes it gets stuck in a loop when trying to determine a location. An OpenAI spokesperson said that visual reasoning will make its tools more helpful in areas like accessibility, research, or identifying locations in emergency response. As for preventing doxxing, the spokesperson said the models refuse requests for private or sensitive information, and the company has added safeguards intended to prohibit the models from identifying private individuals in images.

Masthead: Alex Shuper