TechSpot
Tech Enthusiasts - Power Users - IT Professionals - Gamers
Recent Updates
  • Snap CEO Evan Spiegel takes a dig at Meta's history of copying Snapchat
    www.techspot.com
    Trolling 101: Snap CEO Evan Spiegel has taken a public jab at Mark Zuckerberg and Meta, updating his LinkedIn profile with an unexpected addition to his job titles. Alongside "loving husband and father of four boys," he now lists himself as "VP Product @ Meta." This tongue-in-cheek update clearly references Meta's history of replicating features that Snapchat first introduced.
    The rivalry between Snap and Meta is long-standing, dating back to 2013 when Spiegel famously rejected Facebook's $3 billion acquisition offer. Since then, Meta's platforms, including Facebook and Instagram, have consistently rolled out features that closely resemble Snapchat's innovations.
    Perhaps the most notable example of this imitation is Instagram Stories, launched in 2016. The feature, which allows users to post ephemeral photos and videos, was a direct copy of Snapchat's core functionality, introduced three years earlier. Meta didn't stop there, later expanding the Stories feature to Facebook and WhatsApp. Other Snapchat innovations that found their way to Instagram include disappearing direct messages and face-altering filters, further blurring the lines between the two platforms.
    Meta's aggressive copying of Snapchat's features has significantly impacted Snap's market share and growth trajectory. After its launch, Instagram Stories quickly outpaced Snapchat in daily active users, surpassing 400 million by 2018, more than double Snapchat's total user base. This disparity led to stagnation and even a decline in Snapchat's user growth, as many users migrated to Instagram for its larger network and integrated features.
    Interestingly, Spiegel's attitude toward Meta's copycat behavior has been somewhat ambivalent. In 2018, he expressed a sense of pride, stating that it was the "best feeling" to create something so simple and elegant that competitors could only replicate it exactly. However, he also quipped that he wished Meta would copy Snapchat's data protection practices as well.
    Meta and Instagram executives have shown little remorse for their imitative practices. Kevin Weil, former Instagram VP of product, defended the approach in 2017, arguing that it would be foolish to ignore good ideas simply because they originated elsewhere. He framed it as a natural part of the tech industry's evolution, where innovative concepts spread across the entire sector. Despite this justification, some acknowledgment of Snapchat's influence has been made: former Instagram CEO Kevin Systrom once conceded that Snapchat "deserved all the credit" for the Stories feature.
  • Used Seagate drives sold as new traced back to crypto mining farms
    www.techspot.com
    A hot potato: A widespread scandal involving used Seagate hard drives fraudulently sold as new has continued to escalate, with new evidence suggesting that the drives originated from Chinese cryptocurrency mining farms. The drives, many of which had logged 15,000 to 50,000 hours of prior use, were reportedly altered to appear unused before re-entering the retail supply chain.
    The first reports of affected drives surfaced in January, when consumers noticed inconsistencies in supposedly new Seagate Exos disk drives used in data centers. The issue has since expanded globally, with over 200 confirmed cases across Europe, Australia, Thailand, and Japan. While Seagate maintains that these products did not come from its official distribution channels, the scandal raises serious concerns about unauthorized resellers and supply chain security.
    An investigation by Heise suggests that the fraudulent HDDs were sourced from mining operations in China that mined Chia, a cryptocurrency that caused a surge in HDD demand before becoming economically unviable. During the Chia mining boom, demand for high-capacity hard drives skyrocketed, leading to shortages and price surges. However, as the profitability of Chia mining declined, many operations shut down, flooding the market with used hardware. It now appears that some of these used drives were relabeled and resold as new, deceiving both retailers and consumers.
    Although standard SMART parameters track HDD usage, these values were reset to obscure the actual wear and tear. However, a more in-depth check using FARM (Field-Accessible Reliability Metrics) values can reveal a drive's true operational history. Consumers concerned about their purchases can verify their drives using Smartmontools version 7.4+ (use the command: smartctl -l farm /dev/sda) or Seagate's SeaTools software.
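    For readers who want to script that check, below is a minimal sketch in TypeScript (Node.js) wrapping the smartctl command quoted above. It is our own illustration, not a Seagate or smartmontools tool: the helper name, the /dev/sda device path, and the exact "power on hours" label the regex looks for are assumptions that may need adjusting to your smartctl output.

    ```ts
    import { execFileSync } from "node:child_process";

    // Query the FARM log via smartmontools 7.4+ and pull out a power-on-hours
    // figure if one is present. The output label is an assumption; inspect the
    // raw smartctl output on your system and adjust the regex if needed.
    function farmPowerOnHours(device: string): number | null {
      const out = execFileSync("smartctl", ["-l", "farm", device], { encoding: "utf8" });
      const match = out.match(/power[\s-]*on[\s-]*hours?\s*:?\s*(\d+)/i);
      return match ? Number(match[1]) : null;
    }

    // A drive sold as new but reporting tens of thousands of hours would stand out.
    const hours = farmPowerOnHours("/dev/sda");
    console.log(hours === null ? "No FARM data found" : `FARM power-on hours: ${hours}`);
    ```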
    Retailers impacted by the scandal have taken different approaches to address customer complaints. Some have acknowledged that they unknowingly sold these manipulated drives and are offering refunds or exchanges. Others have set up dedicated customer service portals to handle the issue, while a few insist on verifying the affected drives before providing compensation. Many retailers stress that they purchased these HDDs from suppliers they trusted, and they were unaware of any tampering before selling them to consumers.
    Seagate has distanced itself from the fraudulent sales, saying that it did not distribute the affected drives. It urges affected buyers to report their cases via fraud@seagate.com. The company has launched an internal investigation and is collaborating with retailers and law enforcement to track down the fraudulent resellers.
  • We're getting closer to having practical quantum computers - here's what they will be used for
    www.techspot.com
    In 1981, American physicist and Nobel laureate Richard Feynman gave a lecture at the Massachusetts Institute of Technology (MIT) near Boston in which he outlined a revolutionary idea. Feynman suggested that the strange physics of quantum mechanics could be used to perform calculations. The field of quantum computing was born.
    In the 40-plus years since, it has become an intensive area of research in computer science. Despite years of frantic development, physicists have not yet built practical quantum computers that are well suited for everyday use and normal conditions (for example, many quantum computers operate at very low temperatures). Questions and uncertainties still remain about the best ways to reach this milestone.
    Explainer: What is Quantum Computing?
    What exactly is quantum computing, and how close are we to seeing quantum computers enter wide use? Let's first look at classical computing, the type of computing we rely on today, like the laptop I am using to write this piece.
    Editor's note: Guest author Domenico Vicinanza is an Associate Professor of Intelligent Systems and Data Science at Anglia Ruskin University. His areas of expertise include audio and music technology and electrical and electronic engineering, and he worked as a scientific associate at CERN for seven years. This article is republished from The Conversation under a Creative Commons license.
    Classical computers process information using combinations of "bits", their smallest units of data. These bits have values of either 0 or 1. Everything you do on your computer, from writing emails to browsing the web, is made possible by processing combinations of these bits in strings of zeroes and ones.
    Quantum computers, on the other hand, use quantum bits, or qubits. Unlike classical bits, qubits don't just represent 0 or 1. Thanks to a property called quantum superposition, qubits can be in multiple states simultaneously: a qubit can be 0, 1, or both at the same time. This is what gives quantum computers the ability to process massive amounts of data and information simultaneously.
    Imagine being able to explore every possible solution to a problem all at once, instead of one at a time. It would allow you to navigate a maze by trying all possible paths simultaneously to find the right one. Quantum computers are therefore incredibly fast at finding optimal solutions, such as the shortest path or the quickest route.
    Think about the extremely complex problem of rescheduling airline flights after a delay or an unexpected incident. This happens with regularity in the real world, but the solutions applied may not be the best or optimal ones. In order to work out the optimal responses, standard computers would need to consider, one by one, all possible combinations of moving, rerouting, delaying, cancelling, or grouping flights.
    Different qubits can be linked via the quantum phenomenon of entanglement
    Every day there are more than 45,000 flights, organized by over 500 airlines, connecting more than 4,000 airports. This problem would take years to solve for a classical computer. A quantum computer, on the other hand, would be able to try all these possibilities at once and let the best configuration organically emerge.
    Qubits also have a physical property known as entanglement. When qubits are entangled, the state of one qubit can depend on the state of another, no matter how far apart they are. This is something that, again, has no counterpart in classical computing.
    Entanglement allows quantum computers to solve certain problems exponentially faster than traditional computers can.
    Opposing views: Google says commercial quantum computing applications will arrive within five years; meanwhile, Nvidia CEO Jensen Huang recently said "very useful" quantum computers are still 20 years away.
    A common question is whether quantum computers will completely replace classical computers or not. The short answer is no, at least not in the foreseeable future. Quantum computers are incredibly powerful for solving specific problems, such as simulating the interactions between different molecules, finding the best solution from many options, or dealing with encryption and decryption. However, they are not suited to every type of task.
    IBM scientist Dr. Maika Takita on a quantum computer in a high-tech laboratory, handling the intricate components of a cryogenic cooling system essential for maintaining qubits at ultra-low temperatures.
    Classical computers process one calculation at a time in a linear sequence, and they follow algorithms (sets of mathematical rules for carrying out particular computing tasks) designed for use with classical bits that are either 0 or 1. This makes them extremely predictable, robust, and less prone to errors than quantum machines. For everyday computing needs such as word processing or browsing the internet, classical computers will continue to play a dominant role.
    There are at least two reasons for that. The first is practical: building a quantum computer that can run reliable calculations is extremely difficult. The quantum world is incredibly volatile, and qubits are easily disturbed by things in their environment, such as interference from electromagnetic radiation, which makes them prone to errors.
    The second reason lies in the inherent uncertainty of dealing with qubits. Because qubits are in superposition (neither a definite 0 nor a definite 1), they are not as predictable as the bits used in classical computing. Physicists therefore describe qubits and their calculations in terms of probabilities. This means that the same problem, using the same quantum algorithm, run multiple times on the same quantum computer, might return a different solution each time.
    To address this uncertainty, quantum algorithms are typically run multiple times, and the results are then analyzed statistically to determine the most likely solution. This approach allows researchers to extract meaningful information from the inherently probabilistic quantum computations.
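    To make the repeated-runs idea concrete, here is a toy TypeScript sketch of our own (not from the article): it models a single qubit in an equal superposition as a coin flip, runs many "shots," and recovers the underlying probability from the statistics, mirroring how results are extracted from real quantum hardware.

    ```ts
    // Each "run" collapses a superposed qubit to a definite 0 or 1 at random,
    // so a single run tells you almost nothing; the answer emerges statistically.
    function measure(probabilityOfOne = 0.5): 0 | 1 {
      return Math.random() < probabilityOfOne ? 1 : 0;
    }

    const shots = 1000;
    let ones = 0;
    for (let i = 0; i < shots; i++) ones += measure();

    // With enough shots, the observed frequency approaches the underlying
    // probability, revealing the most likely solution.
    console.log(`Measured 1 in ${ones}/${shots} runs (~${((ones / shots) * 100).toFixed(1)}%)`);
    ```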
    From a commercial point of view, the development of quantum computing is still in its early stages, but the landscape is very diverse, with lots of new companies appearing every year. It is fascinating to see that in addition to big, established companies like IBM and Google, new ones are joining, such as IQM and Pasqal, and startups such as Alice and Bob. They are all working on making quantum computers more reliable, scalable, and accessible.
    A wafer with photonic chips for quantum computing
    In the past, manufacturers have drawn attention to the number of qubits in their quantum computers as a measure of how powerful the machine is. Increasingly, though, manufacturers are prioritizing ways to correct the errors that quantum computers are prone to. This shift is crucial for developing large-scale, fault-tolerant quantum computers, as error-correction techniques are essential for improving their usability.
    Google's latest quantum chip, Willow, recently demonstrated remarkable progress in this area: the more qubits Google used in Willow, the more it reduced the errors. This achievement marks a significant step towards building commercially relevant quantum computers that can revolutionize fields like medicine, energy, and AI.
    After more than 40 years, quantum computing is still in its infancy, but significant progress is expected in the next decade. The probabilistic nature of these machines represents a fundamental difference between quantum and classical computing. It is what makes them fragile and hard to develop and scale. At the same time, it is what makes them a very powerful tool for solving optimization problems, exploring multiple solutions at the same time, faster and more efficiently than classical computers can.
  • Unexpected fees shock U.S. consumers as Trump ends $800 duty-free imports from China
    www.techspot.com
    What just happened? President Trump's recent implementation of a 10 percent tariff on Chinese imports has sent shockwaves through the e-commerce world, disrupting online shoppers, shipping providers, and e-commerce platforms. The change stems from the reversal of the de minimis rule, a regulation that allows U.S. consumers to receive foreign packages valued under $800 duty-free. After some immediate backlash, Trump reversed his decision temporarily until a proper system to collect tariffs on packages under $800 is put in place.
    This exemption had fueled the growth of cross-border e-commerce, particularly benefiting platforms selling low-cost items from China like Temu and Shein. Now, the sudden imposition of tariffs on previously exempt low-value packages has led to unexpected import fees for consumers and confusion among shipping providers.
    Social media has been flooded with complaints about these new costs. One TikTok user shared her frustration over a DHL notice demanding an extra $115.91 for package delivery, exclaiming, "I'm calling out all shopping girlies: We've been hit."
    Shipping providers have struggled to adapt to the new regulations. UPS initially applied fees to all Chinese imports as if they were valued at $800, regardless of their actual worth, and is now working on contingency plans. USPS is preparing to collect import duties on all inbound packages from China and Hong Kong, having briefly suspended and then reinstated parcels from these regions. Meanwhile, DHL has introduced additional charges on packages from China, contributing to consumer sticker shock.
    In response to the backlash, President Trump issued a new executive order temporarily reinstating the de minimis exemption. However, this revival is conditional, lasting only until adequate systems are in place to process and collect tariff revenue on packages under $800.
    The policy shift has also impacted major e-commerce platforms specializing in direct-from-China shopping. Temu and Shein now require Chinese merchants to pay an additional 30 percent levy on all retail goods sold through their platforms, a cost that will likely be passed on to consumers as merchants struggle to maintain their already thin profit margins.
    Although the reversal of the de minimis rule was ostensibly aimed at curbing the flow of fentanyl and precursor chemicals into the United States, its consequences extend far beyond its intended purpose. The change has disrupted e-commerce firms that built their business models around low-value, duty-free shipments to U.S. shoppers. American consumers, accustomed to purchasing inexpensive items like $5 shirts, $10 lamps, and $20 shoes from Chinese platforms, may soon face higher prices.
    As the situation continues to evolve, online shopping from China is undergoing a dramatic shift. Consumers, shipping providers, and e-commerce platforms must now navigate an uncertain landscape. While the full impact on online shopping habits and the broader e-commerce industry remains to be seen, one thing is clear: the era of effortless, ultra-cheap imports from China may be coming to an end.
  • Amazon, Google, Microsoft, and Meta push AI spending to new heights, set to surpass $320 billion this year
    www.techspot.com
    Cutting corners: The artificial intelligence arms race among tech giants is reaching new heights as industry leaders unveil ambitious spending plans for 2025. This surge in expenditure comes despite recent developments suggesting that such massive investments might not be necessary; namely, the sudden (and arguably too early to call) success of Chinese startup DeepSeek, which claims to have developed an AI model comparable to those of Google and OpenAI at a fraction of the cost.
    Amazon has set the bar exceptionally high, announcing an unprecedented investment of over $100 billion in infrastructure, primarily focused on expanding its cloud computing arm, Amazon Web Services. This massive outlay represents a significant increase from the company's already substantial $77 billion expenditure in 2024, which itself was more than double the $48 billion spent in 2023. Amazon CEO Andy Jassy justified this enormous investment by citing "significant signals of demand" in the AI space.
    Google's parent company, Alphabet, is not far behind, with CEO Sundar Pichai revealing plans to invest $75 billion in 2025, a 42 percent increase from the $53 billion spent in 2024. "The AI opportunity is as big as it comes, and that's why you're seeing us invest to meet that moment," Pichai explained. He also addressed the DeepSeek development, suggesting that it would actually add to demand by demonstrating how new techniques could make AI more accessible and spur new lines of research.
    Microsoft has committed to spending $80 billion on expanding its Azure cloud platform. CEO Satya Nadella made this declaration at the World Economic Forum in Davos, underscoring the company's determination to maintain its competitive edge in AI. Microsoft's investment strategy is closely tied to its partnership with OpenAI, as it seeks to integrate advanced AI capabilities across its product lineup.
    Meta is also ramping up its AI investments. CEO Mark Zuckerberg has pledged to spend "hundreds of billions" more on AI over the long term, building upon the $40 billion invested in 2024. Meta's AI strategy differs somewhat from its competitors', focusing on improving ad targeting on its social media platforms and enhancing user experiences across its suite of apps.
    The combined capital expenditure of these four tech giants (Microsoft, Alphabet, Amazon, and Meta) reached a staggering $246 billion in 2024, a 63 percent increase from 2023. Their collective spending is projected to exceed $320 billion in 2025.
    These enormous investments stand in stark contrast to the apparent approach taken by DeepSeek. The Chinese AI lab claims to have built a reasoning model with capabilities similar to those of Google and OpenAI's products, but at a significantly lower cost. To be sure, there is skepticism about DeepSeek's claims, particularly regarding the cost of developing its model. Nonetheless, the splash it has made in the AI scene has raised questions about the necessity of the massive spending plans announced by the tech giants.
    However, the major players seem undeterred by DeepSeek's achievement. They continue directing their investments toward building and expanding data centers, acquiring specialized AI chips, and conducting extensive research and development in AI technologies.
    The companies are competing to create more advanced large language models and to integrate AI capabilities across their product lines and services.
    Beyond the public tech giants, significant investments are also flowing into AI startups. OpenAI's Sam Altman has formed a partnership with SoftBank and Oracle to invest $100 billion in AI-related U.S. infrastructure, with the potential to increase to half a trillion dollars over time.
    The scale of these investments reflects the tech industry's conviction in AI's transformative potential, despite the challenges posed by more efficient models like DeepSeek's.
    "Could there be an AI winter at some point?" Rishi Jaluria, an analyst at RBC Capital Markets, told the Financial Times. "Sure. But if you're in a position to be a leader, you can't take your foot off the gas."
  • Obsidian's fantasy RPG Avowed is the first to enable Microsoft Store, Xbox, and Battle.net cross-buy
    www.techspot.com
    Why it matters: When Obsidian's upcoming RPG, Avowed, launches on February 18, players who access the game through Game Pass or purchase a digital copy outside of Steam will be able to play it on Xbox Series consoles, Microsoft's Xbox app for PC, or Blizzard's Battle.net launcher. Whether future Microsoft-published games will support Battle.net cross-buy remains unclear, but this move suggests a potential expansion of Blizzard's platform within Microsoft's ecosystem.
    Microsoft's Game Pass subscription has garnered praise for offering relatively low-cost access to a consistent stream of new and popular titles. However, PC subscribers have long expressed frustration with the Xbox app used for installing games through Game Pass. Blizzard's games still rely on the company's well-regarded Battle.net launcher, and Obsidian's Avowed is the first non-Blizzard title that allows subscribers and buyers to choose between the Xbox app and Battle.net, possibly indicating a shift in policy.
    Following Microsoft's historic $69 billion acquisition of Activision Blizzard, Blizzard's library was added to Game Pass, but its games remained tied to Battle.net instead of being transitioned to the Xbox app. While most Game Pass titles run through Microsoft's storefront, playing Diablo, Diablo 4, Overwatch 2, and StarCraft on PC Game Pass requires installing Battle.net.
    Also see: Most Anticipated PC Games of 2025 (Avowed is one of them)
    Although PC users often bemoan the need for additional launchers, Blizzard fans likely prefer Battle.net to the Xbox app. The latter has faced years of criticism due to bugs and a relatively light feature set. Although Battle.net mostly hosts Blizzard titles, fans consider it more reliable by comparison.
    Non-Blizzard games like Call of Duty and Crash Bandicoot have been available through Battle.net for years, but Avowed's inclusion suggests that Microsoft may be willing to bring more of its first-party catalog to the platform. Moreover, Avowed is the first title to allow non-subscribers to seamlessly switch between playing on Xbox consoles, the Xbox app, or Battle.net.
    To enable cross-buy, users must link their Battle.net and Microsoft accounts by logging into Battle.net, clicking on their username in the top-right corner, navigating to Account Settings > Connections, and selecting Xbox Network to log into their Microsoft account.
    In addition to providing an alternative to the Xbox app, this integration also creates a potential pathway for Game Pass subscribers to play Avowed on the Steam Deck. Hopefully, Microsoft extends Battle.net cross-buy to other in-house titles like the upcoming Doom: The Dark Ages and South of Midnight, or recent releases like Indiana Jones and the Great Circle.
    Avowed is an open-world action RPG set in the Pillars of Eternity universe, with a heavy focus on nonlinear quests. The game launches on Xbox consoles, Game Pass, Battle.net, and Steam on February 18. Players who purchase the premium edition will gain early access starting February 13. The game requires 75GB of storage space, and Obsidian recommends an Nvidia GeForce RTX 3080 or AMD Radeon RX 6800 XT for smooth gameplay.
  • YouTube rakes in record $10.4 billion from ads even as users grumble about aggressive strategy
    www.techspot.com
    In a nutshell: YouTube achieved record-breaking ad revenue in the fourth quarter of 2024, raking in a staggering $10.4 billion from advertisements alone. This astronomical figure, representing a 13.8 percent increase from the previous year, comes amid mounting user dissatisfaction with the platform's aggressive ad strategy.
    YouTube's advertising model has long been a source of frustration for its vast user base. The platform's approach to monetization, which often involves interrupting videos with unskippable ads, is seen by many as intrusive and detrimental to the viewing experience.
    Despite the discontent, the numbers tell a different story. The platform's ability to generate revenue seems unaffected by user grumbling. That is largely due to its vast content library, much of which is unique to the platform, and the resulting lack of comparable alternatives. This captive audience provides a steady stream of ad viewers, even if they are reluctant ones.
    Alphabet CEO Sundar Pichai attributed much of the revenue jump to the 2024 U.S. presidential election. Combined spending on YouTube ads by the Democratic and Republican parties was almost double what they spent in the 2020 election, he said. This political advertising bonanza contributed significantly to YouTube's coffers, with over 45 million people watching election-related content on the platform on election day alone.
    Source: App Economy Insights
    YouTube offers an escape from its ad-laden experience through its Premium subscription service, priced at $14 per month or $140 per year. However, many users balk at the cost, viewing it as an expensive solution to a problem of YouTube's own making.
    Despite user reluctance, YouTube's subscription revenues, bundled under "Google subscriptions, platforms, and devices" in the company's earnings statement, increased from $10.8 billion in Q4 2023 to $11.6 billion in Q4 2024. According to Anat Ashkenazi, Alphabet's CFO, subscription products are growing primarily due to increased paid subscribers across YouTube TV, YouTube Music Premium, and Google One.
    Now, YouTube is betting big on AI to turbocharge its advertising strategies. Philipp Schindler, chief business officer at Alphabet, highlighted the potential of AI in marketing, citing a case study where Petco utilized AI-powered campaigns on YouTube. The company achieved a 275 percent higher return on ad spend and a 74 percent higher click-through rate than its social benchmarks, Schindler reported. He also noted that Google AI-powered video campaigns on YouTube deliver a 17 percent higher return on advertising spend than manual campaigns, according to Nielsen analysis.
    While this may be music to advertisers' ears, for users it could mean even more precisely targeted and potentially more irritating ad experiences in the future.
  • Hubble captures massive "bullseye" galaxy with unprecedented nine rings
    www.techspot.com
    The big picture: NASA's Hubble Space Telescope has captured an extraordinary cosmic event that astronomers are calling the "Bullseye." The massive galaxy LEDA 1313424 has been observed with an unprecedented nine star-filled rings, the result of a dramatic collision with a smaller blue dwarf galaxy.
    The discovery was made by Imad Pasha, a doctoral student at Yale University, who stumbled upon the unique galaxy while examining a ground-based imaging survey. "This was a serendipitous discovery," Pasha explained. "When I saw a galaxy with several clear rings, I was immediately drawn to it. I had to stop to investigate it."
    Subsequent observations using both the Hubble Space Telescope and the W. M. Keck Observatory in Hawaii confirmed the presence of nine rings, far surpassing the previous record of two or three rings observed in other galaxies.
    The rings are believed to have formed 50 million years ago, when a blue dwarf galaxy plunged through the center of LEDA 1313424 like an arrow. This cosmic collision created ripples similar to those formed when a pebble is dropped into a pond.
    This is an extremely rare event, according to Professor Pieter G. van Dokkum, a co-author of the study, which appeared in The Astrophysical Journal Letters. "We're catching the Bullseye at a very special moment in time. There's a very narrow window after the impact when a galaxy like this would have so many rings."
    The Bullseye galaxy dwarfs our Milky Way, measuring an impressive 250,000 light-years across, nearly two and a half times larger than our home galaxy. The blue dwarf galaxy responsible for creating this cosmic spectacle now lies 130,000 light-years away, connected to the Bullseye by a thin trail of gas.
    The Hubble Space Telescope's exceptional resolution was instrumental in identifying most of the rings, particularly those clustered at the galaxy's center. "This would have been impossible without Hubble," Pasha said.
    Perhaps most exciting for the scientific community is how well the Bullseye galaxy aligns with existing theoretical models. "That theory was developed for the day that someone saw so many rings," van Dokkum explained. "It is immensely gratifying to confirm this long-standing prediction with the Bullseye galaxy." The rings' formation and expansion closely match predictions, with the first rings forming quickly and spreading outward, followed by subsequent rings created as the dwarf galaxy passed through.
    This discovery opens up new avenues for research into galaxy collisions and their long-term effects. Astronomers will now work to determine which stars existed before and after the collision, and how the galaxy may evolve over billions of years.
    The chance discovery of the Bullseye galaxy also hints at the potential for future findings. Van Dokkum is particularly excited about the prospects offered by NASA's upcoming Nancy Grace Roman Space Telescope: "Once NASA's Nancy Grace Roman Space Telescope begins science operations, interesting objects will pop out much more easily. We will learn how rare these spectacular events really are."
  • Meta used pirated books to train its AI models, and there are emails to prove it
    www.techspot.com
    Facepalm: A group of authors has sued Meta, alleging that the company used unauthorized copies of their books to train its generative AI models. While Meta has denied any wrongdoing, newly unsealed messages suggest that executives and engineers were well aware of what they were doing, and that they were violating copyright law.
    The lawsuit filed by Sarah Silverman, Richard Kadrey, and other writers and rights holders against Meta may be entering its most critical phase. The authors have obtained internal company emails in which Meta employees openly discussed "torrenting" well-known archives of pirated content to train more powerful AI models.
    Meta previously acknowledged using certain controversial datasets, arguing that such practices should be considered fair use. The company also admitted to downloading a massive dataset known as "LibGen," which contains millions of pirated books. However, the newly unsealed emails reveal deeper concerns within Meta about acquiring and distributing this data through the BitTorrent network.
    According to the emails, Meta downloaded and shared at least 81.7 terabytes of data across multiple contentious datasets, including 35.7 terabytes from Z-Library and LibGen archives. The plaintiffs allege that Meta engaged in an "astonishing" torrenting scheme, distributing pirated books at an unprecedented scale.
    In an April 2023 message, Meta researcher Nikolay Bashlykov wrote that "torrenting from a corporate laptop doesn't feel right." The message ended with a smiling emoji, but a few months later, his tone shifted significantly. In September 2023, Bashlykov stated that he was consulting Meta's legal team because using torrents, and thereby "seeding" terabytes of pirated data, was clearly "not OK" from a legal standpoint.
    Meta was apparently aware that its engineers were engaging in illegal torrenting to train AI models, and Mark Zuckerberg himself was reportedly aware of LibGen. To conceal this activity, the company attempted to mask its torrenting and seeding by using servers outside of Facebook's main network. In another internal message, Meta employee Frank Zhang referred to this approach as "stealth mode."
    Like other major tech firms, Meta is pouring massive amounts of money into AI development and generative AI services. The company, which aims to populate its aging social networks with AI-generated personas and bots, recently filed a motion to dismiss the lawsuit led by Silverman and other authors. However, the newly revealed emails detailing Meta's involvement in torrenting and distributing pirated books could significantly complicate its legal defense.
  • Nvidia DLSS 4 Ray Reconstruction Analysis: Fixing Ugly Ray Tracing Noise
    www.techspot.com
    Ray tracing is typically seen as a trade-off between visuals and performance: enhancing lighting realism at the cost of FPS. However, there's a second trade-off that is talked about less: noise.
    I recently dedicated a video to discussing just this, because there are games that don't use a high enough resolution for their effects, others with noticeable surface grain or heavy denoising that results in surface boiling, and games where ray tracing hurts texture quality, to the extreme of causing noticeable responsiveness issues with lighting. All of these problems end up hurting what is supposed to be a technology that improves visual quality.
    What is Ray Reconstruction?
    Partly acknowledging this, Nvidia launched DLSS Ray Reconstruction back in 2023 to address many of these issues. As an AI-based denoiser for ray-traced games, it did offer some improvements in sharpening certain effects, but issues with noise and loss of detail remained in common scenarios.
    Nvidia has now updated Ray Reconstruction as part of its DLSS 4 technology suite. The DLSS 4 version replaces the old convolutional neural network (CNN) with a new transformer model. Essentially, this is a larger, higher-quality AI model with further tuning to improve denoising quality. Does this updated version address many of my complaints about ray tracing noise? Well, it's time to find out.
    The good news about DLSS 4 Ray Reconstruction is that it's available on all Nvidia RTX GPUs, unlike Multi Frame Generation, which is exclusive to the GeForce 50 series. While the new transformer AI model is larger and more performance-taxing depending on the GPU architecture, it doesn't require any specific architectural component, just Tensor cores, so compatibility remains the same as for DLSS 3-era Ray Reconstruction. As you'll see later, the performance hit isn't the same across every GPU generation, but at least it works all the way back to the GeForce 20 series.
    Like other DLSS 4 technologies, there are two ways to access DLSS 4 Ray Reconstruction. Either it's integrated into the game itself, as seen in titles like Cyberpunk 2077 and Hogwarts Legacy, or you can override a DLSS 3 Ray Reconstruction game to use the DLSS 4 version instead via Nvidia's driver override feature. Either way, a game must already support Ray Reconstruction, but this allows most Ray Reconstruction-enabled games to be instantly upgraded to the latest version. Outside of this, third-party tools like DLSS Swapper and DLSS Updater can upgrade DLSS 3 games to DLSS 4.
    For the image quality analysis in this article (watch the video), I'll be directly comparing DLSS 3 vs. DLSS 4 Ray Reconstruction across several examples. To do this, I've taken DLSS 4 games and manually downgraded them to DLSS 3.7.20 Ray Reconstruction, the most recent version prior to DLSS 4, while keeping everything else the same. This allows us to isolate the impact of the Ray Reconstruction improvements.
    In some games, it's possible to switch between the transformer and CNN models for Ray Reconstruction without swapping the DLLs, but this usually affects both the ray reconstruction and upscaling components at the same time. I wanted to keep the Super Resolution model at the DLSS 4 level while changing only Ray Reconstruction, which is why I used the downgrade method for comparison. The examples in this video were captured at 4K using a GeForce RTX 5090.
    DLSS 4 Ray Reconstruction Image Quality
    Let's start with the good stuff. DLSS 4 Ray Reconstruction is a significant improvement in stability.
    In particular, many surfaces are less prone to boiling with the new version, eliminating one of the ugly side effects of weak denoising.
    In Star Wars Outlaws, for example, a game prone to boiling with DLSS 3 Ray Reconstruction, the new DLSS 4 version shows huge improvements in surface stability. On the left, DLSS 3 causes a completely stationary surface to bubble when it shouldn't, whereas on the right, DLSS 4 is much cleaner. It's not 100% free of this artifact, but the effect is significantly reduced. In fact, in other areas of the image, stability improves to near-perfect levels.
    For a better representation of the image quality comparisons, check out the HUB video below:
    In other examples, like Cyberpunk 2077, this improvement is most noticeable in motion. There's less bubbling as denoised ray-traced effects move, giving them greater stability and consistency from frame to frame. This is especially noticeable on some globally illuminated surfaces, where DLSS 4 Ray Reconstruction brings a significant improvement.
    The new version is cleaner in motion and resolves a decent level of detail more quickly. In some cases, the difference in quality is night and day, so much so that DLSS 3 can look like the game is running at a lower resolution. And keep in mind, this is denoising, not upscaling, which remains the same in both examples. Changes in exposure are also much less likely to cause brief boiling, indicating that the new transformer model handles lighting adjustments better.
    At times, this increase in stability combines with better denoising to produce higher-resolution reflections. In the Alan Wake II example, metal reflections show boiling with DLSS 3, but switching to DLSS 4 Ray Reconstruction reduces it, making the reflection appear higher resolution, since edges are more defined and consistent in motion.
    There's also the classic fan example that Nvidia used to demonstrate DLSS 4, and we can confirm that this is a real benefit in games. As the fan spins, revealing the roof behind each blade, DLSS 4 is much more stable, reducing both boiling and ghosting. Again, this is ghosting caused by denoising, not upscaling. Rippling water surfaces (often a source of noise) also benefit from DLSS 4 Ray Reconstruction, with the new version doing a better job of smoothing these details without introducing additional issues.
    One of the major issues with the original Ray Reconstruction technology was its inability to resolve texture detail on surfaces that also required denoising. In these cases, Ray Reconstruction prioritized smoothing noise over preserving textures, leading to blurry, muddy surfaces. In many instances, DLSS 4 is a major improvement in this area.
    For example, in Cyberpunk 2077, when looking at some shiny tiles, DLSS 3 Ray Reconstruction removes much of the marbling and surface detail, whereas DLSS 4 retains texture detail. This applies to both stationary and moving scenes, and in both cases, DLSS 4 is a clear upgrade in texture quality. In fact, the motion example looks horrible and extremely blurry with DLSS 3, whereas with DLSS 4, it looks much more like a 4K image.
    This improvement extends to other games like Star Wars Outlaws, where the combination of reduced boiling and better texture preservation results in significantly higher-quality surfaces across the game world.
    In the Alan Wake II elevator door example, DLSS 3 completely hides the fact that the surface is brushed metal, but DLSS 4 Ray Reconstruction restores this detail along with other benefits. Here's another example in Cyberpunk 2077, where the roof of the car retains clearer textures in motion with DLSS 4 Ray Reconstruction compared to previous versions. Similar benefits can be seen on various surfaces throughout the game.
    It's impressive how much DLSS 4 Ray Reconstruction improves surface stability, resolution, and detail, and how bad prior versions of denoising can look in comparison. Denoising that blurs textures and causes boiling is not a real solution; effective denoising should preserve detail as well, and we're getting much closer to that with DLSS 4.
    The Downsides of Ray Reconstruction
    DLSS 4 Ray Reconstruction isn't perfect, though. Some areas haven't improved much over DLSS 3 Ray Reconstruction, and in certain cases there are actually regressions, which isn't great. For example, in the three main games we tested, all of which have native DLSS 4 integrations, we noticed that DLSS 4 could occasionally introduce strange surface artifacts.
    In Star Wars Outlaws, stationary ray-traced effects sometimes display a grid pattern. We've zoomed in for clarity, but it's also visible at a normal viewing distance on a 32-inch 4K panel. A similar issue appears in Cyberpunk 2077, though less frequently, usually in globally illuminated areas.
    We also noticed some cases where DLSS 4 Ray Reconstruction reduced texture quality, separate from the previous issue. About 80% of the time, texture quality is noticeably better, but in the remaining 20%, textures can become blurrier or more smoothed out. For example, in Alan Wake II, the scene below shows improved textures in one area but worse textures in a darker section.
    Fortunately, in most cases this regression isn't enough to hurt overall image quality compared to DLSS 3, as the majority of the image sees improvements, especially in terms of boiling and surface artifacts. However, some tweaks to the model are necessary to ensure texture quality is consistently preserved along with denoising.
    Ray Reconstruction also still suffers from a noticeable difference in clarity between stationary shots and motion: standing still always results in a higher-quality image than when moving. This is, of course, due to how denoising works: if there are no changes between frames, denoising can temporally accumulate and resolve a higher level of detail. But when there are changes, achieving similar detail levels becomes much harder. Still, we would have liked to see more improvements in this area.
    The reality is that when you stop moving, after a second or two, Ray Reconstruction appears to slowly "load in" higher-quality surfaces. It isn't actually loading anything; it's just the temporal system benefiting more and more from the lack of motion during that time. But when actually gaming, this can be noticeable, especially because in some instances, as soon as you take a step or move the camera, the overall render quality decreases. DLSS 4 Ray Reconstruction in motion is certainly superior to DLSS 3, but in this area it seems to work in basically the same way.
    Since there's still quite a bit of temporal accumulation happening, there haven't been major improvements to the responsiveness of the denoiser.
    It still takes a few frames to fully respond to lighting changes, which can create a floaty feeling when moving around and looking at reflections or illumination on surfaces. The quality of each step in the accumulation and resolution process is improved, but the number of temporal samples being used for denoising seems quite similar to previous iterations. This still causes issues in games that don't use a particularly high ray count for ray tracing. So this pet peeve we have with the responsiveness of ray-traced lighting persists even with DLSS 4 Ray Reconstruction.
    Performance Testing
    As for performance, DLSS 4 Ray Reconstruction is more taxing than DLSS 3 Ray Reconstruction. We tested four different graphics cards using the latest version compared to DLSS 3, focusing solely on the impact of Ray Reconstruction, not upscaling. For each GPU, we adjusted the settings to something realistically playable while maintaining a similar output frame rate.
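    To make the percentages in the results below easier to picture, here is a trivial TypeScript helper of our own (not part of the test methodology) that converts a stated performance hit into the resulting frame rate.

    ```ts
    // Convert a "percentage hit" into the frame rate that remains after it.
    function fpsAfterHit(baseFps: number, hitPercent: number): number {
      return baseFps * (1 - hitPercent / 100);
    }

    console.log(fpsAfterHit(60, 5).toFixed(1));  // "57.0": a 5% hit at 60 FPS
    console.log(fpsAfterHit(60, 32).toFixed(1)); // "40.8": a 32% hit at 60 FPS
    ```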
    GeForce RTX 5090
    Starting with the GeForce RTX 5090, we tested at 4K with DLSS Quality enabled, typically using the highest in-game settings. This results in around 60 FPS in Alan Wake II and Cyberpunk 2077, or a higher 90 FPS in Star Wars Outlaws. Using DLSS 4 Ray Reconstruction caused a 4% hit in Alan Wake II, a 5% hit in Cyberpunk 2077, and a 7% hit in Outlaws. So, not an insignificant impact, but quite worthwhile given the image quality is generally much better.
    GeForce RTX 4070 Super
    The RTX 4070 Super is less powerful and uses the previous-generation Ada Lovelace architecture. To run these games at around 60 FPS, we had to drop to 1440p with DLSS Quality and slightly lower the presets as well. But despite these changes, the impact from DLSS 4 Ray Reconstruction is quite similar to the RTX 5090: a 4% hit in Alan Wake II, a 6% hit in Cyberpunk 2077, and a 6% hit in Outlaws. Based on this, it's safe to say the Tensor cores in Blackwell and Ada are good enough to run the new transformer model at an acceptable performance cost.
    GeForce RTX 3090
    The RTX 3090 is roughly equivalent to an RTX 4070 Super in ray tracing performance, so we used the exact same settings for both GPUs. The main difference here is the architecture. What was interesting to note is that the 3090 suffers a greater hit when using DLSS 4 Ray Reconstruction: an 18% loss in FPS in both Alan Wake II and Cyberpunk 2077.
    Interestingly, there is no performance impact in Star Wars Outlaws, but this is because the game appears to force the use of the CNN model on generations prior to the 40 series. Unlike the other two games, there is no option to switch between models; the game does it for you. In this instance, it chooses the transformer model for the 40 and 50 series and the CNN model for the 30 and 20 series, hence no difference between DLSS 3 and 4 in Outlaws on the RTX 3090. The most likely explanation for the performance difference between Ampere and Ada Lovelace is the difference in Tensor core architecture, with Ada supporting a wider range of precisions that are potentially being utilized here.
    GeForce RTX 2080 Ti
    As for Turing with the RTX 2080 Ti, this GPU is really only suitable for low-quality ray tracing today. We had to dial back the settings to 1080p with DLSS Quality and the lowest quality settings. In Alan Wake II, we saw a 27% FPS reduction using DLSS 4 Ray Reconstruction vs. DLSS 3, and a 32% reduction in Cyberpunk 2077. This makes it hard to recommend the use of the newer model on older 20 series GPUs, though it's unclear how much it matters given the overall ray tracing performance of these cards.
    What We Learned
    Overall, we were quite impressed testing DLSS 4 Ray Reconstruction, certainly much more impressed than when testing Multi Frame Generation. The new transformer model generally results in a significant increase in visual quality. There are major improvements to stability, surface boiling, texture preservation, and overall detail, which help deliver better ray-traced surfaces.
    Using this technology makes ray tracing feel like less of a downgrade in detail, minimizing the "visual cost" of achieving better lighting quality and accuracy. A lot of these surface and detail issues were clear problems with DLSS 3 Ray Reconstruction and the other denoising methods used in today's ray-traced games. It's nice to get ray-traced lighting, but the noise can be pretty distracting in some games, giving the presentation a soupy, low-resolution look, especially in motion. DLSS 4 is a big step in the right direction toward improving how ray-traced games look, and in all the games we tested, we were much happier with the DLSS 4 output.
    While it is quite an impressive improvement that should be instantly noticeable in games, I wouldn't say it has "fixed" all of my ray tracing noise complaints. DLSS 4 Ray Reconstruction still shows some surface boiling in worst-case scenarios, there's still a noticeable difference in detail between standing still and motion, and it still struggles to be responsive and high quality when ray counts are low. There are also a few regressions compared to DLSS 3, like the occasional weird artifact and some reductions in texture quality.
    But for the most part, it's worth using, especially on RTX 50 and RTX 40 series GPUs, where the performance impact relative to DLSS 3 seems to be around 5%. It struggles more on Ampere-based RTX 30 GPUs but could still be worth using, while on RTX 20 series cards you can basically forget about it, due to both the performance cost and the lack of ray tracing performance in modern games. It's great to see Nvidia delivering this sort of update without locking it to a specific GPU generation, allowing as many people as possible to benefit.
    We do have a couple of recommendations for game developers based on this testing. First, if you're developing a ray-traced game, consider integrating DLSS 4 Ray Reconstruction. There are some other good denoising solutions in games, but often they're pretty lackluster and create a bad output. The quality from DLSS 4 is great, and there are far fewer downsides compared to DLSS 3 Ray Reconstruction.
    Second, we recommend allowing gamers to choose the model used for upscaling and ray reconstruction separately. In Cyberpunk 2077 and Alan Wake II, you can choose between the CNN and transformer models, but the choice applies to both upscaling and denoising together. Separating those options would allow for more performance fine-tuning, especially on older GPUs like Ampere, where ray reconstruction can be taxing.
    Nvidia also needs to keep working on this technology, because there's room for improvement with more training and tweaks.
    The weird artifacts need fixing, and a stronger focus on responsiveness would be nice, to reduce that gap in quality between standing still and motion.
  • PUBG: Blindspot is a 5v5 tactical shooter that resembles a top-down Rainbow Six Siege
    www.techspot.com
    Months after being unveiled under the codename "Project Arc," the upcoming PUBG spin-off has received an official title: PUBG: Blindspot. A free demo for the top-down tactical multiplayer shooter drops on February 20, a few days ahead of Valve's Steam Next Fest.
    Krafton describes the game as a competitive match-based shooter that transfers PUBG's brand of "realistic" tactical action from the main game's open-world environments into a close-quarters setting. The smaller maps and five-versus-five gameplay will ensure faster and shorter matches. Overall, Blindspot sounds like a mirror opposite of PUBG: Battlegrounds.
    The list of weapons is typical for this game type: assault rifles, submachine guns, shotguns, sniper rifles, DMRs, pistols, grenade launchers, and more. Some familiar guns include the Mk14 and P90. Players can also utilize blue zone grenades, recon drones, proximity explosives, flashbangs, and smoke grenades.
    Although top-down games normally afford players a range of vision beyond what their avatars would see, Blindspot hides opponents and other vital items behind walls and other obstructions, akin to the fog-of-war systems in strategy games. This line-of-sight mechanic makes maintaining and destroying cover a crucial aspect of the game, and it makes explosives and hammers essential since they can destroy walls.
    Furthermore, anything visible to one player becomes visible to the entire team, enabling instant communication. Automatically sharing visual information diminishes the need for voice chat, facilitating teamwork between randomly matched strangers.
    Blindspot follows bomb mission rules: an attacking team must reach and hack a hidden "Crypt" to activate a "Blue Chip," while defenders must prevent opponents from reaching the Crypt by blocking doors and erecting barricades. The game features a few characters with distinct specialties, like close-range attacks, long-range combat, or gas grenades.
    Krafton introduced Project Arc in early November and held an invitational competition later that month. A brief closed beta ran in January, but release details beyond the demo remain unclear. Steam Next Fest will include hundreds of free demos, including Blindspot, and runs from February 24 to March 3.
    Although the company is mainly known for PUBG, Krafton will soon release its take on The Sims, inZOI. When the company unveiled it last year, the life simulation game impressed audiences with its high-end Unreal Engine 5 graphics and flexible character customization system. Early Access availability begins March 28.
  • Openreach is testing 50Gbps broadband in the UK using Nokia kits
    www.techspot.com
    The big picture: Openreach Limited is the organization responsible for managing the telephone and fiber infrastructure owned by BT Group. The company connects nearly all homes and businesses in the UK to broadband and phone networks, and it is currently planning a significant upgrade to its internet performance. The question now is: when will customers be able to pay for a "real" 50Gbps internet connection?
    Openreach and Nokia have successfully tested what the two companies call the first "live" 50Gbps-class broadband connection from a residential location in the UK. The test took place in Ipswich, Suffolk, where the infrastructure company achieved actual speeds of 41.9Gbps downstream and 20.6Gbps upstream.
    Currently, Openreach uses Gigabit Passive Optical Network (GPON) technology for its fiber-to-the-premises internet service, which reaches over 17 million premises in the UK. A GPON network can provide up to 2.5Gbps downstream and 1.24Gbps upstream on each trunk line, which is why internet companies are now deploying new XGS-PON kit to offer up to 10Gbps over their "dark fiber" infrastructure.
    Openreach plans to launch its first 1Gbps connections based on XGS-PON around April 2025, but this new test demonstrates how far internet connectivity could go in the (hopefully) not-so-distant future. The company used Nokia's 50G PON fiber kit linked to BT's fiber infrastructure. According to Director of Network Technology Trevor Linney, the trial and Nokia's XGS-PON-ready equipment should serve as proof that Openreach is actively working to enable a generational leap in internet connectivity for UK customers.
    Meanwhile, Nokia highlighted its ability to increase network capacity in a flexible and efficient manner. The company's kit can support PON connectivity at up to 10Gbps, 50Gbps, and even 100Gbps, should the need for such speeds arise in the future.
    Openreach stated that 50Gbps internet speeds would be beneficial in various use cases, including remote virtual and augmented reality, 8K video streaming, high-fidelity teleconferencing, and much more. Generative AI is also part of the equation, with the ability to better synchronize and train GenAI models over the network.
    According to telecom analyst Paolo Pescatore, 50Gbps FTTP connectivity is likely still far off, but preparations should begin now. Users and companies are developing "insatiable" appetites for data consumption, Pescatore noted, so a robust and capable network will be essential moving forward.
  • UK government orders Apple to create a backdoor into encrypted iCloud backups
    www.techspot.com
    WTF?! It's ironic how, in the information age, governments slap hefty fines on companies for not adequately securing user data but, in the same breath, demand that these companies give state agencies unrestricted access to that very same data. No matter what country you are from, warrantless mass surveillance is wrong. George Orwell's 1984 was a warning, not an instruction manual.
    The British government has ordered Apple to allow blanket access to user data stored online. The "technical capability notice" demands a backdoor into its encrypted iCloud services, which state agencies could use to access backups of any customer worldwide without a court order. The UK Home Office issued the decree under the Investigatory Powers Act of 2016, aptly referred to as the "Snoopers' Charter."
    The UK's Investigatory Powers Act allows agencies to compel technology companies to assist in intercepting and obtaining communications data. While it is meant to expedite criminal investigations, the notice Apple received extends beyond targeted data requests. Instead, the government wants full access to all encrypted information. The demand challenges Apple's Advanced Data Protection feature, which ensures only validated users can access their data; even Apple personnel cannot decrypt customer accounts.
    The Irish Sun asked Apple for comment, but a spokesperson said that the company could not legally reveal details of the notice. A representative for the UK's Home Office was equally reluctant to share. "We do not comment on operational matters, including, for example, confirming or denying the existence of any such notices," the spokesperson said.
    Apple has consistently maintained a firm stance on user privacy, asserting that it will not create backdoors into its products. It prominently fought the US government over the same issue multiple times after receiving demands from the FBI insisting it crack the phones of suspected criminals.
    As it has in the past, Apple has indicated a refusal to comply with the order, suggesting it might withdraw certain security features from the UK market rather than compromise its global security standards. It is too early to tell whether Cupertino's counterproposal will sway UK officials. However, it is almost certain to gain support from British iPhone owners, who will likely apply pressure against this invasion of privacy.
    Critics argue that obliging Apple to create a backdoor could set a dangerous precedent, potentially leading other countries to demand similar access. The situation has global cybersecurity implications and likely conflicts with other nations' privacy laws. Civil liberties group Big Brother Watch calls the order a severe threat to privacy rights and has called for its withdrawal. "We are extremely troubled by reports that the UK Government has ordered Apple to create a backdoor that would effectively break encryption for millions of users," said Rebecca Vincent, Big Brother Watch's interim director of privacy. "[This is] an unprecedented attack on privacy rights that has no place in any democracy."
    Apple is not the first firm to face the UK's controversial stance on encryption. In 2023, encrypted messaging companies WhatsApp and Signal threatened to exit the UK market rather than compromise their encryption protocols. This newest development could further strain relationships between the UK government and technology firms.
    Image credit: Electronic Frontier Foundation
  • Bouncing Beholder is a fully fledged platformer game packed into just 1,024 bytes
    www.techspot.com
Byte-sized: Bite-sized JavaScript platformer Bouncing Beholder is astonishingly small, taking up just 1,024 bytes – roughly a thousand times less data than a single promotional screenshot for a flagship video game. Despite its minuscule size, it includes all the essential elements of a classic platformer.

Games keep growing larger every year, but a group of developers is pushing back against the trend. We've already seen astonishingly small versions of Tetris, Snake, and Doom, but their tiny file sizes often come with trade-offs like clunky controls, limited animation, and a lack of color.

Bouncing Beholder, however, is different. Despite its minuscule size, it delivers a full-fledged side-scrolling adventure with smooth animations, responsive physics, randomly generated terrain, collectibles, and hazards to avoid.

Players control a perpetually bouncing eyeball using the arrow keys, navigating a deceptively inviting landscape filled with dangers. The goal is to collect as many coins as possible while avoiding hazards. And when you inevitably lose, the randomized level design ensures that every replay feels fresh.

The game was written by Marijn Haverbeke, who originally created it for the JS1K coding competition – and its strict 1KB file size limit – way back in 2010, and it was recently rediscovered. Haverbeke achieved this astonishing level of optimization through a series of clever coding tricks. These include abbreviating long variable names and representing game states using mathematical formulas instead of storing data directly.

For example, coin locations aren't pre-defined but instead follow a simple rule: coins appear on any platform whose height is divisible by six. Collecting one slightly lowers the platform's height, effectively removing the coin.

Haverbeke also devised a system to automatically shorten lengthy method names from the HTML5 Canvas API used to render graphics. Instead of writing canvas.quadraticCurveTo(), he can simply use qt(). When your entire game fits within a single kilobyte, every character matters.

In fact, the code is so tightly packed that modern compression tools like Google's Closure Compiler actually increased the file size rather than reducing it.

As Haverbeke humorously notes on his blog: "In terms of productivity, this is an awful way of coding. But it certainly is fun. Not to mention that it gives me an excuse to use every kind of weird hack I can think of."

The full game code can be found over on Haverbeke's blog.
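To make the coin trick concrete, here is a hypothetical Python re-creation of the rule described above – not Haverbeke's actual JavaScript – showing how coins exist only as a property of platform heights rather than as stored objects:

```python
# Hypothetical sketch of the coin rule: coins aren't stored anywhere.
# A platform "has" a coin whenever its height is divisible by six, and
# collecting the coin nudges the height so the test fails afterwards.

def has_coin(height: int) -> bool:
    return height % 6 == 0

def collect_coin(height: int) -> int:
    """Lower the platform by one unit, implicitly removing its coin."""
    return height - 1

platforms = [12, 17, 24, 31, 36]
score = 0
for i, h in enumerate(platforms):
    if has_coin(h):
        score += 1
        platforms[i] = collect_coin(h)

print(score, platforms)  # 3 coins collected; heights 11, 23, 35 no longer qualify
```

The payoff is that no per-coin state needs to be kept at all, which is exactly the kind of "represent game state as a formula" trick the article describes.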
  • Hidden backdoor in Go package remained undetected for years
    www.techspot.com
The big picture: The Go programming language was designed to offer a C-like syntax while prioritizing memory safety and security. Also known as Golang, Go has been growing in popularity among both legitimate developers and resourceful cybercriminals.

Go, one of the most popular programming languages alongside "traditional" standards such as Python, C, and Visual Basic, was exploited to turn a legitimate open-source project into malicious software. The heart of the issue lay in the Google-owned proxy.golang.org service, which acts as a mirror that lets developers quickly fetch and install Go modules without needing to access their original GitHub repositories.

The supply chain attack was recently discovered by security company Socket Inc., which played a key role in taking the malicious package down. The Go Module Mirror hosted a modified version of a legitimate Go package called boltdb, which is used by thousands of other software packages. This malicious version entered the Google proxy server in 2021 and was served to Go developers at least until last Monday.

Google's proxy service prioritizes caching for performance reasons, as Socket explained, and retains a cached package even after the original source has been modified. The cybercriminals used a typosquatting technique to create a new repository on GitHub (boltdb-go/bolt) with a URL that resembled the original, clean one (boltdb/bolt).

The malicious module contained a backdoor payload managed by the threat actors through an external command-and-control server. After the module was cached by Google's Go Module Mirror, the cybercriminals modified the GitHub repository, reverting the package to a clean version. This allowed the backdoor to go unnoticed while hiding in the proxy server for years.

The backdoor was designed to quietly derive an IP address and port, which it used to contact the C2 server for further orders and commands. The IP belonged to hosting company Hetzner Online, a legitimate and trustworthy infrastructure provider, which offered an additional layer of "invisibility" to the malware.

Socket explained that, unlike other "indiscriminate" malicious operations, this particular Go backdoor was designed to maximize the likelihood of successful attacks and remain undetected for as long as possible. The company also faced resistance from Google in its efforts to take the malicious package offline.

The security firm first asked the proxy managers to remove the backdoored module last week, but the issue remained unresolved. After a follow-up this week, Google's Go Module Mirror finally addressed the problem a few days ago.
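The mirror's caching behavior can be inspected directly, since the Go module proxy exposes documented HTTP endpoints (GET <proxy>/<module>/@v/list returns the cached version list). The sketch below compares the legitimate boltdb path with the typosquatted one named in Socket's report; it assumes network access and that the proxy still responds for those paths:

```python
# Minimal audit sketch against the Go module proxy's documented endpoints.
# Whether the typosquatted module is still served is an assumption; the
# proxy may now return 404/410 for it following the takedown.

import urllib.request

PROXY = "https://proxy.golang.org"

def cached_versions(module_path: str) -> list[str]:
    """Return the versions the proxy mirror has cached for a module."""
    url = f"{PROXY}/{module_path}/@v/list"
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode().split()

for module in ("github.com/boltdb/bolt", "github.com/boltdb-go/bolt"):
    try:
        print(module, "->", cached_versions(module))
    except Exception as exc:  # HTTPError if the proxy no longer serves it
        print(module, "->", exc)
```

Because the proxy serves from its cache rather than from GitHub, a version listed here can differ from what the repository currently contains – the exact gap the attackers exploited.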
  • Windows 10's extended support starts at $61 per device, Microsoft reveals new details
    www.techspot.com
What just happened? As the clock ticks down to the official end of support for Windows 10 this October, Microsoft is encouraging users to upgrade to its latest operating system, Windows 11. But, as promised, it is also offering a path to those who wish to stick with the familiar Windows 10 environment. The problem? That path has turned out to be an expensive one.

In a recently updated support document, Microsoft has revealed new details about its Extended Security Updates (ESU) program for Windows 10. This program aims to provide critical security updates for users who are not ready to make the leap to Windows 11. However, this extended support comes at a price, which may cause some organizations to reconsider their IT strategies.

The ESU program will be available for devices running Windows 10 version 22H2, with costs starting at $61 per device for the first year of coverage, from November 2025 to November 2026.

Microsoft has also announced that these costs will double annually, with the program capped at three years. It's a pricing structure that seems designed to encourage eventual migration to Windows 11, rather than indefinite reliance on an aging operating system.

The update confirms that the ESU program is cumulative, meaning that if an organization joins in the second year, it will be required to pay for the previous year's coverage as well. However, Microsoft has thrown some users a bone: Windows 10 virtual machines running in Windows 365 or Azure Virtual Desktop will receive ESUs at no additional charge, a benefit for organizations heavily invested in Microsoft's cloud services.

The company emphasizes that while Windows 10 PCs will continue to function after the end-of-support date, upgrading to Windows 11 is strongly recommended for "the best, most secure computing experience."

But user adoption tells a different story. According to the latest data from Statcounter, Windows 10 continues to dominate the market with a 60.37 percent share of Windows installations. Windows 11, despite Microsoft's efforts, has only recently seen an uptick in adoption, reaching 36.6 percent in January 2025. This represents a notable increase from 34.12 percent the previous month, likely driven by the impending end of Windows 10 support.

The narrowing gap between Windows 10 and Windows 11 usage is likely a welcome sight for Microsoft's OS team. However, the persistence of Windows 10's market dominance highlights Microsoft's challenges in convincing users to embrace the change.
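Putting the doubling and cumulative rules together, the per-device math works out as follows. This is a quick sketch based solely on the figures above:

```python
# Worked example of the ESU pricing described above: $61 in year one,
# doubling each year, capped at three years, and cumulative (joining
# late still means paying for the years you skipped). Per device.

BASE = 61   # year-one price in USD
YEARS = 3   # program cap

prices = [BASE * 2**n for n in range(YEARS)]  # [61, 122, 244]

def cost_to_join(start_year: int) -> int:
    """Total owed per device when enrolling in `start_year` (1-indexed)."""
    # Cumulative rule: a year-2 joiner also pays for year 1, and so on.
    return sum(prices[:start_year])

for year in range(1, YEARS + 1):
    print(f"Enrolling in year {year}: ${cost_to_join(year)} per device")
# Full three-year coverage: 61 + 122 + 244 = $427 per device
```

In other words, an organization that waits until the final year still owes the full $427 per device, which underlines how deliberately the structure pushes toward migration.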
  • Nvidia cuts RTX 4060 supply ahead of rumored RTX 5060 launch in March
    www.techspot.com
Rumor mill: With the GeForce RTX 5070 and RTX 5070 Ti set to launch later this month, attention is turning toward Nvidia's future entries in the Blackwell line, namely the RTX 5060 series. According to reports, Team Green has started limiting the supply of RTX 4060 chips to manufacturing partners, a strong indication that the successor could arrive around the rumored March window.

According to a report on the Board Channels forums, which features PC-related stories from sources close to supply chains and AIC partners in Asia, Nvidia has started reducing RTX 4060 supply as of February.

If true, Nvidia will be slashing RTX 4060/Ti supply to AIC brand manufacturers by 60% compared to what they received in the fourth quarter of 2024. The reduction applies to the Asian region, but Nvidia is expected to expand the change globally.

The RTX 4060 hasn't seen noticeable price drops recently, a common sign of an upcoming successor as Nvidia clears inventory. However, price reductions may still occur soon.

The RTX 4060 series is one of the most popular lineups among Steam survey participants. The laptop version is the second most common GPU in the chart, with the desktop variant in third place and the RTX 4060 Ti in fifth. It wouldn't be surprising to see the desktop 4060 eventually replace the RTX 3060 as the top GPU among participants.

The RTX 5060 and RTX 5060 Ti are both rumored to launch in March priced at $329 and $419, respectively. It's also claimed that the cards won't use 16-pin power connectors, sticking instead with the standard 8-pin connectors found in the RTX 4060 and RTX 4060 Ti.

However, unlike the current-gen RTX 4060 and 4060 Ti, the RTX 5060 series is said to require at least a 650W power supply, up from the 550W needed for its predecessors. The memory configurations will likely carry over to the new generation – 8GB for the 5060, and 16GB/8GB versions for the 5060 Ti – except the Blackwells will use GDDR7.

The RTX 5000 series is proving to be a major disappointment in the eyes of most gamers. If you want an even better illustration of why, check out our feature: Nvidia's RTX 5080 is Actually an RTX 5070.
  • Acer unveils new Predator Helios Neo AI laptops with RTX 5070 and 5070 Ti GPUs, starting at $1,900
    www.techspot.com
Something to look forward to: Acer has announced new products in its Predator Helios Neo series that are among the first laptops to feature RTX 5070 and RTX 5070 Ti mobile GPUs. They come in a variety of options, including Mini LED and OLED displays, with prices starting at $1,899.

The new Predator Helios Neo 16 AI and Predator Helios Neo 18 AI gaming laptops pair the mid-range Blackwell mobile GPUs with Intel's Arrow Lake-HX CPUs.

Both laptops can be fitted with an RTX 5070 or RTX 5070 Ti alongside a Core Ultra 7 255HX or Core Ultra 9 275HX processor, which should give them plenty of gaming performance. The now-compulsory addition of "AI" at the end of the name signals that these are yet more products whose artificial intelligence capabilities will be promoted heavily, even though a study showed that including the term can put consumers off buying something.

Both devices offer four display types. In the case of the Predator Helios Neo 16 AI, users can choose a 16-inch 2560 x 1600 OLED screen with a 240Hz refresh rate, a standard panel with the same resolution at either 240Hz or 180Hz, or a 1920 x 1200 screen at 180Hz. The OLED model has a 1ms response time, which increases to 3ms in the other variants.

Related reading: The Best Laptops - Early 2025

The desktop-replacement Predator Helios Neo 18 AI swaps out the OLED option for an 18-inch 2560 x 1600 Mini LED at 250Hz, a 2560 x 1600 LED at 240Hz or 165Hz, or a 1920 x 1200 LED at 165Hz. All four options have 3ms response times.

The new Helios laptops can be specced with up to 64GB of DDR5 RAM (6,400 MHz), while storage consists of up to 2TB of PCIe Gen 4 SSDs across the two SSD slots. They also come with 90Wh batteries, Wi-Fi 6E support, FHD IR cameras, and four-zone keyboard lighting.

Port-wise, the laptops offer one Thunderbolt 4 port, one USB-C 3.2 Gen 2, three USB-A ports, one HDMI 2.1, a 3.5mm headphone/mic jack, a microSD slot, and an ethernet port.

The Predator Helios Neo 16 AI is expected to start at $1,900 (RTX 5070/Ultra 7 255HX/FullHD screen) when it launches in the US this April. It will be €1,699 in the EMEA region at launch in May. The Predator Helios Neo 18 AI is expected to start at around $2,200 in the US this May. The bigger laptop will start at €1,799 in EMEA regions upon release in June.
  • Researchers develop self-healing asphalt that repairs cracks, stops potholes from forming
    www.techspot.com
Something to look forward to: Researchers have developed a new type of asphalt capable of repairing its own cracks over time. This material, inspired by the regenerative abilities of trees and certain animals, aims to address the problem of potholes in the UK, which cost millions annually in repairs, not to mention causing significant frustration for drivers.

The exact mechanisms of crack formation in asphalt are not fully understood, but cracks often originate from the hardening of bitumen due to oxidation. To tackle this issue, scientists from King's College London and Swansea University collaborated with researchers in Chile on ways to reverse this process and effectively "stitch" asphalt back together.

The researchers used artificial intelligence, specifically leveraging Google Cloud's AI capabilities, to develop the self-healing asphalt by combining materials science with advanced modeling techniques.

The key innovation is a sophisticated blend of natural spore microcapsules and waste-based rejuvenators. In laboratory experiments, researchers demonstrated that this new asphalt material could heal a microcrack in less than an hour.

The asphalt mixture incorporates tiny plant spores filled with recycled oils. These microcapsules are smaller than a strand of hair and are designed to rupture when cracks begin to form in the asphalt. When the road surface is compressed by passing traffic, the spores release their oil, softening the bitumen and allowing it to flow back together. This process enables the asphalt to mend its own cracks over time, effectively "stitching" the material back together.

The researchers used machine learning algorithms to analyze organic molecules in the bitumen for insights into the molecular structure and behavior of asphalt materials. They developed data-driven models that accelerate atomistic simulations and advance research into crack formation processes. The AI also helped identify chemical properties that contribute to self-healing capabilities and enable the creation of virtual molecules, similar to techniques used in drug discovery.

Asphalt production for construction and maintenance in the UK is a massive undertaking, with over 20 million tons produced annually. While the industry has been moving towards more sustainable practices by incorporating recycled materials like food waste, the persistent issue of cracks and potholes has remained challenging.

This difficulty arises from asphalt's composition of binder, aggregates, and air voids, which complicates the prediction of crack initiation and propagation. Various contributing factors, including traffic loading, temperature fluctuations, oxidation, moisture infiltration, and construction quality, further complicate the process and make it hard to model accurately.

The self-healing asphalt is still in the development phase, but it holds promise for improving infrastructure and promoting sustainability worldwide.
  • www.techspot.com
WHAT NEXT? If something runs on an electronic chip, someone will inevitably try to run Doom on it. The latest addition to the growing list of devices, appliances, and other objects that can play id Software's 1993 classic is a $50 HDMI adapter Apple released over a decade ago. The process required deep analysis of the dongle's surprisingly complex internals.

Tinkerer John "Nyan Satan" recently demonstrated Doom running on Apple's Lightning Digital AV adapter. Although the gameplay looks choppy, the fact that a dongle released over a decade ago can handle the game at all is impressive. In some ways, it compares surprisingly well to the PCs that ran id Software's iconic first-person shooter in 1993.

The minute-long clip shows a MacBook feeding data and inputs to the $49 adapter, which runs the game and connects to a monitor atop a tangle of wires. While the frame rate isn't nearly as high as on a modern PC, other devices have fared worse in Doom's three-decade history of unconventional ports.

In a lengthy 2019 Twitter thread, John explained that Apple's HDMI dongles contain an SoC labeled S5L8747. Little is known about it except that it receives a 25MB firmware bundle from any iPhone it's connected to and features 256MB of RAM – 32 times the 8MB listed in Doom's original system requirements.

Modding Apple Digital AV adapters to connect to PCs or Macs is relatively simple, as they are fundamentally HDMI devices. Predictably, one of the first responses to John's analysis was the inevitable question: Can it run Doom? Six years later, he demonstrated that, yes, it can.

According to Tom's Hardware, John plans to further optimize performance to 60fps, add audio support, integrate controller compatibility (removing the need for a Mac), and eventually release the mod publicly. Installing it, however, requires a jailbroken iOS device.

Thanks to Doom's legendary status and its practically nonexistent system requirements (by modern standards), it has become a favorite benchmark for running software on low-power devices – and the list of things that can run it just keeps growing.
  • www.techspot.com
Forward-looking: If you're a console owner who can't wait to play Grand Theft Auto 6 this fall, then here's some good news: Rockstar has once again confirmed that it's on track to release the game during the season. Unfortunately for us PC owners, a version for our platform still hasn't been announced, suggesting this will be another Rockstar game that takes months, and hopefully not years, to come to PC.

It's been more than two years since the record-breaking GTA 6 trailer landed on YouTube. Since then, Rockstar has narrowed the release date down to fall 2025, but it still hasn't mentioned anything about a PC version – something that didn't change during Take-Two's financial report yesterday.

CEO Strauss Zelnick admitted that while GTA 6's fall launch date remained on track, delays can never be ruled out.

"We believe that arrogance is the enemy of continued success," Zelnick said in a post-report Q&A session. "We run scared. We're looking over our shoulder. Our competitors are not asleep. And so what do we do about that? We try to be the most creative, the most innovative, and the most efficient company in the entertainment industry. And Rockstar Games in particular seeks perfection in everything they do. And we believe that if we do that right, and we focus on delivering for consumers, that's our best opportunity to succeed."

GTA 6 is set to become the most hyped game of all time. Zelnick has ramped up the excitement in the past, repeating the "seeking perfection" line last year. He also said it would offer an experience never seen before. And at the Game Awards in November, Rockstar's David Manley said it would boast "mind-blowing things."

Related reading: Most Anticipated PC Games of 2025

The lack of news regarding a PC version is disappointing, especially in these modern times when PC ownership is so high and most machines can vastly outperform consoles. A former developer at Rockstar said PC ports come later because the company wants to prioritize what sells best (i.e., console games). The complications of building a PC version for multiple hardware configurations, and the testing that involves, are also big factors in why PC versions of Rockstar's titles arrive much later.

Rockstar's history with PC ports doesn't instill confidence in those hoping GTA 6 for PC will arrive soon after the fall. It took seven months for GTA 3 to arrive, eight months for San Andreas, eight months for GTA 4, and almost two years for GTA 5. Even Red Dead Redemption 2 took over a year, landing on PS4 and Xbox One in October 2018 before arriving on PC in November 2019.

Away from GTA 6, Take-Two is still set to release Mafia: The Old Country this summer, while Borderlands 4 will arrive before the end of 2025. There's no word yet on when BioShock creator Ken Levine's Judas will be released.

One thing gamers aren't looking forward to is the possibility of GTA 6 costing up to $100. An analyst recently said he "hopes" this will happen, as it could raise the average price of video games across the industry.
  • Latest Windows 11 build adds full MIDI 2.0 support and improved performance
    www.techspot.com
In a nutshell: The MIDI 2.0 standard was introduced in 2020, nearly 40 years after the original version. MIDI remains a crucial technology for musicians and music producers, and its utility on PCs is set to expand as Microsoft plans to enhance support in Windows.

Microsoft recently released Windows 11 Insider Preview Build 27788 to users in the Canary Channel. This update could be a significant milestone for musicians using Windows, as it introduces major improvements to the Musical Instrument Digital Interface (MIDI) standard.

MIDI, a foundational music technology dating back to 1983, defines a communication protocol, digital interface, and electrical connectors that allow multiple musical instruments to work together. Microsoft is undertaking a complete rewrite of MIDI support for Windows, delivered through the open-source Windows MIDI Services project.

If you've ever played an electronic instrument or arranged music in MIDI format, you may find the new Windows MIDI Services technology particularly exciting. Microsoft states that the new stack supports both MIDI 1.0 and the more advanced MIDI 2.0 standard. It is compatible exclusively with 64-bit operating systems and finally extends full MIDI support to all modern Windows architectures, including Arm64-based Copilot+ PCs.

The MIDI 2.0 standard introduces high-speed data transmission, high-fidelity messages, endpoint discovery and negotiation, and more. Windows 11's new MIDI stack further enhances performance with improved timing, reduced jitter in data exchange, and a faster MIDI driver that supports both MIDI 1.0 and 2.0 devices. It also features automatic API translation between the two versions, ensuring seamless compatibility.

Microsoft is making a concerted effort to provide first-class MIDI support in Windows 11. The standard remains widely used, not just in music production but also in certain industrial applications. The new USB MIDI 2.0 class driver, included with Preview Build 27788, was developed in collaboration with the Association of Musical Electronics Industry of Japan and AmeNote. According to Microsoft, it ensures compatibility with both MIDI 2.0 and legacy MIDI 1.0 devices.

However, the latest Windows 11 preview isn't just about MIDI improvements. Microsoft is also introducing a "seamless" 1-click resume feature for OneDrive, allowing users to quickly pick up where they left off when switching between devices. Additionally, the Microsoft Store now supports selective installation for certain games, enabling users to save SSD space by skipping unwanted components, such as single-player campaigns or high-resolution textures in multiplayer-focused titles like Call of Duty and Halo.
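To give a sense of what "high-fidelity messages" means in practice, here is a simplified Python sketch contrasting a MIDI 1.0 note-on with its MIDI 2.0 Universal MIDI Packet (UMP) equivalent. The bit layouts follow the published specifications, but the sketch deliberately omits details such as groups and per-note attributes:

```python
# Simplified comparison of MIDI 1.0 vs. MIDI 2.0 note-on messages.
# MIDI 1.0 packs velocity into 7 bits; MIDI 2.0's channel-voice UMP
# (message type 0x4) widens velocity to 16 bits in a 64-bit packet.

def note_on_v1(channel: int, note: int, velocity: int) -> bytes:
    """MIDI 1.0 note-on: three bytes, velocity limited to 0-127."""
    return bytes([0x90 | channel, note & 0x7F, velocity & 0x7F])

def note_on_v2(channel: int, note: int, velocity16: int) -> tuple[int, int]:
    """MIDI 2.0 note-on as two 32-bit UMP words, velocity 0-65535."""
    word1 = (0x4 << 28) | (0x9 << 20) | (channel << 16) | (note & 0x7F) << 8
    word2 = (velocity16 & 0xFFFF) << 16
    return word1, word2

print(note_on_v1(0, 60, 100).hex())                       # 903c64
print(tuple(hex(w) for w in note_on_v2(0, 60, 0xC800)))   # 0x40903c00, 0xc8000000
```

The 512-fold jump in velocity resolution (128 steps to 65,536) is what enables the more expressive playing dynamics MIDI 2.0 promises, and the new Windows stack's API translation maps between these two representations automatically.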
  • Researchers create reasoning model for under $50, performs similarly to OpenAI's o1
    www.techspot.com
Why it matters: Everyone's coming up with new and innovative ways to work around the massive costs involved in training and creating new AI models. After DeepSeek's impressive debut, which shook Silicon Valley, a group of researchers has developed an open rival that reportedly matches the reasoning abilities of OpenAI's o1.

Stanford and University of Washington researchers devised a technique to create a new AI model dubbed "s1." They have already open-sourced it on GitHub, along with the code and data used to build it. A paper published last Friday explains how the team achieved these results through clever technical tricks.

Rather than training a reasoning model from scratch – an expensive endeavor costing millions – they took an existing off-the-shelf language model and "fine-tuned" it using distillation. They extracted the reasoning capabilities from one of Google's AI models – specifically, Gemini 2.0 Flash Thinking Experimental – and then trained the base model to mimic its step-by-step problem-solving process on a small dataset.

Others have used this approach before. In fact, distillation is what OpenAI accused DeepSeek of doing. However, the Stanford/UW team found an ultra-low-cost way to implement it through "supervised fine-tuning."

This process involves explicitly teaching the model how to reason using curated examples. The full dataset consisted of only 1,000 carefully selected questions and solutions pulled from Google's model.

TechCrunch notes that the training process took 30 minutes, using 16 Nvidia H100 GPUs. Of course, these GPUs cost a small fortune – around $25,000 per unit – but renting works out to under $50 in cloud compute credits.

The researchers also discovered a neat trick to boost s1's capabilities even further. They instructed the model to "wait" before providing its final answer. This command allowed it more time to check its reasoning and arrive at slightly improved solutions.

The model is not without its caveats. Since the team used Google's model as its teacher, there is the question of whether s1's skills – impressive for their minuscule cost – can scale up to match the best AI has to offer. There is also the potential for Google to protest; it could be waiting to see how OpenAI's case goes.
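The "wait" trick can be sketched in a few lines. Below is a toy illustration of the idea (the paper calls it budget forcing) – not the team's actual code – in which a stand-in model's premature end-of-thinking marker is stripped and replaced with "Wait" to force another round of self-checking before the final answer:

```python
# Toy illustration of budget forcing. `toy_model` stands in for a real
# decode loop (e.g., model.generate()); everything here is a sketch of
# the described behavior, not the researchers' implementation.

END = "</think>"

def toy_model(prompt: str) -> str:
    """Pretend model: always tries to stop reasoning immediately."""
    return " ...checks arithmetic... " + END + " Final answer: 42."

def reason_with_budget_forcing(question: str, extra_rounds: int = 2) -> str:
    trace = question
    for _ in range(extra_rounds):
        out = toy_model(trace)
        thinking = out.split(END)[0]   # drop the premature stop marker
        trace += thinking + "Wait,"    # force another round of reasoning
    return trace + toy_model(trace)    # finally let the model stop

print(reason_with_budget_forcing("Q: What is 6 * 7?"))
```

The appeal is that this intervention happens purely at inference time: no extra training is needed to coax longer, more careful reasoning out of the fine-tuned model.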
  • Intel Xeon server CPU sales hit 14-year low as AMD gains ground
    www.techspot.com
The big picture: The past several years have produced some of the worst figures in Intel's 56-year history. The company faces trouble on all fronts, from foundry to consumer processors and server chips. As Intel reorganizes its foundry operations, new analysis paints a grim picture of its server business's current trajectory.

According to SemiAnalysis, Intel's 2024 server processor volume declined for the third straight year. Following the precipitous drop in 2023, the company's data center business has reached a 14-year low.

Based on data from Intel's 10K reports, analysts have charted Intel's data center CPU volume since 2011. The numbers aren't exact and appear as a percentage of 2011's total, but they reveal a peak in 2021, followed by an ongoing decline. The most dramatic fall occurred in 2023, when server processor volume plummeted by the equivalent of half of 2011's count. Intel's volume has fallen by over half since 2021.

Chipzilla is losing ground to AMD in the consumer and server CPU markets. Late last year, the company admitted that it has no plans to answer the 3D V-Cache that has made Team Red's Ryzen chips the undisputed masters of gaming desktops. Instead, Intel is bringing similar technology to its upcoming Clearwater Forest data center processors because it considers servers a more critical market.

Set for launch sometime this year, Clearwater Forest and Intel's laptop-focused Panther Lake CPUs will also decide the fate of another business the company struggles with: semiconductor manufacturing. Both will prove whether the company's 18A node can compete against TSMC's leading 3nm and 2nm processes.

Also read: Intel's takeover dilemma: A Gordian knot of funding and politics

However, Intel already turned to TSMC for last year's Lunar Lake notebook CPUs and may do so again for future chips in all sectors. The desktop Nova Lake processors, expected to emerge in 2026, will use transistors from Intel and another manufacturer, likely TSMC.

Amid questions over the survival of Intel's semiconductor operations, the company spun its foundry division into a separate entity that it admits will have to "earn" its business just like TSMC or Samsung. Intel claims that other prospective clients have successfully powered on products based on 18A, suggesting that the node's development is progressing smoothly.

Sliding revenue and other problems led to the recent departure of CEO Pat Gelsinger, prompting Bill Gates to say that Intel has "lost its way." The Microsoft co-founder noted that Intel has fallen behind in foundry and chip design over the last decade.

Rumors of a buyout are circulating, but it remains unclear who, if anyone, would want to acquire Intel.
  • Malicious apps on Android and iOS scan screenshots to steal cryptocurrencies
    www.techspot.com
Editor's take: Taking screenshots on modern mobile devices is incredibly easy. However, inexperienced users often overlook the potential security risks of saving images containing sensitive data. This oversight can lead to financial losses, as cybercriminals are always ready to exploit such lapses in operational security.

Kaspersky has uncovered a new malware campaign designed to breach users' crypto wallets and steal Bitcoin and other cryptocurrencies. Dubbed SparkCat, the malware leverages advanced optical character recognition technology integrated into modern smartphone platforms to scan for recovery phrases used to access crypto wallets. Notably, it affects both the Android and iOS ecosystems.

SparkCat was found embedded in several Android and iOS apps, some of which were available in official app stores. The malware employs a malicious SDK that integrates Google's OCR technology, enabling it to scan users' photo galleries for screenshots and extract crypto wallet recovery codes from images.

The infected apps discovered on Google Play had been downloaded over 242,000 times. Meanwhile, some malicious apps targeting iOS remain available for download, including two AI chat tools (WeTink and AnyGPT) and a Chinese food delivery app (ComeCome).

Kaspersky believes the SparkCat campaign has likely been active since March 2024. The malicious apps featured a previously unseen protocol written in Rust, which proved useful for communicating with command-and-control servers operated by the cybercriminals behind the attack.

The origin of SparkCat remains unclear. Kaspersky has not determined whether the infection was part of a sophisticated supply chain attack or the result of deliberate actions by the app developers. The malware employs tactics previously observed by researchers in 2023, when ESET analysts discovered malicious "implants" in Android and Windows apps designed to scan images for crypto wallet access codes.

SparkCat underscores the risks of poor security practices on personal mobile devices. Saving screenshots in a phone's gallery is already a potential vulnerability, but for users who have invested in cryptocurrency, it can turn into a serious security threat.
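The core technique is simple enough to sketch defensively: scan your own screenshot folder for anything resembling a recovery phrase before an attacker's app does. The sketch below uses the real pytesseract and Pillow libraries; the word-count heuristic and the folder path are our own illustrative assumptions, not anything from Kaspersky's analysis:

```python
# Defensive sketch: OCR local screenshots and flag any whose text looks
# like a 12- or 24-word wallet recovery phrase. The "count of short
# lowercase words" heuristic is a deliberately crude assumption.

import re
from pathlib import Path

from PIL import Image
import pytesseract

def looks_like_seed_phrase(text: str) -> bool:
    words = re.findall(r"[a-z]{3,8}", text.lower())
    return len(words) in range(12, 25)  # BIP39 phrases are 12-24 words

def scan_gallery(folder: str) -> list[Path]:
    flagged = []
    for path in Path(folder).glob("*.png"):
        text = pytesseract.image_to_string(Image.open(path))
        if looks_like_seed_phrase(text):
            flagged.append(path)
    return flagged

if __name__ == "__main__":
    for risky in scan_gallery("./screenshots"):  # hypothetical folder
        print("Consider deleting or encrypting:", risky)
```

That a few lines of commodity OCR can do this is exactly why a seed phrase sitting in a photo gallery should be treated as compromised storage.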
  • Rookie robocallers impersonate FCC, accidentally target actual FCC employees
    www.techspot.com
WTF?! A group of robocallers impersonating FCC employees made the amateur mistake of trying to scam actual commission employees last year. They likely had no idea they had inadvertently dialed the very regulators responsible for cracking down on them.

The calls occurred on the night of February 6 and the morning of February 7 last year, as the FCC recently revealed. Over a dozen FCC employees, along with some of their relatives, received automated calls featuring an artificial voice introducing itself as the "FCC Fraud Prevention Team." The robotic voice instructed recipients to press 1 to speak with a representative immediately or 2 to schedule a callback.

One recipient reported being told they needed to fork over $1,000 in Google gift cards to avoid going to jail for "crimes against the state."

Now, the FCC doesn't even have a "Fraud Prevention Team," so as you can imagine, it wasn't too thrilled about having its name dragged like that. In an announcement on Tuesday, the commission proposed a $4.5 million fine against the voice service provider Telnyx, accusing the company of enabling the robocalls by violating "Know Your Customer" (KYC) rules.

Some details about how the robocalls went down have also come to light. On February 6, 2024, Telnyx accepted two new customers using the aliases "Christian Mitchell" and "Henry Walker," who provided fake Canadian addresses and sketchy "mariocop123.com" email domains. The robocallers paid for the service with cryptocurrency to cover their tracks.

Over the next day, the "MarioCop" accounts launched approximately 1,800 calls before Telnyx caught on and shut down the accounts.

From Telnyx's perspective, it provides services that allow businesses to easily build AI voice bots and apps for making automated calls. It likely didn't expect clients to use its platform for low-effort human scam calls – let alone to target federal agency staff.

When contacted by Ars Technica, Telnyx denied the FCC's allegations and said it would contest the penalty. The company will get a chance to respond to these allegations and try to convince the FCC that it shouldn't be fined. In some cases, telecom disputes like this end up settling for a lower amount.
  • Gigabyte is offering access to a $300K supercomputer for free, but there's a catch
    www.techspot.com
In a nutshell: Gigabyte has an intriguing offer for those who need access to a supercomputer. Through its Giga Computing subsidiary, the company offers qualified users the opportunity to test drive one of the world's most advanced supercomputers for free. Of course, as with all "free" offers, there is a catch.

In a week-long trial dubbed "Launchpad," Giga Computing will grant users access to its cutting-edge Gigabyte G383-R80 server. This machine is powered by four of AMD's latest Instinct MI300A APUs, combining CPU and GPU horsepower for accelerated computing tasks. It also features up to eight hot-swappable 2.5" NVMe/SATA/SAS bays with capacities of up to 61.44TB, plus 12 PCIe 5.0 x16 slots. Networking is no joke either, with built-in 10Gbit/s Ethernet and support for add-in cards like QSFP56.

The server is well suited for demanding AI training, inference workloads, and high-performance computing applications. Several other builds are available, including an Nvidia HGX H100 system.

The program is not without limitations. Users only get access for seven days, although they can request an extension of up to two weeks if needed. Gigabyte also selects who gets access during the trial period. Distributors are ineligible, and applicants must outline their project and need for access.

If Gigabyte deems a proposal worthy, it will reach out within three business days to clarify the details and instructions. Once approved, users get remote access to the server within two weeks. They will also be required to get their project up and running within three days. So this isn't a program for just any average Joe – it is intended for serious researchers and professionals.

The trial program is open to applicants worldwide who want to put the latest tech through its paces. The company aims to help users get familiar with the hardware and ensure their software runs smoothly on these high-powered systems. After the trial, Gigabyte will purge user accounts and any related data to protect privacy. TechRadar notes that users wishing to preserve their data can set up a permanent Gigabyte G383-R80 account with the same configuration. The catch is that it'll cost $304,207.
  • Tablet shipments rose 9.2% worldwide in 2024, reversing three-year slump
    www.techspot.com
The big picture: Smartphones weren't the only consumer tech category to see a rebound in 2024. According to the latest report from Canalys, global tablet shipments rose to 147.6 million units last year – up 9.2 percent compared to 2023. The growth, which was observed in every region except North America, reversed three consecutive years of decline and is expected to continue.

Apple led all other manufacturers in 2024 with nearly 57 million iPads shipped – good for a 38.6 percent market share. The company's most recent release, the iPad mini 7th gen, debuted in late October and features the A17 Pro processor.

Samsung finished in second place with 27.8 million tablets shipped and an 18.8 percent slice of the pie, while Huawei rounded out the top three with 10.7 million slates shipped.

It was a strong year for Chinese vendors. Xiaomi ranked fifth overall, shipping 9.2 million tablets – an impressive 73.1 percent year-over-year growth. Meanwhile, Huawei, the other Chinese tablet maker in the top five, saw nearly 30 percent annual growth.

Other Chinese brands outside the top five are working to expand beyond their home market. Honor, for example, is focusing on growing its footprint in Indonesia and recently introduced new bundles in the UK following the launch of its Magic7 Pro smartphone in Q4.

The future looks bright as well. Canalys research manager Himani Mukka pointed to a recent channel survey in which 52 percent of partners that sell tablets expect shipments to increase in 2025. Only 16 percent of those surveyed anticipate a decline, while the remaining 32 percent expect shipments to be flat year over year.

Related reading: Global smartphone market grew seven percent in 2024 as vendors brace for a challenging 2025

As for Mukka's own view, the manager believes the consumer tablet market is in for a more constrained performance this year, but "there will be pockets of opportunity for vendors to target."

Where do you stand with tablets? Are they part of your tech repertoire, or do you mostly rely on a smartphone / laptop for portable computing?
  • Bill Gates says Intel has lost its way, fallen behind in chip design and fabrication
    www.techspot.com
Big quote: The last few years have not been kind to Intel. The company has seen its fortunes fall as rivals continue to make great strides, both financially and technologically. In a recent interview, Microsoft co-founder Bill Gates shared his thoughts on the situation, stating that Intel has "lost its way."

An interview with Gates by the Associated Press notes how the billionaire has a soft spot for Intel. The publication suggests that his career might have gone down a different path had Team Blue not created the first commercial microprocessor, the Intel 4004, in 1971. It led to more advanced chips that powered personal computers, creating the need for software to run on those PCs.

While Microsoft has been on the rise since Satya Nadella became CEO in 2014, Intel has endured its most difficult period in decades. There were delays transitioning from the 14nm to the 10nm process, followed by the delay of 7nm. Intel has also steadily lost market share to AMD, seen Apple drop the company in favor of its own silicon, dealt with security vulnerabilities, struggled with Raptor Lake issues, and lost ground to chip rivals. Financial troubles have also hurt the company, culminating in the ousting of CEO Pat Gelsinger last year.

"I am stunned that Intel basically lost its way," Gates said. He added that Intel co-founder Gordon Moore "always kept Intel at the state of the art. And now they are kind of behind in terms of chip design and they are kind of behind in chip fabrication."

Nvidia, TSMC, and Qualcomm are all ahead of Intel in various areas of chip manufacturing and design, and catching up will be difficult, if not impossible.

Gates also highlighted how Intel essentially missed the AI chip revolution, though he did have praise for former CEO Pat Gelsinger.

"I thought Pat Gelsinger was very brave to say, 'No, I am going to fix the design side, I am going to fix the fab side.' I was hoping for his sake, for the country's sake that he would be successful. I hope Intel recovers, but it looks pretty tough for them at this stage."

Related: Intel's takeover dilemma: A Gordian knot of funding and politics

Intel has also been falling behind AMD in the consumer CPU market. Team Red is dominating the Amazon.com processor sales chart while continuing to do well abroad. Intel's only solace may be that most Steam survey participants (63%) still use its CPUs.

There have been rumors that Intel could be bought out – Broadcom looked like a potential buyer for a while – but funding the company's fabs will require tens of billions of dollars, and getting them back on track will take years, making it a less appealing proposition. Given the amount of money the US government has poured into them, shutting down its fabs isn't an option for Intel.
  • DJI takes a risky bet, removes no-fly zones as US ban looms
    www.techspot.com
TL;DR: DJI is at a critical juncture as it faces a potential automatic ban on its products in the US. With less than a year to persuade the Trump administration and US lawmakers to reconsider, the company has made a bold move by announcing the removal of its self-imposed no-fly zones – a decision that has raised eyebrows and sparked concerns across the drone industry.

The timing of the announcement has been particularly controversial, coming less than a month after a small DJI drone collided with a plane battling the Los Angeles wildfires. Despite the incident, DJI is moving forward with its plan to eliminate restrictions that previously prevented its drones from flying over sensitive areas such as airports, power plants, and even the White House.

Meanwhile, a critical deadline looms for the China-based company. Concerned that DJI drones could be used to collect sensitive information and transmit it to China, lawmakers earlier this year proposed the Countering CCP Drones Act, which aimed to add DJI to the FCC's blacklist. While the act was ultimately excluded from the final version of the National Defense Authorization Act this month, the NDAA still includes language with similar provisions.

In an extensive interview with The Verge, Adam Welsh, DJI's head of global policy, acknowledged that the company faces an uphill battle in convincing the public that eliminating no-fly zone restrictions is the right move. "Geofencing has been in place for more than 10 years, and we recognize any change to something that's been in place for 10 years can come as a bit of a shock to people," he said.

Welsh argued that while geofencing was initially implemented to fill regulatory gaps when consumer drones first entered the market, it was never a foolproof solution.

Welsh points out that regulatory agencies have taken alternative approaches to drone safety, prioritizing operator training, airspace permissions, and remote ID technology rather than mandating geofencing. "They have stuck to the basic principle that the operator should be in control of the drone, the airplane, or any other kind of aviation object at all times," Welsh said.

Critics argue that removing these restrictions could heighten safety risks. However, DJI contends that geofencing itself comes with significant drawbacks.

Wayne Baker, DJI's public safety integration director, highlighted the challenges faced by first responders as an example. "An autistic child that's missing in inclement weather – we didn't have the time to go through 'here's our permissions' and all that."

The company also cites the growing burden of processing unlock requests as a key factor in its decision. While DJI insists that cost savings were not the primary motivation, Welsh acknowledged that "the burden on our internal resources had been growing exponentially." The company had invested in round-the-clock staffing to handle these requests, aiming to process them within an hour.

DJI's decision raises broader questions about balancing operator freedom with public safety. Welsh likened geofencing to a car that prevents its owner from driving to certain places even after receiving permission, or that restricts speed in designated areas. "I don't think people would accept it," he argued.
Like traditional aircraft pilots, he believes drone operators should be responsible for understanding and adhering to flight restrictions.

As the debate unfolds, DJI faces the challenge of persuading regulators and the public that this move enhances rather than compromises safety. The company is banking on improved operator education and existing regulatory frameworks to maintain safe drone operations. With the specter of a US ban looming, DJI's strategy amounts to a high-stakes bet on operator responsibility and regulatory alignment.