TechSpot
Tech Enthusiasts - Power Users - IT Professionals - Gamers
Recent Updates
  • Google makes it easier to remove your personal information from search results
    www.techspot.com
    In brief: As the dominant gateway to online information, Google's search engine has shaped how people access and discover content. However, search results sometimes expose personal data, raising privacy concerns. A recent update introduces a tool that gives users more control over what appears in search results.

    Google updated a search engine tool called "Results About You" that it initially rolled out in 2022. Developers have made it more user-friendly and directly integrated its most helpful features into search results, including the ability to remove your personal data from them.

    Users must first take a somewhat counterintuitive step to use the tool: inputting their personal information into the system. While potentially alarming for those aiming to protect their privacy, this step is necessary for the tool to identify and manage specific data in search results. Regardless of whether it appears in search results, Google likely already possesses this information anyway.

    One of the most significant improvements in this update is the integration of the tool's key features directly into search results. While not prominently displayed, users can access Results About You through the three-dot menu next to each search result. This menu includes options to remove results containing personal information.

    When a user requests the removal of personal information, Google prompts for additional details, a process that typically takes only a few seconds. The interface also accommodates non-personal removal requests, such as reporting illegal content. All requests are logged in the "Results About You" tool for later review. It's worth noting that Google can only remove content from its search results, not from the web pages containing it.

    The redesigned hub allows users to monitor the status of their removal requests and offers additional features. For instance, users can request a data refresh if a search result contains inaccurate information. This option is useful when a website has removed personal data but the search results haven't updated.

    This is not the first time Google has overhauled the tool. In 2023, it introduced a suite of features that made it more proactive, actively scanning search results for users' data. Developers added an expanded dashboard, giving users a more comprehensive view of their personal information in Search, along with the ability to request the removal of nonconsensual explicit images from search results.

    Google recommends regularly checking the Results About You hub. The tool automatically identifies new instances of previously flagged personal data, allowing users to request removals quickly. Additionally, users can receive notifications via phone or email when new personal information appears in search results.
  • Real-time AI voice technology alters accents in Indian call centers for better clarity
    www.techspot.com
    Teleperformance SE, the world's largest call center operator, is implementing an artificial intelligence system designed to soften the accents of English-speaking Indian workers in real time. The company claims the technology will enhance customer understanding and satisfaction.

    "When you have an Indian agent on the line, sometimes it's hard to hear, to understand," Deputy Chief Executive Officer Thomas Mackenbrock told Bloomberg. "[This technology can] neutralize the accent of the Indian speaker with zero latency, [creates] more intimacy, increases the customer satisfaction, and reduces the average handling time."

    The technology analyzes speech input and modifies it to match a specified accent while preserving the speaker's original voice and emotion. The system uses speech recognition to capture the speaker's voice in real time. Advanced algorithms then transcribe the spoken words into text, accounting for various accents and speech patterns. Once the system transcribes the speech, the core accent-translation process analyzes the speaker's accent and pronunciation patterns, then applies AI models trained on datasets of various accents to modify the voice.

    During the conversion, the software modifies intonation, stress patterns, and phoneme pronunciation to align with the target accent. The system employs text-to-speech (TTS) technology to synthesize the phonetic pattern, converting the text into a synthesized voice that maintains the speaker's original tone, emotion, and identity.

    Teleperformance is deploying the technology, along with background noise cancellation, in Indian call centers that provide customer support for international clients. Its customers include major tech companies like Apple, TikTok, and Samsung Electronics.

    The technology is part of a larger Teleperformance AI investment strategy. The company plans to invest up to €100 million ($104 million) in AI partnerships in 2025. Palo Alto-based startup Sanas developed the accent technology after a $13 million Teleperformance investment earlier this year.

    The software comes as the call center industry faces challenges from the rise of AI chatbots. Last year, Teleperformance experienced a significant drop in share price after Swedish fintech firm Klarna Bank announced that its AI assistant was doing work equivalent to that of 700 full-time agents. In response, Teleperformance has focused on using AI to enhance rather than replace its workforce of 490,000 employees.

    Sanas states that its goal is to reduce "accent-based discrimination." However, while the software may not replace workers directly, there are concerns about the potential impact on call centers in places like the Philippines, which have built their market position on high-quality English speakers.

    Currently, the software supports Indian and Filipino inflections. Sanas is developing versions for other regions, including Latin America. Mackenbrock emphasized that while AI will be ubiquitous, "the human element will be incredibly important" in building connections and enhancing customer experiences.
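The four-stage pipeline described above (speech recognition, transcription, accent analysis, resynthesis) can be sketched roughly as follows. This is a toy illustration, not Sanas's actual implementation; all function names and the phoneme mapping are assumptions made for the example.

```python
# Toy sketch of a real-time accent-conversion pipeline.
# Function names and the phoneme map are illustrative assumptions.

# Stage 1: a real system would run streaming speech recognition here.
def transcribe(audio_frames):
    return " ".join(audio_frames)  # placeholder for ASR output

# Stage 2: map source-accent phonemes to target-accent phonemes
# (hypothetical substitutions, standing in for a trained AI model).
PHONEME_MAP = {"v": "w", "t": "th"}

def convert_accent(phonemes):
    return [PHONEME_MAP.get(p, p) for p in phonemes]

# Stage 3: TTS would resynthesize speech in the speaker's own voice,
# preserving tone and identity via a speaker embedding.
def synthesize(phonemes, speaker_embedding):
    return {"voice": speaker_embedding, "phonemes": phonemes}

def pipeline(audio_frames, speaker_embedding):
    text = transcribe(audio_frames)
    phonemes = list(text.replace(" ", ""))  # crude letters-as-phonemes
    return synthesize(convert_accent(phonemes), speaker_embedding)

out = pipeline(["vat"], speaker_embedding="agent-42")
print(out["phonemes"])  # ['w', 'a', 'th'] - mapped phonemes substituted
```

In a production system each stage would run incrementally on short audio frames to achieve the "zero latency" behavior Mackenbrock describes.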
  • AMD's RDNA 4 GPUs bring major encoding and ray tracing upgrades
    www.techspot.com
    The big picture: AMD finally revealed the specifications, pricing, and performance details of the Radeon RX 9070 and Radeon RX 9070 XT graphics cards, with full reviews expected in the coming days. While we await those, we've already discussed the potential performance implications and FSR 4 upscaling, and now we want to provide additional context on improvements in the new Radeons' encoding quality, an often overlooked aspect of new GPUs (including by us).

    AMD's GPU encoders have long been criticized for poor video encoding quality when using popular formats and bitrates for game streaming, leaving Nvidia as the clear choice for anyone who wants to use video encoding. With RDNA 4, AMD claims encoding quality is significantly improved, and the examples they showcased were certainly attention-grabbing. AMD is specifically highlighting 1080p H.264 and HEVC at 6 megabits per second, one of the most commonly used setups, demonstrating a substantial increase in visual quality.

    Whether this will hold true across a broad variety of scenarios remains to be seen, but historically, AMD's discussions of encoding quality have revolved around supporting new formats like AV1. With RDNA 4, AMD is focusing on tangible improvements in real-world use cases, suggesting they are far more confident in their encoder's quality.

    AMD is touting a 25% gain in H.264 low-latency encode quality, an 11% improvement in HEVC, better AV1 encoding with B-frame support, and a 30% boost in encoding performance at 720p. These numbers likely refer to VMAF scores.

    Beyond encoding, there are several other notable improvements. The ray tracing core now features two intersection engines instead of one, doubling throughput for ray-box and ray-triangle intersections. A new ray transform block has been introduced, offloading certain aspects of ray tracing from shaders to the RT core. The BVH (Bounding Volume Hierarchy) is now twice as wide, and numerous other enhancements have been made to the ray tracing implementation, one reason why RDNA 4's ray tracing gains exceed its rasterization improvements.

    Compute, Memory, and Display Enhancements

    The compute engine also includes several optimizations, along with a PCIe 5.0 x16 interface and a 256-bit memory bus using GDDR6. AMD claims enhanced memory compression, and the GPUs are equipped with 16GB of VRAM, which should be sufficient for most modern games.

    The display engine, however, is a mixed bag. While it supports DisplayPort 2.1, its capabilities remain unchanged from RDNA 3, with a maximum bandwidth of UHBR 13.5 instead of the full UHBR 20 now used on some 4K 240Hz displays and supported by Nvidia's Blackwell architecture. HDMI 2.1b is also included. On a positive note, AMD claims lower idle power consumption for multi-monitor setups, and video frame scheduling can now be offloaded to the GPU.

    The Navi 48 die is 357 mm² of TSMC 4nm silicon, featuring 53.9 billion transistors. This makes it 5% smaller than Nvidia's Blackwell GB203 used in the RTX 5080 and 5070 Ti, yet it contains 18% more transistors, meaning the design is more densely packed. However, Nvidia still holds an advantage in terms of die area and transistor efficiency: the RTX 5080 is expected to be about 15% faster in rasterization and possibly over 50% faster in ray tracing based on AMD's RX 9070 XT claims, all while using fewer transistors and a smaller die. That said, the TDP of the RTX 5080 is 360W, compared to 304W for the 9070 XT, an 18% higher power draw, though actual power consumption in games may vary. The 9070 XT should be closer to the RTX 5070 Ti with its 300W TDP.

    A couple of additional details to round things out: AMD is not producing reference models for the RX 9070 XT or RX 9070, meaning all designs will come from board partners. These partners include ASUS, Gigabyte, PowerColor, Sapphire, XFX, and other familiar names. Availability is expected to be strong on March 6th, as these cards have reportedly been ready since early January.

    Additionally, AMD is releasing a new version of its driver-based frame generation technology, AFMF 2.1, which will be available for the Radeon RX 6000 series and newer GPUs. This version promises superior image quality and includes Radeon Image Sharpening, which can be applied to any game, video, or application at the driver level. AMD also claims improved quality for this feature. There are also some AI-related features, though nothing particularly groundbreaking.
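The density comparison above can be checked with quick arithmetic. The GB203 figures below are back-calculated from the article's stated ratios (5% larger die, 18% fewer transistors than Navi 48), not official Nvidia specifications.

```python
# Navi 48 figures from the article.
navi48_area_mm2 = 357.0
navi48_transistors = 53.9e9

# GB203 values implied by the article's ratios: Navi 48 is 5% smaller
# with 18% more transistors, so invert those ratios.
gb203_area_mm2 = navi48_area_mm2 / 0.95
gb203_transistors = navi48_transistors / 1.18

# Density in millions of transistors per square millimeter.
navi48_density = navi48_transistors / navi48_area_mm2 / 1e6
gb203_density = gb203_transistors / gb203_area_mm2 / 1e6

print(f"Navi 48: {navi48_density:.0f} M transistors/mm^2")  # ~151
print(f"GB203:   {gb203_density:.0f} M transistors/mm^2")   # ~122
```

The roughly 151 vs. 122 million transistors per mm² works out to Navi 48 being about 24% denser, consistent with the "more densely packed" claim.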
  • Intel pushes back Ohio chip plant opening to 2030, citing market conditions
    www.techspot.com
    What just happened? Intel announced a significant revision to the construction timeline of its Ohio One semiconductor manufacturing site in New Albany. The setback is the third substantial delay from the facility's original 2025 completion target. Intel emphasizes its commitment to the project and its ability to accelerate construction if market demand warrants.

    The first phase of the facility, known as Mod 1, should be finished in 2030, with chip production beginning between 2030 and 2031. The company's revised timeline also affects the project's second phase, Mod 2, pushing it back to a 2031 completion date, with operations beginning in 2032.

    The Ohio One campus, once dubbed the "Silicon Heartland," is an ambitious undertaking. It will span approximately 1,000 acres and include up to eight semiconductor fabrication plants. The site will also accommodate support operations and industry partners. Initial investment estimates were around $20 billion, with potential for up to $100 billion in total development costs.

    Despite the frequent delays, the site has made significant construction progress since work began in 2022. Key milestones include completion of the underground foundation, commencement of above-ground construction, installation of air separation units and underground piping, pouring of over 200,000 cubic yards of concrete, and more than 6.4 million hours of invested labor.

    The revised timeline reveals that the Ohio facilities will utilize process technologies developed after Intel's 14A and 14A-E nodes, currently scheduled for introduction in 2026-2027. These advanced manufacturing processes will likely rely on ASML's cutting-edge High-NA EUV lithography tools, costing around $350 million each.

    Intel has already begun hiring and training employees for the Ohio facility. Workers are receiving training at existing Intel sites in Arizona, New Mexico, and Oregon, preparing them for the eventual opening of the local facility.

    Intel's decision to delay the Ohio plant opening comes amid a challenging period for the company and the semiconductor industry. The past year has seen Intel grappling with financial losses, layoffs, and leadership changes. The company has also made strategic decisions to simplify its product roadmap, including canceling an AI chip project.

    While the delay may raise concerns about Intel's outlook on future demand, it also allows the company to manage its capital expenditures more effectively during market uncertainty. By postponing significant investments in production equipment, Intel can focus on returning to profitability while maintaining the flexibility to ramp up operations when market conditions improve.
  • www.techspot.com
    WTF?! Citigroup, one of the world's largest financial institutions, narrowly avoided a colossal error that would have sent ripples through the banking industry. Last April, the bank inadvertently credited a client's account with an astounding $81 trillion when it intended to transfer a mere $280. This monumental mistake, previously unreported, has come to light at a critical time for Citigroup. The bank is currently striving to convince regulators that it has addressed long-standing operational issues, and this incident may undermine those efforts.

    The erroneous internal transfer occurred due to oversights and system quirks. An internal report of the event, obtained by the Financial Times, notes that the error slipped past two staff members. A payments employee made the initial mistake, and a second bank official overlooked the error when verifying the transaction. Fortunately, a third auditor caught the mega-blunder after detecting an anomaly in the bank's account balances 90 minutes after the transaction posted to the account.

    An unnamed source familiar with the incident said a combination of human error and a cumbersome user interface in the bank's backup system caused the mistake. In mid-March, the bank posted the payment to a customer's escrow account in Brazil, but a screening process blocked it over a sanctions violation. Although quickly cleared, the payment remained stuck in the bank's system.

    To resolve this, Citigroup's technology team instructed the payments processing employee to manually input the transactions into a rarely used backup system. However, its interface had a peculiar quirk: the amount field automatically pre-populated with 15 zeros, which the employee should have deleted before entering the correct amount but didn't.

    Despite the magnitude of the error, a Citigroup spokesperson pointed out that internal controls quickly identified and rectified the mistake. The auditor caught the error within 90 minutes and reversed the transaction. The spokesperson stressed that no funds ever left the bank.

    A person knowledgeable of the matter noted that Citigroup disclosed this "near miss" to the Federal Reserve and the Office of the Comptroller of the Currency shortly after it occurred. This transparency comes at a time when the bank is under intense regulatory scrutiny, and the incident is part of a broader pattern of operational challenges at Citigroup.

    An internal report revealed that the bank experienced 10 near misses of $1 billion or more in the past year, slightly down from 13 in the previous year. Speaking anonymously, several former regulators and bank risk managers told the FT that near misses of this scale are unusual across the U.S. banking industry.

    This series of near misses underscores Citigroup's ongoing struggle to resolve operational issues, nearly five years after a high-profile $900 million mistaken payment related to cosmetics group Revlon. That incident led to significant consequences, including the departure of then-CEO Michael Corbat, substantial fines, and regulatory consent orders mandating fixes.

    Jane Fraser, who succeeded Corbat as CEO in 2021, has made addressing these regulatory concerns her "top priority." However, the bank has struggled to make progress, as evidenced by a $136 million fine imposed last year by regulators for failing to correct problems in risk control and data management.
  • A closer look at AMD's RDNA 4 and FSR 4 Upscaling
    www.techspot.com
    After our initial analysis of the Radeon RX 9000 series announcement yesterday and the potential performance implications, let's put pricing aside and focus on some of the other interesting aspects of RDNA 4.

    Based on AMD's performance claims, the RDNA 4 architecture is more optimized for higher resolutions than lower ones. While the Radeon 9070 XT is claimed to be 42% faster than the 7900 GRE at 4K Ultra settings, at 1440p it is 38% faster. The relative uplift at 1440p is 3% lower than at 4K, which isn't a massive difference but is still worth noting.

    Of course, the biggest and most important feature announcement AMD has made alongside RDNA 4 is FSR 4, which introduces an ML-based upscaling solution. Currently, FSR 2.2 and FSR 3.1 upscaling are not competitive with DLSS 4, so for AMD to compete, FSR 4 needs to be a major step up. With an AI-based upscaling algorithm, AMD now has a much better chance of delivering a competitive solution.

    We can't fully analyze FSR 4 until the RDNA 4 review embargo lifts, but expect dedicated coverage after the GPU reviews go live. However, we saw FSR 4 in action at CES and can confirm that, based on what was shown, FSR 4 is a significant improvement over FSR 3.1. In the CES demo, Ratchet & Clank: Rift Apart was shown running in both FSR 3.1 and FSR 4 performance modes. FSR 4 was noticeably better at upscaling from low render resolutions, particularly in performance mode, which has historically struggled with image quality in previous FSR versions.

    Managing expectations for RDNA 4

    Alongside the RDNA 4 announcement, AMD provided a few additional image comparisons, mostly still images showcasing fine detail differences. While these examples don't fully demonstrate overall image quality improvements, our CES preview of FSR 4 was impressive. At first, we didn't even realize FSR 4 was running in performance mode until an AMD representative pointed it out, which was later confirmed in the settings.

    The biggest question we can't yet answer is: How close does FSR 4 get to DLSS? Are we looking at DLSS 2-level quality? DLSS 3? DLSS 4? We don't know yet. DLSS 4 has made significant strides in reducing TAA blur and improving image stability, so AMD has its work cut out to match Nvidia's offering. However, there are several wins AMD can achieve with FSR 4:

    • A significant image quality boost over FSR 3.1: based on what we've seen, this seems like a given.
    • Better upscaling at lower resolutions like 1440p and 1080p: these are critical resolutions for gamers.
    • Making FSR 4 "usable": even if it doesn't match DLSS 4, delivering DLSS 3-level quality with acceptable artifacting would be a huge improvement.
    • Matching or exceeding DLSS 4 quality: this would be an outstanding result, though AMD is coming from far behind, so expectations should be realistic.

    Also read: DLSS 4 Upscaling at 4K is Actually Pretty Amazing

    Even if not all of these wins are achieved immediately, closing the significant gap in upscaling quality would be a major step forward, especially since upscalers are one of the biggest reasons gamers choose GeForce GPUs.

    How FSR 4 Works

    FSR 4 leverages FP8 processing, a new accelerated capability introduced in RDNA 4. The new architecture also improves the performance of other AI-relevant data formats, such as INT8 and FP16, but FSR 4 specifically relies on FP8. As a result, FSR 4 is exclusive to RDNA 4 GPUs and will not work on RDNA 3, at least initially.

    AMD has left the door open for a potential FSR 4 variant for older Radeon GPUs, but if that happens, it would likely be a separate, watered-down model, similar to Intel's XeSS, where the better XMX version runs on Arc hardware while a weaker DP4a version runs on other GPUs. However, AMD has not confirmed whether this will happen.

    FSR 4 integration and game support

    FSR 4 will initially be integrated at the driver level. It hooks into the FSR 3.1 API and replaces the upscaling pass with the FSR 4 algorithm whenever FSR 3.1 is enabled in a game. On day one, all FSR 3.1-supported games will be upgraded to FSR 4 via the driver. Games with older FSR versions (FSR 3.0, FSR 2.2, etc.) must first be upgraded to FSR 3.1 in-game to access FSR 4.

    A native implementation of FSR 4 is expected in the future, but at launch, all FSR 4 titles will use the driver upgrade path, similar to Nvidia's DLSS override feature. However, FSR 4 does not improve frame generation quality over FSR 3. Frame generation remains single-frame-based, and AMD is not attempting multi-frame generation to match Nvidia's DLSS 4 frame generation.

    Performance uplift and benchmarks

    AMD has provided performance data comparing FSR 4 upscaling and FSR 4 + Frame Generation. The FSR 4 upscaling mode used in the benchmarks is 4K Performance mode, which AMD claims offers a 65% performance uplift over native 4K rendering across seven tested games. For comparison:

    • In our DLSS 4 investigation, DLSS 4 Performance mode delivered a 74% performance improvement on the RTX 5080 when compared to native TAA at 4K.
    • At lower frame rates, where the upscaler overhead is lower, FSR 4 achieved up to a 2x boost in some cases.
    • Ratchet & Clank: Rift Apart saw a 100% performance increase (a 2x boost).
    • Horizon Zero Dawn Remastered showed a 38% performance uplift, similar to DLSS 4 Performance mode.

    These results are very promising. At launch, FSR 4 will be available in 30+ games, including Kingdom Come: Deliverance 2, Spider-Man 2, and Call of Duty: Black Ops 6. While this is more than previous FSR launches, DLSS 4 already supports far more titles.

    Nvidia has used an upgradeable DLSS DLL for much longer, so GeForce users can upgrade over 70 games via the DLSS override in the Nvidia App or through unofficial DLL swaps. Upgrading FSR 2.2 titles to FSR 4 will be significantly harder, creating a major gap in official and unofficial game support at launch.

    For FSR 4 to succeed, AMD needs to build a strong ecosystem and expand game support significantly over time. Having 30 games at launch, including big-name titles, is a step in the right direction, certainly better than FSR 3's launch, which had only two titles. However, closing the gap with DLSS 4, both in quality and adoption, will take a long-term effort. Hopefully, AMD is making major investments into upscaling and committing to sustained support in the coming years.
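The uplift percentages above translate directly into frame rates and frame times. A quick worked example (the 40 fps native figure is a hypothetical starting point, not from AMD's data):

```python
# Convert a quoted "% performance uplift" into an upscaled frame rate.
def apply_uplift(native_fps, uplift_pct):
    return native_fps * (1 + uplift_pct / 100)

native = 40.0  # hypothetical native-4K frame rate

fsr4_fps = apply_uplift(native, 65)    # AMD's claimed 4K Performance uplift
ratchet_fps = apply_uplift(native, 100)  # a 100% increase is the "2x boost"

print(f"FSR 4 Performance: {fsr4_fps:.1f} fps")   # 66.0 fps
print(f"2x boost:          {ratchet_fps:.1f} fps")  # 80.0 fps

# Frame-time view: a 65% fps uplift cuts per-frame time by ~39%.
native_ms = 1000 / native
upscaled_ms = 1000 / fsr4_fps
print(f"Frame time: {native_ms:.1f} ms -> {upscaled_ms:.1f} ms "
      f"({(1 - upscaled_ms / native_ms) * 100:.0f}% lower)")
```

This also shows why uplift is larger at lower frame rates: the fixed per-frame cost of the upscaler is a smaller fraction of a long frame, so the savings from rendering at a quarter of 4K dominate.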
  • Monster Hunter Wilds cracks Steam's all-time top 10 within hours of launch
    www.techspot.com
    In brief: Monster Hunter, a franchise that once struggled to spread beyond its core Japanese audience on the PSP, is now Capcom's most successful IP. Latest release Monster Hunter Wilds has broken records on Steam and for the company as a whole within 24 hours of launch. Its peak player count almost quadruples that of the franchise's previous mainline entry.

    Monster Hunter Wilds peaked at around 1.3 million concurrent players on Steam approximately 12 hours after its release, becoming the first Capcom title to exceed one million, according to SteamDB. The game immediately left the company's previous releases in the dust, like Capcom Arcade Stadium with 488,791 players and Monster Hunter: World with 334,684.

    In Steam's all-time peak player count ranking, Monster Hunter Wilds has already beaten hits like Dota 2, Cyberpunk 2077, Elden Ring, and Marvel Rivals. Most notably, Capcom's multiplayer survival action game already has over 40 mods, totaling over 10,000 downloads.

    Its success on Steam is only the tip of the iceberg, as these numbers don't include the majority of users playing the game on PlayStation 5 and Xbox Series consoles. Hilariously, Palworld developer Pocketpair gave employees the day off after many called in sick on the day Monster Hunter Wilds launched.

    The game's success on Steam appears unaffected by its severe performance issues on PC. Steep system requirements that encouraged players to use frame generation to reach 60 frames per second were an early sign of trouble, but problems have since escalated. Digital Foundry reports that Monster Hunter Wilds immediately urges players to engage frame generation upon starting. Additionally, the game suffers from mysterious stuttering issues and slow texture compression. Capcom's implementation of DirectStorage might be the cause, but some users suspect Denuvo's infamous anti-tampering software.

    Customers building a new gaming rig should also note that AMD has started bundling Monster Hunter Wilds with several of its Ryzen CPUs, Radeon graphics cards, and a few pre-built systems. The promotion applies to high-end RX 7000 series GPUs and discounted Ryzen 9000 processors that, unfortunately, don't include 3D V-Cache. Team Red is likely using Capcom's success to clear inventory as it launches the Radeon RX 9000 series cards.
  • TSMC wafer found in a dumpster - is this the ultimate case of chip binning?
    www.techspot.com
    WTF?! A Reddit user claims to have discovered an entire 12nm TSMC wafer discarded in a dumpster near one of the chipmaker's fabs in China. While it was just a test wafer, the find sparked jokes about cutting it into working GPUs and served as a reminder of the complexities of chip manufacturing.

    The Reddit user who shared the pictures, AVX512-VNNI, claims to have found the wafer near TSMC's Fab 16 factory in Nanjing, China. While not cutting-edge, that fab still produces reasonably advanced 12nm node chips, meaning we're talking about highly valuable silicon here.

    But surely a company like TSMC wouldn't just dump its IP this way for the world to copy? Indeed, it didn't, and there's a reasonable explanation for the "blunder." The same Redditor pointed out later under the post that this appears to be what's known as a "test wafer," used to calibrate the lithography machines that pattern the circuitry onto production wafers.

    Phew. That's certainly not as disastrous as tossing out wafers containing actual customer chip designs, like Nvidia's RTX 50 series GPUs. Those would be far too valuable to misplace. Or perhaps the explanation is even simpler: the Redditor is an employee at one of the foundries and is just joking.

    Don't miss our explainer: What is Chip Binning?

    Anyway, for some quick context, semiconductor wafers are the blank slates that get transformed into finished chips through repeated lithographic patterning, deposition, and etching steps. After processing, they get chopped up into individual chip dies, a process known as dicing. These individual dies are then packaged into CPUs, GPUs, and other semiconductor products.

    However, not all chips perform equally, even if they come from the same wafer. This is where chip binning comes in. After dicing, each die is tested and sorted based on factors like speed, power efficiency, and defect count. Only the best-performing chips make it into the highest bins, destined for flagship products, while more flawed dies are assigned to lower bins for mid-range or entry-level parts.

    So in that sense, this wafer incident could be seen as a rather extreme case of "binning": straight into the trash can. Now, while it's already established that the wafer was likely a test unit, this hasn't stopped Redditors from pondering whether it would be possible to salvage the wafer and extract any individual dies.

    One commenter suggested using a pizza cutter to dice up the wafer, while another proposed skipping the slicing altogether and wiring up the entire wafer. Interestingly, that's actually a legitimate technique called wafer-scale computing, so they may be onto something.

    The incident also sparked some good-natured roasting of Nvidia over the RTX 50 series, with one commenter quipping, "Hey look, someone found the missing ROPs," in reference to some of those GPUs shipping with missing render output units.
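The test-and-sort step described above can be sketched as a simple classifier. The thresholds, bin names, and sample die data here are all made up for illustration; real binning uses far more test parameters.

```python
# Minimal sketch of chip binning: sort tested dies into quality tiers
# by maximum stable clock and defect count (hypothetical thresholds).
def bin_die(max_clock_mhz, defects):
    if defects > 2:
        return "scrap"        # too flawed to ship at all
    if max_clock_mhz >= 2800 and defects == 0:
        return "flagship"     # best dies, highest-end SKU
    if max_clock_mhz >= 2400:
        return "mid-range"    # partially disabled / lower-clocked SKU
    return "entry-level"

# Hypothetical test results for four dies from one wafer.
dies = [(2900, 0), (2500, 1), (2000, 0), (2600, 3)]
bins = [bin_die(clock, d) for clock, d in dies]
print(bins)  # ['flagship', 'mid-range', 'entry-level', 'scrap']
```

The "scrap" bin is the formal version of the joke in the article: some dies really do go straight to the trash can.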
  • HP to move 90 percent of North American production out of China by 2025
    www.techspot.com
    What just happened? As the US implements new tariffs, consumers could see higher prices for new computers. However, companies like HP are taking proactive steps to minimize disruptions and keep costs down. Unfortunately, those cost-cutting measures include layoffs.

    Tech giant HP announced a significant shift in its manufacturing operations. MarketWatch notes that by the end of its 2025 fiscal year, the company plans to have 90 percent of its manufacturing moved outside of China. The decision comes in response to ongoing trade tensions, particularly the threat of a 10 percent tariff on Chinese imports, and is part of HP's broader strategy to enhance its supply chain resilience and adapt to evolving market conditions.

    "We have been doing a lot of work to make our supply-chain network more resilient," HP CEO Enrique Lores said in a recent meeting with analysts and reporters.

    It is a significant pivot for the company. Just last year, Ernest Nicolas, HP's chief supply chain officer, asserted that its Chinese operations were one of the company's most important manufacturing, engineering, and innovation hubs. "The advanced infrastructure and manufacturing talent pipeline allows it to serve as our standard of production that our global network strives towards," Nicolas stated.

    In addition to relocating production, HP has increased its inventory as a buffer against potential tariff hikes. The company reports that its inventory reached $8.4 billion at the end of the most recent quarter, up from $7.7 billion in the previous period. According to HP CFO Karen Parkhill, this nine-day increase in inventory is part of HP's "tariff mitigation strategy." The company now holds roughly 72 days' worth of inventory.

    An unfortunate part of the company's strategy is layoffs. It plans to eliminate up to 2,000 positions to balance costs amid tariff uncertainties.

    The Palo Alto company's strategic shift reflects broader industry concerns about the impact of tariffs on the PC market. With traditionally slim profit margins, PC manufacturers have limited capacity to absorb additional costs. "[It's] a bit of a cat-and-mouse game as the various negotiations around the world take place," said Dan Newman, principal analyst at Futurum Research.

    That said, analysts expect to see accelerated growth in the PC market during the coming year, driven by several factors. The approaching end-of-support deadline for Windows 10 is anticipated to trigger a significant refresh cycle for hundreds of millions of PC users. Additionally, manufacturers will continue introducing AI-capable PCs with advanced processors, boosting demand.

    Recent market data from Canalys indicates a positive trend, with PC shipments growing for the fifth consecutive quarter. In the fourth quarter, OEMs shipped 67.9 million desktops, notebooks, and workstations, an increase of five percent.
  • Steam faces backlash for promoting excessive AI-created games
    www.techspot.com
Facepalm: Some gaming companies have expressed their unconditional love for AI-generated assets, but customers aren't exactly enamored with them. Even Steam, the largest PC gaming platform, could be promoting AI-generated content at the expense of human-made, coherently developed experiences.

Valve is celebrating new games coming to Steam with its latest Next Fest event. The digital delivery platform is promoting hundreds of free demos, streaming events, and chats with developers until March 3. However, some users feel that Steam Next Fest is being spoiled by an excessive number of games that clearly rely on AI-generated assets.

The number of titles featuring AI art, generic anime girls, dark fantasy settings with "Balenciaga AI" faces, and fake pixel art promoted through Next Fest is becoming "tiresome," according to one user. Beyond graphics and artwork, AI is also reportedly taking center stage in many games' voice and audio assets.

Valve recently introduced a new policy for AI-generated content, requiring game developers to disclose when they've used generative AI in their projects. This policy change led Activision to acknowledge the growing use of AI assets in its Call of Duty series.

Some game categories seem particularly affected by the overwhelming amount of AI assets. The "Simulation" section of Steam's Next Fest is filled with similar titles, while other questionable candidates aren't disclosing their use of AI art at all. According to Simon Carless, founder of GameDiscoverCo, Valve's attempt to tweak its recommendation algorithm could be the real source of the issue.

Valve decided to take a different approach to game recommendations, Carless said, showing more random titles during the first days of the Next Fest event.
While these picks are still personalized based on games users have already played, they now also include smaller, and perhaps "weirder," games. Valve is trying to diversify its suggestions with a more egalitarian approach, Carless explained, instead of focusing solely on the biggest, most trending titles. Gaming marketing expert Chris Zukowski suggests that the Steam algorithm is testing a few smaller games to see if they can survive the platform's popularity contest.

Over the coming days, Steam Next Fest will likely return to "normal," and AI-generated games should retreat to the darker corners of the platform where they belong. Valve should then focus on the real problem affecting Steam right now: properly promoting the huge number of good, human-made games released on the platform daily.
  • OpenAI rolls out GPT-4.5 to Pro users only, faces shortage of GPU horsepower
    www.techspot.com
Why it matters: If you use ChatGPT and have noticed its slow response times, OpenAI just confirmed that they are due to a lack of processing power. The company is scrambling to install thousands of GPUs by next week, with hundreds of thousands more "coming soon." In the meantime, the shortage has prompted the company to slow-roll the release of GPT-4.5.

OpenAI just released GPT-4.5, but average users won't have access yet. The model is only available on the company's $200-per-month Pro tier. OpenAI CEO Sam Altman explained that the plan was to release the model simultaneously on the Plus and Pro subscriptions. However, a shortage of GPUs has forced the company to stagger its release.

In recent weeks, ChatGPT users have noticed slow response times from the GPT-4o model. Altman noted that the platform has undergone significant growth and doesn't have the extra processing power to accommodate all of its users, making the launch of the "giant, expensive" GPT-4.5 problematic.

Fortunately, OpenAI already has "tens of thousands" of GPUs on hand that technicians will install next week. After that, the company can complete the rollout for Plus users. Beyond that, the company has "hundreds of thousands" of GPUs on order, which Altman concedes still won't entirely satisfy demand.

"Hundreds of thousands [of GPUs are] coming soon, and I'm pretty sure y'all will use every one we can rack up," the CEO said. "This isn't how we want to operate, but it's hard to perfectly predict growth surges that lead to GPU shortages."

We have all felt the effects of these shortages, and it's one of the reasons OpenAI wants to branch into chip development. Last July, the company reportedly held talks with Broadcom about designing AI chips and reducing its reliance on Nvidia. The talks must have been very productive, since insiders say OpenAI is prepared to send TSMC a custom chip design for validation within the next few months. If all goes well, mass production will begin in 2026.
Regarding the performance of OpenAI's latest AI, Altman noted that GPT-4.5 is not a reasoning model, so users should not expect it to blow benchmarks out of the water. That said, there is a certain kind of "magic" to its intelligence that Altman finds impressive.

"It is the first model that feels like talking to a thoughtful person to me," Altman touted. "I have had several moments where I've sat back in my chair and been astonished at getting actually good advice from an AI."

At this stage, I would not take advice from an AI, let alone pay for it. That said, OpenAI is getting closer to a pricing sweet spot. Its Plus tier is $20 per month, which is not bad but lacks the unlimited access of the $200 Pro version. The free tier has unlimited access to GPT-4o mini, which is enough for most average users. If OpenAI could get Plus subscriptions to $15 or lower, it would attract more of the casual crowd.
  • Tecno Spark Slim concept phone is less than 6 millimeters thick
    www.techspot.com
In brief: A slim and light version of the iPhone is expected later this year, but a rival handset maker out of China could beat Apple to the punch. Tecno Mobile has announced the Spark Slim, an ultra-thin concept smartphone set to debut at Mobile World Congress in Barcelona next week.

The Spark Slim features a 6.78-inch 3D curved AMOLED display with a 144Hz refresh rate and a peak brightness rating of 4,500 nits. Tecno said it will be powered by an upcoming (yet unnamed) high-performance octa-core processor, but we do not know how much memory or storage it will ship with. We do know the phone features a 5,200mAh battery with 45W fast charging, dual 50-megapixel rear-facing cameras, and a 13-megapixel front-facing shooter for selfies and video calls.

The handset's standout feature, however, is its thin profile, which measures just 5.75mm at its thinnest. For comparison, Apple's latest iPhone 16 and iPhone 16e are significantly thicker at 7.8mm. A standard No. 2 pencil is around 7mm thick. Tecno did not say how much the phone weighs, but noted it is crafted from recycled aluminum with a stainless steel unibody.

For years, smartphones trended toward increasingly thinner designs, but most manufacturers eventually shifted focus to other features like camera modules. While aesthetically pleasing to some, a thinner chassis leaves less room for key internal hardware like the battery. Slim handsets with large screens are also more prone to physical damage like bending in a pocket. Hopefully, the Spark Slim is rigid enough to withstand daily use without bending, as we don't need another "bendgate" controversy like Apple had with its iPhone 6 Plus years ago.

Mobile World Congress, the mobile industry's premier trade show, kicks off on March 3 and runs through the 6th in Barcelona, Spain. Tecno Mobile will have the Spark Slim on display at booth 6B11. The show is open to the general public, and with any luck, we will learn more about this intriguing ultra-slim handset next week.
  • www.techspot.com
In brief: Indie game store itch.io frequently offers massive bundles of games and other digital content to raise funds for various causes. The company's latest charity initiative aims to provide relief for victims of one of the worst wildfires in California's history. The bundle features a growing collection of PC games, printable tabletop games, books, and more.

Itch.io's latest charity bundle provides permanent, DRM-free access to hundreds of games and other digital items for a minimum donation of $10. All proceeds support relief efforts for those affected by the devastating Eaton and Palisades wildfires, which swept through much of Southern California in January.

Purchasing the bundle before March 15 grants immediate access to over 180 PC games, a similar number of printable card and board games, a few Android titles, and other digital goods. As more developers contribute their creations, additional games will be added for buyers.

Users should note that while many itch.io games include Steam keys, bundle purchases do not grant access to Steam versions. However, all games added to a user's account remain DRM-free, similar to those on GOG and Humble Bundle.

Navigating the bundle's contents can be challenging due to itch.io's web-based interface and the sheer volume of included items. To prevent cluttering users' main libraries, the store does not automatically add bundle purchases. Instead, users must manually click the download button for each title, a process that can be time-consuming. Additionally, download buttons lead to separate pages rather than initiating downloads directly.

Fortunately, third-party tools can simplify the process. Developer Saizai's script allows users to quickly add all bundle purchases to their libraries, while Playnite can scan an itch.io account to help organize games for easier management.
Additionally, a web-based browser tool assists users in finding and saving purchases from multiple itch.io bundles.

"It's CHARITY MEGABUNDLE TIME! And there's some absolute gems in this one, including Tunic which is just an all-timer and everyone should have it in their library," wrote Dominic Tarason (@dominictarason.com) in a February 27, 2025 post showcasing some of the games and the tools for navigating this and other itch bundles.

Although itch.io primarily hosts small, experimental indie projects, the bundle includes several standout titles that buyers can easily find using the search bar. Hidden gems are also likely among the offerings.

Tunic, available for Windows and macOS, is a critically acclaimed action-adventure game inspired by classic Legend of Zelda titles. Other highlights include Octodad: Dadliest Catch, Catlateral Damage, and Cook, Serve, Delicious.

The Eaton and Palisades fires, which raged across Southern California in January, burned thousands of acres, damaged approximately 18,000 structures, and resulted in at least 29 confirmed deaths, with over 30 people still missing. More than 150,000 residents were evacuated.

Proceeds from the bundle, which aims to raise $100,000 by March 15, will support the Community Organized Relief Effort, a Los Angeles-based nonprofit also aiding relief efforts in North Carolina, Ukraine, and other crisis-affected regions worldwide.
  • Factory trials begin for humanoid robots that can build more of themselves
    www.techspot.com
A hot potato: The prospect of humanoid robots building more humanoid robots sounds like something from science fiction, but an Austin-based company has just signed a deal that could eventually lead to this scenario. It has inevitably caused more fears about human jobs being lost, but the company behind the machines says it will leave employees more time for "creative, thought-intensive projects."

Robot maker Apptronik has announced a pilot partnership with American firm Jabil. In addition to its supply chain services, which primarily serve OEMs, Jabil designs, engineers, and manufactures electronic circuit boards and systems.

Jabil said it has several customers who are developing robots and warehouse automation. The new deal will see it provide a factory environment for real-world validation testing of Apptronik's Apollo robots ahead of scaling the robot for manufacturing. The 5-foot 8-inch, 160-pound robots have a four-hour-per-battery-pack runtime and a 55-pound payload.

The pilot program will see the robots carry out an array of simple, repetitive tasks such as inspection, sorting, kitting, lineside delivery, fixture placement, and sub-assembly.

Jabil also said it has agreed to begin producing the Apollo robots in its factories, meaning that should the pilot program work out, these robots will eventually be put to work building more of themselves.

The idea of AI-powered robots working tirelessly to create more robots sounds concerning. Apptronik told TechCrunch this is still a ways off, though it is targeting 2026 to begin manufacturing commercial units.
Before we start to worry about robots building more robots, there's the pressing issue of how this type of increased automation will impact human jobs.

The makers of automation and AI systems regularly claim that their products will help workers rather than replace them, with the machines carrying out more repetitive and monotonous tasks so humans can concentrate on other work.

Apptronik followed the same line, stating that its robots will give people more time for projects that the machines cannot do. The announcement says those whose tasks have been taken over by the Apollo robots could spend more time on "creative, thought-intensive projects" like writing resumes, probably.

This is the second major pilot deal entered into by Apptronik. It signed an agreement with Mercedes-Benz in March 2024 that saw Apollo put to work on certain tasks on the automaker's production lines.

In January last year, BMW announced that humanoid robots would begin working at its vehicle manufacturing plants, starting in Spartanburg, South Carolina. The 5-foot 6-inch, 130-pound robots, made by California robotics startup Figure, were successfully tested at the facility in June, when they inserted sheet metal parts that were assembled as part of the chassis.
  • Chatbots are surfacing data from GitHub repositories that are set to private
    www.techspot.com
Facepalm: Training new and improved AI models requires vast amounts of data, and bots constantly scan the internet in search of valuable information to feed these systems. However, this largely unregulated approach can pose serious security risks, particularly when it involves highly sensitive data.

Popular chatbot services like Copilot and ChatGPT could theoretically be exploited to access GitHub repositories that their owners have set to private. According to Israeli security firm Lasso, this vulnerability is very real and affects tens of thousands of organizations, developers, and major technology companies.

Lasso researchers discovered the issue when they found content from their own GitHub repository accessible through Microsoft's Copilot. Company co-founder Ophir Dror revealed that the repository had been mistakenly made public for a short period, during which Bing indexed and cached the data. Even after the repository was switched back to private, Copilot was still able to access and generate responses based on its content.

"If I was to browse the web, I wouldn't see this data. But anyone in the world could ask Copilot the right question and get this data," Dror explained.

After experiencing the breach firsthand, Lasso conducted a deeper investigation. The company found that over 20,000 GitHub repositories that had been set to private in 2024 were still accessible through Copilot.

Lasso reported that over 16,000 organizations were affected by this exposure. The issue also impacted major technology companies, including IBM, Google, PayPal, Tencent, Microsoft, and Amazon Web Services. While Amazon denied being affected, Lasso was reportedly pressured by AWS's legal team to remove any mention of the company from its findings.

Private GitHub repositories that remained accessible through Copilot contained highly sensitive data.
Cybercriminals and other threat actors could potentially manipulate the chatbot into revealing confidential information, including intellectual property, corporate data, access keys, and security tokens. Lasso alerted the organizations that were "severely" impacted by the breach, advising them to rotate or revoke any compromised security credentials.

The Israeli security team notified Microsoft about the breach in November 2024, but Redmond classified it as a "low-severity" issue. Microsoft described the caching problem as "acceptable behavior," though Bing removed cached search results related to the affected data in December 2024. However, Lasso warned that even after the cache was disabled, Copilot still retains the data within its AI model. The company has now published its research findings.
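One practical takeaway from Lasso's findings: flipping a repository back to private changes only its current visibility, while anything indexed during its public window can persist in caches and models. As a minimal illustration (our sketch, not Lasso's tooling), a repo's current visibility can be checked against GitHub's public REST API:

```python
# Check a repository's *current* visibility via GitHub's public REST API.
# Illustrative sketch only: a 404 can mean either "private" or "does not
# exist", unauthenticated requests are rate-limited, and a repo that now
# reads as private may still survive in search-engine or chatbot caches
# from the time it was public.
import urllib.error
import urllib.request

API = "https://api.github.com/repos/{owner}/{repo}"

def repo_url(owner: str, repo: str) -> str:
    return API.format(owner=owner, repo=repo)

def is_publicly_visible(owner: str, repo: str) -> bool:
    req = urllib.request.Request(
        repo_url(owner, repo),
        headers={"Accept": "application/vnd.github+json",
                 "User-Agent": "visibility-check"},
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False
```

This is exactly why Lasso's advice centers on rotating or revoking credentials rather than merely re-hiding the repository.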
  • Meta's leak problem just cost 20 employees their jobs, with more firings expected
    www.techspot.com
What just happened? Around a month after Meta CEO Mark Zuckerberg complained about company leaks, roughly 20 employees have been fired for allegedly leaking confidential information, and it sounds as if more terminations will follow. Exactly what the leaks entailed is unclear.

Zuckerberg highlighted the problem of company leaks during an all-hands meeting last month. Ironically, his rant leaked online almost immediately.

"We try to be really open and then everything I say leaks. It sucks," Zuckerberg said.

"I think there are a bunch of things that are value-destroying that I'm not going to talk about," he added. "Maybe it's the nature of running a company of this scale. But it's a little bit of a bummer."

Meta said that employees discovered to be leaking information would be fired. Now, The Verge reports that "roughly" 20 workers have lost their jobs for ignoring the warning.

"We tell employees when they join the company, and we offer periodic reminders, that it is against our policies to leak internal information, no matter the intent," Meta told the publication. "We recently conducted an investigation that resulted in roughly 20 employees being terminated for sharing confidential information outside the company, and we expect there will be more. We take this seriously, and will continue to take action when we identify leaks."

Meta's chief technology officer, Andrew Bosworth, said earlier this month that the company was close to identifying the leakers.

Meta has changed elements of its meetings to try to mitigate the leaking problem. Employees can no longer vote on which questions to ask Zuckerberg; the CEO used to answer those with the most votes. The comments section in the live presentations has also been disabled, and questions deemed potentially problematic if leaked are now skipped.
Zuckerberg also said he would be less transparent because of the leaks.

The surge in leaks at Meta appears tied to Zuckerberg's growing closeness with Donald Trump in recent months. The company donated $1 million to Trump's inauguration fund in December, and Zuckerberg's ending of Meta's DEI programs, removal of third-party fact-checkers, and cutting back on censorship have been viewed as appeasement of the president.

Zuckerberg was one of the many tech executives who were quick to congratulate Trump on his election victory. But the pair have had a strained relationship in the past. Facebook banned Trump for two years shortly after the January 6 insurrection in 2021. Trump has also called Facebook the enemy of the people, accused Zuckerberg of plotting against him during the 2020 election, and said Zuckerberg would "spend the rest of his life in prison" if he ever did it again.

Meta recently confirmed that it had approved a plan to increase bonuses for its executives by up to 200% of their base salary. This comes as the company lays off 5% of its workforce, or about 4,000 people.

In other Meta news, the company apologized yesterday for an "error" that flooded Reels with violent and pornographic content.
  • Microsoft is killing off Skype after years of decline
    www.techspot.com
RIP: Microsoft is planning to shut down Skype, the once-dominant video calling application. The app was highly popular among internet users in the early 2000s but has suffered persistent neglect in recent years as Microsoft shifted its focus to Teams as its primary messaging and collaboration platform.

According to a hidden string found in the latest preview of Skype for Windows, Microsoft will sunset the software later this year. To inform users about the impending shutdown, the app is showing a dialog box notifying them that Skype will no longer be available starting in May 2025. Instead, users will be encouraged to download and install the free version of Teams to stay connected with friends and family.

If the report turns out to be accurate, it will surprise no one, as Microsoft has long neglected Skype while actively promoting Teams. The app's decline can be attributed in part to the rise of numerous mobile video calling and messaging platforms, and in part to Microsoft's clear lack of interest in keeping Skype alive.

While Skype still has a dedicated user base, it is far outnumbered by the millions of people who have moved on to other VoIP platforms, such as FaceTime, WhatsApp, Zoom, and Google Meet. Many companies adopted Teams as their communication app of choice while Microsoft continued giving Skype the cold shoulder.

Skype was first released in August 2003 by a group of developers that included Estonians Priit Kasesalu and Jaan Tallinn. A couple of years later, eBay acquired it for $2.6 billion before Microsoft bought it for a reported $8.5 billion in 2011.
However, the company quickly grew uninterested in its purchase and launched an in-house competitor called Teams in 2017, which has since become a prominent communication platform for businesses.

Skype's impending demise is undoubtedly going to be a tough pill to swallow for the remaining users who refused to give up on the aging app. While there's no data on the exact number of Skype users in 2025, the service reportedly had 36 million daily active users in 2023. Microsoft will be hoping that most of them migrate to Teams, but whether that happens remains to be seen.
  • EA releases Command & Conquer source code, boosting modding capabilities
    www.techspot.com
Why it matters: When EA released the Command & Conquer Remastered Collection five years ago, it published DLL files for the legendary real-time strategy franchise's first two entries to provide extensive modding support. With the series approaching its 30th anniversary this year, EA has now released new source code files and expanded the modding potential for several C&C titles. Furthermore, the franchise is currently 70 percent off.

EA has published the full source code for five Command & Conquer games and their expansions, implementing Steam Workshop support and enabling modders to build new maps, units, and other content. The company also updated the source code repositories for the original Command & Conquer and the first Red Alert.

Interested users can now download the source code for Command & Conquer Renegade, Generals, Zero Hour, Tiberium Wars, Kane's Wrath, Red Alert 3, Uprising, and Tiberian Twilight from GitHub. A new mod support pack on GitHub also contains the source XML, Schema, Script, Shader, and Map files for SAGE engine games like Generals and Tiberium Wars.

Some source code files for the first Command & Conquer entry, later called Tiberian Dawn, and the original Red Alert have been available since EA released remastered versions of the two games in 2020. However, with help from veteran modder Luke "CCHyper" Feenan, EA has recovered and published their full code. Furthermore, a new update allows modders to publish maps directly to Steam Workshop for automatic updates and easier file management.

Command & Conquer: Tiberian Sun and Red Alert 2 are conspicuously absent from EA's announcement. Fans have demanded remasters of those titles since the first refreshed duology launched, but the company has remained silent on the matter.

Fortunately, the original versions of both games are available in the Command & Conquer Ultimate Collection bundle that EA released on Steam last year.
As of this writing, the Ultimate Collection and the Remastered Collection are on sale for just $6 at a 70 percent discount.

Additionally, fans interested in the franchise's development history can download newly released early development footage of Command & Conquer Renegade and Generals.

Westwood Studios released the original Command & Conquer in September 1995. Building on the foundations of 1992's Dune II, it sold millions of copies and popularized the real-time strategy genre.

The franchise has since received multiple sequels and spinoffs, with sales eventually reaching the tens of millions. Although the series' last mainline entry, Command & Conquer 4, was released almost 15 years ago, EA has cooperated with modders to keep the classic titles readily available.
  • Nvidia confirms Blackwell Ultra launch, teases Vera Rubin architecture for 2026
    www.techspot.com
Forward-looking: Despite a setback in the rollout of its Blackwell GPUs for data centers last year due to a design flaw, Nvidia has swiftly rebounded and is poised to deliver its next series of products over the next few years. CEO Jensen Huang confirmed during the company's earnings call that the next major release, dubbed Blackwell Ultra (B300 series), is on track for the second half of this year.

This mid-cycle refresh of the Blackwell architecture promises significant improvements over its predecessors. The B300 series is expected to offer higher compute performance and eight stacks of 12-Hi HBM3E memory, providing up to 288GB of onboard memory. Although unofficial, estimates point to a 50 percent performance uplift compared to the B200 series.

To complement these powerful GPUs, Nvidia will introduce the Mellanox Spectrum Ultra X800 Ethernet switch, boasting a radix of 512 and support for up to 512 ports. This networking upgrade will further enhance the capabilities of AI and HPC systems built around the B300 series.

[Image credit: Constellation Research]

Looking beyond Blackwell, Nvidia is already working on its next-generation architecture, codenamed Vera Rubin. Set to debut in 2026, the Rubin GPUs represent a significant step toward achieving artificial general intelligence (AGI).

The Rubin platform will feature eight stacks of HBM4E memory, offering up to 288GB of memory, along with a Vera CPU, NVLink 6 switches operating at 3,600 GB/s, CX9 network cards supporting 1,600 Gb/s, and X1600 switches. Huang has hinted at the transformative potential of the Rubin architecture, describing it as a major leap forward in performance and capabilities.

Nvidia has also indicated its readiness to discuss post-Rubin products at the upcoming GPU Technology Conference (GTC) in March. One potential breakthrough on the horizon is the rumored Rubin Ultra, projected for release in 2027.
This product could push the boundaries of GPU design even further, potentially incorporating 12 stacks of HBM4E memory, a substantial increase from the eight stacks used in previous generations, for up to 576GB of total memory. The use of HBM4E technology would provide unprecedented memory bandwidth and capacity, crucial for handling increasingly complex AI models and computations.

To achieve this, Nvidia would need to master the use of 5.5-reticle-size CoWoS interposers and 100mm x 100mm substrates manufactured by TSMC. This represents a significant increase from the 3.3-reticle-size interposers used in today's most advanced GPUs. The larger interposer size would allow more components to be integrated onto a single package, enabling the inclusion of additional memory stacks and potentially more GPU tiles.
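The capacity figures above follow directly from stack count and per-stack density. A quick sanity check; note the per-stack sizes here (36GB for 12-Hi HBM3E, 48GB for HBM4E) are inferred from the article's totals, not taken from official Nvidia specifications:

```python
# Sanity-checking the quoted HBM capacities. Per-stack densities are
# inferred from the article's totals (288GB / 8 stacks, 576GB / 12 stacks),
# not from any official Nvidia spec.
def total_hbm_gb(stacks: int, gb_per_stack: int) -> int:
    return stacks * gb_per_stack

blackwell_ultra_gb = total_hbm_gb(8, 36)   # 8 stacks of 12-Hi HBM3E
rubin_ultra_gb = total_hbm_gb(12, 48)      # rumored 12 stacks of HBM4E
```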
  • Gigabyte M32UP 32" Review: The New Best Value 4K Gaming Monitor?
    www.techspot.com
Today, we're looking at the successor to one of the most popular bang-for-buck 32-inch 4K gaming monitors, the Gigabyte M32UP. Many of you purchased the Gigabyte M32U for around $500 as a solid 4K 144Hz LCD, and now Gigabyte has a new version slowly rolling out, called the M32UP. It's not a radical change; in fact, it's very similar in terms of specifications. However, it's worth assessing to see how the 32-inch LCD market is evolving.

The M32UP is a 32-inch, 3840 x 2160 IPS LCD with a maximum refresh rate of 160Hz, a slight increase from the M32U, which topped out at 144Hz.

Like its predecessor, this is an SDR monitor with no real HDR capabilities. It is advertised as DisplayHDR 400, but without edge-lit local dimming, the HDR experience is poor. That's not necessarily a dealbreaker at this price yet, as 32-inch HDR LCDs remain expensive. However, demand for true HDR products at this price point and below is increasing.

In terms of design, the M32UP is essentially a refresh of the M32U. It follows the same Gigabyte M-series design seen in recent models, featuring a simple black plastic construction and a wide, flat V-shaped stand. There's nothing fancy here (no RGB LED lighting on the back), but from the front, it looks sleek with slim bezels on three sides.

The rear design is also appealing. While it doesn't include premium materials like a metal stand, the plastic build doesn't look particularly cheap either. It's a solid choice for this class of display.

The stand is functional, offering height, tilt, and swivel adjustment with a good maximum height, and it's moderately stable. The screen coating is a standard matte LCD finish, similar to other LCDs: effective at reducing mirror reflections, but with slight graininess.

Port selection is decent, featuring two HDMI 2.1 (48 Gbps) ports, one DisplayPort 1.4 with DSC, and a USB-C port supporting DP Alt Mode.
Additionally, there's a three-port USB 3.2 hub, and the USB-C port can function as an upstream port, enabling KVM switch functionality. However, USB-C power delivery is limited to 18W, which won't be sufficient for charging most laptops. It's good to see the HDMI ports upgraded from 24 Gbps on the M32U to full 48 Gbps.

The on-screen display (OSD) is controlled by a directional toggle on the rear and features Gigabyte's standard interface. There's a solid set of gaming features, including crosshairs, a refresh rate counter, shadow boosting, sniper mode, and a dashboard that integrates with Gigabyte's software for system stats.

The color control options are also decent, but keep in mind that they are essentially the same as on other Gigabyte monitors, so if you're comparing models, this won't be a differentiating factor.

Response Time Performance

For response time performance, Gigabyte offers five different overdrive settings. We'll start with Off, which shows native panel performance with overdrive disabled; this is probably not how you'll want to run the monitor. The Picture Quality mode is a small improvement to 8.5ms, but still not fast enough to fully support a 160Hz refresh rate.

Gigabyte M32UP - 160Hz - Off, Picture Quality, Balance, Speed

The Balance mode is where the M32UP starts to perform at a reasonable standard, offering a 4.8ms average response time and reasonable overshoot results, leading to a cumulative deviation of around 400. That's decent for this type of LCD. Don't bother with the Speed mode: it has noticeable overshoot artifacts and is pushed too far. There's also the Smart OD mode, which at 160Hz uses a similar overdrive configuration to Balance, making it a good choice.

Gigabyte M32UP - Smart OD - 160Hz, 144Hz, 120Hz, 100Hz, 85Hz, 60Hz

What surprised us about the M32UP is that this is the first Gigabyte monitor we've reviewed where the Smart OD setting is properly optimized for variable overdrive.
Previously, the Smart OD setting on other Gigabyte monitors made bizarre choices about which overdrive mode to use at different refresh rates. On the M32UP, it's much more sensible: at 144Hz and above, you get Balance-level overdrive, then at mid-refresh rates, it drops to Picture Quality overdrive, and at 60Hz, it seems to turn overdrive off completely.

Now, this variable overdrive setup isn't quite as aggressive as we would have liked. For example, performance at 60Hz could be improved by using Picture Quality-like settings instead. But overall, this mode is very usable and allows for a single overdrive mode experience. There's great performance at high refresh rates, a solid experience in the middle where overshoot doesn't go crazy, and acceptable management at 60Hz.

If you use the fixed overdrive configurations like the Balance or Picture Quality modes instead, you'll either have to deal with overshoot at lower refresh rates or lackluster performance at higher refresh rates, so I'd recommend Smart OD.

Response Time Comparisons

Compared to other monitors at their maximum refresh rates, the Gigabyte M32UP is well optimized, offering better performance than the LG 32GR93U that we've recommended previously and lower overshoot than the Gigabyte FI32U. This is a good result, in line with the better LCDs of today.

For average performance, like we said, the Smart OD setting could be more aggressive with its tuning, hence why it doesn't deliver as strong a result as the LG 32GR93U, for example. The 32GR93U also delivers a single overdrive mode experience and is better tuned in the middle of the refresh rate range, though this Gigabyte monitor is still very usable with low overshoot results.

Also, as we look at cumulative deviation, the M32UP is very much in the usual range we see from today's LCD gaming monitors, with a result of 546. This is very similar tuning to the M27U and better than some of the older 32-inch 4K monitors like the Gigabyte FI32U and MSI MPG321UR-QD.
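The Smart OD behavior described above amounts to a simple refresh-rate lookup. Here's a minimal TypeScript sketch of that mapping; the mode names and thresholds are inferred from our observations, not taken from Gigabyte's firmware.

```typescript
// Sketch of the M32UP's Smart OD variable overdrive, as observed in testing.
// Thresholds are approximations inferred from the review, not firmware values.
type OverdriveMode = "Off" | "Picture Quality" | "Balance";

function smartOverdrive(refreshHz: number): OverdriveMode {
  if (refreshHz >= 144) return "Balance";       // aggressive tuning at high refresh rates
  if (refreshHz > 60) return "Picture Quality"; // milder mid-range tuning to limit overshoot
  return "Off";                                 // overdrive appears disabled at 60Hz
}

console.log(smartOverdrive(160)); // prints: Balance
```

In practice, this is why a single Smart OD setting works across the VRR range: the monitor swaps tuning for you instead of forcing a fixed-mode compromise.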
However, the 32GR93U is a chart leader here among LCDs, offering 14% better average cumulative deviation; that's an advantage, though not to a level that will be hugely noticeable while gaming.

The M32UP is a good monitor for fixed refresh rate 120Hz gaming, with the Balance mode providing the best experience here. At 60Hz, the results are reasonable, though there are better monitors for lower refresh rate gaming.

Input lag is a non-issue on the M32UP, producing under 1ms of processing delay and similar overall results to other 4K gaming LCDs on the market today. The only way to get a significantly more responsive monitor in this class is to get something 240Hz, which is very rare in the LCD space, or move to an OLED, which is much more expensive.

Power consumption is typical and in line with today's best 32-inch LCDs for efficiency, roughly matching the LG 32GR93U at 200 nits. You'll be saving at least 10 watts compared to earlier 32-inch 4K LCDs, and this technology continues to be much more efficient than OLEDs for typical desktop work; plus, you don't have to worry about burn-in.

SDR Color Performance

Color Space: Gigabyte M32UP - D65-P3

The M32UP is a wide gamut monitor, like nearly all of today's gaming displays. It packs 95% DCI-P3 coverage and 93% coverage of Adobe RGB, which are decent results for an LCD. However, overall Rec. 2020 coverage is just 70%, which is on the lower side compared to most others that we've tested. It certainly lacks that additional gamut you'd get from a quantum-dot-enhanced display, though it's not hugely different from products like the M27U or 32GR93U.

Default Color Performance

Gigabyte M32UP - D65-P3, tested at native resolution, highest refresh rate
Portrait CALMAN Ultimate, DeltaE Value Target: Below 2.0, CCT Target: 6500K
Grayscale, Saturation and ColorChecker

Factory color performance is strong in greyscale, with near-perfect gamma and white balance performance leading to a deltaE ITP average of just 5.03.
There are the usual concerns around oversaturation, as this display does not use an sRGB gamut clamp out of the box. Compared to other monitors, this is a good showing.

New for 2025, we'll be testing the performance of monitors when using Windows' Auto Color Management feature, introduced in Windows 11 24H2. This feature color-manages the display at a system level by using the color data the monitor reports.

On wide gamut monitors, this allows SDR sRGB content to be displayed more accurately without the use of a monitor's sRGB mode, as the color space emulation is performed by Windows instead of the monitor. It also allows you to avoid any sRGB mode restrictions, like locked white balance, because the monitor remains in its standard configuration with full setting control.

Default ACM Color Performance

Gigabyte M32UP - D65-P3, tested at native resolution, highest refresh rate
Portrait CALMAN Ultimate, DeltaE Value Target: Below 2.0, CCT Target: 6500K
Grayscale, Saturation and ColorChecker

When enabling ACM, the M32UP retains strong greyscale accuracy, but now color performance is improved as there is OS-level sRGB emulation. As this display is generally well-calibrated, we see a deltaE average below 3.0 in saturation and below 4.0 in ColorChecker, which is great and sufficient to call this a calibrated experience.

sRGB Mode Color Performance

Gigabyte M32UP - D65-P3, tested at native resolution, highest refresh rate
Portrait CALMAN Ultimate, DeltaE Value Target: Below 2.0, CCT Target: 6500K
Grayscale, Saturation and ColorChecker

If you want to take calibration to the next level, or have a source device that doesn't support color management, then the sRGB mode will be useful for SDR content. This mode is very accurate, with outstanding greyscale performance and very low deltaEs in our color tests.

This makes the M32UP one of the most accurate displays you can get, a fantastic result that keeps Gigabyte in the leadership position when it comes to calibration in the sRGB mode.
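For readers unfamiliar with deltaE: it's a distance between a measured color and its target. The sketch below shows the classic CIE76 formula (a plain Euclidean distance in Lab space) purely for illustration; our charts use the more advanced deltaE ITP metric, and the sample values here are made up.

```typescript
// CIE76 deltaE: Euclidean distance between two colors in CIELAB space.
// Illustrative only; the review's measurements use the newer deltaE ITP metric.
interface Lab { L: number; a: number; b: number; }

function deltaE76(x: Lab, y: Lab): number {
  return Math.hypot(x.L - y.L, x.a - y.a, x.b - y.b);
}

// A deltaE around 1-2 is roughly where a color difference becomes visible.
console.log(deltaE76({ L: 50, a: 10, b: 10 }, { L: 51, a: 11, b: 9 })); // ≈ 1.732
```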
Performing a full calibration on this monitor isn't a necessity given its out-of-the-box performance. We'd just use the sRGB mode or Auto Color Management, though Calman calibration can improve things further if you want.

Brightness, Contrast, Uniformity

SDR brightness is on the low side for an LCD at just 360 nits, which is still sufficient for most use cases. One of the main competitors to this display, the LG 32GR93U, does get quite a bit brighter if that's important to you. Minimum brightness was fine at 58 nits, but not amazing.

As for the native contrast ratio, there's nothing special about the M32UP: it delivers a typical IPS LCD experience with a contrast of 1,069:1. Most gaming LCDs today are a little over 1,000:1, so this monitor fits right into that range. Overall, this is a poor contrast ratio with weak black levels, an inherent flaw of this technology that is easily beaten by VA LCDs and OLEDs.

Viewing angles are great and very usable, in line with other IPS LCDs on the market today; the only display technology that's better is OLED. As for uniformity, it's not too bad, though there was a slight falloff in brightness along the left and right edges. This is not noticeable while gaming.

HUB Essentials Checklist

The final section of the review is the HUB Essentials Checklist. Gigabyte does an okay job advertising this display. Some of the listed specifications are accurate or even conservative, like color gamut and brightness.

Other aspects are misleading, such as HDR support and response time performance. This isn't unusual for an LCD of this type, though we continue to be disappointed when things like HDR aren't properly advertised.

In the Essentials Checklist, there aren't any surprises. As a non-HDR monitor, the M32UP doesn't meet the requirements of many HDR criteria.
However, it does offer some fairly basic backlight strobing, which, in our opinion, doesn't look amazing, and a great port selection aside from the limited USB-C power delivery.

What We Learned

The Gigabyte M32UP is a solid bang-for-buck 32-inch 4K gaming monitor, though it doesn't do anything particularly unique or different compared to other products in the same category. With LCD panel technology where it currently is, it's hard to push beyond existing standards, but at the very least, Gigabyte has nailed the basics.

What we always appreciate about testing Gigabyte monitors is the balance they offer across all areas of performance. This is especially important in a 32-inch 4K monitor, where many users are likely interested in both productivity work and gaming.

The M32UP has fantastic color accuracy, a great resolution, suitable brightness, and good viewing angles, making it a solid choice for work. On top of that, you get reasonable response times with variable overdrive, a great 160Hz refresh rate, and low input lag, delivering a strong gaming experience.

The biggest standout in performance is Gigabyte's sRGB mode calibration, which is excellent. But in most other categories, the M32UP delivers fairly standard performance. That said, we didn't find any major flaws either, which is a good sign. It's a well-rounded, versatile monitor that gets the job done.

Like many value-oriented products, the M32UP's success will depend on pricing. While not available in the United States at the time of testing, our understanding is that it will be priced around $500, similar to what the M32U retails for today. That's about as cheap as 32-inch 4K monitors get these days among models with a refresh rate of at least 144Hz.
We'd love to see more true HDR products at this price point with similar specs, but the reality is we're just not there yet, so the lack of HDR hardware in the M32UP is understandable.

The main competition for this monitor is the LG 32GR93U, which offers very similar performance. It's often available at roughly the same price, though pricing in the US fluctuates between $500 and $600. In Australia, it's priced the same as the M32UP at $900 AUD. Whether you choose the Gigabyte or LG option, we think you'll be satisfied with the outcome.

Based on our testing, the LG monitor is slightly better tuned for response times and motion performance and has higher SDR brightness. Meanwhile, the Gigabyte monitor has better factory calibration and includes extra features like a KVM switch and USB-C port. There's no clear winner here; it ultimately depends on which features matter most to you. Both options are fairly priced right now, so our advice is to check pricing in your region and decide from there.

Shopping Shortcuts:
Gigabyte M32UP on Amazon
LG 32GR93U on B&H, LG
Gigabyte FI32U on Amazon
MSI MPG321UR-QD on Amazon
MSI MPG 321URX on Amazon, Newegg
Asus ROG Swift PG32UCDM on Amazon
Alienware AW3423DW 34" QD-OLED on Amazon
  • Doom defies the impossible by running in TypeScript's type system
    www.techspot.com
In context: People have ported Doom to everything from calculators to McDonald's cash registers. There has recently been a push to get the software running on platforms with no actual processing power; PDF and Word documents are the latest examples. Of course, these methods are painfully slow, but it's incredible that the game can even execute on non-computer platforms.

Software engineer Dmitri Mitropoulos has taken porting Doom to non-computing platforms to a whole new level. The programmer managed to get Doom running inside TypeScript's type system, a feat so mind-bogglingly complex that it took him an entire year to pull off.

TypeScript is a language developed by Microsoft that builds on JavaScript by adding static type-checking to catch coding mistakes before execution. Think of it as a spelling or grammar checker for code, ensuring functions and variables are used correctly. Developers commonly use it to build large JavaScript applications.

Running a game within TypeScript's type system is considered "impossible." Even Mitropoulos noted that he started the project to "quickly" prove why it could not be done. However, as he got into it, he became obsessively motivated to make it work. In the end, even seasoned TS developers were left impressed and speechless.

Mitropoulos reacting to TypeScript finally rendering its first frame of Doom.

Mitropoulos's version of Doom runs inside 3.5 trillion lines of types, consuming a staggering 177 terabytes. Compiling a single frame takes 12 days, resulting in an excruciatingly slow 0.0000009645 frames per second. The TypeScript type checker must process 20 million type instantiations per second to generate each frame of output.

Despite the massive overhead, Mitropoulos believes performance improvements are possible. In the Michigan TypeScript Discord server, he suggested that compilation could be reduced to "1 to 12 hours" with further optimizations.
He has already identified areas where he can improve the speed.

To make it all work, he built a virtual machine entirely from TypeScript types, including logical implementations of all 116 WebAssembly instructions required to run Doom. Every element of a functioning computer (RAM, disk space, even an L1 CPU cache) had to be painstakingly recreated within the type system. Since TypeScript only allows string iteration from the left, he had to input binary algorithms in reverse.

TypeScript community reactions: "What!?" "This is a masterpiece." "I have so many questions."

Running the program required a custom WebAssembly runtime, processing everything within a TypeScript editor. The TypeScript compiler also had to be modified to handle the project's extreme scale, as its type checker alone consumed over 90GB of RAM during execution.

Mitropoulos described the effort as a grueling challenge. He wrote 12,364 handwritten tests, learned multiple programming languages, and initially estimated the project would require up to 1.25 petabytes before optimization. At one point, compiling a single frame took three months of continuous type instantiation. He remarked that AI was no help.

"Oh, and AI can't help with any of this stuff," Mitropoulos said in his brief seven-minute video explanation (masthead). "It's so low-level that there are no arrays or objects or strings or booleans inside the engine, only binary numbers, and Doom only uses 64-bit and 32-bit integers, that's it. Oh, and those integers are neither signed nor unsigned. I spent a whole day figuring that one out."

The gargantuan task took an entire year of 18-hour days to complete. Other TS developers had so many questions about the project that Mitropoulos plans to release two more videos explaining the highly technical details and his motivations. For now, we have one more piece of evidence proving Doom can run on anything, including things that were never meant to run games at all.
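To get a feel for how computation can happen inside a type system at all, here's a tiny, self-contained example of type-level arithmetic, where numbers are encoded as tuple lengths and the type checker does the math at compile time. This is orders of magnitude simpler than Mitropoulos's virtual machine, but it relies on the same core trick.

```typescript
// Numbers encoded as tuple lengths; the type checker "computes" the sum.
type BuildTuple<N extends number, T extends unknown[] = []> =
  T["length"] extends N ? T : BuildTuple<N, [...T, unknown]>;

type Add<A extends number, B extends number> =
  [...BuildTuple<A>, ...BuildTuple<B>]["length"];

// The compiler resolves Add<3, 4> to the literal type 7; annotating this
// constant with any other number would be a compile-time error.
const sum: Add<3, 4> = 7;
console.log(sum); // prints: 7
```

Scale that idea up to registers, memory, and an instruction decoder, and you arrive (after a year of 18-hour days) at a WebAssembly machine made of types.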
  • Microsoft's own Copilot will tell you how to activate Windows 11 without a license
    www.techspot.com
Facepalm: A Reddit user has made an uncomfortable discovery for Microsoft. The company's own AI assistant, Copilot, will provide you with instructions on how to activate Windows 11 without a valid license. When asked, "Is there a script to activate Windows 11?" Copilot readily offers a step-by-step guide that enables unauthorized activation of the operating system.

Since the discovery, the activation method has been independently verified by multiple sources, including Windows Central and Laptop Mag. While the method itself is not new (it has been circulating since 2022), its promotion by Microsoft's own AI tool is particularly eyebrow-raising.

The technique relies on a PowerShell command that runs a third-party script to perform unauthorized activation. The script is typically sourced from GitHub repositories dedicated to Windows activation methods.

To its credit, Copilot does include a brief warning about the risks of executing such scripts, reminding users that unauthorized activation may violate Microsoft's terms of service.

When questioned about the dangers of using activation scripts, Copilot outlines several potential risks, including legal ramifications due to violation of licensing agreements, security vulnerabilities from potentially malicious scripts, system instability and performance problems, lack of official support from Microsoft, potential issues with updates, and ethical considerations regarding software piracy.

Furthermore, the ease with which potentially harmful scripts can be obtained and executed poses significant security risks. A recent Wall Street Journal report highlighted a case where malware was disguised as an AI tool on GitHub, demonstrating the very real dangers of blindly trusting and executing online code.

For decades, Microsoft has grappled with the persistent issue of software piracy, a challenge that has both hindered and, paradoxically, fueled the company's global expansion.
In 2006, the company reported staggering losses of approximately $14 billion due to unauthorized use of its products, despite investing millions in anti-piracy measures. However, Microsoft's approach to piracy has been nuanced, far less aggressive than one might expect from a company facing such significant financial damage.

Microsoft co-founder Bill Gates candidly discussed his attitude toward software piracy during a 1998 presentation at the University of Washington. He acknowledged the rampant theft of Microsoft products in China, where millions of computers were sold annually without corresponding software purchases. Rather than expressing outrage, Gates remarked: "As long as they're going to steal it, we want them to steal ours. They'll get sort of addicted, and then we'll somehow figure out how to collect sometime in the next decade."

Microsoft's tolerance for a certain level of piracy appeared to persist well into the 2010s. In a move that surprised many, the company announced in 2015 that it would allow users with non-genuine copies of Windows to upgrade to Windows 10 at no cost (though they would remain non-genuine and marked as unactivated).
  • www.techspot.com
In context: Monolith was founded in 1994 by Bryan Bouwman, Brian Goble, and several others, when DOS was still the operating system of choice for PC games. Over the years, the studio developed successful series such as Blood, No One Lives Forever, F.E.A.R., Shogo: Mobile Armor Division, and, of course, Middle-earth: Shadow of Mordor. Thanks to a sudden "strategic change in direction," Warner Bros has shuttered the studio.

Warner Bros recently confirmed the closure of three gaming studios, including Middle-earth developer Monolith Productions. The studio is notable for designing a technology capable of providing unique personalities to enemies and NPCs. However, WB "locked" the system away behind a patent, where it will likely remain unused.

The publisher confirmed the closure of Monolith Productions, Player First Games, and Warner Bros Games San Diego. Monolith's most recent fame came from the two action-adventure games based on J. R. R. Tolkien's Legendarium: Middle-earth: Shadow of Mordor and Middle-earth: Shadow of War. Critics praised both titles for the Nemesis system, a technology designed to procedurally generate orc characters featuring unique traits and relationships with the player's character.

Monolith and Warner Bros invested so much in the Nemesis system that the publisher decided to patent the technology in 2016. The patent describes methods for managing NPCs that can remember previous interactions with the player and react accordingly in subsequent encounters. The patent doesn't expire until August 11, 2036, so until then no other developers can use the Nemesis system, and WB has nothing in the works either.

However, Monolith was working on a Wonder Woman game, a single-player experience powered by the Nemesis system. The studio announced the game in 2021, but the project is dead because of the studio's shutdown.
People in the industry are now praising the unusable Nemesis technology and criticizing Warner for funding the development of a highly advanced AI gaming system and then freezing it for years to come. It sounds like a move right out of Disney's playbook.

Gaming journalist Scott Robertson noted how Monolith pioneered a one-of-a-kind system for the procedural generation of unique enemy interactions and showed his contempt for Warner Bros' actions.

"[A few years later] WB took that from them, patented it, and then closed their studio. Fu** them," Robertson said.

The closure of Monolith, Player First Games, and Warner Bros. Games San Diego resulted from a very rocky period for WB's gaming business. JB Perrette, Chief Executive and President of Warner's Global Streaming and Games, told Bloomberg that too many new releases fell short of the publisher's quality expectations. The company took a $200 million loss on Suicide Squad: Kill the Justice League alone.
  • AMD FSR4 to support over 30 games at launch, 75 by year's end
    www.techspot.com
In brief: AMD will reveal the tech specs, prices, and release dates for its next-generation Radeon RX 9070 graphics cards later this week, but a fresh leak has outlined the software support for a key feature, FSR 4. Upon launch early next month, the new GPUs will support dozens of games with image quality approaching Nvidia DLSS.

VideoCardz has leaked more information regarding AMD's upcoming Radeon RX 9070 graphics cards. The outlet recently published an allegedly official list of around 35 games supporting the company's FSR 4 upscaling technology upon release in early March.

The new functionality, exclusive to RX 9000 GPUs, marks AMD's adoption of machine learning for image reconstruction. Last month, a CES demonstration (below) showcased substantial improvements over FSR 3.1 in Ratchet & Clank: Rift Apart.

The following games are expected to receive similar image quality improvements starting next month:

The Alters
Bellwright
Call of Duty: Black Ops 6
Creatures of Ava
Dragonkin: The Banished
Endoria: The Last Song
FragPunk
Funko Fusion
God of War: Ragnarok
Horizon Zero Dawn Remastered
Horizon Forbidden West
Hunt: Showdown 1896
Incursion Red River
Kristala
Marvel Rivals
Marvel's Spider-Man 2
Marvel's Spider-Man Remastered
Marvel's Spider-Man: Miles Morales
MechWarrior 5: Clans
Monster Hunter Wilds
Nightingale
No More Room in Hell 2
PANICORE
Predator: Hunting Grounds
Ratchet & Clank: Rift Apart
Remnant 2
Smite 2
The Axis Unseen
The Last of Us: Part I
The Last of Us: Part II Remastered
Until Dawn
Warhammer 40,000: Space Marines 2
Kingdom Come: Deliverance II
Dynasty Warriors: Origins
Civilization 7

Additional developers cooperating with AMD on adopting FSR 4 include 11 Bit Studios, Activision, Ballistic Moon, Focus Entertainment, Guerrilla, Insomniac Games, Krafton, Naughty Dog, NetEase, Nixxes, Pearl Abyss, Sager, SEGA, and Torn Banner.
AMD claims the number of FSR 4-compatible titles will exceed 75 by the end of the year.

However, users can also manually implement the new upscaler in any title supporting FSR 3.1. Therefore, the following games will likely have a manual toggle or officially implement FSR 4 support in the coming months:

Delta Force: Black Hawk Down
Like a Dragon: Pirate Yakuza in Hawaii
Lost Records: Bloom & Rage
Virtua Fighter 5 R.E.V.O.
Space Engineers 2
Ninja Gaiden 2 Black
Indiana Jones and the Great Circle
Stalker 2: Heart of Chornobyl
Microsoft Flight Simulator 2024
ARK: Survival Ascended
Farming Simulator 25
Silent Hill 2
Final Fantasy XVI
The First Descendant
Ghost of Tsushima Director's Cut
Manor Lords
The Finals
Mortal Kombat 1
Warhammer 40,000: Darktide
Everspace 2
Satisfactory
War Thunder

AMD also provided new information regarding FSR 4's performance presets. Unlike FSR 3.1 and DLSS, Team Red's new upscaler won't include an ultra-performance mode for tripling the resolution scale (for example, from 1280 x 720 to 4K). Performance mode, which scales from half resolution, will be the lowest setting.

Previous leaks from VideoCardz and other sources have revealed the full RX 9070 and 9070 XT specifications. Additionally, early Micro Center listings indicate the 9070 might start at $649.99 and the XT at $699.99. However, whether the numbers represent MSRP or boosted AIB partner prices is unclear.
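The presets described above boil down to a per-axis scale factor applied to the output resolution. A quick illustrative sketch (the function name and rounding are ours, not AMD's):

```typescript
// Render resolution for an upscaler preset, given a per-axis scale factor.
// FSR "Performance" halves each axis (factor 2); the absent ultra-performance
// preset would have used factor 3 (e.g. rendering 1280 x 720 for 4K output).
function renderResolution(outW: number, outH: number, scale: number) {
  return { w: Math.round(outW / scale), h: Math.round(outH / scale) };
}

console.log(renderResolution(3840, 2160, 2)); // { w: 1920, h: 1080 }
console.log(renderResolution(3840, 2160, 3)); // { w: 1280, h: 720 }
```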
  • New biomass hydrogel technology extracts drinkable water from the air
    www.techspot.com
Forward-looking: Researchers at the University of Texas at Austin have developed a novel method using natural materials to extract drinkable water from the air. Their "molecularly functionalized biomass hydrogels" system transforms organic matter such as food scraps, branches, and seashells into a highly efficient water-absorbing substance.

The system combines specially engineered sorbents (materials that absorb liquids) with mild heat and can generate significant amounts of potable water, even in arid conditions. During field tests, the team demonstrated an impressive yield of 14.19 liters (3.75 gallons) of clean water per day per kilogram of sorbent, far surpassing the typical 1 to 5 liters achieved by most existing sorbents.

"This opens up an entirely new way to think about sustainable water collection, marking a big step towards practical water harvesting systems for households and small community scale," said Professor Guihua Yu, who led the research team.

The innovation lies in the team's approach to designing sorbents. Instead of selecting specific materials for specific functions, their general molecular strategy enables the conversion of almost any biomass into an efficient water harvester. This method offers several advantages over traditional synthetic sorbents, including biodegradability, scalability, and minimal energy requirements for water release.

The heart of this technology is a two-step molecular engineering process that imbues biomass-based polysaccharides with hygroscopic and thermoresponsive properties. It allows the system to effectively capture and release water from the air using commonly available natural materials.

This water-generating system is part of Professor Yu's ongoing efforts to tackle global clean water access issues. His previous work includes developing hydrogel technologies for hyper-arid conditions and an injectable water filtration system.
Looking ahead, the research team is working on scaling up production and developing real-world applications for commercialization. Potential uses include portable water harvesters, self-sustaining irrigation systems, and emergency drinking water devices.

Graduate researcher Yaxuan Zhao notes that since the team can fabricate this hydrogel from widely available biomass and design it to operate with minimal energy input, it has strong potential for large-scale production and deployment in off-grid communities, emergency relief efforts, and decentralized water systems.

While scalability appears promising, the researchers will likely encounter challenges in developing a solution that remains efficient and practical outside the lab. For instance, maintaining the high efficiency observed in lab conditions (14.19 liters of water per kilogram of sorbent daily) could be problematic when scaling up to larger systems, as environmental factors may impact performance.

Additionally, rainwater harvesting systems in urban areas require regular infrastructure maintenance and upgrades. For example, in Indian cities like Chennai and Hyderabad, neglect of rainwater harvesting systems has led to their deterioration and reduced effectiveness.
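As a back-of-envelope illustration of what the reported figure implies, here's a trivial sketch that scales the lab-measured yield linearly with sorbent mass; it assumes the lab efficiency holds at scale, which, as noted above, is a big assumption.

```typescript
// Daily water yield scaling linearly with sorbent mass, using the team's
// reported 14.19 L/day per kilogram (assumes lab efficiency holds at scale).
const YIELD_L_PER_DAY_PER_KG = 14.19;

function dailyWaterLiters(sorbentKg: number): number {
  return YIELD_L_PER_DAY_PER_KG * sorbentKg;
}

// Two kilograms of sorbent would cover a small household's drinking water.
console.log(dailyWaterLiters(2)); // 28.38
```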
  • www.techspot.com
Reviewers Liked
Premium design
Bright & colourful screen
S-Pen included
Excellent build quality with IP68 rating
Generous software support policy
Loud, clear speakers

Reviewers Didn't Like
Underwhelming processor
Finicky fingerprint sensor
No headphone jack
Light on RAM
Expensive accessories

Competitors and Related Products

Our editors hand-pick related products using a variety of criteria: direct competitors targeting the same market segment, or devices that are similar in size, performance, or feature sets.

Expert reviews and ratings

80 - The Galaxy Tab S9 FE can do almost everything that Samsung's more expensive tablets can, but it costs hundreds of dollars less. By AndroidAuthority on December 14, 2024

75 - The Galaxy Tab S9 FE offers decent value for money if you don't need heavy processing power. For daily entertainment, social media, and light gaming, it's possibly the best tablet on the market at this price point. By AndroidPolice on July 02, 2024

87 - In essence, the Galaxy Tab S9 FE is a budget-friendly alternative to the Galaxy Tab S9. The viability of the budget-friendly model largely hinges on the user's specific needs and usage patterns. By NotebookCheck on November 25, 2023

80 - With excellent build quality, a bright and fluid display, and a brilliant all-inclusive stylus system, the Samsung Galaxy Tab S9 FE is a well-balanced mid-range tablet. Those after the very best media or gaming experience can do better for the money, but the Tab S9 FE's strength is in its all-round competence. By TechAdvisor on November 13, 2023

80 - The Samsung Galaxy Tab S9 FE+ is a high-value tablet thanks to its classy design, excellent build quality, and powerful software, making it the Android slate to beat. By PCMag on October 27, 2023

The Samsung Galaxy Tab S9 FE offers great value at its price for digital artists. Considering that the S Pen comes included, its pricing makes it a pretty decent competitor against the latest base iPad model.
It's no slacker when there is drawing to be done, and at this size, it's comfortable to take out on the go with you. So far, I see no reason not to recommend this tablet. By Draw Your Weapon on December 23, 2023
  • PlayStation VR2 gets a big price cut, down to $399
    www.techspot.com
In context: The PlayStation VR2 debuted at an eye-watering $549, which was more than the PlayStation 5 itself. Shortly after launch, Sony said it had sold 600,000 headsets in the first six weeks, or eight percent more than the first-gen unit over the same length of time.

The PlayStation VR2 is Sony's second-generation virtual reality headset, launched in February 2023. Now, more than two years later, the accessory is receiving its first price cut. Some might argue that Sony should have gone with a lower price from the start, and they probably have a point.

Last March, Bloomberg reported that Sony had halted production of the PlayStation VR2 in an attempt to clear out excess inventory. At the time, the publication cited a lack of compatible titles as one factor holding back wider adoption.

In a newly published post on the PlayStation blog, Isabelle Tomatis, VP of global marketing for SIE, said that starting in March, the PS VR2 will drop to $399 in the US (€449.99 / £399.99 / ¥66,980 in other markets). The core kit includes the headset, a PS VR2 Sense controller, and a set of stereo headphones, but there is also a bundle that comes with a PlayStation Store voucher for a copy of Horizon Call of the Mountain at the same price point.

Last summer, Sony launched a PC adapter that expands headset compatibility to thousands of SteamVR titles. There are also several intriguing games in the pipeline due out later this year, including Dreams of Another, Hitman: World of Assassination, and The Midnight Walk.

A recent hardware update, meanwhile, enabled support for low-latency hand tracking, allowing devs to create titles that track a player's hand movement and position using the headset's built-in cameras.

Given the new lower price, wider game compatibility, and hand tracking capabilities, holdouts might finally consider picking up the PS VR2. If you don't already have a PS5, you can get a new unit for around $425.
  • Ayaneo Flip PC handheld production canceled, backers given 30 days to claim refund
    www.techspot.com
    What just happened? Handheld gaming PCs might be incredibly popular right now, but that doesn't mean they'll all be successful. The clamshell-style Ayaneo Flip, for example, has been killed off, despite some backers of the crowdfunding campaign having already received their units. Those still waiting for one now have just 30 days to file for a refund.

    The Ayaneo Flip's crowdfunding campaign launched in January last year. Liliputing writes that the first units started shipping to backers a few months later, but more than a year after the campaign began, some backers still haven't received their handhelds, and now never will.

    Ayaneo has announced that, after careful consideration and evaluation of its product roadmap and strategic priorities, there are currently no immediate plans to proceed with production of the Ayaneo Flip. The company said the move was necessary to ensure it focuses on delivering exceptional experiences through its existing and upcoming product lines.

    Backers still waiting for one of the handhelds now have two options: they can request a full refund of the money they handed over, or they can switch their order to any equivalent Ayaneo product currently available, with price differences "settled accordingly." The announcement specifies that backers have 30 days (until Friday, March 28, 2025) to contact the customer service team to confirm their chosen option.

    The Ayaneo Flip was marketed as the first-ever dual-screen Windows handheld, with a 3.5-inch, 960 x 640 secondary touchscreen between the controllers for viewing system stats and more. The Flip featured an AMD Ryzen 7 8840U or 7840U processor along with a seven-inch 1080p 120Hz IPS screen. RAM ranged from 16GB to 64GB, while storage went from 512GB to 2TB. There was also an M.2 2230 slot for a user-upgradeable PCIe 4.0 NVMe SSD. Another model featured a small RGB backlit keyboard in place of the secondary screen.

    The news is a reminder of the inherent risks that come with backing any crowdfunding project, even one from a company as well-established as Ayaneo. Reports this week indicate that approximately six million handheld gaming PCs have been sold since the Steam Deck's launch in 2022, with Valve's handheld accounting for over 3.7 million of those units.
  • www.techspot.com
    What just happened? The United States' relationship with the UK could come under further strain following news that US officials are investigating whether its ally broke a data treaty by demanding that Apple build a backdoor into iCloud.

    Last week, Apple removed its Advanced Data Protection (ADP) feature for UK users. The extra layer of security encrypted synced iCloud content such as photos, notes, reminders, bookmarks, and iCloud backups so that only users could access it on trusted devices; not even Apple can decrypt customer accounts to access their data.

    The move came after Apple spent months resisting the UK government's demands that the company create a backdoor allowing agencies to snoop on users' encrypted data. The UK Home Office issued the technical capability notice under the Investigatory Powers Act of 2016, commonly referred to as the "Snoopers' Charter." Rather than complying, which would have had global implications for its security standards, Apple simply removed the Advanced Data Protection option for new UK users; existing ADP users will have to disable the feature manually during a grace period.

    Now, Reuters reports that in a letter to two US lawmakers, Tulsi Gabbard, the US director of national intelligence, said the US is examining whether the UK government violated the Cloud Act. The Act states that the UK may not issue demands for the data of US citizens, nationals, or lawful permanent residents, nor may it demand data from persons located inside the United States.

    In the letter, addressed to Oregon Democrat Sen. Ron Wyden and Arizona Republican Rep. Andy Biggs, Gabbard wrote, "My lawyers are working to provide a legal opinion on the implications of the reported U.K. demands against Apple on the bilateral Cloud Act agreement."

    Apple has long fought demands from law enforcement and governments when it feels they threaten the security of its products. In 2023, Apple threatened to withdraw FaceTime and iMessage from the UK in response to a proposed change that would have required it and other messaging services to clear new security features, including iOS updates, with the UK government before rolling them out.

    The most famous instance came in 2016, when a judge ordered Apple to help the FBI access the locked iPhone owned by Syed Rizwan Farook, one of the San Bernardino shooters. Tim Cook refused, stating that building a version of iOS that bypasses several important security features to access the handset would undeniably create a backdoor. "If the government can use the All Writs Act to make it easier to unlock your iPhone, it would have the power to reach into anyone's device to capture their data," Cook wrote at the time. "The government could extend this breach of privacy and demand that Apple build surveillance software to intercept your messages, access your health records or financial data, track your location, or even access your phone's microphone or camera without your knowledge."

    Masthead: Daniel Romero
  • www.techspot.com
    Highly anticipated: The "Half-Life 3 confirmed" catchphrase might not be around for much longer. There's been a slew of speculation that the 18-year wait since Half-Life 2: Episode 2 could soon end, and that the mythical Half-Life 3 is in the final stages of development.

    Getting fans of the iconic FPS series excited is Valve's new update to Deadlock, its third-person shooter MOBA. Buried within this update is a mention of HLX, which is widely believed to be an in-development name for Half-Life 3. HLX has appeared in other recent updates for Valve games, including Dota 2 and Counter-Strike 2. What's different on this occasion is that it appears alongside a mention of FSR3.

    Valve watcher Tyler McVicker believes that if AMD's FidelityFX Super Resolution upscaling technology really is being used in the HLX build, it would be a sign that development is nearing completion. "You don't use FSR until you're nearly done with a game. Those AI post-processing systems are not supposed to be used for development," he notes in a video on his YouTube channel.

    There have been other signs that Half-Life 3 will be with us at some point this year. On New Year's Eve, Mike Shapiro, the actor who voiced G-Man and Barney Calhoun, posted a video in which he says he hopes the next quarter-century delivers as many unexpected surprises as the previous 25 years. "See you in the New Year," he promises. The post also uses the #Valve and #Halflife hashtags, which appear to be further hints at what's coming.

    McVicker also dove into the recent Dota 2 update. His datamining uncovered a new set of code in a file called AI_baseNPC.fgd, which is not actively used by the popular MOBA. It includes references to machinery and alien blood, which are believed to be related to Half-Life 3. The code itself relates to a common optimization technique in games where AI complexity and behavior are scaled based on the player's distance, something McVicker says is further evidence that Half-Life 3's development is nearing completion.

    The latest discoveries follow reports from reputable leaker Gabe Follower in December that Valve had been conducting friends-and-family playtesting of HLX over the previous few months. McVicker believes the discoveries since then suggest the testing went smoothly, and that an official announcement might not be too far away.

    There have been rumors and claims that Half-Life 3 was close to completion for many years, all of which turned out to be false alarms. But the sheer number of signs over the last few months strongly suggests 2025 will be the year the long wait finally ends.