• Trideo Strategic Expansion in the North American 3D Printing Market
    3dprintingindustry.com
    Trideo, an Argentinean 3D printer manufacturer specializing in FDM printers for large-scale and industrial additive manufacturing, is expanding into the North American market. Founded in Buenos Aires in 2015 by Laurent Rodriguez, Simon Gabriac, and Nicolas Berenfeld, Trideo has established itself as a provider of high-performance 3D printers. The company expanded to Brazil in 2021 and, more recently, to Mexico. With the opening of its Mexico City office in late 2024, Trideo aims to strengthen its presence in North America and extend its services to clients in the region, including the Caribbean.

    [15 kg print of Diego Maradona. Photo via Trideo.]

    "Our expansion into Mexico is a strategic step to bring our innovative 3D printing solutions closer to a growing market," said Nicolas Berenfeld, CEO of Trideo. "We are committed to offering cutting-edge technologies and contributing to the development of the additive manufacturing industry in the region." The company is targeting rising demand for 3D printing solutions in industries such as automotive, aerospace, manufacturing, and academic research.

    Trideo large-format 3D printers

    "3D printing not only optimizes industrial production but also provides a sustainable alternative to traditional manufacturing methods. Our goal is to keep innovating and delivering solutions that enhance both efficiency and sustainability," said Simon Gabriac, Trideo's CTO. One of Trideo's most innovative products is the Big T, a large-format 3D printer with a 1,000 x 1,000 x 1,000 mm build volume. Its capability to produce large-scale parts makes it ideal for industrial applications requiring robust, custom components. Large-scale printing offers several advantages, such as reducing the need for joints between smaller components, enhancing strength and aesthetics, and optimizing production efficiency.

    [Model printed with the Big T. Photo via Trideo.]

    Other 3D printers include the T600 HT, a 600 x 600 x 400 mm machine featuring a heated chamber that reaches 200°C, designed for high-performance materials. The Pellet Extrusion System, an add-on for the Big T, enables waste material recycling, reinforcing sustainability in the manufacturing process. Additionally, Trideo has developed Independent Dual Extruder (IDEX) 3D printers, which allow two parts to be printed simultaneously, optimizing production time.

    [Large-scale prototype. Photo via Trideo.]

    Manufacturing & Digital Transformation in Mexico & the Caribbean

    Additive manufacturing is ever more present in Latin America. Companies like MANUFACTURA are leveraging 3D printing for sustainable innovation, as seen in their development of bioceramic bricks made from eggshells, an eco-friendly alternative to traditional building materials. Similarly, projects like The Wood Project are using 3D printing to repurpose wood waste, converting it into sustainable structures and further showcasing how digital manufacturing is reshaping production processes.

    Meanwhile, the Caribbean has seen applications of additive manufacturing in construction. A notable development is CyBe Construction's collaboration with Betonindustrie Brievengat (BIB) to build the region's first 3D-printed homes in Curaçao. This initiative aims to address housing shortages by using advanced concrete 3D printing techniques, offering efficient and sustainable building solutions. Similarly, Innova Building Solutions Inc, based in Trinidad and Tobago, is pioneering affordable housing through 3D construction. Its conceptual model showcases the potential of 3D printing to create scalable homes ranging from 600 to 1,200 sq. ft., emphasizing design flexibility and rapid construction timelines. For instance, the walls of a 600 sq. ft. home can be printed in approximately 30 hours, significantly reducing traditional construction durations.

    Featured image shows a large-scale prototype. Photo via Trideo.
  • Can regulatory oversight alone unlock cloud competition?
    www.computerweekly.com
    Cloud computing's rise is a success story under scrutiny. It has been nothing short of transformative, enabling businesses to scale operations, innovate rapidly, and optimise costs. It has become an essential pillar of modern enterprise IT, supporting mission-critical workloads across industries. From finance and healthcare to artificial intelligence (AI) and retail, the cloud is now the undisputed underlying infrastructure for digital transformation.

    Yet, as public cloud hyperscalers such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform solidify their dominance, concerns over market competition, licensing restrictions, and barriers to switching are gaining momentum. The UK's Competition and Markets Authority (CMA) is taking a closer look at whether the UK cloud market is functioning fairly or whether customers are being locked into specific ecosystems with limited flexibility.

    These regulatory discussions are unfolding at a pivotal moment for the cloud market. There is a growing number of IT providers with hybrid and multi-cloud subscription-based services. Broadcom, for instance, with its acquisition of VMware, has a streamlined portfolio focused on private, public, and/or hybrid cloud flexibility. Given VMware's footprint in enterprise IT, Broadcom is positioning itself as a viable alternative (see VMware Cloud Foundation box) for end users seeking to escape cloud hyperscaler lock-in, as well as a useful partner for cloud service providers seeking to compete with the hyperscalers. The question is whether regulatory oversight alone can truly open the market, or whether market forces can further help reduce hyperscaler dominance and end their deep ecosystem entrenchments.

    The cloud computing industry has reached a point where a few major providers dictate the market.
    The CMA's concerns are not unfounded: the three major cloud hyperscalers, AWS, Microsoft, and Google, together control a sizable share of the UK's cloud infrastructure market, benefiting from deep enterprise relationships, extensive service ecosystems, and economies of scale that are difficult to match. And this is true not just in the UK, but in other major markets, ranging from the European Union to the United States. These advantages create structural challenges for organisations seeking to diversify their cloud strategy, whether they are end users that rely on cloud service providers or other cloud service providers seeking to compete with the hyperscalers.

    One of the most significant barriers to competition is the cost of switching providers. Many organisations that initially embraced public cloud find themselves facing egress fees, technical dependencies, and licensing restrictions from hyperscalers that make hybrid-cloud adoption by end users more complex and costly than expected. For example, Microsoft's licensing practices have come under scrutiny, with the argument that they unfairly raise the cost of running Windows workloads on competing platforms.

    Admittedly, hyperscaler dominance isn't purely a result of anti-competitive behaviour. These companies have earned their positions in part through innovation and strategic investment. AWS revolutionised developer- and infrastructure-focused cloud services, making them easily accessible and aligned to their users' specific operational needs. Microsoft, on the other hand, has leveraged its strong enterprise footprint to make Azure a seamless extension of its software stack.
    Its offerings are widely deployed and deeply embedded into corporate IT infrastructures. The challenge regulators face is determining whether these advantages give hyperscalers the ability to lock in customers and create an unfair playing field, or whether they simply reflect the natural evolution of an industry where scale and efficiency drive competitive success.

    The banking industry offers a compelling case study in regulatory-driven competition. Open banking policies forced large financial institutions to provide API access to fintech companies, enabling new players to compete with established banks. The result was a surge in innovation, improved customer services, and increased choice, benefiting both startups and traditional financial institutions.

    Could a similar pro-competition model be applied to cloud computing? If regulators push for greater data portability, reduced egress fees, and fairer licensing models, hyperscalers could be forced to compete more on service quality rather than continue to benefit from vendor lock-in mechanisms. This would encourage a more diverse cloud ecosystem, allowing alternative cloud service providers to expand the overall market, while potentially providing end users with more cloud-based options to better utilise their data and applications.

    Yet there are important differences between banking and cloud computing. Unlike financial institutions, which can adapt through open application programming interfaces (APIs) and partnership models, cloud providers operate at a scale that requires enormous capital investment in infrastructure, networking, and security. Regulators must be careful not to create unintended consequences: excessive restrictions, for example, could reduce the incentive for hyperscalers to invest in next-generation cloud technologies.

    One similarity that does exist between banking and cloud computing is the presence of emerging alternatives in the cloud market that are poised to compete with the hyperscalers.
    This is where Broadcom's acquisition of VMware and the resulting business model adjustments become particularly relevant. For businesses looking to escape single-vendor cloud dependency, VMware Cloud Foundation (VCF) presents viable options, including private cloud, public cloud, or a combination of both through a hybrid cloud model. While hyperscaler cloud platforms promote open ecosystem engagement and standards, they still pivot towards a design strategy that reinforces their own ecosystems. VCF's architectural principle, by contrast, is based on building for interoperability and offering a consistent, enterprise-grade cloud experience across private and public clouds.

    One of VCF's biggest advantages is its ability to support both virtual machines (VMs) and Kubernetes-based workloads on a single platform. Many enterprises are still running legacy applications that rely on VMs, yet also need to modernise with cloud-native, containerised applications. Instead of forcing businesses to choose between two separate architectures, VCF seamlessly integrates both. It is a perspective that has not escaped Broadcom's competitors in the market. A clear acknowledgment of businesses' reliance on VMs and the slow transition to containerised operations is Red Hat's launch of OpenShift Virtualisation, a competing unified platform designed to manage both virtual machines and containers, helping accelerate the shift toward modernised, container-based workloads.

    Additionally, recent total cost of ownership studies have indicated that VCF delivers 40-52% cost savings compared to bare-metal or alternative cloud-native solutions. This is particularly relevant in an era where businesses are reevaluating cloud costs and looking for ways to optimise spending while maintaining operational flexibility. Security and compliance are also key considerations.
    Many regulated industries, including financial services, healthcare, and government sectors, require hybrid cloud models to comply with data sovereignty laws. VCF enables organisations to deploy a unified cloud infrastructure while ensuring that sensitive workloads remain under direct control. As regulatory conversations evolve, VCF's value proposition, as a flexible, secure, and cost-effective option for end users and an enabler for cloud service providers, aligns well with industry needs and even CMA objectives.

    Regulating dominant cloud providers is a complex balancing act. If done well, it could promote a healthier, more competitive ecosystem, ensuring that businesses can choose cloud providers based on functionality rather than contractual obligations. If done poorly, it may slow down innovation, increase complexity, and create compliance burdens for all providers. It is a balancing act well understood by the CMA, the regulatory body tasked by the UK government with helping drive growth without violating its central mandate of promoting competition and protecting consumers.

    One potential outcome of regulation is that the hyperscalers themselves may be forced to improve. If hyperscalers can no longer rely on egress fees and licensing constraints to retain customers, they may need to rethink service deprecation policies, reduce redundant offerings, and provide clearer pricing structures. In a competitive landscape that values service quality over forced loyalty, businesses could ultimately benefit from more transparency, innovation, and choice.

    Yet, as in the case of financial services, regulation alone will not create more competition in the cloud marketplace. The presence of competitive options and enablers should be a factor when considering regulatory measures. In addition, businesses should take on greater responsibility for cloud architecture decisions, ensuring that vendor flexibility is a key consideration from the outset.
    Too often, organisations become entrenched in a single-provider cloud model not because of external constraints, but because of internal planning deficiencies. Choosing among private, public, and/or hybrid clouds requires investment in integration, governance, and skill development; regulation can lower barriers, but companies must still take proactive steps to build adaptable, future-proof IT environments.

    The CMA's scrutiny of the cloud market represents a critical turning point for the cloud computing industry. If regulators successfully lower switching costs, enforce fairer licensing policies, and promote data portability, end users will have more options, and other cloud providers will be better positioned to capitalise on a more competitive market. However, success won't be determined by regulation alone. Regulation can create opportunities, but those opportunities need to be seized within the affected market. The hyperscalers are not passive players; they will adapt, innovate, and respond to regulatory changes in ways that could preserve their market dominance. Broadcom's opportunity lies in its ability to clearly articulate the value of various cloud models, simplify adoption, and prove the long-term benefits of its platform for both end users and other cloud service providers.

    The cloud landscape is evolving, and the next 12 months will determine whether the hyperscalers maintain their stronghold or a more competitive and flexible cloud market takes root. Either way, the cloud market will not look the same a year from now, and given the enterprise footprint of VMware, Broadcom has a unique chance to shape its future.

    Bola Rotibi is chief of enterprise research at CCS Insight
  • What happened when a tech journalist experimented with AI on a PC?
    www.computerweekly.com
    Over the past few months, the editorial team at Computer Weekly's French sister title, LeMagIT, has been evaluating different versions of several free downloadable large language models (LLMs) on personal machines. These LLMs currently include Google's Gemma 3, Meta's Llama 3.3, Anthropic's Claude 3.7 Sonnet, several versions of Mistral (Mistral, Mistral Small 3.1, Mistral Nemo, Mixtral), IBM's Granite 3.2, Alibaba's Qwen 2.5, and DeepSeek R1, which is primarily a reasoning overlay on top of distilled versions of Qwen or Llama.

    The test protocol consists of trying to transform interviews recorded by journalists during their reporting into articles that can be published directly on LeMagIT. What follows is the LeMagIT team's experience.

    We are assessing the technical feasibility of doing this on a personal machine and the quality of the output with the resources available. Let's make it clear from the outset that we have never yet managed to get an AI to do this job properly for us. The only point of this exercise is to understand the real possibilities of AI based on a concrete case.

    Our test protocol is a prompt that includes 1,500 tokens (6,000 characters, or two magazine pages) to explain to the AI how to write an article, plus an average of 11,000 tokens for the transcription of an interview lasting around 45 minutes. Such a prompt is generally too heavy to fit into the free window of an online AI. That is why it makes sense to download an AI onto a personal machine, since the processing remains free, whatever its size.

    The protocol is launched from the LM Studio community software, which mimics the online chatbot interface on the personal computer. LM Studio has a function for downloading LLMs directly. However, all the LLMs that can be downloaded free of charge are also available on the Hugging Face website. Technically, the quality of the result depends on the amount of memory used by the AI.
    At the time of writing, the best result is achieved with an LLM of 27 billion parameters encoded on 8 bits (Google's Gemma, in the "27B Q8_0" version), with a context window of 32,000 tokens and a prompt length of 15,000 tokens, on a Mac with an M1 Max SoC and 64 GB of RAM, 48 GB of which is shared between the processor cores (orchestration), the GPU cores (vector acceleration for searching for answers) and the NPU cores (matrix acceleration for understanding input data).

    In this configuration, the processing speed is 6.82 tokens/second. The only way to speed up processing without damaging the result is to opt for an SoC with a higher clock frequency or with more processing cores. In this configuration, LLMs with more parameters (32 billion, 70 billion, etc) exceed memory capacity and either do not load at all, or generate truncated results (a single-paragraph article, for example). With fewer parameters, they use less memory, but the quality of writing falls dramatically, with repetitions and unclear information. Using parameters encoded on fewer bits (3, 4, 5 or 6) significantly speeds up processing, but also reduces the quality of writing, with grammatical errors and even invented words.

    Finally, the size of the prompt window in tokens depends on the size of the data to be supplied to the AI. It is non-negotiable. If this size saturates memory, then you should opt for an LLM with fewer parameters, which will free up RAM to the detriment of the quality of the final result.

    Our tests have resulted in articles that are well written. They have an angle, a coherent chronology of several thematic sections, quotations in the right place, a dynamic headline and a concluding sentence. However, we have never managed to obtain a publishable article.
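A rough sanity check on the figures above: an LLM's weight footprint is roughly parameters times bits per parameter, divided by eight. The sketch below shows why a 27-billion-parameter model at 8-bit quantisation fits a 48 GB shared-memory budget while a 70-billion-parameter one does not, and how long generation takes at the measured speed (the ~1,000-token article length is an assumed figure; the other numbers come from the tests described above).

```python
# Back-of-envelope model-memory and generation-time estimates.

def weight_gb(params_billion: float, bits: int) -> float:
    """Approximate weight footprint in GB (1 GB = 10**9 bytes): params x bits / 8."""
    return params_billion * 1e9 * bits / 8 / 1e9

# Gemma 27B at Q8: ~27 GB of weights, leaving headroom for the
# 32,000-token context window inside the 48 GB shared-memory budget.
gemma_27b_q8 = weight_gb(27, 8)   # ~27 GB

# A 70B model at Q8 needs ~70 GB and cannot load in 48 GB, matching
# the "doesn't load or truncates output" failures described above.
llama_70b_q8 = weight_gb(70, 8)   # ~70 GB

# Generating a ~1,000-token article at the measured 6.82 tokens/second:
article_tokens = 1000
minutes = article_tokens / 6.82 / 60   # roughly two and a half minutes

print(f"27B @ Q8: {gemma_27b_q8:.0f} GB | 70B @ Q8: {llama_70b_q8:.0f} GB")
print(f"~{minutes:.1f} min to generate {article_tokens} tokens")
```

The same arithmetic explains why dropping to 4-bit or 3-bit encodings halves or nearly quarters the footprint, at the cost of the writing-quality degradation the tests observed.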
    Regardless of the LLM used, including DeepSeek R1 and its supposed reasoning abilities, the AI is systematically incapable of correctly prioritising the various points discussed during the interview. It always misses the point and often generates pretty but uninteresting articles. Occasionally, it will write an entire, well-argued passage to tell its readers that the company interviewed... has competitors.

    LLMs are not all equal in the vocabulary and writing style they choose. At the time of writing, Meta's Llama 3.x produces sentences that are difficult to read, while Mistral and, to a lesser extent, Gemma have a tendency to write like marketing agencies, using flattering adjectives devoid of concrete information. Surprisingly, the LLM that writes most beautifully in French within the limits of the test configuration is China's Qwen. Initially, the most competent LLM on our test platform was Mixtral 8x7B (with an x instead of an s), which mixes eight thematic LLMs, each with just 7 billion parameters.

    However, the best options for fitting Qwen and Mixtral into the 48 GB of our test configuration are, for the former, a version with only 14 billion parameters and, for the latter, parameters encoded on 3 bits. The former writes unclear and uninteresting information, even when mixed with DeepSeek R1 (DeepSeek R1 is only available as a distilled version of another LLM, either Qwen or Llama). The latter is riddled with syntax errors.

    The version of Mixtral with parameters encoded on 4 bits offered an interesting compromise, but recent versions of LM Studio, which have a larger memory footprint, prevent this AI from working properly: Mixtral 8x7B Q4_K_M now produces truncated results. An interesting alternative to Mixtral is the very recent Mistral Small 3.1 with 24 billion parameters encoded on 8 bits, which, according to our tests, produces a result of a quality fairly close to Gemma 3.
    What's more, it is slightly faster, at 8.65 tokens per second.

    According to the specialists interviewed by LeMagIT, the hardware architecture most likely to support the work of generative AI on a personal machine is one where the same RAM is accessible to all types of computing cores at the same time. In practice, this means using a machine based on a system-on-chip (SoC) processor where the CPU, GPU and NPU cores share the same physical and logical access to the RAM, with data located at the same addresses for all the circuits.

    When this is not the case, that is, when the personal machine has an external GPU with its own memory, or when the processor is indeed an SoC that integrates the CPU, GPU and NPU cores but each has access to a dedicated part of the common RAM, the LLMs need more memory to function. This is because the same data needs to be replicated in each part dedicated to the circuits.

    So, while it is indeed possible to run an LLM with 27 billion parameters encoded on 8 bits on a Silicon M Mac with 48 GB of shared RAM, using the same evaluation criteria we would have to make do with an LLM of 13 billion parameters on a PC where a total of 48 GB of RAM is divided into 24 GB for the processor and 24 GB for the graphics card.

    This explains the initial success of Silicon M-based Macs for running LLMs locally, as this chip is an SoC where all the circuits benefit from UMA (unified memory architecture) access. In early 2025, AMD imitated this architecture in its Ryzen AI Max SoC range. At the time of writing, Intel's Core Ultra SoCs, which combine CPU, GPU and NPU, do not have such unified memory access.

    Writing the prompt that explains how to write a particular type of article is an engineering job.
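The unified-versus-split memory arithmetic described above can be sketched as follows. This is a deliberately crude model, assuming that on a split design the model is bounded by a single memory pool (because data is replicated per pool) and ignoring context buffers and runtime overhead, which is why the practical ceilings quoted in the article (27B and 13B) sit below the raw figures:

```python
# Crude ceiling on model size for a given memory budget and quantisation level.

def max_params_billion(budget_gb: float, bits: int) -> float:
    """Largest model (billions of parameters) whose raw weights fit the budget."""
    return budget_gb * 8 / bits

# Unified memory (Apple Silicon, AMD Ryzen AI Max): one 48 GB pool
# shared by CPU, GPU and NPU cores.
uma_ceiling = max_params_billion(48, 8)    # 48B of raw weights;
                                           # ~27B once context/runtime overhead is counted

# Split design: 48 GB total, but divided into 24 GB for the CPU and
# 24 GB for the GPU, so the model is bounded by a single 24 GB pool.
split_ceiling = max_params_billion(24, 8)  # 24B raw; ~13B in practice

print(f"UMA 48 GB: up to ~{uma_ceiling:.0f}B raw at Q8")
print(f"Split 24+24 GB: up to ~{split_ceiling:.0f}B raw at Q8")
```

The same helper also shows the appeal of lower-bit quantisation: halving the bits doubles the raw parameter ceiling for the same pool.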
    The trick to getting off to a good start is to give the AI a piece of work that has already been done by a human, in our case a final article accompanied by the transcript of the interview, and to ask what prompt it should have been given to do the same job. Around five very different examples are enough to determine the essential points of the prompt to be written for a particular type of article. However, the AI systematically produces prompts that are too short, which will never be enough to write a full article. So the job is to take the leads it gives us and back them up with all the business knowledge we can muster.

    Note that the more pleasantly the prompt is written, the less precisely the AI understands what is being said in certain sentences. To avoid this bias, avoid pronouns as much as possible ("he", "this", "that", etc) and repeat the subject each time ("the article", "the article", "the article"...). This makes the prompt harder for a human to read, but more effective for the AI.

    Ensuring that the AI has sufficient latitude to produce varied content each time is a matter of trial and error. Despite our best efforts, all the articles produced by our test protocol have a family resemblance. It would take considerable effort to synthesise the full range of human creativity in the form of different competing prompts.

    Within the framework of our test protocol, and in the context of AI capabilities at the time of writing, it is illusory to think that an AI would be capable of determining on its own the degree of relevance of all the comments made during an interview.
    Trying to get it to write a relevant article therefore necessarily involves a preliminary stage of stripping down the transcript of the interview. In practice, stripping the transcript of all the elements that are unnecessary for the final article, without eliminating the contextual elements that will not appear in the final article but that guide the AI towards better results, requires the transcript to be rewritten. This rewriting costs human time that benefits the AI's work, but not the journalist's.

    This is a very important point: from that point onwards, AI stops saving the user time. As it stands, using AI means shifting working time from an existing task (writing the first draft of an article) to a new task (preparing data before delivering it to an AI).

    Secondly, the 1,500-token description of the outline to follow when writing an article only works for one particular type of article. In other words, you need to write one outline for articles about a startup proposing an innovation, a completely different outline for those about a supplier launching a new version of its product, yet another for a player setting out a new strategic direction, and so on. The more use cases there are, the longer the upstream engineering work will take.

    Worse still, to date our experiments have only involved writing articles based on a single interview, usually at press conferences, that is, in a context where the interviewee has already structured his or her comments before delivering them. In other words, after more than six months of experimentation, we are still only at the simplest stage. We have not yet been able to invest time in more complex scenarios, which are nevertheless the daily lot of LeMagIT's production, starting with articles written on the basis of several interviews.

    The paradox is as follows: for AI to relieve a user of some of their work, that user has to work more.
    On the other hand, on these issues, AI on a personal machine is on a par with paid AI online.

    Read more about using LLMs

    Google claims AI advances with Gemini LLM: Code analysis, understanding large volumes of text and translating a language by learning from one read of a book are among the breakthroughs of Gemini 1.5.

    Prompt engineering is not for dummies: This is a guest post written by Sascha Heyer in his capacity as senior machine learning engineer at DoiT International.

    What developers need to know about large language models: A developer strolls casually into work and gets comfy in their cubicle. Suddenly there's an update alert on the laptop screen: a new generative artificial intelligence function has been released.
  • Want to learn Linux from legends? This mentorship pairs you with top developers
    www.zdnet.com
    You'll get priceless Linux experience from developers such as Linux Foundation Fellow Shuah Khan and kernel stable maintainer Greg Kroah-Hartman. Here's how to apply.
  • Garmin Connect Plus brings AI to your wrist, but it's not free
    www.zdnet.com
    If you want AI-powered 'Active Intelligence,' training guidance, and more, you'll need to pay for Connect Plus.
  • Hisense Unveils Its 2025 ULED TV Range, Including Multiple 100-Inch Models
    www.forbes.com
    Global technology brand Hisense has taken the wraps off its 2025 premium ULED TV range, revealing its biggest range of screen sizes to date and plenty of features for both film fans and gamers to be excited about.

    All of the new ULED series, claims Hisense, are powered by next-generation AI processing which, in Hisense's own words, "works effortlessly behind the scenes to deliver smarter, more intuitive picture enhancements, without the need for manual adjustments". This processing can work, it's claimed, on everything from contrast to colour accuracy and motion clarity in real time.

    [Hisense's four-series strong range of 2025 ULED Mini LED TVs includes multiple 100-inch models. Photo: Hisense]

    Other key features Hisense claims for its 2025 ULED TV range include advanced Quantum Dot color technology, improved local dimming systems, deeper and more natural black tones, richer colors and more brightness, all with a focus on getting more impact out of the high dynamic range picture technology that's now being used to enhance the look of more and more films and TV shows on both streaming services and 4K Blu-ray discs.

    Hisense's new focus on AI even extends to the new ULED range's audio, as the sets offer an AI-enhanced system that apparently optimizes Dolby Atmos mixes to create a more theatrical, room-filling sound stage.

    On the gaming front, most models in Hisense's latest ULED range support frame rates up to 165Hz; offer Dolby Vision Game modes, so that you can enjoy gaming in Dolby's premium HDR format without high levels of input lag; and even, apparently, support AI-driven motion processing.
Though this latter feature presumably increases the TVs input lag, and so likely wont be a good option to activate for fast-reaction games such as Call Of Duty.Interestingly, it also appears that all of Hisenses new ULED TVs will use either the Google TV smart system complete with Amazon Alexa, Google Assistant or Apple Homekit compatibility or Fire TV, rather than Hisenses own VIDAA smart platform. Hisenses announcement today does only apply to its U.S. TV range, though, so its possible that VIDAA will still appear on some of its European ULED models given Google TVs issues with carrying some of that territorys biggest terrestrial broadcaster catch up services.Lets look now at each of the four new ULED series in turn, starting with the flagship U9s.2025 U9 SeriesPreviously only available in 75 and 85-inch screen sizes, Hisenses U9 series for 2025 adds a 65-inch option as well. The U9QGs are powered by Hisenses most powerful new AI-bolstered processor, the Hi-View AI Engine X, which includes such features as AI 4K Upscaling, AI Super Resolution and AI Noise Reduction features for enhanced conversion of sub-4K sources to the TVs native 4K resolution, plus AI Local Dimming, AI HDR Upscaler and AI Depth Enhancement features working in tandem to deliver general brightness, contrast and depth of field improvements.Hisense claims that the new U9 series delivers higher peak brightness levels too (though it doesnt quote an actual number on this), while new ultra low reflection panels both stop reflections getting between you and the pictures youre watching, as well as delivering much wider effective viewing angles.The 165Hz gaming support is present and correct on these flagship ULED models, along with variable refresh rate support that includes AMDs FreeSync Premium Pro system.A so-called CineStage X Surround system, finally, delivers Dolby Atmos and DTS Virtual X playback over a 5.1.2-channel speaker system (4.1.2 on the 65-inch model).If you're wondering what a 100-inch 
TV looks like in a living room, here's a rendition of the 100-inch U8QG in a swanky apartment. Photo: Hisense

2025 U8 Series

The new U8QG TV series will include screen sizes up to 100 inches, and offer enhanced brightness that this time Hisense is prepared to put a number on: 5,000 nits. The U8QGs are also claimed to carry more local dimming zones than their predecessors, while processing comes from the latest version of Hisense's Hi-View AI Engine Pro system. Quantum Dots are on hand to deliver a wider color range, while the U8QGs maintain the 165Hz gaming refresh rate support. There's support for all four key HDR formats (HDR10, HDR10+, HLG and Dolby Vision), and the sets have achieved IMAX Enhanced certification, indicating that they've been deemed capable of doing justice to IMAX Enhanced's special video mastering system. The U8QGs' audio, finally, has 82W of power and a 4.1.2-channel count on hand to play back soundtracks that can include the Dolby Atmos format.

The new Hisense U7QG. Photo: Hisense

2025 U7 Series

These mid-range Hisense ULED models are set to combine premium feature counts with a wide range of screen sizes (from 55 to 100 inches) and aggressive pricing. Hisense hasn't yet revealed pricing details on its new ULED TVs, but it does state in its unveiling information that the U7 series will feature an accessible sub-$1K price point.
This price will presumably only apply to the 55-inch model, of course; somehow I don't think you'll be able to get the 100-inch model for less than $1K!

Despite their relative affordability, the U7QGs will continue to use Mini LED lighting (actually, Hisense calls the U7QG's lighting system Mini LED Pro). Hisense is pitching this range as particularly appropriate for gamers, emphasizing that as well as still supporting a 165Hz game mode, the U7QGs carry a Game Booster feature that claims to take frame rates up to 288Hz; can handle variable refresh rates, including AMD's FreeSync Premium Pro format; and carry a Dolby Vision Gaming mode. You still get anti-glare screens even at this level of Hisense's ULED range, though the U7QGs' audio provision drops to a (still promising) 60W of power across a 2.1.2-channel speaker system.

This is what the 2025 Hisense U6QG TVs will look like. Photo: Hisense

2025 U6 Series

This most affordable series in Hisense's 2025 ULED TV range will be available in screen sizes ranging from 55 all the way up to 100 inches, giving fans of king-sized TVs a potentially very affordable way of achieving their home theater dreams. They still use Mini LED lighting and continue to support all four of the main HDR picture formats. There are a few pretty substantial feature compromises and differences to take on board, though, starting with the fact that, unlike Hisense's other new ULED models, this one turns to Amazon's Fire TV smart system rather than Google TV. Maximum frame rate support for gamers drops to 144Hz, too, while the AMD FreeSync support drops to the Premium rather than Premium Pro level.
The U6QGs' audio setup diminishes to a relatively basic 2.1-channel affair too, though there's still support for Dolby Atmos decoding. It's not clear from Hisense's announcement whether the new U6 series still gets anti-glare screens, though Hisense does confirm that this series will not benefit from the Hi-View AI Engine Pro processor that the U8 and U7 series get.

Hisense concludes its 2025 ULED TV reveal by stating that more details, including on-sale dates and pricing, will be revealed later in the year.

Related Reading
Hisense Reveals Two New Giant TVs: One Debuting Tri-Chroma LED TV Technology And One Using MicroLEDs
Sony Unveils Eye-Popping Next-Gen TV Technology (And Again, It Isn't OLED)
TCL Unveils New High-Performance Mini LED TV Range With Bang & Olufsen Sound
  • Warner Bros.' push to curb second-hand sales led to a groundbreaking game mechanic
    www.techspot.com
In brief: The Middle-earth games from the recently closed studio Monolith are largely known for their unique Nemesis system, which procedurally generated new enemy behavior based on the player's actions. Critics praised Monolith for its creativity, but a former Warner Bros. executive recently revealed that the design actually emerged from a very pragmatic goal: discouraging players from trading in their games.

In a recent video that was quickly removed, former WB Games vice president Laura Fryer revealed the inspiration behind an innovative game mechanic from Monolith's Middle-earth games. Although players often criticize game companies for using gimmicks to further monetize their products, the same goal was behind the creation of the well-regarded Nemesis system.

The first in a duology of action-adventure games set in J. R. R. Tolkien's world, 2014's Middle-earth: Shadow of Mordor was developed during a time when large publishers frequently complained about second-hand game sales. Multiplayer and large open worlds became more common in big-budget games because players traded them in less often, which presumably meant that a higher percentage of customers bought them new instead of used.

Fryer explained that Monolith's game engine didn't support the open worlds common in other major titles, and the company was unwilling to implement multiplayer. Instead, the studio tried to increase player retention by making its game more dynamic and less predictable.

Monolith's answer, the Nemesis system, enabled NPCs to remember players' actions and change their behavior accordingly during later encounters. Although the mechanic was popular, it remains unclear whether it helped minimize trade-ins and second-hand sales.
Nowadays, publishers prefer to lock customers in with digital purchases and subscriptions, which can't be traded.

Unfortunately, despite implementing the Nemesis system in two successful Middle-earth games, Warner Bros. suddenly shut down Monolith and other subsidiary studios last month. The company was developing a Wonder Woman title that also implemented the memorable mechanic.

To make matters worse, the Nemesis system likely won't appear in another game for the foreseeable future because Warner Bros. patented it. The publisher currently has no other games using the mechanic in production, and holds the patent until August 11, 2036. Fittingly, YouTube removed Fryer's video following a Warner Bros. copyright claim.

Before the Middle-earth titles, Monolith's three-decade history included well-regarded PC classics like Blood, No One Lives Forever, Shogo: Mobile Armor Division, and F.E.A.R.
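The core idea Fryer describes, NPCs that remember past encounters and change their behavior accordingly, can be illustrated with a toy sketch. This is not Warner Bros.' patented implementation, just a minimal illustration of the remember-and-react loop; the class, names, and promotion rule are all invented for the example.

```python
class Orc:
    """A toy NPC that remembers encounters with the player.

    Loosely inspired by the Nemesis system's remember-and-react loop;
    all names and rules here are illustrative, not the real mechanic.
    """

    def __init__(self, name):
        self.name = name
        self.rank = 1          # orcs climb the hierarchy by beating the player
        self.grudges = []      # memory of past fights, oldest first

    def record_encounter(self, player_won):
        # A defeat of the player promotes the orc, so the next meeting
        # plays out against a stronger, differently behaving enemy.
        self.grudges.append("defeated" if player_won else "victorious")
        if not player_won:
            self.rank += 1

    def taunt(self):
        # Dialogue branches on the most recent shared history.
        if not self.grudges:
            return f"{self.name} sizes you up for the first time."
        if self.grudges[-1] == "victorious":
            return f"{self.name} (rank {self.rank}): 'Back for another beating?'"
        return f"{self.name} (rank {self.rank}): 'This time you won't escape!'"


orc = Orc("Ratbag")
print(orc.taunt())
orc.record_encounter(player_won=False)  # the orc wins; it is promoted
print(orc.taunt())
```

The retention effect the article describes falls out of exactly this kind of state: because each playthrough accumulates unique enemy histories, no two players' worlds stay identical, which makes the game less predictable and, in theory, harder to put down.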
  • Mozilla Firefox will soon reduce the need for dangerous DLL injection for enterprise users
    www.techspot.com
For context: Admins and programmers sometimes use "DLL injection" to insert customized code into a process or program. They generally use this method to change or add to the behavior of applications such as browsers. However, it can also cause compatibility, reliability, or security issues when these programs receive regular updates.

Mozilla recently released Firefox version 136.0.3, which will likely be the last minor release before a new iteration drops in April. Further out, the company plans to ship a new enterprise-focused upgrade in May to make the browser more stable and safer. Starting with version 138, Firefox will provide a new way to prevent data loss incidents without resorting to troublesome DLL injection practices.

Writing on Mozilla Hacks, Firefox developer Haik Aftandilian explained DLL injection and how corporations use the technique to customize the browser's internal routines. In essence, external code libraries, which Firefox users can list through the "about:third-party" internal page, can extend Firefox's functionality or usefulness.

However, DLL injection is still problematic because different codebases interact at the most fundamental level of the execution chain. "The internet is built upon software interoperating over documented standards," Aftandilian said, and injecting foreign DLLs into the undocumented internals of an application isn't exactly the best way to achieve software cooperation.

Modern web browsers like Firefox are modified, improved, and updated monthly, and foreign DLLs need to keep up with this development rate to avoid serious bugs or compatibility issues. What works today might bork the whole system next month. Unpredictable behaviors caused by DLL injection are difficult to test and debug, often requiring software updates for both the browser and the offending DLL.

Firefox 138 will include a new SDK for Data Loss Prevention (DLP) programs, which should be able to interact with the browser without resorting to DLL injection.
These apps monitor the system for potential data loss incidents, an essential focus for enterprise organizations these days.

May's Firefox release includes the Content Analysis SDK, a lightweight protocol similar to technology developed by Google for Chrome. The SDK bridges the gap between the browser and a DLP agent. Mozilla's version is compatible with Chrome's implementation, so software vendors can provide a single DLP agent that works with both browsers.

The Content Analysis SDK is intended for business use cases and will only be available in Firefox 138 through Firefox Enterprise Policies. Admins can configure policies to customize the browser's behavior across a fleet of computers, such as limiting browser extensions or setting specific security options.
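To illustrate the Enterprise Policies mechanism the article refers to, here is a minimal policies.json sketch of the kind an admin might deploy across a fleet. The keys shown (ExtensionSettings, DisableTelemetry) are standard Firefox policy names used here purely as examples of fleet configuration; this is not Mozilla's Content Analysis setup itself, and any real deployment should follow Mozilla's policy documentation.

```json
{
  "policies": {
    "ExtensionSettings": {
      "*": {
        "installation_mode": "blocked"
      },
      "uBlock0@raymondhill.net": {
        "installation_mode": "allowed"
      }
    },
    "DisableTelemetry": true
  }
}
```

The file blocks all extension installs except an allow-listed one and turns off telemetry. On desktop Firefox, a policies.json placed in a distribution directory inside the installation folder applies to every profile on the machine, which is what makes this approach practical for managing many computers at once.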
  • The 55-inch Amazon Fire TV 4-Series is marked down to $310 today
    www.digitaltrends.com
Amazon's Big Spring Sale is off and running, which means you'll be able to save up to 40% on select items. While a majority of these markdowns are on outdoor-friendly devices, Amazon is also offering discounts on some of its TVs, including the Fire TV 4-Series. Right now, when you purchase the Amazon 55-inch Fire TV 4-Series 4K LED, you'll only pay $310. The full price of this TV is $520.

The Amazon Fire TV 4-Series is an entry-level LED TV that delivers solid picture quality and decent motion performance. When watching TV in a brightly lit room, you may notice a bit of glare on the screen, so the best viewing space is a dark one. Still, the 4-Series is able to produce a colorful picture with great contrast levels. The TV also has next to no input lag when set to Game mode, making it a good choice for modern consoles and PCs.

The Fire TV 4-Series uses Amazon's Fire TV OS for all things apps, casting, and smart home controls. With the included Alexa Voice Remote, you'll be able to search for movies and shows, check news and weather, and even adjust smart lights and thermostats from the comfort of the couch. The 4-Series even has a decent set of speakers, though we always recommend looking at some of the best soundbar deals for an improved audio experience.

Save $210 when you purchase the Amazon 55-inch Fire TV 4-Series 4K LED today. We also recommend you take a look at our roundups of the best TV deals and best Amazon deals for additional markdowns on top AV products!
  • NYT Mini Crossword today: puzzle answers for Thursday, March 27
    www.digitaltrends.com
Love crossword puzzles but don't have all day to sit and solve a full-sized puzzle in your daily newspaper? That's what The Mini is for!

A bite-sized version of the New York Times' well-known crossword puzzle, The Mini is a quick and easy way to test your crossword skills daily in a lot less time (the average puzzle takes most players just over a minute to solve). While The Mini is smaller and simpler than a normal crossword, it isn't always easy. Tripping up on one clue can be the difference between a personal-best completion time and an embarrassing solve attempt.

Just like our Wordle hints and Connections hints, we're here to help with The Mini today if you're stuck and need a little help. Below are the answers for the NYT Mini crossword today.

Across
- Something from pumping: GAS
- Journey's "___ Stop Believin'": DONT
- With 7-Across, it often falls to pieces: TOWER
- Bacterium that can prompt a food recall: ECOLI
- See 5-Across: JENGA

Down
- "Leave this instant!": GONOW
- Perspective: ANGLE
- One of 354 to reach the crown inside the Statue of Liberty: STAIR
- Art ___ (architectural style): DECO
- Boeing 757 or Airbus A350: JET