• Monster Hunter Wilds finally gets a mod that lets you ride around on a Thomas the Tank Engine Seikret, so all we need now is Macho Man Arkveld
    www.vg247.com
    Yep, it's happened. Monster Hunter Wilds finally has its first Thomas the Tank Engine mod, and it's arrived just before the game turns a month old. Rather than turning a monster battle into a surreal train fight, it makes sure you'll be able to choo-choo your way through the game by swapping Sir Topham Hatt's best loco in for your trusty Seikret.
  • Own Sociable Soccer 24? You'll Be Able To Upgrade To The New Game For Free
    www.nintendolife.com
    UK-based developer Tower Studios has announced that Sociable Soccer 25 will be making its way to the Nintendo Switch, launching next week on 3rd April 2025.
    Now, the good news is that if you're an owner of Sociable Soccer 24 on Switch, you'll be able to upgrade to the new game for free via downloadable content (take note, EA). Tower Studios states that this upgrade will include "all new licensed players and features" from the upcoming title.
    Otherwise, you're looking at £19.99 / €24.99 / $24.99 for Sociable Soccer 25 via the Switch eShop, with the game boasting around 1,400 club and national teams and featuring more than 13,000 FIFPRO-licensed players. It also features fast-paced arcade gameplay reminiscent of its main inspiration, Sensible Soccer, with multiple difficulty settings available depending on your preferences.
    Let's take a look at the full feature list:
    - More than 13,000 FIFPRO licensed professional players to collect and upgrade
    - Control and manage your squad with no loot boxes or costly DLC
    - Huge online league system with 10 divisions to conquer
    - Up-to-date player, team and competition data for 1,400 world teams
    - Over 80 world football trophies to win
    - Fast, intuitive arcade gameplay, easy to pick up but hard to master
    - Two Player Couch Clashes to relive the good old days, playing against a friend in the ultimate face-to-face match-up!
    - Clan play representing the club that you love against rival fans
    - Friendly, Career and Trophy game modes all included
    - Multiple ways to play: Coach (just watch and learn), Casual and Pro
    - A host of unique referees, each with a different personality and style, to keep you on your toes
    - Chat and send emojis during matches to trash talk your opponent!
    - Aftertouch, powered shots, sprint and energy systems
    Images: Tower Studios
    Definitely worth a punt. Do you already own Sociable Soccer 24?
    Will you be upgrading to the new game? Let us know with a comment down in the usual place.
  • Pokémon TCG Pocket Update Introduces Shinies And Ranked Battles
    www.nintendolife.com
    Ooh, shiny. The latest major update for Pokémon Trading Card Game Pocket is now available, bringing with it a total of 72 normal cards and a plethora of secret cards via the new 'Shiny Revelry' booster pack.
    As the name implies, the new booster pack introduces Shiny Pokémon for the first time, bringing back a fan-favourite mechanic that started all the way back in Pokémon Gold and Silver. Highlights include Charizard, Trainer Red, Meowscarada, Tatsugiri, and more.
  • Times Internet spin-out Abound raises $14M to let more Indian Americans send money home
    techcrunch.com
    Abound, a remittance app that was spun off by Times Internet in 2023, has raised $14 million in its first external funding round as it aims to reach more Indian expats in the U.S.
    Remittance flows to India are rising as the Indian diaspora spreads worldwide. In 2024, the South Asian country recorded $129.1 billion in remittances, a 14.3% share of the global market and the largest of any country, according to a World Bank report. Abound aims to tap this growth with its mobile app.
    Indians are among the largest immigrant groups in the U.S. "The average household income in the U.S. is close to $58,000, and the average Indian household income is about $150,000. That tells you that Indian expats are wealthy, affluent, and yet they're vastly underserved in terms of products and services that are geared for them," said Nishkaam Mehta, CEO of Abound, in an interview.
    Mehta, who worked at Hulu as head of its mobile strategy and growth for more than four years, joined Times Internet in 2019, after meeting its vice chairman Satyan Gajwani, to create a super app for non-resident Indians. The startup was incubated at the tech arm of Indian media conglomerate The Times of India Group.
    Initially named Times Club, Abound allows users to send money to India, earn rewards, and get cashback on services including live sports streaming, grocery shopping, and OTT subscriptions.
    The firm plans to explore avenues to let users access high-yield savings, India-focused investments, and cross-border credit solutions. "In our model as a super app, we envision a role for banks themselves to be a part of the platform," Mehta told TechCrunch.
    The company claims it has processed over $150 million in remittances in total from its more than 500,000 monthly transacting users, and that its revenue has increased by 50% month-over-month since launch. Abound's remittance volume increased by 15% every month, and the startup processed $110 million to $120 million in the past 12 months, Mehta said.
    Abound generates ad revenue from rewards and a foreign exchange spread on money remittances. Foreign exchange presents significant potential for growth, Mehta stated. The startup said The Times of India's more than 50 million monthly online visitors outside India also help it reach new users and offer a range of rewards.
    "In money remittances, if you purely play the exchange rate game, then you're always acquiring the user," said Mehta. "In our case, because we've got this rewards layer from the Times of India and other local advertisers, we don't have that problem. We can always compete on exchange rates, knowing that we don't have the same customer acquisition cost that the other companies might have."
    This seed round was all-equity and was led by NEAR Foundation, with participation from Circle Ventures, Times Internet, and other investors. The company plans to use the fresh cash to expand its presence, increase its offerings, and improve its tech infrastructure.
    "Traditional banks in the U.S. don't focus on the financial requirements of this segment because there is no banking product built just for the NRI population. We see that as a large gap and opportunity," said Gajwani.
    Following the deal, Times Internet will continue to be the largest stakeholder in Abound.
    Gajwani told TechCrunch that Times Internet would be using its strategic assets to help accelerate Abound's growth.
    The market for platforms enabling foreign remittances is crowded with incumbents such as Western Union, PayPal and MoneyGram, as well as newer players like Remitly and Wise. But Mehta thinks Abound has an edge, as it super-serves users by offering competitive exchange rates as well as rewards and cashback at about 5,000 Indian grocery stores, plus access to live-streamed cricket, by far the most popular sport in India.
    Abound currently has a team of 40 people, primarily based in India. It plans to expand its headcount and set up an executive team in the U.S. as well.
    In time, the firm plans to enter markets such as Canada, Singapore and the UAE, which all have big populations of non-resident Indians. Nonetheless, Mehta said the immediate focus is to cement its footing in the U.S. and then run pilots in foreign markets.
  • Pershing GTX116 Yacht Employs Caracols Robotic 3D Printing for Superstructure Components
    3dprintingindustry.com
    Pershing, an Italian brand under luxury yacht manufacturer Ferretti Group, has integrated large-format additive manufacturing (LFAM) into its latest sport utility yacht, the GTX116. Developed in collaboration with Caracol, the project showcases how 3D printing is reshaping yacht production by replacing traditional fiberglass molding with robotic extrusion systems.
    The yacht's side air intake grilles and the visor above the windshield were fabricated using Caracol's Heron AM platform. This marks a significant milestone in the adoption of additive manufacturing for high-end marine applications, where performance, customization, and aesthetic value are paramount.
    3D printed intake grilles. Photo via Caracol.
    The additive manufacturing process and its benefits for the Pershing GTX116 air grilles
    Manufactured at Caracol's facility using its Heron 300 system, the air grilles span 4.2 meters in length and were printed using ASA (acrylonitrile styrene acrylate) reinforced with 20% glass fiber (GF), a material selected for its strength and resistance to marine environments. The printing process, completed in 72 hours, resulted in a 40 kg component measuring 4200 x 400 x 400 mm that was later finished with a gel coat for durability and visual appeal.
    This shift from traditional fiberglass lamination to robotic additive manufacturing offers numerous advantages. Yacht grilles and other custom superstructures typically require mold production, intensive manual labor, and long lead times.
    Caracol's LFAM approach eliminates the need for tooling, enabling direct-from-CAD production, reducing steps, and enhancing design freedom. According to Ferretti Group, the new approach led to a 50% reduction in lead time, a 60% reduction in material waste, and a 15% lighter component. These improvements align with the industry's growing focus on sustainability, cost-effectiveness, and performance.
    Founded in Milan in 2017, Caracol has developed an integrated LFAM platform that combines a proprietary extrusion head, robotic motion systems, and its in-house Eidos Manufacturing software. The company operates Europe's largest LFAM center and has recently expanded to the U.S. and Dubai, with applications in aerospace, marine, energy, and architecture. Its most recent addition, the Vipra AM platform, brings LFAM to metal applications, targeting high-performance components in industries such as construction, aerospace, and shipbuilding.
    Intake grilles with the Heron 300 system. Photo via Caracol.
    3D printing for marine applications
    A notable example is the company's recent collaboration with V2 Group, producing a 6-meter-long 3D printed catamaran. This catamaran was developed with a focus not only on producing a single vessel but also on examining how the manufacturing process could be refined for broader application. The results demonstrated the potential of large-format additive manufacturing to reduce material waste and allow for complex, customizable designs. Both companies plan to continue advancing this method of production, working toward a model that could be commercially viable in the marine industry.
    Other companies are advancing similar initiatives. Dutch start-up Tanaruz Boats uses recycled polypropylene reinforced with 30% glass fiber to produce customizable leisure boats ranging from 4.5 to 10 meters.
    The company aims to scale production while maintaining a circular manufacturing model. In the U.S., the University of Maine (UMaine) made headlines in 2022 after 3D printing two large vessels for the U.S. Marine Corps. Produced at UMaine's Advanced Structures and Composites Center, the boats were designed as logistical support vessels capable of carrying supplies and personnel. The larger vessel can transport two 20-foot shipping containers, while the smaller one can accommodate an entire rifle squad with three days' worth of provisions. This project highlights how additive manufacturing can be used to accelerate production, reduce costs, and deliver mission-specific performance at scale.
    Featured image shows the Pershing GTX116 yacht. Photo via Caracol.
    Who won the 2024 3D Printing Industry Awards? Subscribe to the 3D Printing Industry newsletter to keep up with the latest 3D printing news. You can also follow us on LinkedIn and subscribe to the 3D Printing Industry YouTube channel to access more exclusive content.
  • Trideo Strategic Expansion in the North American 3D Printing Market
    3dprintingindustry.com
    Trideo, an Argentinean 3D printer manufacturer specializing in FDM printers for large-scale and industrial additive manufacturing, is expanding into the North American market.
    Founded in Buenos Aires in 2015 by Laurent Rodriguez, Simon Gabriac, and Nicolas Berenfeld, Trideo has established itself as a provider of high-performance 3D printers. The company expanded to Brazil in 2021 and, more recently, to Mexico. With the opening of its Mexico City office in late 2024, Trideo aims to strengthen its presence in North America and extend its services to clients in the region, including the Caribbean.
    15 kg print of Diego Maradona. Photo via Trideo.
    "Our expansion into Mexico is a strategic step to bring our innovative 3D printing solutions closer to a growing market," said Nicolas Berenfeld, CEO of Trideo. "We are committed to offering cutting-edge technologies and contributing to the development of the additive manufacturing industry in the region."
    The company is targeting rising demand for 3D printing solutions in industries such as automotive, aerospace, manufacturing, and academic research.
    Trideo large-format 3D printers
    "3D printing not only optimizes industrial production but also provides a sustainable alternative to traditional manufacturing methods. Our goal is to keep innovating and delivering solutions that enhance both efficiency and sustainability," said Simon Gabriac, Trideo's CTO.
    One of Trideo's most innovative products is the Big T, a large-format 3D printer with a 1000 x 1000 x 1000 mm build volume. Its capability to produce large-scale parts makes it ideal for industrial applications requiring robust, custom components. Large-scale printing offers several advantages, such as reducing the need for joints in smaller components, enhancing strength and aesthetics, and optimizing production efficiency.
    Model printed with the Big T. Photo via Trideo.
    Other 3D printers include the T600 HT, a 600 x 600 x 400 mm 3D printer featuring a heated chamber reaching 200°C, designed for high-performance materials. The Pellet Extrusion System, an add-on for the Big T, enables waste material recycling, reinforcing sustainability in the manufacturing process. Additionally, Trideo has developed Independent Dual Extruder (IDEX) 3D printers, which allow for simultaneous dual-part printing, optimizing production time.
    Large-scale prototype. Photo via Trideo.
    Manufacturing & Digital Transformation in Mexico & the Caribbean
    Additive manufacturing is ever more present in Latin America. Companies like MANUFACTURA are leveraging 3D printing for sustainable innovation, as seen in their development of bioceramic bricks made from eggshells, an eco-friendly alternative to traditional building materials. Similarly, projects like The Wood Project are using 3D printing to repurpose wood waste, converting it into sustainable structures, further showcasing how digital manufacturing is reshaping production processes.
    Meanwhile, the Caribbean has seen applications of additive manufacturing in construction. A notable development is CyBe Construction's collaboration with Betonindustrie Brievengat (BIB) to build the region's first 3D-printed homes in Curaçao. This initiative aims to address housing shortages by utilizing advanced concrete 3D printing techniques, offering efficient and sustainable building solutions.
    Similarly, Innova Building Solutions Inc, based in Trinidad and Tobago, is pioneering affordable housing through 3D construction. Its conceptual model showcases the potential of 3D printing to create scalable homes ranging from 600 to 1,200 sq. ft., emphasizing design flexibility and rapid construction timelines. For instance, the walls of a 600 sq. ft. home can be printed in approximately 30 hours, significantly reducing traditional construction durations.
    Featured image shows a large-scale prototype. Photo via Trideo.
  • Can regulatory oversight alone unlock cloud competition?
    www.computerweekly.com
    Cloud computing's rise is a success story under scrutiny. It has been nothing short of transformative, enabling businesses to scale operations, innovate rapidly, and optimise costs. It has become an essential pillar of modern enterprise IT, supporting mission-critical workloads across industries. From finance and healthcare to artificial intelligence (AI) and retail, the cloud is now the undisputed underlying infrastructure for digital transformation.
    Yet, as public cloud hyperscalers such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform solidify their dominance, concerns over market competition, licensing restrictions, and barriers to switching are gaining momentum. The UK's Competition and Markets Authority (CMA) is taking a closer look at whether the UK cloud market is functioning fairly or whether customers are being locked into specific ecosystems with limited flexibility.
    These regulatory discussions are unfolding at a pivotal moment for the cloud market. There is a growing number of IT providers offering hybrid and multi-cloud subscription-based services. Broadcom, for instance, with its acquisition of VMware, has a streamlined portfolio focused on private, public, and hybrid cloud flexibility. Given VMware's footprint in enterprise IT, Broadcom is positioning itself as a viable alternative (see VMware Cloud Foundation box) for end users seeking to escape cloud hyperscaler lock-in, as well as a useful partner for cloud service providers seeking to compete with the hyperscalers.
    The question is whether regulatory oversight alone can truly open the market, or whether market forces can further help reduce hyperscaler dominance and end their deep ecosystem entrenchment.
    The cloud computing industry has reached a point where a few major providers dictate the market.
    The CMA's concerns are not unfounded: the three major cloud hyperscalers, AWS, Microsoft, and Google, together control a sizable share of the UK's cloud infrastructure market, benefiting from deep enterprise relationships, extensive service ecosystems, and economies of scale that are difficult to match. And this is true not just in the UK, but in other major markets, from the European Union to the United States. These advantages create structural challenges for organisations seeking to diversify their cloud strategy, whether they are end users that rely on cloud service providers or other cloud service providers seeking to compete with the hyperscalers.
    One of the most significant barriers to competition is the cost of switching providers. Many organisations that initially embraced public cloud find themselves facing egress fees, technical dependencies, and licensing restrictions from hyperscalers that make hybrid cloud adoption by end users more complex and costly than expected. For example, Microsoft's licensing practices have come under scrutiny, with critics arguing that they unfairly raise the cost of running Windows workloads on competing platforms.
    Admittedly, hyperscaler dominance isn't purely a result of anti-competitive behaviour. These companies have earned their positions in part through innovation and strategic investment. AWS revolutionised developer- and infrastructure-focused cloud services, making them easily accessible and aligned to specific operational needs. Microsoft, on the other hand, has leveraged its strong enterprise footprint to make Azure a seamless extension of its software stack.
    Its offerings are widely deployed and deeply embedded in corporate IT infrastructures.
    The challenge regulators face is determining whether these advantages give hyperscalers the ability to lock in customers and create an unfair playing field, or whether they simply reflect the natural evolution of an industry where scale and efficiency drive competitive success.
    The banking industry offers a compelling case study in regulatory-driven competition. Open banking policies forced large financial institutions to provide API access to fintech companies, enabling new players to compete with established banks. The result was a surge in innovation, improved customer services, and increased choice, benefiting both startups and traditional financial institutions.
    Could a similar pro-competition model be applied to cloud computing? If regulators push for greater data portability, reduced egress fees, and fairer licensing models, hyperscalers could be forced to compete more on service quality rather than continue to benefit from vendor lock-in mechanisms. This would encourage a more diverse cloud ecosystem, allowing alternative cloud service providers to expand the overall market, while potentially giving end users more cloud-based options to better utilise their data and applications.
    Yet there are important differences between banking and cloud computing. Unlike financial institutions, which can adapt through open application programming interfaces (APIs) and partnership models, cloud providers operate at a scale that requires enormous capital investment in infrastructure, networking, and security. Regulators must be careful not to create unintended consequences: for example, excessive restrictions could reduce the incentive for hyperscalers to invest in next-generation cloud technologies.
    One similarity that does exist between banking and cloud computing is the presence of emerging alternatives in the cloud market that are poised to compete with the hyperscalers.
    This is where Broadcom's acquisition of VMware and the resulting business model adjustments become particularly relevant.
    For businesses looking to escape single-vendor cloud dependency, VMware Cloud Foundation (VCF) presents viable options, including private cloud, public cloud, or a combination of both in a hybrid cloud model. While hyperscaler cloud platforms promote open ecosystem engagement and standards, they still pivot towards a design strategy that reinforces their own ecosystems. VCF's architectural principle, by contrast, is based on building for interoperability and offering a consistent, enterprise-grade cloud experience across private and public clouds.
    One of VCF's biggest advantages is its ability to support both virtual machines (VMs) and Kubernetes-based workloads on a single platform. Many enterprises are still running legacy applications that rely on VMs, yet also need to modernise with cloud-native, containerised applications. Instead of forcing businesses to choose between two separate architectures, VCF integrates both. It is a perspective that has not escaped Broadcom's competitors. A clear acknowledgment of businesses' reliance on VMs and the slow transition to containerised operations is Red Hat's launch of OpenShift Virtualisation, a competing unified platform designed to manage both virtual machines and containers, helping accelerate the shift toward modernised, container-based workloads.
    Additionally, recent total cost of ownership studies have indicated that VCF delivers 40-52% cost savings compared with bare-metal or alternative cloud-native solutions. This is particularly relevant in an era where businesses are re-evaluating cloud costs and looking for ways to optimise spending while maintaining operational flexibility.
    Security and compliance are also key considerations.
    Many regulated industries, including financial services, healthcare, and government, require hybrid cloud models to comply with data sovereignty laws. VCF enables organisations to deploy a unified cloud infrastructure while ensuring that sensitive workloads remain under direct control. As regulatory conversations evolve, VCF's value proposition, as a flexible, secure, and cost-effective option for end users and an enabler for cloud service providers, aligns well with industry needs and even with CMA objectives.
    Regulating dominant cloud providers is a complex balancing act. Done well, it could promote a healthier, more competitive ecosystem, ensuring that businesses can choose cloud providers based on functionality rather than contractual obligations. Done poorly, it may slow down innovation, increase complexity, and create compliance burdens for all providers. It is a balancing act well understood by the CMA, the regulatory body tasked by the UK government with helping drive growth without violating its central mandate of promoting competition and protecting consumers.
    One potential outcome of regulation is that the hyperscalers themselves may be forced to improve. If hyperscalers can no longer rely on egress fees and licensing constraints to retain customers, they may need to rethink service deprecation policies, reduce redundant offerings, and provide clearer pricing structures. In a competitive landscape that values service quality over forced loyalty, businesses could ultimately benefit from more transparency, innovation, and choice.
    Yet, as in the case of financial services, regulation alone will not create more competition in the cloud marketplace. The presence of competitive options and enablers should be a factor when considering regulatory measures. In addition, businesses should take on greater responsibility for cloud architecture decisions, ensuring that vendor flexibility is a key consideration from the outset.
    Too often, organisations become entrenched in a single-provider cloud model not because of external constraints, but because of internal planning deficiencies. Choosing among private, public, and hybrid clouds requires investment in integration, governance, and skills development; regulation can lower barriers, but companies must still take proactive steps to build adaptable, future-proof IT environments.
    The CMA's scrutiny of the cloud market represents a critical turning point for the cloud computing industry. If regulators successfully lower switching costs, enforce fairer licensing policies, and promote data portability, end users will have more options, and other cloud providers will be better positioned to capitalise on a more competitive market.
    However, success won't be determined by regulation alone. Regulation can create opportunities, but those opportunities need to be seized within the affected market. The hyperscalers are not passive players; they will adapt, innovate, and respond to regulatory changes in ways that could preserve their market dominance. Broadcom's opportunity lies in its ability to clearly articulate the value of various cloud models, simplify adoption, and prove the long-term benefits of its platform for both end users and other cloud service providers.
    The cloud landscape is evolving, and the next 12 months will determine whether the hyperscalers maintain their stronghold or a more competitive and flexible cloud market takes hold. Either way, the cloud market will not look the same a year from now, and given the enterprise footprint of VMware, Broadcom has a unique chance to shape its future.
    Bola Rotibi is chief of enterprise research at CCS Insight
  • What happened when a tech journalist experimented with AI on a PC?
    www.computerweekly.com
    Over the past few months, the editorial team at Computer Weekly's French sister title, LeMagIT, has been evaluating different versions of several free, downloadable large language models (LLMs) on personal machines. These LLMs currently include Google's Gemma 3, Meta's Llama 3.3, several versions of Mistral (Mistral, Mistral Small 3.1, Mistral Nemo, Mixtral), IBM's Granite 3.2, Alibaba's Qwen 2.5, and DeepSeek R1, which is primarily a reasoning overlay on top of distilled versions of Qwen or Llama.
    The test protocol consists of trying to transform interviews recorded by journalists during their reporting into articles that can be published directly on LeMagIT. What follows is the LeMagIT team's experience.
    We are assessing the technical feasibility of doing this on a personal machine and the quality of the output with the resources available. Let's make it clear from the outset that we have never yet managed to get an AI to work properly for us. The only point of this exercise is to understand the real possibilities of AI based on a concrete case.
    Our test protocol is a prompt that includes 1,500 tokens (6,000 characters, or two magazine pages) explaining to the AI how to write an article, plus an average of 11,000 tokens for the transcription of an interview lasting around 45 minutes. Such a prompt is generally too large to fit into the free window of an online AI. That is why it makes sense to download an AI onto a personal machine, since local processing remains free, whatever the prompt's size.
    The protocol is launched from the LM Studio community software, which mimics an online chatbot interface on the personal computer. LM Studio has a function for downloading LLMs directly; all of the LLMs that can be downloaded free of charge are also available on the Hugging Face website.
    Technically, the quality of the result depends on the amount of memory used by the AI.
    At the time of writing, the best result is achieved with an LLM of 27 billion parameters encoded on 8 bits (Google's Gemma, in the "27B Q8_0" version), with a context window of 32,000 tokens and a prompt length of 15,000 tokens, on a Mac with an M1 Max SoC and 64 GB of RAM, 48 GB of which is shared between the processor cores (orchestration), the GPU cores (vector acceleration for searching for answers) and the NPU cores (matrix acceleration for understanding input data).
    In this configuration, the processing speed is 6.82 tokens/second. The only way to speed up processing without damaging the result is to opt for an SoC with a higher clock frequency or with more processing cores.
    In this configuration, LLMs with more parameters (32 billion, 70 billion, etc) exceed memory capacity and either do not load at all or generate truncated results (a single-paragraph article, for example). With fewer parameters, they use less memory, but the quality of writing falls dramatically, with repetitions and unclear information. Using parameters encoded on fewer bits (3, 4, 5 or 6) significantly speeds up processing, but also reduces the quality of writing, with grammatical errors and even invented words.
    Finally, the size of the prompt window in tokens depends on the size of the data to be supplied to the AI. It is non-negotiable. If this size saturates memory, then you should opt for an LLM with fewer parameters, which will free up RAM to the detriment of the quality of the final result.
    Our tests have resulted in articles that are well written. They have an angle, a coherent chronology of several thematic sections, quotations in the right place, and a dynamic headline and concluding sentence. However, we have never managed to obtain a publishable article.
Regardless of the LLM used, including DeepSeek R1 and its supposed reasoning abilities, the AI is systematically incapable of correctly prioritising the various points discussed during the interview. It always misses the point and often generates pretty but uninteresting articles. Occasionally, it will write an entire, well-argued speech to tell its readers that the company interviewed... has competitors.

LLMs are not all equal in the vocabulary and writing style they choose. At the time of writing, Meta's Llama 3.x produces sentences that are difficult to read, while Mistral and, to a lesser extent, Gemma have a tendency to write like marketing agencies, using flattering adjectives devoid of concrete information.

Surprisingly, the LLM that writes the most elegant French within the limits of the test configuration is China's Qwen. Initially, the most competent LLM on our test platform was Mixtral 8x7B (with an x instead of an s), which mixes eight thematic LLMs, each with just 7 billion parameters.

However, the best options for fitting Qwen and Mixtral into the 48 GB of our test configuration are, for the former, a version with only 14 billion parameters and, for the latter, parameters encoded on 3 bits. The former writes unclear and uninteresting information, even when mixed with DeepSeek R1 (DeepSeek R1 is only available as a distilled version of another LLM, either Qwen or Llama). The latter is riddled with syntax errors.

The version of Mixtral with parameters encoded on 4 bits offered an interesting compromise, but recent developments in LM Studio, which have a larger memory footprint, prevent the AI from working properly: Mixtral 8x7B Q4_K_M now produces truncated results.

An interesting alternative to Mixtral is the very recent Mistral Small 3.1, with 24 billion parameters encoded on 8 bits, which, according to our tests, produces a result of a quality fairly close to Gemma 3.
What's more, it is slightly faster, with a speed of 8.65 tokens per second.

According to the specialists interviewed by LeMagIT, the hardware architecture most likely to support the work of generative AI on a personal machine is one where the same RAM is accessible to all types of computing cores at the same time. In practice, this means using a machine based on a system-on-chip (SoC) processor where the CPU, GPU and NPU cores share the same physical and logical access to the RAM, with data located at the same addresses for all the circuits.

When this is not the case, that is, when the personal machine has an external GPU with its own memory, or when the processor is indeed an SoC integrating CPU, GPU and NPU cores but giving each of them access only to a dedicated portion of the common RAM, the LLMs need more memory to function. This is because the same data has to be replicated in each portion dedicated to a circuit.

So, while it is indeed possible to run an LLM with 27 billion parameters encoded in 8 bits on an Apple Silicon Mac with 48 GB of shared RAM, using the same evaluation criteria we would have to make do with an LLM of 13 billion parameters on a PC whose 48 GB of total RAM is divided into 24 GB for the processor and 24 GB for the graphics card.

This explains the initial success of Apple Silicon Macs for running LLMs locally, as this chip is an SoC where all the circuits benefit from UMA (unified memory architecture) access. In early 2025, AMD imitated this architecture in its Ryzen AI Max SoC range. At the time of writing, Intel's Core Ultra SoCs, which combine CPU, GPU and NPU, do not have such unified memory access.

Writing the prompt that explains how to write a particular type of article is an engineering job.
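The capacity arithmetic behind the unified-versus-split difference described above can be sketched in a few lines: at 8 bits per parameter, the weights occupy roughly 1 GB per billion parameters, and on a split-memory PC they must fit into the GPU's partition alone. This is a back-of-the-envelope sketch, not LM Studio's actual memory accounting; the fixed overhead figure (covering the context window's KV cache and the runtime) is an assumption, not a measured value:

```python
def weights_gb(params_billion: float, bits: int = 8) -> float:
    """Approximate size of the model weights alone, in GB:
    parameters x bits per parameter / 8 bits per byte."""
    return params_billion * bits / 8.0

def fits_unified(w_gb: float, pool_gb: float, overhead_gb: float = 8.0) -> bool:
    """Unified memory (UMA): one copy of the weights in a single pool
    shared by the CPU, GPU and NPU cores."""
    return w_gb + overhead_gb <= pool_gb

def fits_split(w_gb: float, gpu_gb: float, overhead_gb: float = 8.0) -> bool:
    """Dedicated GPU memory: the weights and working data must fit in the
    GPU's own partition, with shared buffers duplicated on the CPU side."""
    return w_gb + overhead_gb <= gpu_gb
```

With the assumed 8 GB overhead, a 27B Q8 model (about 27 GB of weights) fits a 48 GB unified pool but not a 24 GB GPU partition, whereas a 13B Q8 model does.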
The trick to getting off to a good start is to give the AI a piece of work that has already been done by a human (in our case, a final article accompanied by a transcript of the interview) and ask it what prompt it should have been given to do the same job. Around five very different examples are enough to determine the essential points of the prompt to be written for a particular type of article.

However, the AI systematically produces prompts that are too short, which will never be enough to write a full article. So the job is to use the leads it gives us and back them up with all the business knowledge we can muster.

Note that the more pleasantly the prompt is written, the less precisely the AI understands what is being said in certain sentences. To avoid this bias, avoid pronouns as much as possible ("he", "this", "that", etc) and repeat the subject each time ("the article", "the article", "the article"...). This makes the prompt harder for a human to read, but more effective for the AI.

Ensuring that the AI has sufficient latitude to produce varied content each time is a matter of trial and error. Despite our best efforts, all the articles produced by our test protocol have a family resemblance. It would take a considerable effort to synthesise the full range of human creativity in the form of different competing prompts.

Within the framework of our test protocol, and given the state of AI capabilities at the time of writing, it is illusory to think that an AI could determine on its own the degree of relevance of all the comments made during an interview.
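The two tricks described above, reverse-engineering the prompt from a finished human example and repeating the subject instead of using pronouns, amount to plain string assembly. The sketch below is purely illustrative: the wording of the question and the toy `depronoun` pass are our own placeholders, not the production prompt, and real de-pronouning is done by hand:

```python
def reverse_prompt_request(transcript: str, final_article: str) -> str:
    """Build the reverse-engineering question: given a human-written article
    and its source transcript, ask the model which prompt would have
    produced the same job (repeated over ~5 varied examples)."""
    return (
        "Here is the transcript of an interview:\n\n" + transcript
        + "\n\nHere is the article a journalist wrote from the transcript:\n\n"
        + final_article
        + "\n\nWhat prompt should have been given to an AI to produce "
        "the article from the transcript?"
    )

def depronoun(text: str, subject: str, pronouns=("it", "this", "that")) -> str:
    """Toy illustration of the 'repeat the subject' rule: replace bare
    pronouns with the explicit subject, word by word."""
    words = text.split(" ")
    return " ".join(subject if w in pronouns else w for w in words)
```

The resulting prompt reads badly to a human ("the article", "the article", "the article"...), which is exactly the point.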
Trying to get it to write a relevant article therefore necessarily involves a preliminary stage of stripping down the transcript of the interview.

In practice, stripping an interview transcript of everything that is unnecessary for the final article, while preserving the elements of context that will not appear in the article but that guide the AI towards better results, requires the transcript to be rewritten. This rewriting costs human time, to the benefit of the AI's work, but not to the benefit of the journalist's work.

This is a very important point: from then on, AI stops saving the user time. As it stands, using AI means shifting working time from an existing task (writing the first draft of an article) to a new task (preparing data before handing it to an AI).

Secondly, the 1,500-token description of the outline to follow when writing an article only works for one particular type of article. In other words, you need one outline for articles about a startup proposing an innovation, a completely different outline for those about a supplier launching a new version of its product, yet another for a player setting out a new strategic direction, and so on. The more use cases there are, the longer the upstream engineering work takes.

Worse still, to date our experiments have only involved writing articles based on a single interview, usually at press conferences, that is, in a context where the interviewee has already structured his or her comments before delivering them. In other words, after more than six months of experimentation, we are still only at the simplest stage. We have not yet been able to invest time in more complex scenarios, which are nevertheless the daily lot of LeMagIT's production, starting with articles written on the basis of several interviews.

The paradox is as follows: for AI to relieve a user of some of their work, that user has to work more.
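Part of that extra work, the mechanical first pass over the transcript, can at least be scripted before the manual rewrite begins. A minimal sketch with an assumed (and deliberately short) filler list; it removes verbal tics but leaves the judgement calls about context entirely to the human:

```python
import re

# Hypothetical filler list; a real one would be built per language and per speaker
FILLERS = re.compile(r"\b(um+|uh+|you know|sort of|I mean)\b[,.]?\s*", re.IGNORECASE)

def prestrip(transcript: str) -> str:
    """First mechanical pass over a transcript: drop filler phrases and
    collapse the whitespace left behind. Pruning off-topic passages while
    keeping useful context remains a manual rewriting job."""
    cleaned = FILLERS.sub("", transcript)
    return re.sub(r"\s{2,}", " ", cleaned).strip()
```

Every token removed here is a token of the context window reclaimed for material the AI can actually use.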
On the other hand, on these issues, AI on a personal machine is on a par with paid AI online.

Read more about using LLMs

- Google claims AI advances with Gemini LLM: Code analysis, understanding large volumes of text and translating a language by learning from one read of a book are among the breakthroughs of Gemini 1.5.
- Prompt engineering is not for dummies: This is a guest post written by Sascha Heyer in his capacity as senior machine learning engineer at DoiT International, where he oversees machine learning.
- What developers need to know about large language models: A developer strolls casually into work and gets comfy in their cubicle. Suddenly there's an update alert on the laptop screen: a new generative artificial intelligence function has been released.