WWW.GAMEDEVELOPER.COM
Robocraft 2 developer Freejam shuts down

Freejam, the studio behind the two Robocraft games and CardLife, is closing its doors. In a statement on its Discord server (spotted by TechRaptor), it attributed the closure to "current market conditions" and the server costs of continuing to develop Robocraft 2, which launched as an Early Access title in November 2023. "We're simply unable to launch or sustain development," Freejam explained.

In the coming weeks, both Robocraft titles and CardLife will be sunset. Robocraft 2 has already been delisted from Steam, while at time of writing, the original game and CardLife are still available on the platform.

The week in industry layoffs

This marks the first studio closure of 2025. Earlier this week, layoffs were reported to have hit Rocksteady at the end of last year, and cuts are expected at Splash Damage, which recently canceled Transformers: Reactivate. Jar of Sparks, a NetEase studio founded in 2022, paused operations to find a new publisher for its debut project.

"Freejam has been a family to all of us, and [the players] have been a part of that," wrote the studio. "Your feedback and thoughts have always driven us forward, but beyond this, the community's passion is a massive part of what's made working [here] such a pleasure. From all of us here, thank you so much for joining us on this journey."
WWW.THEVERGE.COM
We tried to hold Acer's giant new Nitro Blaze 11 handheld

The PC handheld space continues to grow, and the biggest of all is the Nitro Blaze 11. As soon as I saw it announced at CES, I knew I had to try to get it in my hands, if only out of sheer curiosity: will this thing even fit in my hands? The answer is yes, though only just barely.

I brought a Steam Deck OLED with me for a quick size and feel comparison. While the Blaze 11 isn't as heavy as I feared, the Steam Deck OLED's 1.41 pounds feel like a featherweight in comparison. The Deck also feels a little more solidly built. Acer's handheld isn't flimsy, but it did seem cheaper.

But credit where credit's due: playing games on such a big screen in your hands is a treat, and the kickstand felt solid for propping it up in tablet mode with detached controllers, which the Steam Deck can't do. Acer also gets points for using Hall effect sticks and triggers. We'll have to wait and see how this jumbo $1,099 handheld fares when it launches in Q2 2025, as the competition heats up with the impending arrival of the Lenovo Legion Go S and the constantly leaking Nintendo Switch 2. In the meantime, here are a bunch of pictures of the Blaze 11 and the Steam Deck OLED.

Image captions:
- Maybe if we one day get 13- or 14-inch handhelds, a Steam Deck will be able to fit within the screen itself.
- The Steam Deck OLED's screen is 7.4 inches, compared to the 10.95 inches of the Blaze 11.
- I only held the Blaze 11 for a short time, but I can say I did find the Steam Deck more ergonomic.
- Acer's launcher looks and feels a bit spartan. It sits atop Windows, while Valve's SteamOS is Linux-based.
- I don't know what these pins on the bottom of the Blaze 11 are, but I've reached out to Acer to find out.
- The top of the Blaze 11 has dual USB 4 ports, a USB-A 3.2 port, a microSD card slot, and a 3.5mm headset jack.
- Don't talk to me or my son ever again.
- The rear feels like a wall of black plastic.
- The Blaze 11 has detachable controllers and a kickstand, which the Steam Deck does not.
- The Blaze 11's tablet mode. With a screen this big, it actually seems fairly usable in this configuration.
- A handheld this big isn't likely to be something you take on the road very much.

Photography by Antonio G. Di Benedetto / The Verge
WWW.THEVERGE.COM
You can get the newest 8BitDo Ultimate or 8BitDo Pro 2 wired Xbox controllers for $30

Xbox gamers have a growing list of options among the best Xbox controllers, but even expensive ones like the Xbox Elite Series 2 can develop stick drift and other issues. If you're tired of shelling out for unreliable controllers, 8BitDo's latest wired Xbox models with Hall effect analog sticks and triggers can offer affordable relief, as you can get them for 33 percent off right now. That includes the 8BitDo Ultimate controller, which has dropped to a record low $29.99 ($15 off) at Amazon, Best Buy, and 8BitDo. The DualShock-like 8BitDo Pro 2 is also on sale at Amazon and 8BitDo starting at $29.99 (about $15 off), which is only $2 more than its lowest price to date.

8BitDo Ultimate Wired Controller for Xbox — $30 ($45, 33% off). The newest 8BitDo Ultimate Wired Controller for Xbox includes Hall effect analog sticks and triggers, plus two rear buttons and customizable mapping and sensitivity, and is also compatible with PC and mobile devices.

8BitDo Pro 2 Wired Controller for Xbox — $30 ($45, 33% off). The latest 8BitDo Pro 2 wired controller for Xbox and Windows offers a DualShock-like shape and layout with Hall effect triggers and sensors, software customization, and two mappable rear buttons.

8BitDo's wired Xbox controllers have been on the market for a few years now, so even if you already have one, you may have missed the refreshed Hall effect models. The older ones have ALPS-based sticks, which are commonly used in the standard controllers that ship with major consoles. They use mechanically moving parts and sensors to read the stick's position, which can eventually degrade and cause misreads, to the point that your in-game character can move even when you're not touching the controller.

Hall effect sticks instead use magnetism, and the sensors don't have moving parts; while they aren't completely immune to eventually developing stick drift, they should last much longer. That doesn't mean you can't still break a controller through excessive sweaty rounds of Marvel Rivals. The triggers on both controllers benefit from similar technology and also include dedicated vibration motors.

The 8BitDo Ultimate and 8BitDo Pro 2 offer other perks that are nice to have at this price point, too, like dual rear buttons, software-based remapping (the 8BitDo Ultimate supports on-the-fly switching between three profiles using a dedicated button), and configurable sensitivity and vibration settings. In addition to Xbox One, Series X, and Series S, you can also use the controllers on Windows PCs, Android, and iOS devices by plugging them in with the detachable USB-C cable.
WWW.THEVERGE.COM
DirecTV and EchoStar aren't happy about Disney and Fubo's settlement

Following FuboTV's recent move to settle its antitrust lawsuit with Disney, Fox, and Warner Bros. Discovery over the impending launch of their Venu Sports streaming service, DirecTV and EchoStar are urging the courts to consider how other TV distributors could still be shut out of the sports streaming space.

On Monday, Fubo announced that, as part of its plan to merge with Hulu + Live TV, it would also drop its lawsuit against Disney, Fox, and WBD alleging that their collaboration on Venu Sports violated US antitrust laws. The settlement outlines how Hulu + Live TV and Fubo can create a new multichannel video programming distributor that Disney would own 70 percent of. But the lawsuit's dismissal also lifted the injunction halting Venu's launch, which US District Judge Margaret M. Garnett handed down last August.

Because Venu Sports now has a much more realistic chance of coming to market, DirecTV and EchoStar are voicing concerns about how Fubo's proposed Hulu deal may exacerbate, rather than properly address, the core issue of anticompetitiveness in sports streaming. In a letter to Garnett, DirecTV argued that while Venu's venture partners have "paid Fubo to ensure cooperation from an aggrieved competitor," they have also "restored an anticompetitive runway for the JV Defendants to control the future of the live pay TV market."

DirecTV is just one of several non-parties that expressed "grave concerns" about the impact Venu would have on competition for sports programming, given that Venu would offer content "in a manner that [the Defendants] do not allow DirecTV or other distributors to offer to consumers," DirecTV's lawyers said.

In its own letter to Garnett, EchoStar's legal team insisted that the original injunction blocked Disney, Fox, and WBD's "scheme to monopolize the pay-TV market and, once accomplished, charge inflated prices to millions of Americans."

"The parties' settlement appears designed to eliminate court jurisdiction over this multifarious harm by effectuating the preliminary injunction's expiration, rather than addressing the underlying competition issues," EchoStar said. "Now, with the injunction undone by voluntary dismissal, DISH, Sling, and other distributors will suffer antitrust injury."
WWW.MARKTECHPOST.COM
Evola: An 80B-Parameter Multimodal Protein-Language Model for Decoding Protein Functions via Natural Language Dialogue

Proteins, essential molecular machines evolved over billions of years, perform critical life-sustaining functions encoded in their sequences and revealed through their 3D structures. Decoding their functional mechanisms remains a core challenge in biology despite advances in experimental and computational tools. While AlphaFold and similar models have revolutionized structure prediction, the gap between structural knowledge and functional understanding persists, compounded by the exponential growth of unannotated protein sequences. Traditional tools rely on evolutionary similarity, which limits their scope. Emerging protein-language models offer promise, leveraging deep learning to decode the language of proteins, but the limited availability of diverse, context-rich training data constrains their effectiveness.

Researchers from Westlake University and Nankai University developed Evola, an 80-billion-parameter multimodal protein-language model designed to interpret the molecular mechanisms of proteins through natural language dialogue. Evola integrates a protein language model (PLM) as an encoder, an LLM as a decoder, and an alignment module, enabling precise protein function predictions. Trained on an unprecedented dataset of 546 million protein question-answer pairs and 150 billion tokens, Evola leverages Retrieval-Augmented Generation (RAG) and Direct Preference Optimization (DPO) to enhance response relevance and quality. Evaluated using the novel Instructional Response Space (IRS) framework, Evola provides expert-level insights, advancing proteomics research.

Architecturally, Evola is a multimodal generative model designed to answer functional questions about proteins, integrating protein-specific knowledge with LLMs for accurate, context-aware responses. It features a frozen protein encoder, a trainable sequence compressor and aligner, and a pre-trained LLM decoder. It employs DPO for fine-tuning based on GPT-scored preferences, and RAG to enhance response accuracy using the Swiss-Prot and ProTrek datasets. Applications include protein function annotation, enzyme classification, gene ontology, subcellular localization, and disease association. Evola is available in two versions: a 10B-parameter model and an 80B-parameter model still in training.

The results demonstrate that Evola outperforms existing models in protein function prediction and natural-language dialogue tasks. Evaluated on diverse datasets, it achieved state-of-the-art performance in generating accurate, context-sensitive answers to protein-related questions, and benchmarking with the IRS framework revealed high precision, interpretability, and response relevance. Qualitative analysis highlighted Evola's ability to address nuanced functional queries and generate protein annotations comparable to expert-curated knowledge. Additionally, ablation studies confirmed the effectiveness of its training strategies, including retrieval-augmented generation and direct preference optimization, in enhancing response quality and alignment with biological contexts. This establishes Evola as a robust tool for proteomics.

In conclusion, Evola is an 80-billion-parameter generative protein-language model designed to decode the molecular language of proteins. Using natural language dialogue, it bridges protein sequences, structures, and biological functions. Evola's innovation lies in its training on an AI-synthesized dataset of 546 million protein question-answer pairs encompassing 150 billion tokens, unprecedented in scale. Employing DPO and RAG, it refines response quality and integrates external knowledge. Evaluated using the IRS framework, Evola delivers expert-level insights, advancing proteomics and functional genomics while offering a powerful tool to unravel the molecular complexity of proteins and their biological roles. All credit for this research goes to the researchers of the project.
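The frozen-encoder / trainable-compressor-and-aligner / pre-trained-decoder layout described above can be sketched in PyTorch. This is a toy illustration under stated assumptions, not Evola's actual code: the dimensions, module names, and the use of latent-query cross-attention as the sequence compressor are all my own choices for the sketch.

```python
import torch
import torch.nn as nn

class ProteinQAModel(nn.Module):
    """Illustrative sketch: frozen protein encoder -> trainable compressor
    and aligner -> pre-trained LLM decoder (hypothetical layout)."""

    def __init__(self, protein_encoder, llm_decoder,
                 enc_dim=1280, llm_dim=4096, n_latents=64):
        super().__init__()
        self.protein_encoder = protein_encoder          # frozen PLM
        for p in self.protein_encoder.parameters():
            p.requires_grad = False                     # keep the PLM fixed

        # Trainable compressor: a fixed set of latent queries cross-attends
        # to per-residue embeddings, yielding a short fixed-length summary.
        self.latents = nn.Parameter(torch.randn(n_latents, enc_dim))
        self.compressor = nn.MultiheadAttention(enc_dim, num_heads=8,
                                                batch_first=True)
        # Aligner: project the summary into the LLM's embedding space.
        self.aligner = nn.Linear(enc_dim, llm_dim)
        self.llm = llm_decoder                          # pre-trained decoder

    def forward(self, protein_tokens, question_embeds):
        with torch.no_grad():
            residue_states = self.protein_encoder(protein_tokens)  # (B, L, enc_dim)
        b = residue_states.size(0)
        queries = self.latents.unsqueeze(0).expand(b, -1, -1)
        compressed, _ = self.compressor(queries, residue_states, residue_states)
        protein_prefix = self.aligner(compressed)       # (B, n_latents, llm_dim)
        # Prepend the protein summary to the question-token embeddings,
        # so the decoder conditions on both protein and question.
        inputs = torch.cat([protein_prefix, question_embeds], dim=1)
        return self.llm(inputs_embeds=inputs)
```

Only the latents, compressor, and aligner receive gradients here, which mirrors the article's point that the protein encoder stays frozen while a small alignment module is trained.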
WWW.MARKTECHPOST.COM
This AI Paper Explores Quantization Techniques and Their Impact on Mathematical Reasoning in Large Language Models

Mathematical reasoning is a backbone capability for artificial intelligence, important in arithmetic, geometric, and competition-level problems. LLMs have recently emerged as useful tools for reasoning, able to produce detailed step-by-step solutions and coherent explanations of complex tasks. However, this success makes it ever harder to supply the computational resources these models require, which complicates deploying them in constrained environments.

The immediate challenge for researchers is lowering LLMs' computational and memory needs without degrading performance. Mathematical reasoning is a particularly hard case because it demands accuracy and logical consistency, and many compression techniques compromise exactly those properties. Such limitations severely affect scaling these models to realistic uses.

Current approaches to this challenge include pruning, knowledge distillation, and quantization. Quantization, the process of converting model weights and activations to low-bit formats, has been promising for reducing memory consumption while improving computational efficiency. However, its impact on tasks requiring stepwise reasoning is poorly understood, especially in mathematical domains, and most existing methods fail to capture the nuances of the trade-off between efficiency and reasoning fidelity.

A group of researchers from The Hong Kong Polytechnic University, Southern University of Science and Technology, Tsinghua University, Wuhan University, and The University of Hong Kong developed a systematic framework for studying the effects of quantization on mathematical reasoning. They applied several quantization techniques, such as GPTQ and SmoothQuant, individually and in combination, and evaluated their impact on reasoning. The team focused on the MATH benchmark, which requires step-by-step problem solving, and analyzed the performance degradation these methods cause at varying levels of precision.

The methodology involved training models with structured tokens and annotations, including special markers that delimit reasoning steps, so that models could retain intermediate steps even under quantization. To minimize architectural changes, the models were adapted with fine-tuning techniques similar to LoRA; this approach balances efficiency against accuracy in the quantized model and helps preserve logical consistency. The PRM800K dataset's step-level correctness labels served as training data, giving the models a granular set of reasoning steps to learn to reproduce.

A thorough performance analysis unveiled critical deficiencies in the quantized models. Quantization heavily impacted computation-intensive tasks, with large performance degradation across configurations. For example, the Llama-3.2-3B model lost accuracy, with scores falling from 5.62 in full precision to 3.88 with GPTQ quantization and 4.64 with SmoothQuant. The Llama-3.1-8B model had smaller losses, with scores falling from 15.30 in full precision to 11.56 with GPTQ and 13.56 with SmoothQuant. SmoothQuant showed the highest robustness of all methods tested, performing better than GPTQ and AWQ. The results highlight the challenges of low-bit formats, particularly maintaining numerical precision and logical coherence.

An in-depth error analysis categorized issues into computation errors, logical errors, and step omissions. Computation errors were the most frequent, often stemming from low-bit precision overflow disrupting the accuracy of multi-step calculations. Step omissions were also prevalent, especially in models with reduced activation precision, which failed to retain intermediate reasoning steps. Interestingly, some quantized models outperformed their full-precision counterparts on specific reasoning tasks, highlighting the nuanced effects of quantization.

These results clearly illustrate the trade-offs between computational efficiency and reasoning accuracy in quantized LLMs. Although techniques such as SmoothQuant mitigate some of the performance degradation, maintaining high-fidelity reasoning remains a significant challenge. By introducing structured annotations and fine-tuning methods, the researchers provide valuable insights into optimizing LLMs for resource-constrained environments. These findings are pivotal for deploying LLMs in practical applications, offering a pathway to balancing efficiency with reasoning capability.

In summary, this study addresses a critical gap in understanding the effect of quantization on mathematical reasoning. The proposed methodologies and frameworks expose inadequacies in existing quantization techniques and provide actionable strategies to overcome them. These advances open pathways toward more efficient and capable AI systems, narrowing the gap between theoretical potential and real-world applicability. All credit for this research goes to the researchers of the project.

Nikhil is an intern consultant at Marktechpost. He is pursuing an integrated dual degree in Materials at the Indian Institute of Technology, Kharagpur. Nikhil is an AI/ML enthusiast who is always researching applications in fields like biomaterials and biomedical science. With a strong background in materials science, he is exploring new advancements and creating opportunities to contribute.
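The computation errors the study attributes to low-bit formats can be illustrated with a minimal weight-quantization round-trip. This is a generic symmetric per-tensor quantizer for illustration only, not GPTQ or SmoothQuant (which add calibration-based error compensation and activation smoothing, respectively); the bit-widths and random tensor are arbitrary choices for the sketch.

```python
import numpy as np

def quantize_dequantize(w, bits=4):
    """Symmetric per-tensor quantization round-trip: float -> int grid -> float."""
    qmax = 2 ** (bits - 1) - 1              # e.g. 7 for signed 4-bit
    scale = np.abs(w).max() / qmax          # map the largest weight to the grid edge
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale                        # dequantized weights carry rounding error

rng = np.random.default_rng(0)
w = rng.normal(size=4096).astype(np.float32)

# Mean absolute round-trip error grows as the grid gets coarser --
# the kind of per-weight drift that compounds over multi-step calculations.
errors = {bits: float(np.abs(w - quantize_dequantize(w, bits)).mean())
          for bits in (8, 4, 2)}
print(errors)
```

Running this shows the error rising sharply from 8-bit to 4-bit to 2-bit, a toy analogue of the accuracy drops the paper reports at lower precision.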
WWW.IGN.COM
Like a Dragon: Pirate Yakuza in Hawaii to Get New Game Plus as Free DLC After Infinite Wealth Backlash

Ryu Ga Gotoku Studio will release the New Game Plus mode for Like a Dragon: Pirate Yakuza in Hawaii as free downloadable content after facing controversy for charging for it in the previous game, Like a Dragon: Infinite Wealth.

RGG announced the New Game Plus mode during a Like a Dragon Direct today, January 9, though it didn't say exactly when the "post release patch" would arrive. Pirate Yakuza in Hawaii itself launches February 21.

The studio faced backlash from fans in early 2024 when it revealed the New Game Plus mode for mainline entry Infinite Wealth would be locked behind the game's more expensive edition.

Pirate Yakuza in Hawaii similarly has different editions, but these just include digital extras such as new crew members and skins, while the most expensive version has some physical bonuses such as an eye patch and pirate flag.

The game is a spin-off sequel to Infinite Wealth, the eighth mainline entry in the Yakuza / Like a Dragon series (or ninth including Yakuza 0). It follows Majima as he wakes up with amnesia and becomes a pirate, exploring the likes of Hawaii along the way.

A trailer released at the October Xbox Partner Showcase revealed a proper first look at ship combat akin to Assassin's Creed 4: Black Flag and the return of the beloved character Taiga Saejima, perhaps teasing more ties to the main series than previously thought.

It will also be a decent bit longer than the previous Yakuza spin-off, Like a Dragon Gaiden: The Man Who Erased His Name, with its story taking around 15 to 18 hours to complete. Fans can also dress up Majima as longtime series protagonist Kiryu Kazuma, but only if they sign up for email notifications.

Ryan Dinsdale is an IGN freelance reporter. He'll talk about The Witcher all day.
WWW.IGN.COM
Transformers: Reactivate Leaked Gameplay Reveals Bumblebee Footage Following Cancelation

Transformers: Reactivate gameplay appears to have leaked following the announcement that the project was canceled yesterday.

Footage of what appears to be the early-in-development Transformers third-person action game found its way to X/Twitter just hours after developer Splash Damage revealed it had decided to end development on the project. The video, shared online by X account @DpzLuna, features fan-favorite Autobot Bumblebee as he blasts away enemy robots both in and out of car form. His movement options include the ability to drive around a few different city-like environments and use short, rocket-powered dashes during gunfights.

Texture-less roads, unfinished weapon effects, and other rough-around-the-edges elements suggest more work was needed on this version of the project, though it's unclear at what point in development the footage was captured. There's also no telling whether the build we're seeing today would have resulted in a cohesive package, or which other Transformers players would have been able to control. Still, it seems some progress had been made on Transformers: Reactivate before it ultimately met its demise.

Transformers: Reactivate was originally announced as a new video game entry in the long-running robot universe back at The Game Awards in 2022. Splash Damage pitched it to players as an online action game with a fresh, new story for PC and consoles, but updates on its progress have been few and far between since its initial reveal. Hope for any news came to a halt yesterday when the team said it would no longer continue working on its Transformers game as it refocused resources on other projects amid fears of potential layoffs.

"While not being able to see the game through to release is painful, having to say goodbye to friends and colleagues hurts even more," Splash Damage said at the time. "We're now focused on doing everything we can to support them through this tough period, just as we are committed to caring for those who stay with us as we build a stronger Splash Damage for the future."

Michael Cripe is a freelance contributor with IGN. He started writing in the industry in 2017 and is best known for his work at outlets such as The Pitch, The Escapist, OnlySP, and Gameranx. Be sure to give him a follow on Twitter @MikeCripe.
WWW.IGN.COM
Nintendo Announces Lego Game Boy

Nintendo is teaming up with Lego once again, though not for a Super Mario, Animal Crossing, or Mario Kart set this time, but instead a Lego Game Boy.

An incredibly brief teaser trailer was shared by Nintendo of America on X/Twitter, showing a literal handful of bricks that make up the Game Boy, including the iconic purple buttons and directional pad.

The finished product therefore wasn't shown, but an October 2025 release window was shared. Nintendo and Lego will likely share full details on the model, such as its price, number of bricks, and actual design, closer to the time.

Nintendo and Lego now have a long and presumably prosperous relationship, having released myriad Nintendo Lego sets in partnership. This includes lines targeted at younger fans, such as Lego sets based on Super Mario and Mario Kart, but also some more sophisticated models.

These include, for example, the Lego Piranha Plant, a 16-bit Mario and Yoshi from Super Mario World, and a colossal Bowser. Lego also recently released the first The Legend of Zelda set, the $300 Great Deku Tree.

The most similar product to this incoming Game Boy is, however, the Lego Nintendo Entertainment System, a replica of the classic games console; Lego and Nintendo surely hope its fans will buy the Lego Game Boy to sit alongside it.

Ryan Dinsdale is an IGN freelance reporter. He'll talk about The Witcher all day.