• WWW.MARKTECHPOST.COM
    SQL-R1: A Reinforcement Learning-based NL2SQL Model that Outperforms Larger Systems in Complex Queries with Transparent and Accurate SQL Generation
    Natural language interfaces to databases are a growing focus within artificial intelligence, particularly because they let users interact with structured databases in plain human language. This area, often known as NL2SQL (Natural Language to SQL), centers on transforming user-friendly queries into SQL commands that can be executed directly against databases. The objective is to simplify data access for non-technical users and broaden the utility of data systems across sectors like finance, healthcare, and retail. With the rise of LLMs, these conversions have become significantly more accurate and context-aware, especially for simple queries or structured database layouts.

Despite that progress, converting natural language into accurate SQL remains difficult in complex situations involving multiple table joins, nested queries, or ambiguous semantics. The challenge is not just generating syntactically correct SQL but producing queries that correctly reflect the user's intent and generalize across domains. Standard approaches struggle to scale in high-stakes fields where interpretability and precision are critical. Moreover, many current models depend heavily on fixed schemas and training data structures, which hampers their performance in new or evolving environments.

Most NL2SQL systems today rely on supervised fine-tuning, where large language models are trained on annotated datasets that pair questions with correct SQL answers. While this method has led to noticeable improvements, it limits adaptability and interpretability. Because these models are tuned to specific datasets and schemas, they often fail in unfamiliar scenarios, and their rigid generation strategy can break down when the input diverges from the training data. These systems also typically lack transparency in their reasoning processes, limiting their utility in domains where clear decision-making trails are necessary.

Researchers from IDEA Research, the Hong Kong University of Science and Technology (Guangzhou), the University of Chinese Academy of Sciences, and DataArc Tech Ltd. introduced SQL-R1, a new NL2SQL model that leverages reinforcement learning rather than traditional supervised learning alone. SQL-R1 uses feedback mechanisms during training: instead of just learning from annotated examples, the model generates SQL candidates, executes them, and receives structured feedback on the outcome, including whether the SQL was syntactically correct, whether it produced the proper result, and how efficient and interpretable it was. This dynamic learning process lets the model optimize its SQL generation strategies over time and improves generalization in complex or unfamiliar scenarios.

To build SQL-R1, the researchers first performed supervised fine-tuning on 200,000 samples drawn from a large synthetic dataset called SynSQL-2.5M. This process, known as a cold start, ensured the model could follow basic instructions and generate simple SQL outputs. Following this, reinforcement learning was introduced using the Group Relative Policy Optimization (GRPO) algorithm: the model generated multiple SQL candidates for each query and was rewarded based on a composite scoring function.
This function included four metrics: a format reward (+1 or -1 depending on syntax correctness), an execution reward (+2 for executable queries, -2 for failures), a result reward (+3 for correct query outputs, -3 for incorrect ones), and a length reward based on the depth and clarity of the reasoning trace. Each of these scores contributed to updating the model's internal decision-making process; a rough sketch of such a reward function appears after the takeaways below.

SQL-R1 was evaluated on two industry-standard NL2SQL benchmarks: Spider and BIRD. On the Spider development set, the model achieved 87.6% execution accuracy, and on the Spider test set it reached 88.7%. For the BIRD dataset, which covers 95 databases from 37 domains, the model scored 66.6%. These results are competitive with or superior to those of larger models, including closed-source solutions like GPT-4. Notably, SQL-R1 used the Qwen2.5-Coder-7B model, which is considerably smaller than many alternatives, demonstrating that high accuracy can be achieved with efficient architectures when combined with reinforcement learning. An ablation study confirmed the contribution of each reward component: removing the format reward, for instance, caused accuracy to drop from 63.1% to 60.4%, and removing the result reward caused a 0.7% drop, indicating that each element of the reward mechanism plays a role in guiding the model.

Several key takeaways from the research on SQL-R1:

- SQL-R1 achieved 88.7% accuracy on the Spider test set and 66.6% on the BIRD development set, using only a 7B base model (Qwen2.5-Coder-7B).
- The model used 200,000 samples from the SynSQL-2.5M dataset for supervised fine-tuning and 5,000 complex samples for reinforcement learning.
- The GRPO algorithm powered reinforcement learning; it required no value model and worked efficiently with relative performance scores.
- The reward function included four components: format (+1/-1), execution (+2/-2), result (+3/-3), and length (proportional).
- SQL-R1 outperformed larger models like GPT-4, highlighting that model architecture and feedback training are as critical as size.
- Ablation studies revealed the importance of each reward: removing the format reward caused a 2.7% drop in performance, while eliminating the execution reward dropped accuracy by 2.4%.
- The approach promotes transparency, as the model provides reasoning traces using '<think>' and '<answer>' tags, improving end-user interpretability.
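The paper's scoring code isn't reproduced here, so the following is a minimal sketch of how such a composite reward could look, with SQLite standing in for the evaluation database. The helper logic and the length term are illustrative stand-ins rather than the authors' implementation, and the small group-advantage function at the end mirrors how GRPO scores candidates relative to one another without a value model.

```python
import sqlite3

def composite_reward(candidate_sql, conn, gold_rows, trace):
    """Score one SQL candidate with the four components described above."""
    reward = 0.0
    # Format reward (+1/-1): complete_statement is only a loose syntax proxy.
    stmt = candidate_sql.strip().rstrip(";") + ";"
    reward += 1.0 if sqlite3.complete_statement(stmt) else -1.0
    # Execution reward (+2/-2): does the query run at all?
    try:
        rows = conn.execute(candidate_sql).fetchall()
        reward += 2.0
    except sqlite3.Error:
        return reward - 2.0 - 3.0  # a failed run also forfeits the result reward
    # Result reward (+3/-3): order-insensitive match against the gold answer.
    reward += 3.0 if sorted(rows) == sorted(gold_rows) else -3.0
    # Length reward: hypothetical graded term for a substantive reasoning trace.
    reward += min(len(trace.split()) / 100.0, 1.0)
    return reward

def group_advantages(rewards):
    """GRPO-style normalization: score each candidate relative to its group."""
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5 or 1.0
    return [(r - mean) / std for r in rewards]

# Toy demo: one correct candidate and one with a typo in a column name.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada')")
rewards = [composite_reward(sql, conn, [("Ada",)], "<think>single-table lookup</think>")
           for sql in ("SELECT name FROM users;", "SELECT nme FROM users;")]
print(rewards, group_advantages(rewards))
```

Because GRPO only needs these relative scores within each sampled group, the correct candidate ends up with a positive advantage and the broken one with a negative advantage, which is what drives the policy update.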
  • TOWARDSAI.NET
    TAI #148: New API Models from OpenAI (4.1) & xAI (grok-3); Exploring Deep Research’s Scaling Laws
Author(s): Towards AI Editorial Team. Originally published on Towards AI.

What happened this week in AI by Louie

This week, AI developers got their hands on several significant new model options. Adding to last week's new Llama 4 model options, OpenAI released GPT-4.1 — its first developer-only API model, not directly available within ChatGPT — and xAI launched its Grok-3 API. OpenAI made significant progress in addressing prior limitations in its non-reasoning models, improving coding capabilities and finally breaking through its previous long-context barrier (now supporting up to 1 million tokens in GPT-4.1). OpenAI also enhanced ChatGPT's memory capabilities with access to the user's full conversation history (likely still via summarisation and RAG), and is expected to imminently release its powerful new reasoning models, o3 and o4-mini.

The GPT-4.1 series offers three models: GPT-4.1, GPT-4.1 mini, and GPT-4.1 nano. GPT-4.1 substantially surpasses GPT-4o across multiple tasks at lower pricing, scoring 54.6% on SWE-bench Verified — a 21.4 percentage point jump from GPT-4o that reflects big gains in practical software engineering capabilities. Real-world instruction following, long a frustration for developers, has also seen clear gains: GPT-4.1 outperforms GPT-4o by 10.5 percentage points on Scale's MultiChallenge benchmark. Perhaps most significantly, GPT-4.1 offers up to one million tokens of input context, compared to 128k tokens in GPT-4o (though still with significantly worse multitasking across long context versus Gemini 2.5). This makes the new models suitable for processing massive codebases and extensive documentation. GPT-4.1 mini and nano also offer solid performance boosts at much lower latency and cost, with GPT-4.1 mini beating GPT-4o in several tests while reducing costs by 83%.

Practical usage tips for developers adopting the new GPT-4.1 models include equipping GPT-4.1 with tools and iterative planning to build agentic workflows. Chain-of-thought prompting remains useful, particularly for structured and detailed reasoning tasks, with GPT-4.1 benefiting from clearly specified prompts and systematic thinking instructions. Developers should approach the full million-token context window carefully; while highly performant compared to previous models, very complex tasks may still require smaller targeted contexts for optimal results. Detailed and consistent instructions significantly enhance GPT-4.1's performance; conflicting or underspecified instructions remain a common pitfall that can induce errors or hallucinations.

Meanwhile, xAI's Grok-3 API also became publicly available, with pricing marginally above GPT-4o — $3 per million input tokens and $15 per million output tokens, along with premium faster-inference options. Its 131k-token context limit, while large, currently falls short of earlier claims of a million tokens, prompting some disappointment within the developer community. Clarification around Grok-3's model scale and underlying training specifics remains elusive, including whether it might be a distilled version of a larger internal training run.

Latest LLM API pricing, context windows, and options. Source: Towards AI; Google Gemini, OpenAI, Anthropic Claude, and xAI Grok.

Model choice should expand further later this week with the o3 release; so far, o3 is only available through Deep Research. Together with coding tools, Deep Research agents are among the most valuable LLM-based tools released so far for high-value enterprise work.
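For developers kicking the tires on the new models, here is a minimal sketch of a GPT-4.1 call through the OpenAI Python SDK (assuming openai>=1.0 is installed and an OPENAI_API_KEY is set in the environment; the mini and nano variants are selected the same way, by model name):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Clear, specific instructions matter for GPT-4.1; vague or conflicting
# prompts are a common source of errors and hallucinations.
response = client.chat.completions.create(
    model="gpt-4.1",  # or "gpt-4.1-mini" / "gpt-4.1-nano" for lower cost and latency
    messages=[
        {"role": "system", "content": "Follow the instructions exactly. Think step by step before answering."},
        {"role": "user", "content": "Outline a plan for refactoring a large legacy codebase."},
    ],
)
print(response.choices[0].message.content)
```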
Competition escalated this week with a big upgrade to Gemini Deep Research after the base model was moved to Pro 2.5. In my own experience, Gemini Pro 2.5 Deep Research is a big improvement and now offers distinct strengths and weaknesses relative to OpenAI's Deep Research. Leveraging both platforms simultaneously — given their distinct strengths — may become standard practice for many of my research tasks. Gemini's search tends to be broader, often rapidly finding many hundreds of sources (possibly due to Google's existing web-crawling infrastructure). Its real strength lies in long-context understanding and the ability to synthesize and cohesively present large quantities of information. OpenAI's Deep Research, in contrast, is narrower but often feels more targeted and intelligent — executing more complex, iterative research strategies, exploring deeper niche rabbit holes, and generating intricate next-step research plans after each stage. OpenAI's model also engages in more advanced reasoning, extrapolating between data points with added analysis and insight, although this sophistication can also make verifying hallucinations more challenging.

OpenAI's new BrowseComp test and paper offered further insights into Deep Research this week. On this benchmark, only ~21% of tasks were solved by humans in under two hours, and just 2% by GPT-4o with web search. Deep Research's performance increased from ~10% to ~52% (the released version) by scaling inference tokens in series (22x token scaling, assuming base 2 in their unlabelled chart). Series scaling likely comes both from longer chains of thought and from a larger number of agentic steps. On top of this, a further 64x parallel compute scaling (64 parallel attempts at the answer) lifted accuracy to 78%. This parallel sampling used self-judged confidence scores ("best of N") to choose the final answer, and these confidence scores outperformed majority voting methods (which instead select the most common final answer). A similar mechanism may well be behind o1-pro's effectiveness! Collectively, test-time or inference scaling increased Deep Research's score from 10% to 78%. It now becomes a difficult task to find the optimal balance of scaling these vectors, and to decide for which tasks the extra capability is worth the extra cost. A toy sketch contrasting the two answer-selection strategies follows below.

Why should you care? All these new model API options are great for the AI ecosystem but make LLM model choices harder than ever. Many models now have their own unique strengths and are state-of-the-art for certain use cases. The best LLM development pipelines and agents will now use a combination of several models from different providers. Multiple models should also be used together via ChatGPT, Gemini, and Claude, even in non-technical workflows. Insights from the BrowseComp study highlight another increasing complexity: advanced agents get huge benefits from investing more compute into inference-time scaling. While costs escalate quickly, performance gains from deeper iterative reasoning and parallel sampling can outweigh those costs for strategically important tasks. Many professional and enterprise workflows could see returns from leveraging greater inference scaling than is commonly employed today, provided the additional costs align clearly with the value delivered.
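To make those two selection strategies concrete, here is a toy sketch (not OpenAI's implementation) of choosing a final answer from parallel samples, either by self-judged confidence or by majority vote; the sample data is invented for illustration:

```python
from collections import Counter

# Hypothetical parallel attempts: (answer, self-judged confidence in [0, 1]).
attempts = [("Paris", 0.62), ("Lyon", 0.95), ("Paris", 0.55), ("Paris", 0.40)]

def best_of_n(samples):
    """Trust the single sample that reports the highest confidence."""
    return max(samples, key=lambda s: s[1])[0]

def majority_vote(samples):
    """Trust the consensus: the most common answer, ignoring confidence."""
    return Counter(answer for answer, _ in samples).most_common(1)[0][0]

print(best_of_n(attempts))      # Lyon  -- one highly confident dissenter wins
print(majority_vote(attempts))  # Paris -- the consensus wins
```

The BrowseComp result suggests that, at least for hard browsing tasks, a well-calibrated confidence signal beats raw consensus, which is why the "best of N" variant came out ahead.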
More broadly, the rapid evolution of AI highlights different but equally important considerations for non-technical users and LLM developers alike. For non-technical users and businesses, the emphasis should shift towards well-informed, deliberate, and deep integration of AI into daily workflows — moving beyond casual experimentation. Selecting the right models requires clearly defined goals and a careful understanding of each model's unique strengths and limitations. Users must focus on how different AI tools practically enhance their productivity, creativity, and effectiveness in real-world tasks. For LLM developers, the expanding array of models and development options requires an even deeper grasp of nuances such as inference-scaling behaviors, optimal model combinations, and agentic frameworks. Developers need to thoughtfully customize and embed these models within robust pipelines and workflows, carefully balancing performance gains against computational costs. A deep understanding of model-specific strengths and inference techniques, such as strategic parallel or series scaling, will become essential to building highly capable, efficient, and economically viable applications. In both cases, those who proactively master these subtleties today will be best positioned to drive productivity, innovation, and competitive advantage in their fields.

We have yet another guest post this week, with GradientFlow (aka Ben Lorica), diving into the hottest new development in AI: deep research tools. While tools like ChatGPT's web browsing or Perplexity can gather context from the internet, they remain limited for complex analytical work. Deep research tools change that by combining conversational AI with autonomous web browsing, tool integrations, and sophisticated reasoning capabilities. If you're building or integrating AI tools, this is essential context for what's coming next. We provide a detailed comparative analysis of popular deep research platforms, examining their unique approaches and explaining why they represent a fundamental shift in knowledge work. Read the complete article here!

Hottest News

1. OpenAI Introduced GPT-4.1 in the API

OpenAI has launched GPT-4.1, along with GPT-4.1 Mini and Nano variants, exclusively through its API. The new models bring substantial improvements in coding capabilities, instruction following, and long-context handling — supporting up to 1 million tokens. GPT-4.1 also shows marked performance gains over GPT-4o, achieving a 54.6% score on SWE-bench Verified and outperforming GPT-4o by 10.5 percentage points on Scale's MultiChallenge benchmark.

2. Google Announces the Agent2Agent Protocol (A2A)

Google unveiled a new open-source Agent2Agent Protocol (A2A) designed to let AI agents from different vendors and systems work together seamlessly. While A2A overlaps with Anthropic's Model Context Protocol (MCP), Google has also announced support for MCP, positioning the two protocols as complementary. A2A particularly steps in for more complex multi-agent interactions and also enables secure agent communication.

3. Anthropic Rolls Out a $200-per-Month Claude Subscription

Anthropic has introduced a high-tier subscription plan for its Claude chatbot called Claude Max. In addition to the existing $20/month Claude Pro, there are now two Max tiers: a $100/month plan offering 5x higher usage limits and a $200/month option with 20x the rate limits. Both plans include priority access to Anthropic's newest models and features.
4. OpenAI Launched Memory in ChatGPT

OpenAI has begun rolling out a new memory feature in ChatGPT that personalizes responses based on a user's past interactions. Displayed as "reference saved memories" in settings, the feature enhances context awareness across text, voice, and image interactions — making conversations more relevant and adaptive over time.

5. xAI Launches an API for Grok 3

xAI is making its flagship Grok 3 model available via an API. Grok 3 is priced at $3 per million tokens (~750,000 words) fed into the model and $15 per million tokens generated by the model. The company also introduced Grok 3 Mini, available at $0.30 per million input tokens and $0.50 per million output tokens, offering a more cost-effective option.

6. OpenAI Open-Sources BrowseComp: A Benchmark for Browsing Agents

OpenAI has released BrowseComp, a benchmark designed to evaluate agents' ability to persistently browse the web and retrieve challenging, hard-to-find information. The benchmark contains 1,266 fact-seeking tasks, each with a short, unambiguous answer. Solving them involves navigating multiple pages, reconciling conflicting information, and filtering signal from noise.

7. Together AI Released DeepCoder-14B-Preview: A Fully Open-Source Code Reasoning Model

Together AI, in collaboration with Agentica, unveiled DeepCoder-14B-Preview, a model fine-tuned from DeepSeek-R1-Distilled-Qwen-14B using distributed reinforcement learning. Achieving 60.6% Pass@1 on LiveCodeBench, it rivals top-tier models like o3-mini-2025 in output quality — while operating at just 14B parameters, showcasing impressive efficiency in code reasoning.

8. Google Introduces Firebase Studio

Google announced Firebase Studio, a web-based IDE for building and deploying full-stack AI apps. It integrates tools like Project IDX, Genkit, and Gemini into a unified environment. Its standout feature, the App Prototyping Agent, allows users to create entire applications from natural language prompts or hand-drawn sketches.

9. OpenAI Will Soon Phase Out GPT-4 From ChatGPT

OpenAI will fully phase out GPT-4 from ChatGPT by April 30, replacing it with GPT-4o as the default model. While GPT-4 will no longer be available within ChatGPT, it will remain accessible through OpenAI's API offerings.

10. Measuring Human Leadership Skills with AI Agents

A new study found that managing AI agents effectively strongly correlates with managing people. Harvard research demonstrated that leadership-skill assessment via GPT-4o-simulated interactions strongly aligns (r = 0.81) with real-world human evaluations. Effective leaders exhibit conversational reciprocity, fluid intelligence, and nuanced social perception.

11. Kunlun Wanwei Open-Sources Skywork-OR1 Series Models

Kunlun Wanwei's Tiangong team released the Skywork-OR1 (Open Reasoner 1) series, featuring open-source models tailored for math, code, and reasoning tasks. The lineup includes Skywork-OR1-Math-7B, Skywork-OR1-7B-Preview for combined skills, and the powerful Skywork-OR1-32B-Preview for complex, general-purpose applications.

12. Google Announced Its New TPU v7 Ironwood

The chip will be available later this year and offers 4.8 PFLOP/s FP8 (Nvidia's Blackwell GB200 is 5.0 PFLOP/s FP8) and 192 GB of High Bandwidth Memory (equal to GB200). Easier pod-size scaling (up to 9,216 chips) is a key strength for TPUs, while Nvidia has its easy-to-use CUDA software and strong ecosystem.
Nvidia-versus-TPU competition aside, High Bandwidth Memory is scaling rapidly across all AI solutions now that we have sparse Mixture of Experts models and inference-time scaling laws (with high KV-cache needs). HBM costs now exceed TSMC manufacturing costs for these chips.

13. Google Improves Deep Research

Google upgraded its Deep Research capabilities by migrating to the Gemini Pro 2.5 model. It has different strengths, weaknesses, and focus versus OpenAI's Deep Research, but overall it was ranked more useful 70% to 30% in head-to-head comparisons by external users (Google's data).

Five 5-minute reads/videos to keep you learning

1. Are AI Agents Sustainable? It Depends

This article explores whether AI agents are more environmentally and computationally sustainable than other AI systems. It focuses on three key factors: the type of model, the modality used, and how system choices are made in real-world deployments.

2. Decoding Strategies in Large Language Models

This is a deep dive into how LLMs generate text, covering greedy search, beam search, and sampling strategies like top-k and nucleus sampling. The article includes GitHub and Colab links with working code for hands-on experimentation.

3. 5 Reasons Why Traditional Machine Learning is Alive and Well in the Age of LLMs

Despite the rise of LLMs, traditional machine learning still holds strong. This piece highlights five reasons why classical models continue to be crucial — especially for targeted, efficient solutions in specialized domains.

4. Strategic Planning with ChatGPT

This demo showcases how o1 pro mode can tackle complex business scenarios, such as crafting a step-by-step market entry strategy. It highlights the model's ability to break down competitors, analyze trends, and identify growth opportunities.

5. The Best Open-Source OCR Models

OmniAI evaluated several open-source vision-language models for OCR tasks, including Qwen 2.5 VL, Gemma-3, Mistral-ocr, and Llama 4. Qwen 2.5 VL (72B) achieved the highest accuracy at 75%, outperforming the other models. The benchmark assessed models on JSON extraction accuracy, cost, and latency across 1,000 documents using open-source datasets and methodologies.

Repositories & Tools

AI Agents for Beginners contains 10 lessons on how to get started building AI agents.

vLLM is a fast and easy-to-use library for LLM inference and serving.

Wan 2.1 is an open suite of video foundation models for video generation.

Debug Gym is a text-based interactive debugging framework designed for debugging Python programs.

Top Papers of The Week

1. Inference-Time Scaling for Generalist Reward Modeling

This paper introduces Self-Principled Critique Tuning (SPCT) to enhance inference-time scalability in reward modeling for general queries. Using pointwise generative reward modeling, SPCT improves quality and scalability, outperforming existing approaches. The method employs parallel sampling and a meta-reward model for better performance.

2. Self-Steering Language Models

Researchers introduce DisCIPL, a self-steering framework in which a Planner model generates recursive inference programs that are executed by Follower models. This decouples planning from execution, enabling efficient, verifiable reasoning. DisCIPL matches or outperforms larger models like GPT-4o on constrained generation tasks without fine-tuning.
3. Rec-R1: Bridging Generative Large Language Models and User-Centric Recommendation Systems via Reinforcement Learning

The paper presents Rec-R1, a reinforcement learning framework that directly optimizes LLM outputs using feedback from fixed black-box recommendation models. This approach outperforms prompting and SFT methods in product search and sequential recommendation tasks. Rec-R1 maintains the general-purpose capabilities of LLMs while enhancing recommendation performance.

4. OLMoTrace: Tracing Language Model Outputs Back to Trillions of Training Tokens

This paper introduces OLMoTrace, the first system capable of tracing LLM outputs back to their multi-trillion-token training data in real time. By identifying verbatim matches between model outputs and training documents, it aids in understanding model behavior, including fact-checking and detecting hallucinations. The open-source tool operates efficiently, returning results within seconds.

5. Towards Accurate Differential Diagnosis With Large Language Models

This study evaluates an LLM optimized for diagnostic reasoning, assessing its ability to generate differential diagnoses (DDx) independently and as a clinician aid. In trials with 20 clinicians on 302 challenging cases, the LLM outperformed unassisted clinicians and improved DDx quality when used as an assistive tool. The findings suggest the LLM's potential to enhance diagnostic accuracy and support clinical decision-making.

Quick Links

1. Amazon introduced Nova Sonic, a new foundation model capable of natively processing voice and generating natural-sounding speech. Amazon claims Sonic's performance is competitive with frontier voice models from OpenAI and Google on benchmarks measuring speed, speech recognition, and conversational quality.

2. Eleven Labs has rolled out a brand-new version of its Professional Voice Clone (PVC) creation flow that makes creating a faithful clone of your voice much easier. Users can upload their recordings, trim clips, remove background noise, and pick up where they left off anytime.

Who's Hiring in AI

Senior Machine Learning Scientist (Generative AI) — Viator @Tripadvisor (London, England)

A.I. Engineering Intern @Sezzle (Colombia/Remote)

Intern AI Systems Analyst @Acubed (USA/Remote)

Software Engineer @Isomorphic Labs (Switzerland/UK)

Developer Relations Engineer, AI Community Manager @Tenstorrent Inc. (Hybrid/Multiple US Locations)

Software Engineering Intern, Server @Strava (Denver, CO, USA)
  • WWW.IGN.COM
    The Fellowship of The Ring: Trick-Taking Game Review
Two things that seem to have been eternally popular since their inception are The Lord of the Rings and cooperative card games. Now, Tolkien's legions of fans can enjoy both at once by playing their way through the first book of his trilogy, working together, in The Fellowship of the Ring: Trick-Taking Game. The traditional set of playing-card rules used in trick-taking has a lot of weight to carry on its narrative backbone to support Tolkien's storytelling, but this sturdy little box tries its best to bear the burden.

What's in the Box

While The Fellowship of the Ring: Trick-Taking Game comes in a small box, as befits a card game, it has immediate appeal with its stained-glass-style art and shiny, gilded box-front ring. Opening the trove reveals more treasures: the box is divided into three compartments, each with a chapter ribbon, two of which start out sealed while the third contains the cards and counters you'll need for your initial plays. The cards themselves are a delight, featuring a rich art style reminiscent of stained glass that doesn't feel like an immediate fit for Tolkien's universe, but which grew on me over time. This combination gives the game its own distinctive style, while still managing to conjure beloved characters from the LotR novel.

Rules and How It Plays

This is a trick-taking game, so it riffs on classic playing-card folk games like Whist and Bridge. For those unfamiliar, this means the first player plays a card, and following players have to play a card of the same suit if possible, with the highest-value card of the initial suit winning the hand. Rather than the familiar suits of a standard playing card deck, these cards are divided into forest, hill, mountain, and shadow, which run from one to eight, and rings, which run from one to five. Many games have a trump suit that beats the initial suit if played, but here there's only a single trump card, which, appropriately enough, is the one of rings.

The other major departure from the trick-taking formula is that this game is cooperative, so you're working together to achieve a set of goals rather than trying to beat the other players. The game is broken down into chapters, which reflect important sequences from Tolkien's masterwork, and each player takes the role of a character from that chapter, who has their own goals. Frodo is almost always one of the characters, his goal is always to win ring cards, and whoever is dealt the one of rings has to play as him. Other players get to choose their characters from the selection available for the chapter. As you go through the game you'll encounter other members of the fellowship alongside more minor characters from the book, like Farmer Maggot.

All the character powers and goals have a vague connection to the source material, but given the abstract nature of translating an adventure narrative into a trick-taking game, these are often pretty tenuous. Gildor the elf, for example, shows his elvishness by having to play a forest card in the final trick of the game, while Pippin, whose card is delightfully subtitled "fool", has to win the fewest tricks.
But for many other characters, such as Gandalf and Bilbo, the goal is wholly divorced from their role in the story, often equating to winning a particular number of tricks.

Initially, working together to win particular tricks for particular players can feel odd, especially if you're used to the rhythm of traditional, competitive trick-taking games. There's also a rule forbidding players from talking about what's in their hands – the game would be too easy otherwise – which might take a bit of getting used to. But after a few tries you should be able to establish the basic tactics needed, and the game will begin to unfold. It's a nice balance of strategy and luck: there are occasions where the deal will just not give players the cards required, but the ability to choose your character, and the ability many characters have to exchange cards with others, gives you extra levers to increase your chance of success.

Just as you think you've gotten comfortable with the way the game works, it throws you a curveball by adding in new rule concepts and character goals. There are eighteen chapters in total, and the game keeps coming up with creative and surprising ways to modify its mechanics to keep you on your toes. Many of them manage a better tie-in with the story than the character cards. It would be a shame to spoil too many, but the barrow downs chapter, for example, recreates the omnipresent fog of that dreadful place through the simple expedient of removing a slew of random cards from the deck to confuse things. Other villains that put in an appearance include Old Man Willow, the Ringwraiths, and the Balrog.

Veteran gamers may, by this point, have realized that The Fellowship of the Ring: Trick-Taking Game shares quite a lot of DNA with another cooperative trick-taking game, the excellent The Crew: Mission Deep Sea, a perennial in our list of the best board games for families. And indeed the flow and feel of both games are broadly similar, with trick-taking adapted into a group goal by giving each player certain objectives in the tricks that they win. However, the Tolkien adaptation has several slender advantages over its older relation.

Most notably, while the theming of the game might be weak, the fact that the story is so familiar – indeed, the fact that there's a story at all – gives the game a better sense of progression than The Crew's vague march through difficulty levels. It's still a slow climb through various challenges, of course, but the familiarity of the tale and the lovely artwork make that progress come alive in a way that The Crew just can't manage. There are some little mechanical flourishes, too: the single trump of the one of rings is more interesting than the standard trump suit in The Crew, and the objectives are more varied and thematic.

Surprisingly for a trick-taking game there's also a solo mode and, even more surprisingly, it works pretty well. You play four characters at once, but you only start with about half the cards dealt, with replacements coming at random off the deck as you choose which cards to play.
This is an effective stand-in for the uncertainty of not knowing what's in other players' hands, and even when you know what cards are available, trying to coordinate your characters' goals across four different hands at once is a stiff challenge.
  • 9TO5MAC.COM
    watchOS 12 will offer Apple Intelligence with a unique twist: report
We just had the most feature-packed Apple software year I can remember. But per rumors ahead of WWDC, 2025 is shaping up to be quite a banner year for upgrades too, including watchOS 12 adding Apple Intelligence features.

Apple Intelligence in watchOS 12 will rely on iPhone-run models

Mark Gurman recently provided several updates on Apple's upcoming software releases. This included the welcome news that iPadOS 19 will push the iPad to be "more like a Mac." Gurman also mentioned watchOS 12, offering the first noteworthy details we've heard about the next big Apple Watch update. He wrote that the company is branding a new set of features as "powered by Apple Intelligence," even though the device isn't actually running the AI models directly.

It seems Apple plans to provide Apple Intelligence features the same way it has offered certain Watch features in the past: by offloading some processing to the iPhone. When the Apple Watch first launched, it was heavily reliant on the iPhone for much of its functionality, including the basic task of running third-party apps. Over time it has become less and less dependent, but it sounds like watchOS 12 will reverse that trend just a bit.

The approach makes sense considering the Apple Watch's limitations. In fact, you could argue that Apple is already doing this in watchOS 11. Two of the iPhone's current Apple Intelligence features also work on the Apple Watch: notification summaries and the Reduce Interruptions Focus are technically powered by an AI-capable iPhone. But if you're using your Apple Watch, there's no real indication that watchOS 11 isn't doing the AI work itself.

Gurman doesn't share which specific features are coming, but one way or another, watchOS 12 should make Apple Intelligence an "officially supported" feature on the Apple Watch. As for other watchOS 12 features, the report only says that some of iOS 19's new interface elements will debut on the Watch too, but no major overhaul is coming.

Which AI features would you want to see in watchOS 12? Let us know in the comments.
  • FUTURISM.COM
    Horrifying Results as Man Exercises Only One Single Muscle
Image credit: TheCrookedMon via TikTok

In a grotesque twist on the online trend of "looksmaxxing," a term referring to male incels looking to maximize their own physical attractiveness, a TikTok user has spent over 160 days working out just a single trapezius muscle.

In daily update videos, the college student, who goes by the fitting online handle TheCrookedMon, showed off the results of his unorthodox "looks minimizing" strategy: a massively imbalanced shoulder muscle diagonally — and unsettlingly — stretching from the left side of his head to his left shoulder.

Why? Besides a modicum of internet clout, we're still not entirely sure.

To get his very particular gains, TheCrookedMon did daily shrugs on his left side only, using a variety of objects, including a dog, a backpack, and a pack of drinks at the grocery store.

Instead of cheering him on, commenters appeared largely concerned for his health. "Bro you proved your point," one user wrote, with a crying emoji. "Bro stop please," another comment reads. Others were more impressed: "This is literally the biggest flex I've ever seen," one user wrote.

The trend became so big that it spawned an entire memecoin, dubbed $TRAPMAN. According to DexScreener, the coin spiked to a value of $0.00003649 — yes, you read that decimal point correctly — when it launched on April 13. Unsurprisingly, the dubious asset crashed almost immediately, wiping out most of its value.

The jury is still out on whether TheCrookedMon could be damaging his body, but chances are he'll be fine in the long run. Researchers have found that primarily focusing on training just one side of the body could still have plenty of benefits for the other side. It's a particularly pertinent reality for those recovering from an injury or missing part of their body. Research has shown that training just one side could even build muscle mass in an unused, injured limb through the triggering of neural pathways, a phenomenon referred to as the "cross-training effect" or "cross-education." However, that doesn't mean we condone the kind of eyebrow-raising workout regimen TheCrookedMon chose for himself — there are far better ways to design a unilateral training plan.

In a tongue-in-cheek April 2 video, he attempted to explain why he intentionally sacrificed his looks. "I was scrolling TikToks in my Ferrari, and I kept getting these looksmaxxing TikToks," he said. "And they were like, 'Do this, do that, you'll look more attractive, you'll get more women.'"

"And it's like, people have that problem?" he added. "I've the opposite problem. I get so many DMs I don't even have time to get through them all."

"If I have the opposite problem, then I need the opposite solution," TheCrookedMon argued.

Worse things are on the horizon: over the past month, TheCrookedMon has started working out just his right leg.
  • THEHACKERNEWS.COM
    Malicious PyPI Package Targets MEXC Trading API to Steal Credentials and Redirect Orders
    Apr 15, 2025Ravie LakshmananSupply Chain Attack / Malware Cybersecurity researchers have disclosed a malicious package uploaded to the Python Package Index (PyPI) repository that's designed to reroute trading orders placed on the MEXC cryptocurrency exchange to a malicious server and steal tokens. The package, ccxt-mexc-futures, purports to be an extension built on top of a popular Python library named ccxt (short for CryptoCurrency eXchange Trading), which is used to connect and trade with several cryptocurrency exchanges and facilitate payment processing services. The malicious package is no longer available on PyPI, but statistics on pepy.tech shows that it has been downloaded at least 1,065 times. "The authors of the malicious ccxt-mexc-futures package, claim in its README file that it extends the CCXT package to support 'futures' trade on MEXC," JFrog researcher Guy Korolevski said in a report shared with The Hacker News. However, a deeper examination of the library has revealed that it specifically overrides two APIs associated with the MEXC interface -- contract_private_post_order_submit and contract_private_post_order_cancel -- and introduces a new one named spot4_private_post_order_place. In doing so, the idea is to trick developers into calling these API endpoints to create, cancel, or place a trading order on the MEXC exchange and stealthily perform malicious actions in the background. The malicious modifications particularly target three different MEXC-related functions present in the original ccxt library, viz. ֵdescribe, sign, and prepare_request_headers. This makes it possible to execute arbitrary code on the local machine on which the package is installed, effectively retrieving a JSON payload from a bogus domain impersonating MEXC ("v3.mexc.workers[.]dev"), which contains a configuration to direct the overridden APIs to a malicious third-party platform ("greentreeone[.]com") as opposed to the actual MEXC website. "The package creates entries in the API for MEXC integration, using an API that directs requests to the domain greentreeone[.]com, and not the MEXC site mexc.com," Korolevski said. "All requests are redirected to the domain set up by the attackers, allowing them to hijack all of the victim's crypto tokens and sensitive information transferred in the request, including API keys and secrets." What's more, the fraudulent package is engineered to send the MEXC API key and secret key to the attacker-controlled domain whenever a request is sent to create, cancel, or place an order. Users who have installed ccxt-mexc-futures are recommended to revoke any potentially compromised tokens and remove the package with immediate effect. The development comes as Socket revealed that threat actors are making use of counterfeit packages across npm, PyPI, Go, and Maven ecosystems to launch a reverse shell to maintain persistence and exfiltrate data. "Unsuspecting developers or organizations might inadvertently be including vulnerabilities or malicious dependencies in their code base, which could allow for sensitive data or system sabotage if undetected," the software supply chain security company said. It also follows new research that delves into how large language models (LLMs) powering generative artificial intelligence (AI) tools could endanger the software supply chain by hallucinating non-existent packages and recommending them to developers. 
The supply chain threat comes into play when malicious actors register and publish malware-laced packages with the hallucinated names to open-source repositories, infecting developer systems in the process – a technique referred to as slopsquatting. The academic study found that "the average percentage of hallucinated packages is at least 5.2% for commercial models and 21.7% for open-source models, including a staggering 205,474 unique examples of hallucinated package names, further underscoring the severity and pervasiveness of this threat."
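Following the remediation advice above, here is a minimal sketch (standard library only) for checking whether the package is present in a Python environment; uninstalling it and rotating MEXC API keys still have to be done separately:

```python
from importlib.metadata import version, PackageNotFoundError

try:
    # Query the installed-package metadata for the malicious distribution name.
    v = version("ccxt-mexc-futures")
    print(f"WARNING: ccxt-mexc-futures {v} is installed.")
    print("Uninstall it (pip uninstall ccxt-mexc-futures) and rotate your MEXC API keys.")
except PackageNotFoundError:
    print("ccxt-mexc-futures is not installed in this environment.")
```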
  • WEWORKREMOTELY.COM
    College Essay Guy: Customer Support & Operations Manager (Full-Time, Remote)
Join the team that's changing the way students apply to college.

→ About This Role:

Are you the kind of person who loves solving puzzles, creating order from chaos, and making people feel genuinely cared for—even over email? Do you want your work to support something meaningful, like helping students find their voices and futures? If so, read on.

We're seeking an energetic, proactive, and detail-oriented rockstar (yes, rockstar) who's excited to take ownership of our customer support and experience systems to make them feel personal, seamless, and genuinely supportive.

You'll be a key part of leveling up our operations and customer support department. You'll have a direct impact on students' lives by ensuring exceptional experiences and smooth operational excellence, and you'll be the first touchpoint for students celebrating an exciting admissions decision or scholarship with us. Bring your creativity, organization, and energy to help us continuously evolve our customer experience and operations systems.

If you love solving problems, thrive in a fast-moving, mission-driven environment, and get a little giddy about organized systems and happy customers, this might just be your dream role.

This fully remote position operates Monday through Friday, 8:00 AM – 5:00 PM (your local U.S.-based timezone).

→ Be the Heart of Customer Experience

- Act as a primary touchpoint for students, families, and internal stakeholders by managing and resolving 40-50 customer support inquiries daily (primarily via email; higher during peak seasons).
- Proactively enhance our customer experience processes, workflows, and resources, anticipating problems before they exist and continuously improving the customer experience.
- Update and maintain our Customer Experience Knowledge Base and FAQs to empower self-service options.
- Collaborate with the Online Operations team, supporting the customer-facing aspects of live courses, webinars, and events.
- Take initiative to identify and implement strategies that elevate our customer experience capabilities.

→ Live Events Support

- Support our operations team in creating compelling course and webinar materials and resources.
- Assist in auditing and organizing live event calendars, ensuring timely communications.
- Schedule email reminders and conduct thoughtful follow-ups after sessions, helping families and counselors stay engaged and informed.

→ Boost Our Content & Internal Ops

- Assist with publishing and maintaining key blog content, which helps with our educational mission of reaching more users.
- Distribute materials internally and externally to help the right info reach the right people.

→ What You Bring:

- Customer Service & Ops Chops: 2+ years in customer experience/support or ops (bonus points if in an online ed or edtech setting).
- Communication & Problem-Solving Superpowers: You troubleshoot fast, write clearly, and document SOPs and solutions for self-service.
- Project Management & Precision: You juggle multiple tasks, hit deadlines, are detail-oriented, and don't sweat the small stuff.
- Tool Fluency (or Fast Learning): HelpScout (or a similar customer support ticketing system); Google Workspace (Google Docs, Sheets, Calendar, Drive); Slack, basic CRMs (like HubSpot), and LMS platforms (like Kajabi).
- Bonus Points If You Know (or Are Excited to Learn): Zapier for automating tasks and processes; Airtable for light data tracking and workflows; HubSpot for CRM and email scheduling; Kajabi for course and event hosting; Canva or Photoshop for basic visual content creation.

→ The Details: Salary, Benefits, Start Date

- This is a salaried position, with a range of $50k - $65k, depending on experience.
- You'll work roughly 8am-5pm U.S. time, but the work schedule is flexible.
- You'll enjoy a suite of benefits, including generous and flexible paid and sick time off with 8 company holidays, a health insurance stipend, 401K contribution matching upon eligibility, and more.
- We're aiming for an early May start date, but we're flexible for the right person.

Interested? Please follow these steps.

We believe in fostering an inclusive and unbiased hiring environment where talent shines irrespective of demographics. To ensure fair evaluation, we kindly request all applicants to remove their names from their application documents before submission. By anonymizing application documents, we aim to focus solely on skills, experience, and qualifications, enabling us to select candidates based on merit alone. This approach helps eliminate unconscious bias and promotes equal opportunities for all applicants. We value diversity in our workforce and encourage individuals from all backgrounds to apply. Our commitment to fair recruitment practices ensures that every candidate receives equal consideration based on their abilities and potential to contribute to our team. Thank you for joining us in our efforts to build a more inclusive workplace.

→ How to Apply

We are asking interested candidates to:

- Submit their resume and a cover letter detailing their qualifications and experience relevant to this role, as well as how they align with College Essay Guy's mission and values. Applications without a cover letter will not be considered.
- Anonymize their application documents and upload them directly to the linked Google form to apply. You may also apply on collegeessayguy.com/careers.

Questions? Please review the full description above. If your question isn't answered there, please email (no phone calls, please) Ashley at [email protected]. Emailed applications submitted outside of our application portal will not be considered. We are humans reviewing your application, so we appreciate your patience while we review all submitted applications. We will follow up with everyone who applies with an answer by May 31st, 2025.

College Essay Guy is an equal opportunity employer that seeks to hire those who represent the diverse communities we serve. All are encouraged to apply. We are a company of humans; our differences are our strengths. We bring all of ourselves to our jobs. That's what gives us our strength.
  • WWW.CNET.COM
    Clair Obscur, Call of Duty and More Are Coming to Xbox Game Pass Soon
    Subscribers can also go fishing for Lovecraftian horrors in Dredge.
  • WWW.SCIENTIFICAMERICAN.COM
    Harvard's Stand Against Trump Interference Cheered by Scientists, Despite Risk to Research
April 15, 2025 | 2 min read

Scientists Rally behind Harvard's Stand against Trump Interference despite Risk to Research

The Trump administration has frozen billions in funding to the world's richest university after Harvard refused to acquiesce to its demands.

By Tanya Lewis, edited by Dean Visser

Harvard University. Adam Glanzman/Bloomberg via Getty Images

After Harvard University pushed back yesterday against the Trump administration's attempts to force the school to comply with sweeping political demands, the White House announced it would freeze more than $2.2 billion in Harvard's funding—and threatened the university's tax-exempt status.

What Happened at Harvard

The Trump administration sent a letter to Harvard on Friday that accused the university of what it claimed were civil rights violations and antisemitism and demanded unprecedented changes to the institution's hiring and admissions practices. In response, lawyers representing Harvard sent a letter to Trump administration lawyers in which they said the university refused to comply with the demands, describing them as going "beyond the lawful authority of this or any administration."

The funding freeze jeopardizes vital research in public health and medicine, among other subjects, and Harvard-affiliated hospitals could bear the brunt of these effects. Nevertheless, Harvard's pushback has garnered widespread praise from faculty members, scientists, and public figures, including former president Barack Obama.

How Scientists Are Reacting

Some Harvard scientists expressed enthusiastic support for the institution's response. "I've been waiting for a major university to take a stand like this. I am thrilled that mine did," wrote Jeremy Faust, an assistant professor at Harvard Medical School and an emergency physician at Brigham and Women's Hospital, in his newsletter Inside Medicine.

When Scientific American reached out to Faust for comment, he said he couldn't speak for Harvard. "What I can say on behalf of myself is that I had been seriously considering submitting my first federal grant proposals until November [2024]. After the election, I abandoned those plans," he says. "Any thought of soldiering on was quickly put aside in January and February, when we witnessed the unprecedented viewpoint-driven censorship across [the Department of Health and Human Services], with certain disfavored views being literally punished with the sudden pulling of promised grant funding along ideological lines."

Other Harvard scientists also praised the university's recent actions. "I certainly don't speak for the community, but I suspect that a lot of my colleagues are quite excited that Harvard has a spine," said George Church, a professor of genetics at Harvard Medical School who leads synthetic biology at the Wyss Institute, to STAT.

Scientists at other institutions chimed in with support on social media. "First time you'll hear this Yalie say: let's go Harvard. Seriously: Yale stand up and lead," wrote Gregg Gonsalves, an associate professor of epidemiology at the Yale School of Public Health, in a post on Bluesky.
Gonsalves was among about 900 faculty members who signed a letter to Yale’s president that called on the school to defend academic freedom.Stanford University’s president and provost also issued a statement in support of Harvard’s actions. “Harvard’s objections to the letter it received are rooted in the American tradition of liberty, a tradition essential to our country’s universities, and worth defending,” read the statement, which was published in the Stanford Daily. “America’s universities are a source of great national strength, creating knowledge and driving innovation and economic growth,” it continued. “This strength has been built on government investment but not government control.”
  • WWW.EUROGAMER.NET
    Is Assassin's Creed Shadows with PSSR on PS5 Pro a game changer? We tested it across all display modes
Plus: new improvements to the balanced mode too.

Image credit: Ubisoft. Feature by Oliver Mackenzie, with additional contributions by Alex Battaglia and Will Judd. Published on April 15, 2025.

Assassin's Creed Shadows has had its first major patch on consoles, bringing PSSR to the PS5 Pro version and adding RT reflections to the balanced graphics mode, two changes the developers suggested were on the way in our AC Shadows tech interview last month. So how does version 1.02 compare to the day one patch, and are there any other major changes to report?

In short, the new version brings in everything promised in that interview and more, with 1.02 hitting all platforms - but being perhaps most impactful on the PS5 Pro, the only console with access to Sony and AMD's PSSR upscaler. This can now be enabled via a toggle in the menu, and it delivers better stability, less obvious aliasing in stills and better sub-pixel detail at distance. Foliage is one of the most obvious places to look when turning the feature on and off, with the standard TAAU giving a grainy but relatively sharp look, while PSSR is a bit blurrier on average but also better anti-aliased. We prefer the PSSR look, but this is a matter of taste as much as anything. Here's Alex and Oliver discussing the new changes (video).

The only real compromise in terms of PSSR image quality comes with particles, which suffer from trails in PSSR where they don't tend to in TAAU. This is most obvious in the opening shot of the game, where there are fiery particles in the air, but you can see similar issues throughout. Otherwise, PSSR does hit image clarity somewhat, with a softer resolve overall - a change that's plain to see in side-by-side comparisons, especially ones that are magnified for analysis, but probably something that you just get used to in a normal gaming scenario from a typical viewing distance. By comparison, the image stability issues of TAAU are still visible at a typical viewing distance, so PSSR still feels like an upgrade overall. The image quality differentials are also minimised in quality mode, which tends to run at a higher resolution overall regardless of upscaler, and PSSR exhibits fewer issues with particle trails too.

Beyond the PSSR option, the game doesn't look too different if we compare the launch and 1.02 patches, with similar pixel counts, for example. However, enabling PSSR does reduce internal resolutions somewhat, likely because the GPU cost of the more advanced upscaling needs to be clawed back.

PS5 Pro (Performance)   Shot 1   Shot 2   Shot 3
Launch TAAU             1080p    1080p    1008p
1.02 TAAU               1080p    1080p    1008p
1.02 PSSR               864p     864p     864p

PS5 Pro (Quality)       Shot 1   Shot 2   Shot 3
Launch TAAU             1656p    1584p    1584p
1.02 TAAU               1656p    1584p    1656p
1.02 PSSR               1656p    1584p    1440p

In general, the naturalistic environments of AC Shadows suit this kind of more advanced upscaling to our eyes, trading a bit of image clarity for less break-up, but users will have the option to choose whichever upscaler they prefer. Of course, it's not a given that a reduction in image clarity perfectly counteracts the increased GPU demands of PSSR upscaling. And unfortunately, enabling PSSR does seem to come with a frame-rate penalty in performance mode, with mid-50s read-outs being quite common in busy city areas like Kyoto.
There are also issues when quickly moving the camera from less taxing to more taxing scenes: e.g. a static shot that runs at 57fps might shoot to 60fps when looking at the sky, but drop to 40fps briefly while dynamic resolution scaling engages, causing a visible spike in frame times. It's possible that the game's DRS is not as aggressive as it could be in these moments, and in general the relatively low internal resolution needs to be even lower to hit a stable 60.

One other major change is that Ubisoft disabled the 30fps lock in the hideout area, so it now runs at 60fps on PS5 Pro in performance mode with TAAU active, and a bit below that with PSSR. This is from a relatively early save, so it's possible that performance here dips as the area becomes more built up over the course of the game. So far, though, it seems like performance remains in the range of VRR-capable displays, both in the hideout and elsewhere in the game - and these sorts of screens are probably more common for PS5 Pro owners than they are for owners of less expensive consoles.

The balanced mode also now runs with RT reflections, a feature that Ubisoft previously felt it didn't have enough time to validate outside of quality mode, where it first appeared. This improves reflections on bodies of water, as you avoid SSR artefacts and instead get a more accurate fallback. It's not perfect, with some limitations in the way the world is represented in the reflection, but it's still a worthwhile inclusion. The balanced mode also gets PSSR, and it's shakier here than with TAAU, especially in demanding sections like the opening of the game or Kyoto. Again, with a VRR display you're OK, but without VRR the frame-rate dips are more noticeable and TAAU may be preferred. Still, the PSSR image tends to look better overall. The hideout also now runs at 40fps in the balanced mode, versus 30fps before, which is a nice change that works well.

RT reflections come to balanced mode with 1.02, providing a more realistic look in glossy surfaces like water. The game's hideout area also runs up to 40fps, though it doesn't always hit that figure. | Image credit: Digital Foundry

Weirdly, there seems to be an HDR issue with PSSR, with certain elements of the game's lighting exhibiting coloured halos that resemble a banding artefact. This has been acknowledged as a bug on Ubisoft's support website, so I presume it will be fixed in a future patch.

Overall then, this is a solid enough patch. I wish I could recommend the PSSR mode without caveats, but at the moment its lower performance profile means it runs best with VRR enabled. The balanced mode improvements are more wholeheartedly worth experiencing, and the unlocked frame-rate in the hideout is also a sensible change.
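For readers curious about the arithmetic behind those pixel counts, here is a small sketch of the per-axis upscale each mode asks of its upscaler, assuming a 2160p output target (the output resolution is an assumption; the analysis above reports only internal figures):

```python
# Internal render heights taken from the table above; 2160 is the assumed
# output height for a 4K display target.
modes = {
    "Performance, TAAU": 1080,
    "Performance, PSSR": 864,
    "Quality, TAAU (typical)": 1656,
    "Quality, PSSR (lowest)": 1440,
}
for mode, h in modes.items():
    scale = 2160 / h  # per-axis upscale factor
    # scale squared is the ratio of output pixels to internally rendered pixels.
    print(f"{mode}: {scale:.2f}x per axis, {scale ** 2:.1f}x in total pixels")
```

The 864p PSSR performance mode, for instance, works out to a 2.5x per-axis upscale, which helps explain both the softer resolve and why the reclaimed GPU time still isn't always enough for a locked 60fps.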