• GeForce Now to establish 100-hour playtime limit
    www.gamesindustry.biz
    The rule will come into effect in January 2025 for new members, and in 2026 for people with an active subscription as of December 31, 2024. News by Marie Dealessandri, Deputy Editor. Published on Nov. 8, 2024.

    Nvidia's subscription cloud gaming service GeForce Now will soon be enforcing a playtime limit of 100 hours per month. The new rule won't affect Founder memberships, but will impact the newly renamed Performance and Ultimate tiers, Nvidia announced on Reddit. The company said the limit "comfortably accommodates 94% of members."

    Unused playtime can be rolled over to the next month, within a 15-hour limit, but rollover hours can't accumulate across months: members can't have more than 115 hours available at the beginning of a month.

    Members will also be able to purchase additional hours. This will set Performance subscribers back $2.99 for an additional 15 hours, and cost Ultimate members $5.99 for the same number of hours. As a reminder, a Performance membership costs $9.99 a month, and an Ultimate one is priced at $19.99 monthly.

    Players who have an active membership as of December 31, 2024 won't see the change enforced on their account until January 2026. However, the rules will affect new members signing up from January 2025.
  • Pocketpair shares Nintendo, The Pokémon Company's demands from Palworld lawsuit
    www.gamesindustry.biz
    Japanese studio faces claims of infringing three patents, as well as a potential injunction and at least $65,000 in damages. News by James Batchelor, Editor-in-chief. Published on Nov. 8, 2024.

    Palworld developer Pocketpair has shared more details from the lawsuit it faces from both Nintendo and The Pokémon Company over alleged similarities between the latter's titular franchise and its own monster-based survival game.

    Nintendo and The Pokémon Company first filed a lawsuit against Pocketpair on September 19, eight months after Palworld launched, claiming the title infringed on multiple patent rights. Details of which patents were not shared at the time, but Pocketpair has now specified, via a post on its website, the three patents it is accused of infringing upon.

    The company also reported that Nintendo and The Pokémon Company are seeking an injunction against Palworld, as well as payment of at least ¥5 million ($33,683), plus late payment damages, to each company. Pocketpair intends to defend its game, stating: "We will continue to assert our position in this case through future legal proceedings."

    As previously speculated by the media and legal experts, including MBHB associate Andrew Velzen in an analysis for GamesIndustry.biz, the patents in question are:

    - JP7545191, which refers to a system for using capture items that can catch characters encountered in a virtual space
    - JP7493117, which refers to an aiming system for deploying such capture items
    - JP7528390, which refers to a system for rideable characters

    The Pocketpair post reports these were all applied for and registered after Palworld first launched on January 19, 2024, as was observed by Velzen in his article for GamesIndustry.biz. Velzen added that Nintendo has also applied for counterpart patents in the US, although some were filed as far back as September 2022.

    In his article, he noted that the outcome of this case could have significant repercussions for the video games industry. "In recent years, the video game industry has somewhat moved away from patents, especially for in-game features," he wrote. "If Nintendo is successful here, though, perhaps this paradigm could be in question."
  • Report: NetEase execs and employees arrested over alleged money laundering
    www.gamedeveloper.com
    Nine NetEase developers have reportedly been arrested, including a pair of senior executives, according to Bloomberg and Chinese outlet Leifeng.

    According to the latter's translated story, the employees laundered what's ultimately estimated to be between 800 million and 1 billion yuan ($111.4 million-$139.3 million). The studio is currently developing the free-to-play shooters Marvel Rivals and Destiny: Rising.

    Per Yicai Global, the two NetEase executives implicated are esports division head Xiang Liang and publishing head Jin Yuchen. The outlet further noted that 27 unnamed companies with alleged connections to the laundering scheme have also been blacklisted following these arrests. Leifeng also reported that several NetEase staff were responsible for purchasing traffic for "multiple [top NetEase] products." Other details have not been shared at this time.

    Recent white collar crimes in the game industry

    In 2022, several arrests were made over insider trading in the game industry. Yuji Naka, creator of Sonic the Hedgehog, was arrested over suspicions involving his time at Square Enix, where he was accused of spending 2.8 million yen to buy shares of Aiming before it and Square Enix were revealed to be teaming up on Dragon Quest Tact.

    Naka was arrested again weeks later over buying stock in Final Fantasy VII: The First Soldier developer Ateam, then indicted weeks after that. Separately from Naka, ex-Square Enix developers Fumiaki Suzuki and Taisuke Sasaki were accused of also buying stock in Aiming ahead of Dragon Quest Tact's reveal. Sasaki was similarly indicted along with Naka in relation to purchasing Ateam stock.

    Naka later admitted to the trades in 2023, and was sentenced to two-and-a-half years in prison and fined 171 million yen.
  • NVIDIA's GeForce Now service gets 100-hour 'monthly playtime allowance' in 2025
    www.gamedeveloper.com
    Justin Carter, Contributing Editor. November 8, 2024. 2 min read. Image via NVIDIA.

    At a Glance: NVIDIA GeForce Now users will soon only be able to play games on the service for a maximum of 100 hours per month.

    NVIDIA is putting a time limit on its GeForce Now cloud streaming service. On January 1, 2025, members of its paid Performance and Ultimate tiers will be given a 100-hour "monthly playtime allowance." After hitting the 100-hour cap, subscribers can buy 15 more hours for $3 (Performance) or $6 (Ultimate). If a subscriber has up to 15 hours of unused playtime, that time will be automatically rolled over to the following month.

    Subscription services for games often limit what (or how) players have access to through their membership tiers, as we've recently seen with Xbox Game Pass. NVIDIA restricting playtime is another matter entirely, particularly as other companies like Microsoft are trying to carve out a niche in the cloud game market.

    The cap is being set "to continue providing exceptional quality and speed, as well as shorter queue times, to members," NVIDIA explained on Reddit. It acknowledged this limit will only affect 6 percent of its subscribers; the other 94 percent "typically enjoy the service well within this timeframe," and will be "comfortably accommodated."

    The balance between features and pricing shifts over time with any streaming platform, leading to a price increase (again, like Game Pass) or a change to the benefits offered by a subscription tier, similar to Amazon Prime Video. In many cases, the root cause is a desire to cut down costs, which NVIDIA may be looking to do here.

    Can subscription services and games still coexist?

    Amid all of this, there have been discussions about whether subscription services are still viable (or will remain so). For Microsoft, getting more subscribers has been a core focus for years, hence its acquisition of big-name studios like Bethesda and Activision Blizzard and touting those teams' big titles, namely Call of Duty: Black Ops 6 and Indiana Jones & the Great Circle, as day-one Game Pass titles.

    In the former game's case, that gambit seems to have paid off: in its recent earnings report, Microsoft called the military shooter "the biggest Call of Duty release ever, setting a record for day one players, as well as Game Pass subscriber adds on launch day."

    However, not every game can be a Black Ops 6-level hit. Several titles that launched on a subscription service have stumbled out of the gate (see Foamstars) or started strong before fizzling out. Add on the fact that every company is trying to get you to sign up for their service, and the ease of access such services provide can be easily drowned out by growing complications.
  • The Beatles' final song, restored using AI, is up for a Grammy
    www.theverge.com
    The Beatles have been nominated for two Grammys more than 50 years after the band officially split up. Their final song, called "Now and Then," was restored last year with the help of AI, and is now up for record of the year alongside the likes of Beyoncé, Charli XCX, Billie Eilish, and Taylor Swift. It's also been nominated for best rock performance, where it goes up against Green Day, Pearl Jam, and The Black Keys.

    Released in November 2023, "Now and Then" started as a demo recorded by John Lennon in the late 1970s. This recording, as well as "Free As A Bird" and "Real Love," was given to Lennon's three surviving bandmates in the '90s, with the hopes of including it in The Beatles Anthology project. However, "Now and Then" was never released, as technology at the time couldn't separate John's vocals and piano to get a clear sound. But in 2021, filmmaker Peter Jackson and his sound team were able to separate the instrumentals and vocals with machine learning technology, allowing Paul McCartney and Ringo Starr to finally complete the song.

    Though "Now and Then" was finished using machine learning, it still falls within the bounds of the Grammys' rules surrounding AI. The guidelines currently state that "only human creators" are eligible to be submitted for consideration for, nominated for, or win a GRAMMY Award, but work that contains "elements of AI material" is eligible in applicable categories.

    It's a bit strange to see "Now and Then" competing with modern-day music like Beyoncé's "Texas Hold 'Em," but it's been a long time coming. We'll get to see how the Beatles fare during the 2025 Grammy Awards, which take place on Sunday, February 2nd.
  • YouTube Premium's legacy price breaks are going away for more users
    www.theverge.com
    YouTube's premium subscriptions are about to get more expensive for long-time subscribers with legacy plans in more places. In December, YouTube told US subscribers with legacy YouTube Premium plans (stemming from the discontinued services Google Play Music or YouTube Red) they'd need to start paying the current $13.99 per month price in the new year. YouTube Music users in Europe have posted emails they received announcing a price increase for them, too, and just like in the US, some report getting three more months at the current price before the hike.

    In an email to The Verge, YouTube communications manager Paul Pennington confirmed prices are increasing both for YouTube Premium, which removes ads on streaming videos and includes access to the music service, and for the standalone YouTube Music plans:

    "We're updating the price for YouTube Premium and YouTube Music Premium for new and current subscribers in Bulgaria, Costa Rica, Dominican Republic, Ecuador, Estonia, Spain, Finland, Greece, Guatemala, Honduras, Kuwait, Lithuania, Luxembourg, Latvia, Puerto Rico, Portugal, Romania, Slovakia, Uruguay, and Turkey. Members who signed up originally via Google Play and received early adopter pricing will get three additional months at their current price."

    The initial Reddit poster said they were on the legacy plan from a Google Play Music subscription that started before YouTube Music launched (as YouTube Music Key in 2014, with a $7.99 monthly rate in the US), leading to their eventual merger and the shutdown of Google Play Music. Now, their monthly rate as a subscriber in Spain is going up from €7.99 to €10.99, which is still less than the €12.99 rate for new subscribers to the individual music subscription.
  • Modo Is Being Shut Down
    gamefromscratch.com
    Modo Is Being Shut Down / News / November 8, 2024 / Graphics

    Foundry, the makers of computer graphics software such as the NUKE family, MARI, KATANA and more, have announced they are shutting down Modo with the upcoming release of Modo 17.1. Modo is a long-running 3D modelling application first created in 2001 by several senior LightWave developers. Foundry made the announcement on Thursday. Further details of the Modo shutdown:

    "London, November 7, 2024 - Foundry, a leading developer of creative software for the media and entertainment industries, today announced its decision to wind down development of 3D modeling tool, Modo, after the release of Modo 17.1 later this year. This strategic decision will allow Foundry to focus on its core offerings and invest in new solutions that meet the evolving needs of the media and entertainment community. Active customers will continue to receive support until the end of their current contract, and can obtain an extended 10-year license so that they can continue to use Modo in the future."

    FAQ

    What support will be provided for customers? Customers on maintenance or subscription will receive support until their current contract term expires. We will continue to investigate and provide solutions and workarounds for issues, however we do not anticipate any further product releases (either feature releases or maintenance releases).

    Will Modo still work on future operating system updates after wind-down? Modo may continue to function on future operating systems, but since we will not be issuing patches or updates to address potential conflicts, we cannot guarantee compatibility with future operating system updates. We therefore recommend that customers migrate to alternative 3D workflows as soon as possible.

    Will Modo downloads, documentation, forums, and support channels continue to operate?

    Can a Modo license be transferred to another machine? Yes. All customers on active maintenance or subscription will have received an email explaining how to obtain an extended license which can be moved between machines as needed. Inactive customers, or those who did not receive this email, can contact [email protected] to request a license transfer.

    Key Links: Modo Closure Announcement · Modo Homepage

    You can learn more about the shutdown of Modo in the video below.
  • Databricks Mosaic Research Examines Long-Context Retrieval-Augmented Generation: How Leading AI Models Handle Expansive Information for Improved Response Accuracy
    www.marktechpost.com
    Retrieval-augmented generation (RAG) represents a great advancement in the capability of large language models (LLMs) to perform tasks accurately by incorporating relevant external information into their processing workflows. This approach, blending information retrieval techniques with generative modeling, has seen growing utility in complex applications such as machine translation, question answering, and comprehensive content generation. By embedding documents into LLMs' contexts, RAG enables models to access and utilize more extensive and nuanced data sources, effectively expanding the models' capacity to handle specialized queries. This technique has proven especially valuable in industries that require precise and informed responses, offering transformative potential for fields where accuracy and specificity are paramount.

    A major challenge facing the development of large language models is the effective management of vast contextual information. As LLMs grow more powerful, so does the demand for their ability to synthesize large volumes of data without losing the quality of their responses. However, incorporating extensive external information often results in performance degradation, as the model may struggle to retain critical information across long contexts. This issue is compounded in retrieval scenarios, where models must pull from expansive information databases and integrate the results cohesively to generate meaningful output. Consequently, optimizing LLMs for longer context lengths is a crucial research goal, particularly as applications increasingly rely on high-volume, data-rich interactions.

    Most conventional RAG approaches embed documents in vector databases to facilitate efficient, similarity-based retrieval. This process typically involves breaking down documents into retrievable chunks that can be matched to a user's query based on relevance. While this method has proven useful for short-to-moderate context lengths, many open-source models experience a decline in accuracy as context size increases. While some more advanced models exhibit promising accuracy with up to 32,000 tokens, limitations remain in harnessing even greater context lengths to consistently enhance performance, suggesting a need for more sophisticated approaches.

    The research team from Databricks Mosaic Research undertook a comprehensive evaluation of RAG performance across an array of both open-source and commercial LLMs, including well-regarded models such as OpenAI's GPT-4, Anthropic's Claude 3.5, and Google's Gemini 1.5. This evaluation tested the impact of increasing context lengths, ranging from 2,000 tokens up to an unprecedented 2 million tokens, to assess how well various models could maintain accuracy when handling extensive contextual information. By varying context lengths across 20 prominent LLMs, the researchers aimed to identify which models demonstrate superior performance in long-context scenarios, making them better suited for applications requiring large-scale data synthesis.

    The research employed a consistent methodology across all models, embedding document chunks using OpenAI's text-embedding-3-large model and then storing these chunks in a vector store. The study's tests were conducted on three specialized datasets: Databricks DocsQA, FinanceBench, and Natural Questions, each chosen for its relevance to real-world RAG applications. In the generation stage, these embedded chunks were provided to a range of generative models, where performance was gauged based on the models' ability to produce accurate responses to user queries by integrating retrieved information from the context. This approach compared each model's capacity to handle information-rich scenarios effectively.
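    For readers who want to see the shape of such a pipeline, below is a minimal sketch of the embed-retrieve-generate loop the study describes, using the OpenAI Python client. The chunk texts, the in-memory cosine-similarity "vector store", the top_k value, the prompt template, and the generation model are illustrative assumptions, not the researchers' actual harness; only the embedding model name comes from the article.

```python
# Minimal RAG sketch: embed chunks, retrieve by cosine similarity, generate.
# Assumptions: OpenAI Python client; chunks, top_k, prompt, and generation
# model are illustrative, not the study's actual configuration.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-large", input=texts)
    return np.array([d.embedding for d in resp.data])

documents = ["Databricks DocsQA covers product documentation.",
             "FinanceBench contains questions over financial filings."]
doc_vecs = embed(documents)  # the "vector store": an in-memory matrix

def retrieve(query: str, top_k: int = 1) -> list[str]:
    q = embed([query])[0]
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    return [documents[i] for i in np.argsort(-sims)[:top_k]]

def generate(query: str) -> str:
    context = "\n".join(retrieve(query))
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": f"Answer using this context:\n{context}\n\nQ: {query}"}])
    return resp.choices[0].message.content

print(generate("What does FinanceBench contain?"))
```

    In a setup like this, "increasing context length" simply means raising top_k (or the chunk size) so that more retrieved text lands in the prompt, which is the variable the study sweeps from 2,000 tokens up to 2 million.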
    The results showed notable variance in performance across the models. Not all benefited equally from expanded context lengths, as extending context did not consistently improve RAG accuracy. The research found that models such as OpenAI's o1-mini and o1-preview, GPT-4o, Claude 3.5 Sonnet, and Gemini 1.5 Pro showed steady improvements, sustaining high accuracy levels even up to 100,000 tokens. However, other models, particularly open-source options like Qwen 2 (70B) and Llama 3.1 (405B), displayed performance degradation beyond the 32,000-token mark. Only a few of the latest commercial models demonstrated consistent long-context capabilities, revealing that while extending context can enhance RAG performance, many models still face substantial limitations beyond certain token thresholds. Of particular interest, Google's Gemini 1.5 Pro model maintained accuracy at extremely long contexts, handling up to 2 million tokens effectively, a remarkable feat not widely observed among other tested models.

    Analyzing the failure patterns of models in long-context scenarios provided additional insights. Some models, such as Claude 3 Sonnet, frequently refused to respond due to concerns around copyright compliance, especially as context lengths increased. Other models, including Gemini 1.5 Pro, encountered difficulties due to overly sensitive safety filters, resulting in repeated refusals to complete certain tasks. Open-source models also exhibited unique failure patterns; Llama 3.1, for example, demonstrated consistent failures in contexts above 64k tokens, often by providing irrelevant or random content. These results underscore that long-context models fail in various ways, largely dependent on context length and task demands, and suggest specific areas for future improvement.

    The study's key findings reveal the potential and limitations of using long-context LLMs for RAG applications. While certain state-of-the-art models, such as OpenAI's o1 and Google's Gemini 1.5 Pro, displayed consistent improvement in accuracy across long contexts, most models only demonstrated optimal performance within shorter ranges, around 16,000 to 32,000 tokens. The research team hypothesizes that advanced models like o1 benefit from increased test-time computation, allowing them to handle complex questions and avoid confusion from less relevant retrieved documents.
    The team's findings highlight the complexities of long-context RAG applications and provide valuable insights for researchers seeking to refine these techniques. Key takeaways from the research include:

    - Performance stability: Only a select group of commercial models, such as OpenAI's o1 and Google's Gemini 1.5 Pro, maintained consistent performance up to 100,000 tokens and beyond.
    - Performance decline in open-source models: Most open-source models, including Qwen 2 and Llama 3.1, experienced significant performance drops beyond 32,000 tokens.
    - Failure patterns: Models like Claude 3 Sonnet and Gemini 1.5 Pro failed in different ways, with issues like task refusals due to safety filters or copyright concerns.
    - High-cost challenges: Long-context RAG is cost-intensive, with processing costs ranging from $0.16 to $5 per query, depending on the model and context length.
    - Future research needs: The study suggests further research on context management, error handling, and cost mitigation in practical RAG applications.

    In conclusion, while extended context lengths present exciting possibilities for LLM-based retrieval, practical limitations persist. Advanced models like OpenAI's o1 and Google's Gemini 1.5 show promise, but broader applicability across diverse models and use cases requires continued refinement and targeted improvements. This research marks an essential step toward understanding the trade-offs and challenges inherent in scaling RAG systems for real-world applications.

    Check out the Paper. All credit for this research goes to the researchers of this project.
  • Google DeepMind Researchers Propose RT-Affordance: A Hierarchical Method that Uses Affordances as an Intermediate Representation for Policies
    www.marktechpost.com
    In recent years, there has been significant development in the field of large pre-trained models for learning robot policies. The term "policy representation" here refers to the different ways of interfacing with the decision-making mechanisms of robots, which can potentially facilitate generalization to new tasks and environments. Vision-language-action (VLA) models are pre-trained with large-scale robot data to integrate visual perception, language understanding, and action-based decision-making to guide robots in various tasks. Built on top of vision-language models (VLMs), they come with the promise of generalization to new objects, scenes, and tasks. However, VLAs still need to become more reliable before they can be deployed outside the narrow lab settings they are trained in. While these drawbacks can be mitigated by expanding the scope and diversity of robot datasets, doing so is highly resource-intensive and challenging to scale. In simple words, existing policy representations either provide too little context or an over-specified context that yields less robust policies.

    Language, goal images, and trajectory sketches are widely used representations, and each is helpful but imperfect. One of the most common policy representations is conditioning on language. Most robot datasets are labeled with underspecified descriptions of the task, and language-based guidance does not provide enough detail on how to perform the task. Goal-image-conditioned policies provide detailed spatial information about the final goal configuration of the scene. However, goal images are high-dimensional, which presents learning challenges due to over-specification issues. Intermediate representations such as trajectory sketches or key points attempt to provide spatial plans for guiding the robot's actions. While these spatial plans provide guidance, they still lack sufficient information for the policy on how to perform specific movements.

    A team of researchers from Google DeepMind conducted detailed research on policy representation for robots and proposed RT-Affordance, a hierarchical model that first creates an affordance plan given the task language, and then conditions the policy on this affordance plan to guide the robot's actions for manipulation. In robotics, affordance refers to the potential interactions that an object enables for a robot, based on its shape, size, etc. The RT-Affordance model can easily connect heterogeneous sources of supervision, including large web datasets and robot trajectories.

    First, the affordance plan is predicted from the given task language and the initial image of the task. This affordance plan is then combined with the language instructions to condition the policy for task execution. The plan is projected onto the image, and the policy is conditioned on images overlaid with the affordance plan. The model is co-trained on web datasets (the largest data source), robot trajectories, and a modest number of cheap-to-collect images labeled with affordances. This approach benefits from leveraging both robot trajectory data and extensive web datasets, allowing the model to generalize well across new objects, scenes, and tasks.
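    To make the hierarchy concrete, here is a structural sketch of the two-stage pipeline just described. The function names and bodies are hypothetical stand-ins for the paper's learned components (written as stubs so the snippet runs); only the overall flow (task language plus initial image, to affordance plan, to plan overlaid on the image, to a plan-conditioned policy action) follows the article.

```python
# Structural sketch of a hierarchical affordance-conditioned policy.
# All function bodies are illustrative stubs standing in for learned models;
# only the two-stage flow (plan, then plan-conditioned policy) follows RT-Affordance.
import numpy as np

def predict_affordance_plan(image: np.ndarray, instruction: str) -> list[tuple[int, int]]:
    """Stage 1: a VLM-style planner would map (initial image, task language)
    to an affordance plan, simplified here to 2D keypoints (e.g., grasp points)."""
    h, w, _ = image.shape
    return [(h // 2, w // 2)]  # placeholder: one keypoint at the image center

def overlay_plan(image: np.ndarray, plan: list[tuple[int, int]]) -> np.ndarray:
    """Project the affordance plan onto the image so the policy sees it as pixels."""
    out = image.copy()
    for y, x in plan:
        out[max(y - 2, 0):y + 3, max(x - 2, 0):x + 3] = [255, 0, 0]  # red marker
    return out

def affordance_conditioned_policy(image_with_plan: np.ndarray, instruction: str) -> np.ndarray:
    """Stage 2: the low-level policy consumes the overlaid image plus language
    and emits a robot action (a dummy 7-DoF end-effector command here)."""
    return np.zeros(7)

# One control step of the hierarchy:
frame = np.zeros((224, 224, 3), dtype=np.uint8)  # stand-in camera image
plan = predict_affordance_plan(frame, "pick up the kettle")
action = affordance_conditioned_policy(overlay_plan(frame, plan), "pick up the kettle")
print(plan, action.shape)
```

    Rendering the plan into the image, rather than feeding it as a separate vector, is what lets affordance-labeled web images supervise the same interface as robot trajectories.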
    The research team conducted various experiments that mainly focused on how affordances help to improve robotic grasping, especially for movements of household items with complex shapes (like kettles, dustpans, and pots). A detailed evaluation showed that RT-A remains robust across various out-of-distribution (OOD) scenarios, such as novel objects, camera angles, and backgrounds. The RT-A model performed better than RT-2 and its goal-conditioned variant, achieving success rates of 68%-76% compared to RT-2's 24%-28%. In tasks beyond grasping, like placing objects into containers, RT-A showed significant performance with a 70% success rate. However, the performance of RT-A dropped slightly when it faced entirely new objects.

    Check out the Paper. All credit for this research goes to the researchers of this project.
  • DSPy: Machine Learning Attitude Towards LLM Prompting
    towardsai.net
    November 8, 2024. Author(s): Serj Smorodinsky. Originally published on Towards AI.

    Transitioning from prompt string manipulation to a PyTorch-like framework.

    Link to the official tutorial. Full code at your one-stop LLM classification project. Here's a link to a short YouTube video with the code rundown.

    My goal is to showcase complex technologies through non-trivial use cases. This time I have chosen to focus on the DSPy framework. Its raison d'être (reason for being) is to abstract, encapsulate, and optimize the logic needed for getting useful outputs from LLMs. DSPy allows coders to specify inputs and outputs for an LLM task, and lets the framework deal with composing the best possible prompt.

    Why should you care?

    - You can brag about it during lunch
    - Improve code readability
    - Improve LLM task outputs

    This is the first part of a series, in which we will focus on an implementation of an LLM-based classifier. In the next instalment we go deeper with actual optimization.

    What is DSPy? Why DSPy? Use case: LLM intent classifier for customer service.

    DSPy is a framework that was created by Stanford researchers. I love the way the official docs explain it, so I'm attaching it here: "DSPy emphasises programming over prompting. It unifies techniques for prompting and fine-tuning LMs as well as improving them with reasoning and tool/retrieval augmentation, all expressed through […]" Read the full blog for free on Medium.
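    To ground the "specify inputs and outputs" idea, here is a minimal sketch of what the customer-service intent classifier from the use case could look like in DSPy. The signature fields, docstring, label set, and model choice are illustrative assumptions, and the LM-configuration call follows the DSPy 2.5-era API (it varies by version); the article's own implementation lives in its linked repository.

```python
# Minimal DSPy sketch: declare the task as a signature, let the framework
# compose the prompt. Labels, docstring, and model choice are illustrative.
import dspy

# DSPy 2.5-style LM setup; older versions use dspy.settings.configure instead.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

class CustomerIntent(dspy.Signature):
    """Classify a customer-service message into a single intent."""
    message: str = dspy.InputField(desc="raw text of the customer's message")
    intent: str = dspy.OutputField(desc="one of: billing, refund, technical, other")

classify = dspy.Predict(CustomerIntent)  # swap in dspy.ChainOfThought for reasoning
result = classify(message="I was charged twice for my subscription this month.")
print(result.intent)
```

    Because the signature, not a handwritten prompt string, defines the task, DSPy's optimizers can later rewrite the underlying prompt or few-shot examples without touching this code, which is the optimization step the series' next instalment covers.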