WWW.GAMEDEVELOPER.COM
Ruckus Games raises $19 million in funding for debut title, 'Project Bobcat'
Justin Carter, Contributing Editor. November 15, 2024

At a Glance: Krafton and Hypergryph were among the firms that provided funding for Ruckus to develop its co-op RPG-shooter.

Indie developer Ruckus Games recently secured $19 million in funding for its unannounced debut project. Two years ago, the studio secured $5.5 million in a round led by Transcend Fund to build its "high-quality prototype." This new round was led by Krafton, with additional contributions from Transcend, BitKraft, and Hypergryph.

In its statement, Ruckus highlighted that prototype as proof its development costs "remain much lower than triple-A, while the team still delivers that same level of quality and fun of titles with exponentially bigger budgets."

Speaking to its contribution, Krafton's Maria Park said the company "believes in the future of co-op games, and the Ruckus team has demonstrated incredible progress with a small team in a short period of time. They've crafted something so on-trend, with stylish action and humor, that it not only entertains but also connects players in memorable ways - a vision that strongly resonates with Krafton."

Ruckus was founded in 2021 by former Riot and Gearbox developers, including Borderlands 3 creative director Paul Sage. The studio's debut title, dubbed "Project Bobcat" by Hypergryph's investment head, is described as a session-based co-op RPG-shooter that it hopes will "disrupt the multiplayer landscape [...] and deliver a unique blend of style, humor, and explosive action."

Both Park and Transcend GP Andrew Sheppard sit on the board of directors for Ruckus, and Sage said the contributing firms "bring not only a global reach, but as developers themselves, they bring a unique perspective to our team. [...] This level of support is a great show of confidence that bodes well as we search for the right publishing partner going forward."

Development on "Project Bobcat" is said to be continuing "at a rapid pace," and Ruckus Games is currently hiring for various positions.
WWW.GAMEDEVELOPER.COM
The Splinter Cell movie is no longer moving forward
Justin Carter, Contributing Editor. November 15, 2024
Image via Ubisoft.

At a Glance: The Splinter Cell adaptation was first announced back in 2005 and saw little progress.

The Splinter Cell movie has been quietly terminated, according to a producer on the would-be film.

During a recent promotional event for John Wick (and spotted by TheDirect), producer Basil Iwanyk said production company New Regency "just couldn't get it right, script-wise, budget-wise. [...] We had a million different versions of it, but it was going to be hardcore and awesome. That's one of the ones that got away, which is really sad."

Ubisoft announced a film adaptation of the stealth game series back in 2005, then re-announced it in 2013, this time with actor Tom Hardy locked in to play series lead Sam Fisher. In the years since, the project saw no substantial progress: Hardy has spent the past decade playing Venom or starring in dramas, and the only director attached was Doug Liman, who was hired in March 2014 and exited the project a year later.

Video game adaptations in development hell

Before transmedia ventures became a major fixture of entertainment, game adaptations often got locked into development hell. Notable examples include Gears of War, BioShock, Infamous, and fellow Tom Clancy sub-series The Division. In some cases, those adaptations managed to push forward; others remain unmade to this day, often with no real confirmation of their status.

Previously, Ubisoft appeared to ditch its transmedia plans after the critical and commercial failure of the 2016 Assassin's Creed movie. More recently, it teamed with Netflix on the animated Captain Laserhawk series, which brought various Ubisoft properties together into a single universe, and plans to release a Watch Dogs film that wrapped production in September.

Outside of the film, Ubisoft plans to bring back Splinter Cell with a remake of the original 2002 game. It will be the first installment in the franchise since 2013's Splinter Cell: Blacklist.
WWW.THEVERGE.COM
Half-Life 2 is getting a huge 20th anniversary update

Half-Life 2 is getting a major update in celebration of the classic title's 20th anniversary. In addition to Steam Workshop support directly within the game, Valve has fixed bugs and restored some content, added new graphics settings, updated gamepad controls, and a whole lot more. You'll also be able to get the game for free on Steam through November 18th at 1PM ET. After that, it will cost $9.99.

Valve is also now including the Episode One and Episode Two expansions with the base game. They'll be accessible from the Half-Life 2 menu, and Valve says that you will automatically advance to the next expansion after completing each one. You'll also be able to access the Steam Workshop within the Extras menu, which means you'll no longer have to leave the game to enable mods.

Valve says it also made massive updates to Half-Life 2's maps, which will fix longstanding bugs, restore content and features lost to time, and improve the quality of a few things like lightmap resolution and fog. There's a new option to play with the original launch-day blood and fire effects as well, and Valve has updated Half-Life 2's gamepad controls to match last year's Half-Life 1 anniversary update.

If you want to access the older version of Half-Life 2, that's still an option: you'll "just have to roll back to a publicly visible Beta branch named steam_legacy and grab the Pre-20th Anniversary Build," Valve says.

Like with the 25th anniversary celebration for the original Half-Life, Valve has also released a documentary about Half-Life 2 that you can watch for free on YouTube. Here's what you can expect, according to the documentary's YouTube description: "we've gotten members of the HL2 team back to talk about the game's development, how we almost ran out of money, what it was like when we were hacked, what happened when we were sued by our publisher, the birthplace of Steam, and much more."

In addition to the documentary, Valve has shared videos of old demos of the game, including one that it planned to bring to E3 2002 but decided not to show at the last minute. Valve added 3.5 hours of new developer commentary within the game, too.

And the company is printing an expanded second edition of the Raising the Bar book about the game's development, which includes "the Half-Life 2 development story, with never-before-seen concept art from Episode One and Episode Two, along with ideas and experiments for the third episode that never came to be." The book will return to print in 2025.

Maybe soon we'll get Half-Life 3? Maybe?
WWW.THEVERGE.COM
ESPN is testing a generative AI avatar called FACTS

ESPN is testing an AI-generated avatar with the Saturday college football show SEC Nation. Dubbed FACTS, it's going to be "...promoting education and fun around sports analytics" with information drawn from ESPN Analytics, which includes data like the Football Power Index (FPI), player and team statistics, and game schedules. We haven't seen the avatar in action, but it sounds like a bot-ified version of stats encyclopedia Howie Schwab, who was ESPN's first statistician and eventually the star of a mid-2000s game show, Stump the Schwab.

ESPN has already brought generative AI to its website with AI-written game recaps. FACTS is still in development, and there's no word on when it could make its first appearance on the network. FACTS uses Nvidia's ACE (Avatar Cloud Engine), an Azure OpenAI integration to power language processing, and ElevenLabs for its text-to-speech capabilities.

In the announcement from its ESPN Edge Innovation Conference, the network claims FACTS is absolutely not made to replace journalists or other talent. "FACTS is designed to test innovations out in the market and create an outlet for ESPN Analytics data to be accessible to fans in an engaging and enjoyable segment," the company writes.
WWW.MARKTECHPOST.COM
Top Generative Artificial Intelligence AI Courses in 2024

In recent years, generative AI has surged in popularity, transforming fields like text generation, image creation, and code development. Its ability to automate and enhance creative tasks makes it a valuable skill for professionals across industries. Learning generative AI is crucial for staying competitive and leveraging the technology's potential to innovate and improve efficiency. This article lists the top generative AI courses that provide comprehensive training to help you master this technology, enhance your professional skill set, and stay ahead in the rapidly evolving job market.

Introduction to Generative AI Learning Path Specialization: This course offers a comprehensive introduction to generative AI, covering large language models (LLMs), their applications, and ethical considerations. The learning path comprises three courses: Generative AI, Large Language Models, and Responsible AI.
Generative AI for Everyone: This course provides a unique perspective on using generative AI. It covers how generative AI works, its applications, and its limitations, with hands-on exercises for practical use and effective prompt engineering. It aims to empower everyone to participate in an AI-powered future.
Introduction to Generative AI: This beginner-friendly course provides a solid foundation in generative AI, covering concepts, effective prompting, and major models. It includes hands-on examples and practical exercises and explores use cases across various domains like text, images, and code.
Generative AI with Large Language Models: This course teaches the fundamentals of generative AI with large language models (LLMs), including their lifecycle, transformer architecture, and optimization. It covers training, tuning, and deploying LLMs with practical insights from industry experts.
Generative AI Fundamentals Specialization: This specialization offers a comprehensive introduction to generative AI, covering models like GPT and DALL-E, prompt engineering, and ethical considerations. It includes five self-paced courses with hands-on labs and projects using tools like ChatGPT, Stable Diffusion, and IBM Watsonx.ai.
Generative AI for Data Scientists Specialization: This specialization by IBM is designed for data professionals to learn generative AI, including prompt engineering and applying AI tools in data science. It features hands-on projects like text, image, and code generation, as well as creating prediction models.
Generative AI for Data Analysts Specialization: This specialization covers generative AI use cases, models, and tools for text, code, image, audio, and video generation. It includes prompt engineering techniques, ethical considerations, and hands-on labs using tools like IBM Watsonx and GPT. Suitable for beginners, it offers practical projects to apply AI concepts in real-world scenarios.
Generative AI for Software Developers Specialization: This IBM specialization teaches software developers to leverage generative AI for writing high-quality code, enhancing productivity and efficiency. It includes three self-paced courses covering generative AI basics, prompt engineering, and tools like GitHub Copilot and ChatGPT, with hands-on projects to apply skills in real-world scenarios.
IBM: Developing Generative AI Applications with Python: This course teaches generative AI modeling through hands-on projects using Python, Flask, Gradio, and frameworks like Langchain. You'll build applications with LLMs like GPT-3 and Llama 2 and explore retrieval-augmented generation and voice-enabled chatbots.
AI: Generative AI and LLMs on AWS: This course teaches deploying generative AI models like GPT on AWS through hands-on labs, covering architecture selection, cost optimization, monitoring, CI/CD pipelines, and compliance. It is ideal for ML engineers, data scientists, and technical leaders, providing real-world training for production-ready generative AI using Amazon Bedrock and cloud-native services.
Using GenAI to Automate Software Development Tasks: This course teaches how to streamline development workflows with generative AI, use AI pair programming tools like CodeWhisperer, master prompt engineering, and understand the role of Rust and Python in MLOps. It includes hands-on experience with AWS services like CodeCatalyst, SageMaker, and Lightsail.
AI Prompt Engineering for Beginners: This course focuses on prompt engineering for AI language tools like ChatGPT. It offers hands-on practice and guidance to frame effective prompts.
Generative AI for Business Leaders: This course equips business leaders with essential knowledge of generative AI and its tools to adapt and implement this transformative technology. By the end, you'll understand how generative AI can revolutionize business operations and gain the skills needed for successful implementation.

We make a small profit from purchases made via referral/affiliate links attached to each course mentioned in the above list. If you want to suggest any course that we missed from this list, then please email us at asif@marktechpost.com.
WWW.MARKTECHPOST.COM
Top Artificial Intelligence AI Books to Read in 2024

Artificial Intelligence (AI) has been making significant strides over the past few years, with the emergence of Large Language Models (LLMs) marking a major milestone in its growth. With such widespread adoption, feeling left out of this revolution is not uncommon. One way an individual can stay updated with the latest trends is by reading books on various facets of AI. Following are the top AI books one should read in 2024.

Deep Learning (Adaptive Computation and Machine Learning series): This book covers a wide range of deep learning topics along with their mathematical and conceptual background. It also provides information on the different deep learning techniques used in various industrial applications.
Python: Advanced Guide to Artificial Intelligence: This book helps individuals familiarize themselves with the most popular machine learning (ML) algorithms and delves into the details of deep learning, covering topics like CNNs, RNNs, etc. It provides a comprehensive understanding of advanced AI concepts while focusing on their practical implementation using Python.
Machine Learning (in Python and R) for Dummies: This book explains the fundamentals of machine learning by providing practical examples using Python and R. It is a beginner-friendly guide and a good starting point for people new to this field.
Machine Learning for Beginners: Given the pace with which machine learning systems are growing, this book provides a good base for anyone shifting to this field. The author covers machine intelligence's historical background and gives beginners information on how advanced algorithms work.
Artificial Intelligence: A Modern Approach: This is a well-acclaimed book that covers the breadth of AI topics, including problem-solving, knowledge representation, machine learning, and natural language processing. It provides theoretical explanations along with practical examples, making it an excellent starting point for anyone looking to dive into the world of AI.
Human Compatible: Artificial Intelligence and the Problem of Control: The book discusses the inevitable conflict between humans and machines, providing important context before we advocate for AI. The author also talks about the possibility of superhuman AI and questions the concepts of human comprehension and machine learning.
The Alignment Problem: Machine Learning and Human Values: This book talks about a concept called the alignment problem, where the systems we aim to teach don't perform as expected and various ethical and existential risks emerge.
Life 3.0: Being Human in the Age of Artificial Intelligence: The author of this book talks about questions like what the future of AI will look like and the possibility of superhuman intelligence becoming our master. He also talks about how we can ensure these systems perform without malfunctioning.
The Coming Wave: Technology, Power, and the Twenty-First Century's Greatest Dilemma: This book warns about the risks that emerging technologies pose to global order. It covers topics like robotics and large language models and examines the forces that fuel these innovations.
Artificial Intelligence Engines: A Tutorial Introduction to the Mathematics of Deep Learning: This book dives into the mathematical foundations of deep learning. It provides a holistic understanding of deep learning, covering both the historical development of neural networks and modern techniques and architectures, while focusing on the underlying mathematical concepts.
Neural Networks and Deep Learning: This book covers the fundamental concepts of neural networks and deep learning. It also covers the mathematical aspects of the field, including topics like linear algebra, probability theory, and numerical computation.
Artificial Intelligence for Humans: This book explains how AI algorithms work using actual numeric calculations. The book targets those without an extensive mathematical background, and each unit is followed by examples in different programming languages.
AI Superpowers: China, Silicon Valley, and the New World Order: The author of this book explains the unexpected consequences of AI development. The book sheds light on the competition between the USA and China over AI innovations through actual events.
Hello World: Being Human in the Age of Algorithms: The author talks about the powers and limitations of the algorithms that are widely used today. The book prepares its readers for the moral uncertainties of a world run by code.
The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World: This book talks about the concept of the Master Algorithm, a single, overarching learning algorithm capable of incorporating different approaches.
Applied Artificial Intelligence: A Handbook for Business Leaders: This book provides a guide for businesses on how to leverage AI to drive innovation and growth. It covers various applications of AI and also explores its ethical considerations. Additionally, it sheds light on building AI teams and talent acquisition.
Superintelligence: Paths, Dangers, Strategies: This book asks questions like whether AI agents will save or destroy us and what happens when machines surpass humans in general intelligence. The author talks about the importance of global collaboration in developing safe AI.

We make a small profit from purchases made via referral/affiliate links attached to each book mentioned in the above list. If you want to suggest any book that we missed from this list, then please email us at asif@marktechpost.com.
TOWARDSAI.NET
How Google's Watermarking Technology Identifies AI-Generated Content
Author(s): Lamprini Papargyri. Originally published on Towards AI.

In October 2024, Google DeepMind's SynthID tool for watermarking AI-generated text was released as open source, marking a significant step forward in AI transparency. This tool emerged in response to growing concerns about distinguishing AI-generated content, as tools like OpenAI's ChatGPT and Google's Gemini now produce text, images, and even audio that are increasingly difficult to differentiate from human-made content. With policymakers and civil society demanding reliable identification of AI content, SynthID represents an important development in addressing issues around AI-driven misinformation and authenticity.

Notably, the European Digital Education Hub (EDEH) and its Explainable AI squad have played a crucial role in advancing AI transparency in educational settings. Explainable AI (XAI) refers to AI systems that clearly reveal how decisions and recommendations are made, rather than functioning as a black box with hidden processes. Through collaboration with tech companies and organizations, they aim to promote digital literacy and enhance transparency across Europe's educational and public sectors, fostering ethical AI practices and building trust in both educational and digital environments.

Community workshop on explainable AI (XAI) in education.

Evaluating AI Detection Tools: Key Technical and Policy Criteria

The rapid advancement of generative AI has created an urgent need for tools that can reliably detect AI-generated content. The effectiveness of any detection tool hinges on a set of essential technical and policy criteria:

Accuracy: A detection tool should reliably distinguish between human-made and AI-generated content, with minimal false positives and negatives. For transparency and explainability purposes, the tool should provide nuanced responses (e.g., a probability score) rather than a simple binary answer.
Robustness Against Evasion: Detection methods should withstand tampering or manipulation, as motivated actors might attempt to alter AI content to make it appear human-made, such as through paraphrasing or translation.
Quality Preservation: Detection techniques should avoid diminishing the quality of AI-generated content. Tools that intentionally degrade quality to make content detectable may deter adoption by developers focused on user experience.
Universality and Privacy: Ideally, a detection tool should be universal, meaning it can apply to any AI model without requiring active cooperation from the developer. Privacy is equally important; any detection method should respect user data privacy.

Main Aspects of Watermarking

Watermarking involves embedding identifiable markers in content to indicate its origin, a method long used in digital media like photos and audio. With AI, watermarking has gained traction as a viable way to mark content for later identification, addressing authenticity concerns.
Here are some key watermarking techniques and how they fare in theory and practice:

Statistical Watermarking: Embeds statistically unusual patterns in text or other content to create a subtle, machine-readable signature. Advantages: allows for subtle identification without compromising readability and works well with light modifications. Limitations: sensitive to extensive changes (e.g., paraphrasing, translation), which can remove or weaken the watermark.
Visible and Invisible Watermarks: Visible watermarks, such as logos or labels, are immediately recognizable but can disrupt the user experience. Invisible watermarks embed patterns within content that are undetectable by users but can be identified by specialized detection tools. Advantages: invisible watermarks avoid altering the content's appearance, providing a seamless user experience. Limitations: advanced users may be able to remove or alter these markers, especially if they understand the watermarking method.

Google's SynthID uses a statistical watermarking approach to subtly alter token probabilities during text generation, leaving an invisible, machine-readable signature. SynthID's invisible watermark preserves content quality while marking AI-generated material.

Overview of AI Detection Approaches

1. Retrieval-Based Approach: This method involves creating and maintaining a database of all generated content so that new text can be checked against it for matches (see the sketch after this list). Advantages: effective for detecting exact matches and reliable for specific high-value use cases. Disadvantages: requires massive storage and continuous updates, raising scalability and privacy concerns; retrieval-based systems can be impractical at large scales.
2. Post-Hoc Detection: This technique applies machine learning classifiers to text after it is generated, assessing characteristics typical of AI-written versus human-written material. It relies on analyzing patterns in syntax, word choice, and structure. Advantages: post-hoc detection doesn't interfere in text creation and is flexible across different AI models. Disadvantages: computationally demanding, with inconsistent performance on out-of-domain or highly edited content; detection accuracy can decrease significantly when content undergoes substantial changes.
3. Text Watermarking: SynthID falls into this category, which embeds markers directly within the generated text at the time of creation. Text watermarking has several subcategories:
3.1 Generative Watermarking: Adjusts token probabilities during text generation to introduce an invisible signature without altering the text's quality. Advantages: maintains readability and is robust against minor edits, with minimal impact on text quality. Disadvantages: vulnerable to substantial edits, like extensive rephrasing or translation, which may remove the watermark.
3.2 Edit-Based Watermarking: Alters text after it is generated by adding specific characters or symbols. Advantages: easily detectable and quick to implement. Disadvantages: visibly changes the text, potentially affecting readability and user experience.
3.3 Data-Driven Watermarking: Embeds watermarks in the training data so that certain sequences or phrases appear only when prompted. Advantages: effective for deterring unauthorized use when integrated from the training stage. Disadvantages: limited to specific prompts, with visible markers that may compromise subtlety.
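To make the retrieval-based approach concrete, here is a minimal, purely illustrative sketch (a toy, not any vendor's implementation, and the normalization and in-memory store are assumptions): the provider fingerprints every output it generates and later checks candidate text for an exact normalized match, which also shows why paraphrasing defeats this method.

```python
import hashlib
import re


def fingerprint(text: str) -> str:
    """Normalize text (lowercase, collapse whitespace) and hash it."""
    normalized = re.sub(r"\s+", " ", text.strip().lower())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()


class RetrievalDetector:
    """Toy retrieval-based detector: remembers fingerprints of generated outputs."""

    def __init__(self) -> None:
        # In practice this would be a large, continuously updated database,
        # which is exactly where the scalability and privacy concerns come from.
        self._store: set[str] = set()

    def register_generation(self, text: str) -> None:
        self._store.add(fingerprint(text))

    def is_known_generation(self, text: str) -> bool:
        return fingerprint(text) in self._store


detector = RetrievalDetector()
detector.register_generation("The Eiffel Tower is located in Paris, France.")
print(detector.is_known_generation("The Eiffel  Tower is located in Paris, France. "))  # True: exact match survives whitespace changes
print(detector.is_known_generation("The Eiffel Tower stands in Paris."))                # False: a simple paraphrase defeats exact matching
```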
SynthID uses generative watermarking to subtly embed markers during text generation, ensuring an undetectable signature while preserving the text's quality. This approach strikes a balance between detection and usability, marking a significant advancement in watermarking for AI.

How SynthID Works

SynthID's watermarking technology employs two neural networks to embed and detect an invisible watermark. For text, this mechanism works by subtly modifying token probabilities during text generation. Large language models (LLMs) generate text one token at a time, assigning each token a probability based on context. SynthID's first network makes small adjustments to these probabilities, creating a watermark signature that remains invisible and maintains the text's readability and fluency.

For images, the first neural network modifies a few pixels in the original image to embed an undetectable pattern. The second network then scans for this pattern in both text and images, allowing it to inform users whether it detects a watermark, suspects one, or finds none.

The watermark detection process compares the probability distributions of watermarked and unwatermarked text, identifying the signature left by the watermark. Through large-scale testing, Google DeepMind confirmed SynthID's effectiveness: in the Gemini app, where over 20 million users unknowingly rated watermarked and unwatermarked text, the feedback showed no noticeable quality difference between the two. This suggests that SynthID's watermarking process is effective without compromising the text's fluency or usability.

SynthID utilizes two neural networks to embed and detect watermarks in images. The first network processes the original image, generating a nearly identical version with slight modifications to a few pixels, embedding a pattern that remains invisible to the human eye. The second network then scans for this pattern, indicating to users whether a watermark is detected, likely present, or absent.
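To make the token-probability idea concrete, the sketch below shows a generic "green list" style text watermark of the kind described in the research literature. It illustrates the general mechanism only and is not SynthID's actual algorithm; the whitespace-level tokens, the hash-based key, the 0.5 green-list fraction, and the simple fraction-based detection score are all simplifying assumptions.

```python
import hashlib
import math
import random


def green_set(prev_token: str, key: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Keyed pseudo-random subset of the vocabulary, seeded by the previous token.
    The vocab list must be passed in a stable order for embedding and detection to agree."""
    seed = int(hashlib.sha256(f"{key}:{prev_token}".encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(vocab, max(1, int(fraction * len(vocab)))))


def watermarked_next_token(probs: dict[str, float], prev_token: str, key: str, bias: float = 2.0) -> str:
    """Embedding step: boost the probability of 'green' tokens before sampling the next token."""
    greens = green_set(prev_token, key, list(probs))
    weights = {t: p * (math.exp(bias) if t in greens else 1.0) for t, p in probs.items()}
    total = sum(weights.values())
    tokens = list(weights)
    r, acc = random.random(), 0.0
    for t in tokens:
        acc += weights[t] / total
        if r <= acc:
            return t
    return tokens[-1]  # floating-point fallback


def green_fraction(tokens: list[str], key: str, vocab: list[str]) -> float:
    """Detection score: fraction of tokens in their keyed green set (~0.5 for unwatermarked text)."""
    hits = sum(tok in green_set(prev, key, vocab) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(1, len(tokens) - 1)
```

In a real deployment the key would stay server-side, and detection would rely on a proper statistical test over the green fraction rather than eyeballing a single number.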
Strengths and Limitations of SynthID and Watermarking

SynthID's invisible watermarking approach provides a powerful tool for marking AI-generated content, yet it faces challenges, particularly as part of a comprehensive solution for AI transparency. Key strengths and limitations include:

SynthID's watermark is resilient to minor changes, such as slight paraphrasing or cropping, making it robust for lightly modified content.
SynthID struggles with highly predictable outputs, such as factual statements (e.g., "The capital of France is Paris") or code, where the watermark cannot be embedded without affecting accuracy.
While effective against casual modifications, SynthID's watermark could be compromised by users with knowledge of its workings, particularly in cases where sophisticated adversaries aim to remove or obscure the watermark.

Given these limitations, SynthID works best when paired with other detection methods. Combining it with retrieval-based or post-hoc methods could enhance overall detection accuracy and resilience, especially in high-stakes applications like education or misinformation detection.

Policy and Governance Considerations for Watermarking

SynthID's deployment as an open-source tool is part of a larger trend toward establishing AI transparency standards. Policymakers are exploring ways to promote accountability, including watermarking requirements in laws and international agreements. Effective governance of AI watermarking requires attention to several key considerations:

As watermarking research advances, standardized techniques will help align different stakeholders and make AI transparency measures more consistent.
A centralized organization could manage a registry of watermarking protocols, simplifying detection by providing a standardized platform for users to verify content provenance.
Policymakers must ensure watermarking methods respect user privacy and data security. This includes defining what information can be embedded in watermarks and regulating data handling by third-party detection services.

A balanced, layered approach that combines multiple detection methods may be the most practical strategy for addressing the complex challenges posed by generative AI content.

Conclusion: SynthID's Role in Building AI Transparency

SynthID is another step forward in AI transparency, but watermarking alone cannot guarantee full accountability for AI-generated content. As AI becomes increasingly skilled at producing realistic text, images, and media, a multi-layered approach is essential for content verification. SynthID provides a starting point, giving users a means of identifying AI-generated material and discouraging misuse. However, it should ideally be part of a larger ecosystem of checks and balances to ensure robust AI accountability.

For true content authenticity, additional safeguards should be explored. Fact-checking, for instance, can help verify information accuracy, while standardized content verification frameworks would ensure consistent detection across platforms and tools. Additionally, regulatory measures could help ensure that AI-generated content is labeled and traceable, empowering users to assess the credibility and origin of the information they encounter.

In this evolving landscape, SynthID can serve as a tool for AI transparency by offering users a reliable method of distinguishing between human and AI-generated content. As watermarking and complementary approaches become widely adopted, we may see the emergence of a more transparent and accountable digital ecosystem that encourages responsible AI practices. By equipping users with tools to verify the authenticity of digital content, SynthID and similar technologies can contribute to a safer, more trustworthy online environment.

Interested in learning more about SynthID? Read the article here.
TOWARDSAI.NET
Taming the Oracle: Key Principles That Bring Our LLM Agents to Production
November 15, 2024. Author(s): Nate Liebmann. Originally published on Towards AI.
A Tame Oracle. Generated with Microsoft Designer.

With the second anniversary of the ChatGPT earthquake right around the corner, the rush to build useful applications based on large language models (LLMs) of its like seems to be in full force. But despite the aura of magic surrounding demos of LLM agents or involved conversations, I am sure many can relate to my own experience developing LLM-based applications: you start with some example that seems to be working great, but buyer's remorse is soon to follow. Trying out other variations of the task could simply fail miserably, without a clear differentiator; and agentic flows could reveal their tendency to diverge when straying away from the original prototyping happy path.

If not for the title, you might have thought at this point I was a generative AI luddite, which could not be further from the truth. The journey my team at Torq and I have been on in the past two years, developing LLM-based software features that enhance the no-code automation building experience on our platform, has taught me a lot about the great power LLMs bring if handled correctly. From here on I will discuss three core principles that guide our development and allow our agents to reach successful production deployment and customer utility. I believe they are highly relevant to other LLM-based applications just as much.

The least freedom principle

LLMs interact through free text, but that is not always the way our users will interact with our LLM-based application. In many cases, even if the input is indeed a textual description provided by the user, the output is much more structured, and could be used to take actions in the application automatically. In such a setting, the great power in the LLM's ability to solve some tasks otherwise requiring massive and complex deterministic logic or human intervention can turn into a problem. The more leeway we give the LLM, the more prone our application is to hallucinations and diverging agentic flows. Therefore, a la the least privileges principle in security, I believe it's important to constrain the LLM as much as possible.

Fig. 1: The unconstrained, multi-step agentic flow

Consider an agent that takes a snapshot of a hand-written grocery list, extracts the text via OCR, locates the most relevant items in stock, and prepares an order. It may sound tempting to opt for a flexible multi-step agentic flow where the agent can use methods such as search_product and add_to_order (see Fig. 1 above). However, this process could turn out to be very slow, consist of superfluous steps, and might even get stuck in a loop in case some function call returns an error the model struggles to recover from. An alternative approach could constrain the flow to two steps: the first being a batch search to get a filtered product tree object, and the second being generating the order based on it, referencing appropriate products from the partial product tree returned by the search function call (see Fig. 2 below, and the sketch that follows). Apart from the clear performance benefit, we can be much more confident the agent will remain on track and complete the task.

Fig. 2: A structured agentic flow with deterministic auto-fixing
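As a rough illustration of what such a constrained two-step flow can look like in code, here is a minimal sketch. The function names, the llm_generate_order callable, and the JSON shapes are hypothetical, assumed for illustration rather than taken from the article.

```python
from typing import Any, Callable


def batch_search_products(item_names: list[str], catalog: dict[str, Any]) -> dict[str, Any]:
    """Step 1 (deterministic): one batch search returning a filtered product tree."""
    # A real implementation would query the product catalog; the point is that the
    # LLM only ever sees this pre-filtered tree, not an open-ended search tool.
    return {name: catalog[name] for name in item_names if name in catalog}


def build_order(ocr_text: str, catalog: dict[str, Any],
                llm_generate_order: Callable[..., dict[str, Any]]) -> dict[str, Any]:
    """Step 2: a single structured LLM call, constrained to reference the product tree."""
    items = [line.strip() for line in ocr_text.splitlines() if line.strip()]
    product_tree = batch_search_products(items, catalog)
    # The model gets exactly one generation step and must return order lines whose
    # paths point into product_tree, e.g. {"lines": [{"path": "dairy.cheeses.goat[0]", "qty": 1}]}.
    return llm_generate_order(items=items, product_tree=product_tree)
```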
When dealing with problems in the generated output, I believe it's best to do as much of the correction deterministically, without involving the LLM again. This is because, against our intuition, sending an error back to an LLM agent and asking it to correct it does not always get it back on track, and might even increase the likelihood of further errors, as some evidence has shown. Circling back to the grocery shopping agent, it is very likely that in some cases invalid JSON paths will be produced to refer to products (e.g., food.cheeses.goats[0] instead of food.dairy.cheeses.goat[0]). As we have the entire stock at hand, we can apply a simple heuristic to automatically fix the incorrect path in a deterministic way, for example by using an edit distance algorithm to find the valid path closest to the generated one in the product tree. Even then, some invalid paths might be too far from any valid ones. In such a case, we might want to simply retry the LLM request rather than adding the error to the context and asking it to fix it.
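A deterministic fixer of this kind can be quite small. The sketch below is a minimal illustration, assuming the stock can be flattened into a list of valid JSON-style paths; it uses difflib from the Python standard library as a stand-in for the edit distance matcher and returns None (signalling a retry) when nothing is close enough.

```python
import difflib
from typing import Any, Optional


def flatten_paths(tree: Any, prefix: str = "") -> list[str]:
    """Enumerate every valid JSON-style path in the product tree."""
    paths = []
    if isinstance(tree, dict):
        for key, value in tree.items():
            paths.extend(flatten_paths(value, f"{prefix}.{key}" if prefix else key))
    elif isinstance(tree, list):
        for i, value in enumerate(tree):
            paths.extend(flatten_paths(value, f"{prefix}[{i}]"))
    else:
        paths.append(prefix)  # leaf value: the accumulated prefix is a complete path
    return paths


def fix_path(generated: str, tree: Any, cutoff: float = 0.8) -> Optional[str]:
    """Snap a generated path to the closest valid one, or return None to trigger a retry."""
    valid = flatten_paths(tree)
    if generated in valid:
        return generated
    matches = difflib.get_close_matches(generated, valid, n=1, cutoff=cutoff)
    return matches[0] if matches else None  # None -> too far from any valid path


stock = {"food": {"dairy": {"cheeses": {"goat": ["chevre"]}}}}
print(fix_path("food.cheeses.goats[0]", stock))  # -> "food.dairy.cheeses.goat[0]"
```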
Automated empirical evaluation

Unlike traditional third-party APIs, calling an LLM with the exact same input could produce different results each time, even when setting the temperature hyper-parameter to zero. This is in direct conflict with fundamental principles of good software engineering, which is supposed to give users an expected and consistent experience. The key to tackling this conflict is automated empirical evaluation, which I consider the LLM edition of test-driven development.

The evaluation suite can be implemented as a regular test suite, which has the benefit of natural integration into the development cycle and CI/CD pipelines. Crucially, however, the LLMs must actually be called, and not mocked, of course. Each evaluation case consists of user inputs and initial system state, as well as a grading function for the generated output or modified state. Unlike traditional test cases, the notion of PASS or FAIL is insufficient here, because the evaluation suite plays an important role in guiding improvements and enhancements, as well as catching unintended degradations. The grading function should therefore return a fitness score for the output or state modifications our agent produces.

How do we actually implement the grading function? Think, for example, of a simple LLM task for generating small Python utility functions. An evaluation case could prompt it to write a function that computes the nth element of the Fibonacci sequence. The model's implementation might take either the iterative or the recursive path, both valid (though suboptimal, because there is a closed-form expression), so we cannot make assertions about the specifics of the function's code. The grading function in this case could, however, take a handful of test values for the Fibonacci function's argument, spin up an isolated environment, run the generated function on those values, and verify the results. This black-box grading of the produced output does not make unnecessary assumptions, while strictly validating it in a fully deterministic fashion.

While I believe that should be the preferred approach, it is not suitable for all applications. There are cases where we cannot fully validate the result, but we can still make assertions about some of its properties. For example, consider an agent that generates short summaries of system logs. Some properties of its outputs, like length, are easy to check deterministically. Other, semantic ones, not so much. If the otherwise business-as-usual logs serving as input for an evaluation case contain a single record about a kernel panic, we want to make sure the summary will mention it. A naive approach for the grading function in this case would involve an LLM task that directly produces a fitness score for the summary based on the log records. This approach might lock our evaluation in a sort of LLM complacency loop, with none of the guarantees provided by deterministic checks. A more nuanced approach, however, could still use an LLM for grading, but craft the task differently: given a summary, the model could be instructed to answer multiple-choice factual questions (e.g., "Has there been a major incident in the covered period? (a) No (b) Yes, a kernel panic (c) Yes, a network connectivity loss"). We can be much more confident that the LLM would simply not be able to consistently answer such questions correctly if the key information is missing from the summary, making the score much more reliable.

Finally, due to non-determinism, each evaluation case must be run several times, with the results aggregated to form a final evaluation report. I have found it very useful to implement the evaluation suite early and use it to guide our development. Once the application has reached some maturity, it could make sense to fail the integration pipeline if the aggregate score for its evaluation suite drops below a set threshold, to prevent catastrophic degradations. A minimal harness along these lines is sketched below.
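This is an illustrative sketch only. It assumes a hypothetical llm_generate_fibonacci callable that wraps the LLM task and returns Python source defining a function named fibonacci; a real harness would execute the generated code in an isolated sandbox rather than calling exec in-process, and would report scores through the test framework instead of printing them.

```python
from statistics import mean

EXPECTED = {0: 0, 1: 1, 2: 1, 3: 2, 10: 55, 20: 6765}  # known Fibonacci values


def grade_fibonacci_source(source: str) -> float:
    """Black-box grading: run the generated function on test values, score the fraction correct."""
    namespace: dict = {}
    try:
        exec(source, namespace)  # NOTE: a real harness would sandbox this, not exec in-process
        fib = namespace["fibonacci"]  # assumed function name, part of the prompt contract
        return mean(1.0 if fib(n) == expected else 0.0 for n, expected in EXPECTED.items())
    except Exception:
        return 0.0  # unparseable or crashing code gets a zero fitness score


def evaluate(llm_generate_fibonacci, runs: int = 5, threshold: float = 0.8) -> bool:
    """Run the same evaluation case several times and gate CI on the aggregate score."""
    scores = [grade_fibonacci_source(llm_generate_fibonacci()) for _ in range(runs)]
    aggregate = mean(scores)
    print(f"scores={scores} aggregate={aggregate:.2f}")
    return aggregate >= threshold  # fail the pipeline if the aggregate drops below the threshold
```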
Not letting the tail wag the dog

Good LLM-based software is, first and foremost, good software. The magic factor we see in LLMs (which is telling of human nature and the role language plays in our perception of other intelligent beings, a topic I will not cover here, of course) might tempt us to think about LLM-based software as a whole new field, requiring novel tools, frameworks, and development processes. As discussed above, the non-deterministic nature of commercial LLMs, as well as their unstructured API, does necessitate dedicated handling. But I would argue that instead of looking at an LLM-based application as a whole new creature that might here and there utilise familiar coding patterns, we should treat such an application as any other application, except for where it is not. The power of this approach lies in the fact that by doing so, we do not let external abstractions hide away the low-level LLM handling, which is crucial for truly understanding its capabilities and limitations in the scope of our application. Abstractions can and should be adopted where they save time and reduce boilerplate code, but never at the cost of losing control over the most important part of your application: the intricate touchpoints between the LLM and your deterministic code, which should be tailored to your specific use case.

Wrapping up, LLMs can be viewed as powerful oracles that enable previously unfeasible applications. My experience developing LLM-based agents has taught me several principles that correlate with successful production deployment and utility. Firstly, agents should be given the least possible freedom: flows should be structured, and whatever can be done deterministically should be. Secondly, automated empirical evaluation of the LLM task and surrounding logic should be a cornerstone of the development process, relying as much as possible on deterministic scoring. Thirdly, abstractions provided by libraries and frameworks should not be adopted where they hide essential details of the integration between the LLM and our code, the core of LLM-based applications.

Feel free to reach out to discuss this matter further and tell me what you think!
WWW.IGN.COM
Best TV for Gaming 2024

Lots of competition has led to a glut of outstanding televisions for gaming in basically every price tier. Thanks to HDMI 2.1 support pushing 4K at up to 120Hz, gaming on a TV is probably cheaper and more enjoyable than using a gaming monitor. So whether you enjoy your PlayStation 5, Xbox Series X/S, or pushing the limits further with a PS5 Pro or dedicated gaming PC, getting a good 65-inch television that supports high-end performance is really just a matter of choosing how much you want to spend. That said, there are some features you want to look for as a baseline: namely, support for Variable Refresh Rate (VRR) and Auto Low Latency Mode (ALLM). The good news is, these can be found on basically any TV worth its salt these days. That means we're looking for TVs that are going to offer even more, like our top pick, the LG G4.

TL;DR: These are the best gaming TVs:
LG 65" G4 OLED Evo (our top pick)
Sony 65" A95L QD-OLED
Sony 65" Bravia 7 Mini-LED QLED
Hisense 65" U8N Mini-LED ULED
TCL 65" QM7 QLED (best value)

Great brightness and excellent contrast are a must for HDR gaming, the likes of which you'll want for games like Cyberpunk 2077 and Alan Wake II. Along with that great contrast, though, you also want a television that has low input lag, support for 120Hz or even 144Hz, and dimming zones that respond fast enough to not interrupt scenes with slowly transitioning highlights. Competitive gamers who enjoy Call of Duty: Black Ops 6 or Apex Legends will want the ability to pull shadows up so that enemies can't hide in dark spots, but they'll also want highlights that don't blow away the ability to see a target set against bright objects. That means televisions that have dedicated gaming modes, like the LG G4 OLED, the TCL QM7, or the Sony Bravia 7, should all be considered because they are designed to support all types of gaming. Additional contributions by Kevin Lee and Danielle Abraham.

1. LG 65" G4 OLED Evo (Best Gaming TV, our top pick)
Enjoy excellent color accuracy, contrast, and pixel response times, while features like a 4K/144Hz refresh rate, VRR, and ALLM are ideal for gaming.
Product specifications: Screen size: 65"; Resolution: 4K; Panel type: OLED; HDR compatibility: Dolby Vision, HDR10, HLG, HGiG; Refresh rate: 144Hz; Adaptive sync: G-Sync (Nvidia certified), FreeSync Premium, VRR; Inputs: 4 x HDMI 2.1, 3 x USB, Ethernet.
Pros: Excellent picture quality; limitless contrast; outstanding gaming support. Cons: Wobbly stand.

LG's Gallery Series televisions have always represented the best LG has to offer, and that remains the case with the G4. It's not only the best OLED I've tested, but it also represents the best overall package (specifications, price, and performance) of any television in 2024. The LG G4 is as close to perfect as a TV can be in 2024 and is an example of what OLED can do at its best, combining infinite contrast, excellent color reproduction, and fantastic brightness in one elegant, super-slim display. There was a fear that the development of Samsung's QD-OLED technology would leave WOLED TVs like LG's in the dust, but LG successfully pushed its technology in response, and its brightness and color accuracy now rival what Samsung has produced, while packaging it in a more approachable television that gives you more control over how you want your content to look. While other OLED brands have over-simplified the user interface, LG believes in its users and gives them plenty of control over color, saturation, and contrast. Thanks to its boosted brightness, the G4 can excel in pretty much any environment. With outstanding color, infinite contrast, great HDR support, and outstanding gaming performance, it's the best TV on the market in 2024 for gaming and, quite honestly, anything else.

2. Sony 65" A95L QD-OLED (Best High-End OLED TV for Gaming)
If money is no object, the Sony A95L is the best there is.
Product specifications: Screen size: 65"; Resolution: 4K; Panel type: OLED; HDR compatibility: Dolby Vision, HDR10, HLG; Refresh rate: 144Hz; Adaptive sync: G-Sync Compatible; Inputs: 2 x HDMI 2.1, 2 x HDMI 2.0, 2 x USB, Ethernet.
Pros: Excellent picture quality; limitless contrast. Cons: Eye-watering price tag.

Speaking of QD-OLED, Samsung didn't keep that technology to itself, licensing it to Sony to produce the A95L. Outside of an updated processor, not a lot has changed in Sony's most recent OLED model compared to the A95K I reviewed a few years ago, since the panel itself hasn't needed updating since it debuted. And what a panel it is. While LG's G4 is overall the best television for gaming given its price-to-performance ratio, if price doesn't matter to you and you only care about the best possible picture quality across the most use cases, then Sony's QD-OLED televisions are the best there is. Samsung's implementation of its QD-OLED is good, but the company does a poor job of giving users any control over how the picture looks, and the result is an over-saturated, overly contrasty picture that can look too strong. That, and Samsung has not entered into an agreement with Dolby to support Dolby Vision, meaning HDR support is seriously weaker than the competition. Sony dials that back down to reality and gives full HDR support, allowing it to take advantage of that excellent QD-OLED panel. Sony also has the best processing in the industry and is better able to deal with the noise, artifacts, and pixelation that come as the result of streaming. Even the best streaming signals have problems, and Sony's tech can't be beaten for smoothing that out and providing a picture with better tonality and cleaner sharpness. With best-in-class color accuracy, incredible brightness, and unlimited contrast, Sony's A95L is unbeatable, if you can get past the high asking price. If your budget has no limit, then Sony is the way to go.

3. Sony 65" Bravia 7 Mini-LED QLED (Best High-End LED TV for Gaming)
While the Bravia 7 won't outperform OLED and isn't as bright as some competitors, its color accuracy and picture quality make for a premium experience at a fair price.
Product specifications: Screen size: 65"; Resolution: 4K; Panel type: Mini-LED QLED; HDR compatibility: Dolby Vision, HDR10, HLG; Refresh rate: 120Hz; Adaptive sync: G-Sync Compatible; Inputs: 2 x HDMI 2.1, 2 x HDMI 2.0, 2 x USB, Ethernet.
Pros: Great color and contrast; fantastic processing. Cons: Takes up a 4K 120Hz port with eARC.

If you like the sound of excellent processing but aren't a fan of OLED, then Sony's Bravia 7 is the answer. It offers very good brightness, excellent color (which is consistent across the panel), great HDR support, and top-tier processing to create the best overall LED television experience on the market. A benefit of Sony's most recent televisions, the Bravia 8 included, is built-in Sony Remote Play. The Bravia 7 allows you to treat your TV like a giant PlayStation Portal, wirelessly connecting with your PlayStation so you aren't blocked from your console if the TV it's connected to is in use. That makes the Bravia 7 a great second television for the bedroom or office while your main television is tied up in the living room. Thanks to mini-LED technology, the Bravia 7 attains very good contrast and brightness that is as close as you can come to OLED before jumping up a price tier, and Sony did a great job this year of keeping its high-end LED approachable in price, which is in contrast to Samsung's strategy. That means Sony-level picture quality and processing are more easily attainable for more people, and it gives brands like TCL and Hisense more competition to keep pushing the envelope (which they have done). Sony's Bravia 7 does an excellent job of being a television that gets a lot right and doesn't overly charge for it, which feels like a shift for the company. As I noted in my review, the color accuracy it is capable of hitting and its consistency across the entire panel are hard to beat, and while it doesn't get quite as bright as some of its competitors, it's got enough juice to overcome reasonably well-lit rooms and does so without sacrificing picture quality. As a result, the Sony Bravia 7 is the company's most compelling mini-LED television in years and the best high-end model on the market.

4. Hisense 65" U8N Mini-LED ULED (Best Mid-Range Gaming TV)
Combining great brightness, good color accuracy, tons of ports, and a slick new design, the Hisense U8N not only displays content well, it looks good doing it.
Product specifications: Screen size: 65"; Resolution: 4K; Panel type: Mini-LED ULED; HDR compatibility: Dolby Vision, HDR10, HDR10+, HLG; Refresh rate: 144Hz; Adaptive sync: G-Sync Compatible, FreeSync, VRR; Inputs: 2 x HDMI 2.1, 2 x HDMI 2.0, 2 x USB, Ethernet.
Pros: Fantastic brightness; great HDR support; fantastic for gaming. Cons: Weak off-angle viewing.

Hisense has really come into its own in the last couple of years, and the U8N represents the cumulative knowledge it has gained over that time. Hisense was the first mini-LED maker to push brightness to new heights, and the U8N is easily the brightest television I tested this year, with the ability to push out 2,000 nits of brightness. Everything else about the U8N comes in just slightly under what Sony has been able to achieve. Color accuracy, contrast, off-angle viewing, and overall performance of the operating system are just slightly under the heights reached by the Sony Bravia 7, but the tradeoff is more brightness and a lower asking price. But compare it to the Samsung QN90D, a television that costs nearly twice as much as the U8N, and you'll find that it gets brighter, has more customization options, and provides wider HDR format support. The Hisense U8N isn't going to win in a blow-for-blow fight against Sony and Samsung, but it still exists as a very compelling option for game rooms or brightly lit living rooms, especially given its lower asking price. It sports fantastic gaming performance, great color accuracy, and spectacular brightness, along with a redesigned frame that looks less like a discount TV and more like the high-end options it competes against.

5. TCL 65" QM7 QLED (Best Entry-Level TV for Gaming, best value)
TCL's QM7 is the best deal in gaming televisions and gets just about everything right. From excellent color rendition to top-end gaming performance, it's a lot of television for quite a low price.
Product specifications: Screen size: 65"; Resolution: 4K; Panel type: QLED; HDR compatibility: Dolby Vision, HDR10, HDR10+, HLG; Refresh rate: 144Hz; Adaptive sync: G-Sync Compatible, FreeSync, VRR; Inputs: 2 x HDMI 2.1, 2 x HDMI 2.0, 2 x USB, Ethernet.
Pros: Great brightness; great gaming and HDR support; better-than-average sound quality; incredible value. Cons: Weak off-angle viewing.

TCL's QM7 is the king of value. As I found in my review, it offers outstanding color accuracy, good contrast, good brightness, and a processor that is fast enough to make browsing the operating system feel quick and snappy. At $600 for a 65-inch set, you probably aren't expecting a lot, so the QM7 kind of sneaks up on you with how much it does right. It offers a solid 1,500 nits of peak brightness, separates its pair of 4K 144Hz ports from its eARC port (so you can actually use all three simultaneously), comes packed with one of the better gaming menus, and supports all the most-used HDR formats. It even sounds pretty darn good out of the box thanks to an Onkyo 2.1 sound system, which includes a subwoofer built into the rear of the panel. The Achilles heel of the QM7 is off-axis viewing, though. While contrast loss isn't huge, bright objects set against dark backgrounds will exhibit a rather large halo effect, and that is most visible when watching this TV from the side. The QM7 does a lot right, avoiding the expected norm of conceding some performance in order to arrive at that lower asking price and instead providing a television that just does everything pretty darn well.

How to Choose a TV for Gaming

There are several key features that make for a great gaming TV, according to our TV buying guide. The first one to pay attention to is the refresh rate. If you want the smoothest possible gameplay, you should be seeking out TVs with higher refresh rates. All of our picks offer 4K/120Hz, which perfectly matches the specs of both the PS5 and Xbox Series X/S, and some TVs hit 144Hz when PC gaming. However, with 8K TVs making their way onto the scene, 8K/60Hz also offers some solid gameplay.

Next up, variable refresh rate is essential for staying immersed in the best PS5 and Xbox games. VRR helps to eliminate screen tearing by adjusting the refresh rate to match the frame rate coming out of your gaming system. Before VRR and high refresh rates, game consoles would force you to play with a 30 or 60fps cap in order to match the TV's 60Hz refresh rate; otherwise you'd encounter screen tearing. With VRR, however, the system is free to push out as many frames as possible, confident that the TV will vary its refresh rate to match. In other words, VRR lets you play with an uncapped frame rate while still preventing screen tearing.

HDMI 2.1 connectivity may be the most crucial specification to seek out when purchasing a gaming TV. If you're looking to game at 120Hz in 4K, which both the PS5 and Xbox Series X/S offer, you'll need this port. An HDMI 2.0 port will only offer 4K at speeds of 60Hz.

Gaming TV FAQ

What kind of TV do I need for PS5? Most modern TVs will work with the PS5, but to take full advantage of all the console has on offer (including being one of the best Blu-ray players), you may want to spend a little extra for better specs and more features. The PS5 can output a 120Hz refresh rate in 4K through HDMI 2.1, while some games also support VRR and ALLM, ensuring smoother and more enjoyable gameplay. The best TVs for PS5 will offer those specs.

What are the disadvantages of gaming TVs? Gaming TVs are great for consoles, providing speedy refresh rates, VRR, and game modes to ensure an enjoyable playing experience, but that doesn't mean they're not also great for watching movies or streaming TV shows. These days, the best TVs come with gaming features, whether they're marketed as gaming TVs or not, so you can rest assured that if a TV is good for gaming, it's good for everything else too.

Is a gaming monitor or TV better? Choosing a display to game on depends on personal preference and how you want to play. The best gaming monitors have an edge when it comes to responsiveness, sporting even higher refresh rates than the best TVs, as well as lower input lag and other advanced display features. Of course, monitors are often smaller in size, more adjustable, and live on a gaming desk. TVs are meant for couch gaming and tend to pull ahead in image quality and HDR performance. We discuss gaming monitors vs. TVs here.

When is the best time to buy a TV? TVs go on sale throughout the year, but some key times are Black Friday, before the Super Bowl, and Prime Day. Outside of that, many manufacturers churn out new TV models in the spring, so you can score deals on older offerings. Check out the best time to buy a TV guide for more info.

Where to Get the Best TVs for Gaming in the UK: LG 65" C1 Series OLED (Currys PC World), Hisense 55U8H (Currys), Sony A95K QD-OLED (Amazon), Samsung QN65QN900B (Amazon), Sony Bravia XR-55X90L (Amazon), and Samsung 75" QN85B Neo QLED 4K Smart TV.

Jaron Schneider is an award-winning commercial filmmaker, an internationally published consumer technology journalist, and a long-time digital imaging expert across the fields of both video production and traditional photography. He is also the Editor-in-Chief of PetaPixel.