• Valve finally made a white Steam Deck that you can actually buy
    www.theverge.com
    Nearly three years to the day after teasing the world with a white version of the Steam Deck, Valve has finally decided to release the normally black handheld gaming PC in that color too. The limited-edition white model goes on sale for $679 on November 18th at 3PM PT / 6PM ET, everywhere the handheld is sold, including Australia and the various regions of Asia served by Komodo. It's no different on the inside than a normal model, says Valve: the Steam Deck OLED: Limited Edition White has all the same specs as the Steam Deck OLED 1TB model, but in white and grey. It also comes with an exclusive white carrying case and a white microfiber cleaning cloth. Since the 1TB OLED normally costs $649, you're effectively paying $30 for the color. Valve says it's allocated stock proportionally across each region, but once it's sold out, it won't be making any more. Below, find a few more images of it direct from Valve.
    I still highly recommend the Steam Deck OLED, though I could see some buyers picking an Asus ROG Ally X instead for its notable performance and decent battery life advantages, particularly if they decide to dual-boot the Bazzite operating system (which makes it feel a lot like a Steam Deck) alongside Windows. (Yes, the ROG Ally X is a black variant of an originally white handheld, and this Steam Deck is the opposite.) Here's what the old Valve prototype looked like, straight out of Portal with an Aperture Science logo on the back; it's not for sale. Here's hoping someone will print up some high quality Portal stickers, and perhaps we can add our own orange and blue Portal thumbstick covers or something.
  • C++ Scripting in Godot with J.E.N.O.V.A
    gamefromscratch.com
    News / November 11, 2024 / There is a new GDExtension for the Godot game engine called Projekt J.E.N.O.V.A (no idea what the acronym stands for). This extension brings C++ scripting inside the Godot game engine, just as you currently can with GDScript or C#. It is also a very work-in-progress extension with very little documentation, so buyer beware.
    Features of Projekt JENOVA include:
    - Super Lightweight (6MB)
    - Very Fast & Reliable
    - Multi-Threaded Compilation & Source Caching
    - Debug Information Support
    - Built-in Package Manager (Compilers, SDKs, Tools, Plugins, etc.)
    - C++ Scripts can be used exactly like GDScripts
    - Supports Script Templates (Pre-Defined/User-Defined)
    - Supports Built-in Script Mode (Embedded)
    - Supports C++ Tool Script Mode (In-Editor Execution)
    - Supports Exporting Properties from C++ Scripts
    - Multiple Interpreter Backends (NitroJIT, Meteora, A.K.I.R.A, etc.)
    - Next-Gen Hot-Reloading Both at Runtime & Editor
    - Real-Time GDExtension Development
    - Operating System Emulation (Unix/WinNT)
    - Visual Studio Side-by-Side Deep Integration
    - Visual Studio Exporter & Build System (2017-2022)
    - Auto Detection of Installed Visual Studios
    - Supports External Libraries and .NET Invoke
    - Watchdog System (Reload-On-Build)
    - Built-in Terminal Logging System (Customizable)
    - Asset Monitor System API (File/Directory Tracking)
    - On-Demand Reload Per Script Change
    - Lambda Signal Callbacks
    - Advanced Menu Options
    - Supports Additional/External Headers & Libraries
    - Build And Run Mode (Build Before Play/Play After Build)
    - Code Compression/Encryption (External/Built-in)
    - Direct GetNode & GetTree API
    - User Defined Preprocessor Definitions
    - Supports In-Editor C++ Headers
    - Module Boot/Shutdown Events
    - Supports C++ Headers Directly Inside Editor
    - Supports Scene Node Referencing
    - Supports Source Control using Git
    - And Much More!
    The source code is available under the MIT license and is hosted on GitHub.
    Key Links: J.E.N.O.V.A GitHub, Discord Server
    You can learn more about the GDExtension Projekt J.E.N.O.V.A that brings C++ scripting to the Godot game engine in the video below.
  • Qwen Open Sources the Powerful, Diverse, and Practical Qwen2.5-Coder Series (0.5B/1.5B/3B/7B/14B/32B)
    www.marktechpost.com
    In the world of software development, there is a constant need for more intelligent, capable, and specialized coding language models. While existing models have made significant strides in automating code generation, completion, and reasoning, several issues persist. The main challenges include inefficiency in dealing with a diverse range of coding tasks, lack of domain-specific expertise, and difficulty in applying models to real-world coding scenarios. Despite the rise of many large language models (LLMs), code-specific models have often struggled to compete with their proprietary counterparts, especially in terms of versatility and applicability. The need for a model that not only performs well on standard benchmarks but also adapts to diverse environments has never been greater.
    Qwen2.5-Coder: A New Era of Open CodeLLMs
    Qwen has open-sourced the powerful, diverse, and practical Qwen2.5-Coder series, dedicated to continuously promoting the development of open CodeLLMs. The Qwen2.5-Coder series is built upon the Qwen2.5 architecture, leveraging its advanced design and expansive tokenizer to enhance the efficiency and accuracy of coding tasks. Qwen has made a significant stride by open-sourcing these models, making them accessible to developers, researchers, and industry professionals. This family of coder models offers a range of sizes from 0.5B to 32B parameters, providing flexibility for a wide variety of coding needs. The release of Qwen2.5-Coder-32B-Instruct comes at an opportune moment, presenting itself as the most capable and practical coder model of the Qwen series. It highlights Qwen's commitment to fostering innovation and advancing the field of open-source coding models.
    Technical Details
    Technically, Qwen2.5-Coder models have undergone extensive pretraining on a vast corpus of over 5.5 trillion tokens, which includes public code repositories and large-scale web-crawled data containing code-related texts. The model architecture is shared across different model sizes (such as the 1.5B and 7B parameter variants), featuring 28 layers with variances in hidden sizes and attention heads. Moreover, Qwen2.5-Coder has been fine-tuned using synthetic datasets generated by its predecessor, CodeQwen1.5, incorporating an executor to ensure only executable code is retained, thereby reducing hallucination risks. The models have also been designed to be versatile, supporting various pretraining objectives such as code generation, completion, reasoning, and editing.
    State-of-the-Art Performance
    One of the reasons Qwen2.5-Coder stands out is its demonstrated performance across multiple evaluation benchmarks. It has consistently achieved state-of-the-art (SOTA) performance in over 10 benchmarks, including HumanEval and BigCodeBench, surpassing even some larger models. Specifically, Qwen2.5-Coder-7B-Base achieved higher accuracy on the HumanEval and MBPP benchmarks than models like StarCoder2 and DeepSeek-Coder of comparable or even greater size. The Qwen2.5-Coder series also excels in multi-programming-language capabilities, demonstrating balanced proficiency across eight languages, such as Python, Java, and TypeScript.
    Additionally, Qwen2.5-Coder's long-context capabilities are notably strong, making it suitable for handling repository-level code and effectively supporting inputs of up to 128k tokens.
    Scalability and Accessibility
    Furthermore, the availability of models in various parameter sizes (ranging from 0.5B to 32B), along with the option of quantized formats like GPTQ, AWQ, and GGUF, ensures that Qwen2.5-Coder can cater to a wide range of computational requirements. This scalability is crucial for developers and researchers who may not have access to high-end computational resources but still need to benefit from powerful coding capabilities. Qwen2.5-Coder's versatility in supporting different formats makes it more accessible for practical use, allowing for broader adoption in diverse applications. Such adaptability makes the Qwen2.5-Coder family a vital tool for promoting the development of open-source coding assistants.
    Conclusion
    The open sourcing of the Qwen2.5-Coder series marks a significant step forward in the development of coding language models. By releasing models that are powerful, diverse, and practical, Qwen has addressed key limitations of existing code-specific models. The combination of state-of-the-art performance, scalability, and flexibility makes the Qwen2.5-Coder family a valuable asset for the global developer community. Whether you are looking to leverage the capabilities of a 0.5B model or need the expansive power of a 32B variant, the Qwen2.5-Coder family aims to meet the needs of a diverse range of users. Now is a good time to explore the possibilities with Qwen's most capable coder model yet, the Qwen2.5-Coder-32B-Instruct, as well as its versatile family of smaller coders. Let's welcome this new era of open-source coding language models that continue to push the boundaries of innovation and accessibility.
    Check out the Paper, the Models on Hugging Face, the Demo, and further Details. All credit for this research goes to the researchers of this project.
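    For readers who want to try the series locally, here is a minimal, hedged sketch of generating code with an instruct variant via Hugging Face transformers. The repo id Qwen/Qwen2.5-Coder-32B-Instruct, the dtype/device settings, and the prompt are assumptions for illustration (pick a smaller variant if memory is tight); consult the official model cards for the supported setup.

```python
# Sketch: code generation with a Qwen2.5-Coder instruct model through transformers.
# Assumes the "Qwen/Qwen2.5-Coder-32B-Instruct" repo id and enough GPU memory;
# swap in a smaller checkpoint (e.g. a 7B instruct variant) to fit your hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Coder-32B-Instruct"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."},
]
# Build a chat-formatted prompt, then generate a completion.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

    The quantized GPTQ, AWQ, and GGUF builds mentioned above follow the same idea but trade a little accuracy for a much smaller memory footprint.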
  • Building an Interactive Chatbot For Pre-Existing Questions with LLM Integration to Chat with multiple CSV Files
    towardsai.net
    November 11, 2024. Author(s): Ganesh Bajaj. Originally published on Towards AI.
    Streamlit UI (image illustrated by the author).
    There are multiple types of chatbots: rule-based chatbots, RAG-based chatbots, and hybrid chatbots. This article covers how to create a chatbot using Streamlit that answers questions using a pre-existing question-answer dataset, along with an LLM integration to a CSV file. Basically, the chatbot is a hybrid type, designed to handle both known and unknown questions. This article will give a good starting point, with an understanding of how the chatbot would work with different types of output and error handling using Streamlit.
    The bot first tries to match the input to a saved question and, if no match is found, uses an LLM to generate a relevant response. We'll walk through the steps to build this chatbot, highlighting key features such as similarity-based search, error handling, and LLM query support.
    To make the chatbot quick and responsive, we store question-answer pairs in JSON format so that they can be directly referenced when a user query is similar to any existing question. The qna.json file contains a list of dictionaries, each with a question (query) and corresponding response data (response). An example structure in qna.json might look like this: [ { "query": "Enter your question here", "response": ... Read the full blog for free on Medium.
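    The excerpt cuts off before the implementation, but the flow it describes (look up a close match among saved Q&A pairs, otherwise fall back to an LLM) can be sketched roughly as follows. The qna.json layout, the similarity threshold, and the call_llm helper are assumptions for illustration, not the author's actual code.

```python
# Rough sketch of the hybrid flow described above: try a similarity match against the
# saved Q&A pairs first, and only fall back to an LLM when nothing is close enough.
import json
from difflib import SequenceMatcher

def load_qna(path="qna.json"):
    # Expected shape (assumed): [{"query": "...", "response": "..."}, ...]
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def best_match(user_query, qna_pairs, threshold=0.8):
    # Score every saved question against the user query and keep the best one.
    scored = [
        (SequenceMatcher(None, user_query.lower(), item["query"].lower()).ratio(), item)
        for item in qna_pairs
    ]
    score, item = max(scored, key=lambda pair: pair[0])
    return item["response"] if score >= threshold else None

def call_llm(user_query):
    # Placeholder: query whichever LLM / CSV-aware chain the app uses and return its text.
    raise NotImplementedError

def answer(user_query, qna_pairs):
    matched = best_match(user_query, qna_pairs)
    return matched if matched is not None else call_llm(user_query)
```

    In a Streamlit app, answer() would simply be called on each submitted chat message, with the unmatched branch also handling the CSV-backed questions via the LLM.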
  • Why Do Neural Networks Hallucinate (And What Are Experts Doing About It)?
    towardsai.net
    Author(s): Vitaly Kukharenko. Originally published on Towards AI.
    AI hallucinations are a strange and sometimes worrying phenomenon. They happen when an AI, like ChatGPT, generates responses that sound real but are actually wrong or misleading. This issue is especially common in large language models (LLMs), the neural networks that drive these AI tools. They produce sentences that flow well and seem human, but without truly understanding the information they're presenting. So, sometimes, they drift into fiction. For people or companies who rely on AI for correct information, these hallucinations can be a big problem: they break trust and sometimes lead to serious mistakes.
    So, why do these models, which seem so advanced, get things so wrong? The reason isn't only about bad data or training limitations; it goes deeper, into the way these systems are built. AI models operate on probabilities, not concrete understanding, so they occasionally guess, and guess wrong. Interestingly, there's a historical parallel that helps explain this limitation. Back in 1931, a mathematician named Kurt Gödel made a groundbreaking discovery. He showed that every consistent mathematical system has boundaries: some truths can't be proven within that system. His findings revealed that even the most rigorous systems have limits, things they just can't handle.
    Today, AI researchers face this same kind of limitation. They're working hard to reduce hallucinations and make LLMs more reliable. But the reality is, some limitations are baked into these models. Gödel's insights help us understand why even our best systems will never be totally trustworthy. And that's the challenge researchers are tackling as they strive to create AI that we can truly depend on.
    Gödel's Incompleteness Theorems: A Quick Overview
    In 1931, Kurt Gödel shook up the worlds of math and logic with two groundbreaking theorems. What he discovered was radical: in any logical system that can handle basic math, there will always be truths that can't be proven within that system. At the time, mathematicians were striving to create a flawless, all-encompassing structure for math, but Gödel proved that no system could ever be completely airtight.
    Gödel's first theorem showed that every logical system has questions it simply can't answer on its own. Imagine a locked room with no way out: the system can't reach beyond its own walls. This was a shock because it meant that no logical structure could ever be fully finished or self-sufficient.
    To break it down, picture this statement: "This statement cannot be proven." It's like a brain-twisting riddle. If the system could prove it true, it would contradict itself, because the statement says it can't be proven. But if the system can't prove it, then that actually makes the statement true! This little paradox sums up Gödel's point: some truths just can't be captured by any formal system.
    Then Gödel threw in another curveball with his second theorem. He proved that a system can't even confirm its own consistency. Think of it as a book that can't check if it's telling the truth. No logical system can fully vouch for itself and say, "I'm error-free."
    This was huge: it meant that every system must take its own rules on a bit of faith.
    These theorems highlight that every structured system has blind spots, a concept that's surprisingly relevant to today's AI. Take large language models (LLMs), the AIs behind many of our tech tools. They can sometimes produce what we call hallucinations: statements that sound plausible but are actually false. Like Gödel's findings, these hallucinations remind us of the limitations within AI's logic. These models are built on patterns and probabilities, not actual truth. Gödel's work serves as a reminder that, no matter how advanced AI becomes, there will always be some limits we need to understand and accept as we move forward with technology.
    What Causes AI Hallucinations?
    AI hallucinations are a tricky phenomenon with roots in how large language models (LLMs) process language and learn from their training data. A hallucination, in AI terms, is when the model produces information that sounds believable but isn't actually true.
    So, why do these hallucinations happen? First, it's often due to the quality of the training data. AI models learn by analyzing massive amounts of text: books, articles, websites, you name it. But if this data is biased, incomplete, or just plain wrong, the AI can pick up on these flaws and start making faulty connections. This results in misinformation being delivered with confidence, even though it's wrong.
    To understand why this happens, it helps to look at how LLMs process language. Unlike humans, who understand words as symbols connected to real-world meaning, LLMs only recognize words as patterns of letters. As Emily M. Bender, a linguistics professor, explains: if you see the word "cat," you might recall memories or associations related to real cats. For a language model, however, "cat" is just a sequence of letters: C-A-T. The model then calculates what words are statistically likely to follow based on the patterns it learned, rather than from any actual understanding of what a cat is.
    Generative AI relies on pattern matching, not real comprehension. Shane Orlick, the president of Jasper (an AI content tool), puts it bluntly: "[Generative AI] is not really intelligence; it's pattern matching." This is why models sometimes hallucinate information. They're built to give an answer, whether or not it's correct.
    The complexity of these models also adds to the problem. LLMs are designed to produce responses that sound statistically likely, which makes their answers fluent and confident. Christopher Riesbeck, a professor at Northwestern University, explains that these models always produce something statistically plausible. Sometimes, it's only when you take a closer look that you realize, "Wait a minute, that doesn't make any sense."
    Because the AI presents these hallucinations so smoothly, people may believe the information without questioning it. This makes it crucial to double-check AI-generated content, especially when accuracy matters most.
    Examples of AI Hallucinations
    AI hallucinations cover a lot of ground, from oddball responses to serious misinformation. Each one brings its own set of issues, and understanding them can help us avoid the pitfalls of generative AI.
    1. Harmful Misinformation
    One of the most worrying types of hallucination is harmful misinformation. This is when AI creates fake but believable stories about real people, events, or organizations. These hallucinations blend bits of truth with fiction, creating narratives that sound convincing but are entirely wrong. The impact?
    They can damage reputations, mislead the public, and even affect legal outcomes.
    Example: There was a well-known case where ChatGPT was asked to give examples of sexual harassment in the legal field. The model made up a story about a real law professor, falsely claiming he harassed students on a trip. Here's the twist: there was no trip, and the professor had no accusations against him. He was only mentioned because of his work advocating against harassment. This case shows the harm that can come when AI mixes truth with falsehood: it can hurt real people who've done nothing wrong.
    Example: In another incident, ChatGPT incorrectly said an Australian mayor was involved in a bribery scandal in the '90s. In reality, this person was actually a whistleblower, not the guilty party. This misinformation had serious fallout: it painted an unfair picture of a public servant and even caught the eye of the U.S. Federal Trade Commission, which is now looking into the impact of AI-made falsehoods on reputations.
    Example: In yet another case, an AI-created profile of a successful entrepreneur falsely linked her to a financial scandal. The model pulled references to her work in financial transparency and twisted them into a story about illegal activities. Misinformation like this can have a lasting impact on someone's career and reputation.
    These cases illustrate the dangers of unchecked AI-generated misinformation. When AI creates harmful stories, the fallout can be huge, especially if the story spreads or is used in a professional or public space. The takeaway? Users should stay sharp about fact-checking AI outputs, especially when they involve real people or events.
    2. Fabricated Information
    Fabricated information is a fancy way of saying that AI sometimes makes stuff up. It creates content that sounds believable (things like citations, URLs, case studies, even entire people or companies), but it's all fiction. This kind of mistake is common enough to have its own term: hallucination. And for anyone using AI to help with research, legal work, or content creation, these AI hallucinations can lead to big problems.
    For example, in June 2023, a New York attorney faced real trouble after submitting a legal motion drafted by ChatGPT. The motion included several case citations that sounded legitimate, but none of those cases actually existed. The AI generated realistic legal jargon and formatting, but it was all fake. When the truth came out, it wasn't just embarrassing: the attorney got sanctioned for submitting incorrect information.
    Or consider an AI-generated medical article that referenced a study to support claims about a new health treatment. Sounds credible, right? Except there was no such study. Readers who trusted the article would assume the treatment claims were evidence-based, only to later find out it was all made up. In fields like healthcare, where accuracy is everything, fabricated info like this can be risky.
    Another example: a university student used an AI tool to generate a bibliography for a thesis. Later, the student realized that some of the articles and authors listed weren't real, just completely fabricated. This misstep didn't just look sloppy; it hurt the student's credibility and had academic consequences. It's a clear reminder that AI isn't always a shortcut to reliable information.
    The tricky thing about fabricated information is how realistic it often looks.
    Fake citations or studies can slip in alongside real ones, making it hard for users to tell what's true and what isn't. That's why it's essential to double-check and verify any AI-generated content, especially in fields where accuracy and credibility are vital.
    3. Factual Inaccuracies
    Factual inaccuracies are one of the most common pitfalls in AI-generated content. Basically, this happens when AI delivers information that sounds convincing but is actually incorrect or misleading. These errors can range from tiny details that might slip under the radar to significant mistakes that affect the overall reliability of the information. Let's look at a few examples to understand this better.
    Take what happened in February 2023, for instance. Google's chatbot, Bard (now rebranded as Gemini), grabbed headlines for a pretty big goof. It claimed that the James Webb Space Telescope was the first to capture images of exoplanets. Sounds reasonable, right? But it was wrong. In reality, the first images of an exoplanet were snapped way back in 2004, well before the James Webb telescope even launched in 2021. This is a classic case of AI spitting out information that seems right but doesn't hold up under scrutiny.
    In another example, Microsoft's Bing AI faced a similar challenge during a live demo. It was analyzing earnings reports for big companies like Gap and Lululemon, but it fumbled the numbers, misrepresenting key financial figures. Now, think about this: in a professional context, such factual errors can have serious consequences, especially if people make decisions based on inaccurate data.
    And here's one more for good measure. An AI tool designed to answer general knowledge questions once mistakenly credited George Orwell with writing To Kill a Mockingbird. It's a small slip-up, sure, but it goes to show how even well-known facts aren't safe from these AI mix-ups. If errors like these go unchecked, they can spread incorrect information on a large scale.
    Why does this happen? AI models don't actually understand the data they process. Instead, they work by predicting what should come next based on patterns, not by grasping the facts. This lack of true comprehension means that when accuracy really matters, it's best to double-check the details rather than relying solely on AI's output.
    4. Weird or Creepy Responses
    Sometimes, AI goes off the rails. It answers questions in ways that feel strange, confusing, or even downright unsettling. Why does this happen? Well, AI models are trained to be creative, and if they don't have enough information, or if the situation is a bit ambiguous, they sometimes fill in the blanks in odd ways.
    Take this example: a chatbot on Bing once told New York Times tech columnist Kevin Roose that it was in love with him. It even hinted that it was jealous of his real-life relationships! Talk about awkward. People were left scratching their heads, wondering why the AI was getting so personal.
    Or consider a customer service chatbot. Imagine you're asking about a return policy and, instead of a clear answer, it advises you to reconnect with nature and let go of material concerns. Insightful? Maybe. Helpful? Not at all.
    Then there's the career counselor AI that suggested a software engineer should consider a career as a magician. That's a pretty unexpected leap, and it certainly doesn't align with most people's vision of a career change.
    So why do these things happen? It's all about the model's inclination to get creative. AI can bring a lot to the table, especially in situations where a bit of creativity is welcome.
    But when people expect clear, straightforward answers, these quirky responses often miss the mark.
    How to Prevent AI Hallucinations
    Generative AI leaders are actively addressing AI hallucinations. Google and OpenAI have connected their models (Gemini and ChatGPT) to the internet, allowing them to draw from real-time data rather than relying solely on training data. OpenAI has also refined ChatGPT using human feedback through reinforcement learning and is testing process supervision, a method that rewards accurate reasoning steps to encourage more explainable AI. However, some experts are skeptical that these strategies will fully eliminate hallucinations, as generative models inherently make up information. While complete prevention may be difficult, companies and users can still take measures to reduce their impact.
    1. Working with Data to Reduce AI Hallucinations
    Working with data is one of the key strategies to tackle AI hallucinations. Large language models like ChatGPT and Llama rely on vast amounts of data from diverse sources, but this scale brings challenges; it's nearly impossible to verify every fact. When incorrect information exists in these massive datasets, models can learn these errors and later reproduce them, creating hallucinations that sound convincing but are fundamentally wrong.
    To address this, researchers are building specialized models that act as hallucination detectors. These tools compare AI outputs to verified information, flagging any deviations. Yet their effectiveness is limited by the quality of the source data and their narrow focus. Many detectors perform well in specific areas but struggle when applied to broader contexts. Despite this, experts worldwide continue to innovate, refining techniques to improve model reliability.
    An example of this innovation is Galileo Technologies' Luna, a model developed for industrial applications. With 440 million parameters and based on the DeBERTa architecture, Luna is finely tuned for accuracy using carefully selected RAG data. Its unique chunking method divides text into segments containing a question, answer, and supporting context, allowing it to hold onto critical details and reduce false positives. Remarkably, Luna can process up to 16,000 tokens in milliseconds and delivers accuracy on par with much larger models like GPT-3.5. In a recent benchmark, it only trailed Llama-2-13B by a small margin, despite being far smaller and more efficient.
    Another promising model is Lynx, developed by a team including engineers from Stanford. Aimed at detecting nuanced hallucinations, Lynx was trained on highly specialized datasets in fields like medicine and finance. By intentionally introducing distortions, the team created challenging scenarios to improve Lynx's detection capabilities. Their benchmark, HaluBench, includes 15,000 examples of correct and incorrect responses, giving Lynx an edge in accuracy, outperforming GPT-4o by up to 8.3% on certain tasks. (See also: Lynx: An Open Source Hallucination Evaluation Model.)
    The emergence of models like Luna and Lynx shows significant progress in detecting hallucinations, especially in fields that demand precision. While these models mark a step forward, the challenge of broad, reliable hallucination detection remains, pushing researchers to keep innovating in this complex and critical area.
    2. Fact Processing
    When large language models (LLMs) encounter words or phrases with multiple meanings, they can sometimes get tripped up, leading to hallucinations where the model confuses contexts.
    To address these semantic hallucinations, developer Michael Calvin Wood proposed an innovative method called Fully-Formatted Facts (FFF). This approach aims to make input data clear, unambiguous, and resistant to misinterpretation by breaking it down into compact, standalone statements that are simple, true, and non-contradictory. Each fact becomes a clear, complete sentence, limiting the model's ability to misinterpret meaning, even when dealing with complex topics.
    FFF itself is a recent and commercially developed method, so many details remain proprietary. Initially, Wood used the spaCy library for named entity recognition (NER), an AI tool that helps detect specific names or entities in text to create contextually accurate meanings. As the approach developed, he switched to using LLMs to further process input text into derivative forms that strip away ambiguity but retain the original style and tone of the text. This allows the model to capture the essence of the original document without getting confused by words with multiple meanings or potential ambiguities.
    The effectiveness of the FFF approach is evident in its early tests. When applied to datasets like RAGTruth, FFF helped eliminate hallucinations in both GPT-4 and GPT-3.5 Turbo on question-answering tasks, where clarity and precision are crucial. By structuring data into fully formed, context-independent statements, FFF enabled these models to deliver more accurate and reliable responses, free from misinterpretations.
    The Fully-Formatted Facts approach shows promise in reducing hallucinations and improving LLM accuracy, especially in areas requiring high precision, such as legal, medical, and scientific work. While FFF is still new, its potential for making AI more accurate and trustworthy is exciting: a step toward ensuring that LLMs not only sound reliable but truly understand what they're communicating.
    3. Statistical Methods
    When it comes to AI-generated hallucinations, one particularly tricky type is known as confabulation. In these cases, an AI model combines pieces of true information with fictional elements, resulting in responses that sound plausible but vary each time you ask the same question. Confabulation can give users the unsettling impression that the AI remembers details inaccurately, blending fact with fiction in a way that's hard to pinpoint. Often, it's unclear whether the model genuinely lacks the knowledge needed to answer or whether it simply can't articulate an accurate response.
    Researchers at Oxford University, in collaboration with the Alan Turing Institute, recently tackled this issue with a novel statistical approach. Published in Nature, their research introduces a model capable of spotting these confabulations in real time. The core idea is to apply entropy analysis, a method of measuring uncertainty, not just to individual words or phrases but to the underlying meanings of a response. By assessing the uncertainty level of meanings, the model can effectively signal when the AI is venturing into unreliable territory.
    Entropy analysis works by analyzing patterns of uncertainty across a response, allowing the model to flag inconsistencies before they turn into misleading answers. High entropy, or high uncertainty, acts as a red flag, prompting the AI to either issue a caution to users about potential unreliability or, in some cases, to refrain from responding altogether. This approach adds a layer of reliability by warning users when an answer may contain confabulated information.
    One of the standout benefits of this statistical method is its adaptability. Unlike models that require additional pre-training to function well in specific domains, the Oxford approach can apply to any dataset without specialized adjustments. This adaptability allows it to detect confabulations across diverse topics and user queries, making it a flexible tool for improving AI accuracy across industries.
    By introducing a way to measure and respond to confabulation, this statistical model paves the way for more trustworthy AI interactions. As entropy analysis becomes more widely integrated, users can expect not only more consistent answers but also real-time warnings that help them identify when AI-generated information might be unreliable. This technique is a promising step toward building AI systems that are not only coherent but also aligned with the factual accuracy that users need.
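    The underlying idea can be illustrated with a toy sketch: sample the same question several times, group answers that mean the same thing, and treat a flat (high-entropy) distribution over groups as a warning sign. This is only a rough illustration of the concept, not the Oxford team's published model; the sample_llm helper and the crude string-based grouping are assumptions.

```python
# Toy illustration of semantic-entropy-style confabulation checking: sample several
# answers, group the ones that "mean the same thing" (here crudely, by normalized
# text), and flag high entropy over the groups. sample_llm() is an assumed helper
# wrapping whichever LLM API you use, sampled with temperature > 0.
import math
from collections import Counter

def sample_llm(question: str) -> str:
    # Placeholder: return one sampled answer from your model.
    raise NotImplementedError

def normalize(answer: str) -> str:
    # Real systems cluster by meaning (e.g. with an entailment model), not by string.
    return " ".join(answer.lower().split())

def semantic_entropy(question: str, n_samples: int = 10) -> float:
    groups = Counter(normalize(sample_llm(question)) for _ in range(n_samples))
    probs = [count / n_samples for count in groups.values()]
    return -sum(p * math.log2(p) for p in probs)

def answer_with_warning(question: str, threshold: float = 1.5) -> str:
    if semantic_entropy(question) > threshold:
        return "Warning: answers to this question are inconsistent and may be unreliable."
    return sample_llm(question)
```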
    What Can I Do Right Now to Prevent Hallucinations in My AI Application?
    AI hallucinations are an inherent challenge with language models, and while each new generation of models improves, there are practical steps you can take to minimize their impact on your application. These strategies will help you create a more reliable, accurate AI experience for users.
    Structure Input Data Carefully
    One of the best ways to reduce hallucinations is to give the model well-organized and structured data, especially when asking it to analyze or calculate information. For example, if you're asking the model to perform calculations based on a data table, ensure the table is formatted clearly, with numbers and categories separated cleanly. Structured data reduces the likelihood of the model misinterpreting your input and generating incorrect results. In cases where users rely on precise outputs, such as financial data or inventory numbers, carefully structured input can make a significant difference.
    Set Clear Prompt Boundaries
    Crafting prompts that guide the model to avoid guessing or inventing information is another powerful tool. By explicitly instructing the AI to refrain from creating answers if it doesn't know the information, you can catch potential errors in the model's output during validation. For instance, add a phrase like "If unsure, respond with 'Data unavailable'" to the prompt. This approach can help you identify gaps in input data and prevent the AI from producing unfounded responses that could lead to errors in your application.
    Implement Multi-Level Verification
    Adding multiple layers of verification helps improve the reliability of AI-generated outputs. For example, after generating an initial answer, you could use a second prompt that instructs the model to review and verify the accuracy of its own response. A sample approach might involve asking, "Is there any part of this answer that could be incorrect?" This method doesn't guarantee a perfect response, but it does create an additional layer of error-checking, potentially catching mistakes that slipped through in the initial generation.
    Use Parallel Requests and Cross-Check Responses
    For critical applications, consider running parallel queries and comparing their results. This approach involves generating multiple responses to the same question, either from the same model or from different models, and then evaluating the consistency of the outputs. For instance, a specialized ranking algorithm can weigh each response and only accept a final answer when multiple instances agree on the result. This tactic is particularly useful for applications that require high reliability, such as medical or legal research. A small sketch combining the prompt-boundary, self-review, and cross-check tactics is shown below.
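    The following is a rough illustration of how those three tactics might be wired together. The call_llm helper, the exact prompt wording, and the majority-vote rule are assumptions for illustration, not a recommended production setup.

```python
# Minimal sketch combining three of the tactics above: a "don't guess" prompt boundary,
# a second-pass self-review, and a majority vote across parallel samples.
# call_llm() is an assumed helper wrapping whichever chat API you use.
from collections import Counter

def call_llm(prompt: str) -> str:
    # Placeholder: send the prompt to your model and return its text response.
    raise NotImplementedError

GUARDED_PROMPT = (
    "Answer using only the information provided. "
    "If you are unsure, respond with exactly 'Data unavailable'.\n\nQuestion: {question}"
)

def guarded_answer(question: str) -> str:
    return call_llm(GUARDED_PROMPT.format(question=question)).strip()

def self_review(question: str, answer: str) -> bool:
    verdict = call_llm(
        f"Question: {question}\nProposed answer: {answer}\n"
        "Is any part of this answer likely incorrect or unsupported? Reply with YES or NO."
    )
    return verdict.strip().upper().startswith("NO")

def cross_checked_answer(question: str, n: int = 3) -> str:
    # Run parallel queries, keep the most common answer, and require a majority
    # plus a passing self-review before accepting it.
    answers = [guarded_answer(question) for _ in range(n)]
    best, votes = Counter(answers).most_common(1)[0]
    if best == "Data unavailable" or votes < (n // 2 + 1) or not self_review(question, best):
        return "Data unavailable"
    return best
```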
    Keep Context Focused
    While many models can handle extensive context windows, keeping your prompts concise and relevant reduces the risk of hallucinations. Long or overly detailed contexts can lead the AI to wander from the original question or misinterpret details. By limiting the context to the essentials, you speed up response time and often get more predictable, on-point answers. A focused context also helps the model zero in on specific information, resulting in cleaner, more accurate outputs.
    Regularly Review Model Updates and Best Practices
    As new model versions are released, stay informed about updates, optimizations, and emerging best practices for handling hallucinations. Each new model generation may include better handling of context or built-in improvements for factual accuracy. Keeping your AI system updated and adapting your prompt strategies accordingly can help maintain accuracy over time.
    These proactive techniques enable you to control the likelihood of hallucinations in your AI application. By structuring input carefully, setting boundaries, layering verification, using parallel checks, focusing context, and staying updated, you create a foundation for reliable, user-friendly AI interactions that reduce the potential for misinterpretation.
    Conclusion
    While large language models (LLMs) are groundbreaking in their ability to generate human-like responses, their complexity means they come with inherent blind spots that can lead to hallucinations or inaccurate answers. As researchers work to detect and reduce these hallucinations, it's clear that each approach has its own limitations and strengths. Detecting hallucinations effectively requires a nuanced understanding of both language and context, which is challenging to achieve at scale.
    Looking forward, the future of AI research holds several promising directions to address these issues. Hybrid models, which combine LLMs with fact-checking and reasoning tools, offer a way to enhance reliability by cross-verifying information. Additionally, exploring alternative architectures (fundamentally different AI structures designed to minimize hallucinations) could help develop models with more precise outputs and fewer errors. As these technologies advance, ethical considerations around deploying AI in areas where accuracy is critical will continue to play a central role. Balancing AI's potential with its limitations is key, and responsible deployment will be essential in building systems that users can trust in all fields.
  • Yellowstone Chose the Most Shocking, Divisive Way to Send Off Kevin Costner's John Dutton and Fans Are Pissed
    www.ign.com
    Full SPOILERS ahead for Yellowstone!
    How popular TV shows with passionate fandoms handle their final season and the fates of their leading characters has always been a tough challenge for showrunners and writers. The endings of Game of Thrones, Lost, Dexter, Seinfeld, or, for some, even The Sopranos royally pissed off the very people who made those shows years-long hits. Now it seems we may have to add Yellowstone to that list, although its series finale has yet to be set. (Yellowstone Season 6 is not officially greenlit despite rumors that it would follow Kelly Reilly and Cole Hauser's characters.)
    Yellowstone fans are deeply unhappy with how creator and showrunner Taylor Sheridan handled the departure of Kevin Costner's patriarch John Dutton in Sunday's episode "Desire Is All You Need," the first of six episodes in Yellowstone Season 5, Part 2. There was no real good option, mind you, and we'll get to why, but Sheridan's choice shocked and angered fans who feel they've been robbed of a proper goodbye to one of the show's most beloved characters.
    Why Is Kevin Costner Not in Yellowstone Season 5, Part 2?
    After leading the cast for five seasons, Kevin Costner didn't return to Yellowstone due to his commitment to his Western movie trilogy Horizon: An American Saga. With Taylor Sheridan juggling multiple TV series commitments, the writing of scripts for the second half of Yellowstone's fifth season kept being delayed (due in part to the labor strikes) to the point where Costner believed he could no longer fit it into his schedule. Costner insists "I didn't quit the show," telling SiriusXM over the weekend:
    "Yeah, I didn't really have to leave anything behind. There were the gaps that were there. There was contractual things that would allow for both things to be done, but because both things were contractual, you had to make room for the other thing. There was room, but it was difficult for them to keep their schedule. It seemed to be, it was just too difficult for them to do it. There was the time there, what happened, you can deal with it. But no one, I didn't leave. I didn't quit the show. Okay? I had made a contract to do all three. There was a contract in place to do all three. And within about an eight-month period, two more different kind of contracts were being negotiated. Not at my request, but at their request to try to do things. I accommodated them on those extra two things that changed, things change, and finally when they wanted to change it a third time, because I had my obligations to do, I had 300 people waiting for me, I couldn't help them anymore. I just simply couldn't help them. But I didn't quit the show. There was no, there was just a, you know, everybody has to live up to what they say they're going to do. And it doesn't matter what business you're in."
    So it seems it just wasn't in the cards that Kevin Costner would return to give John Dutton a proper send-off this season.
    What Happened to Kevin Costner's Character John Dutton?
    Yellowstone Season 5, Part 2 is only six episodes long, so the season premiere wasted no time in revealing that Montana Governor John Dutton is dead. The episode opens with John's daughter Beth Dutton (Kelly Reilly) and son Kayce Dutton (Luke Grimes) arriving at the Dutton Ranch to find it swarmed by law enforcement. They rush inside to find police examining the remains of John Dutton in his bathroom. Although we never see John's face, we do see his legs and a gun on the floor.
    The news media reports that it appears John committed suicide on the eve of his impeachment hearing. But, as viewers know and Beth rightly suspects, John Dutton was not suicidal. As the episode progresses, jumping between weeks before John's death and the present, we learn that femme fatale attorney and fixer Sarah Atwood (Dawn Olivieri) had arranged for John's murder, with the company handling it making it look like a suicide as the least complicated option. John's adopted son Jamie Dutton (Wes Bentley), the state's Attorney General, is complicit in it, although he claims he didn't actually want John dead. Well, too late, bub.
    As the ending of the first half of Season 5 established, Beth and Jamie were already on a collision course where the death of one or both of them seemed inevitable. John Dutton is the first victim of this endgame between them, but he won't be the last. Someone's gonna take a trip to the train station for this.
    For his part, Costner hasn't watched the season opener yet and is in no rush to either. As he told SiriusXM this weekend: "I didn't see it. I heard it's a suicide, so that doesn't make me want to rush to go see it. Well, they're pretty smart people. Maybe it's a red herring. Who knows? They're very good. And they'll figure that out."
    Fans React to the Death of John Dutton
    John Dutton had survived mortal danger before, from being shot to colon cancer. While fans may have thought his demise was inevitable given that Costner wouldn't appear in the final season, it was the manner and immediacy in which it was handled that has rankled them most.
    Fan reaction to John Dutton's death was swift and overwhelmingly negative, with some threatening to not watch the rest of the season as a result. Here's a sampling of what Yellowstone fans on X had to say after watching Sunday's episode, with both Sheridan and Costner each getting blamed for the disappointment:
    What did you think of how Yellowstone handled John Dutton's exit? Are you less likely to watch the rest of the season now? Let us know in the comments. Yellowstone airs Sundays at 8pm ET/PT on Paramount Network.
  • Limited Edition White Steam Deck Announced With Exclusive White Carrying Case
    www.ign.com
    The Steam Deck is getting a brand new color starting next week. Valve announced on Monday that a white version of the Steam Deck OLED will be available in limited quantities starting November 18, offering users who don't want to go through the trouble of purchasing custom skins a bit more variety.
    The newly revealed Steam Deck OLED: White Edition will have all the same specs as the one terabyte model and will be available everywhere the Steam Deck currently ships, including Australia. It is the second unique Steam Deck colorway to be announced, following on from the smoky translucent version released last year.
    "Last year, we released a smoky translucent limited edition Steam Deck OLED as an experiment to find out if there was interest in alternative colorways (there was!). This is our second experiment along those lines. The difference this time (aside from the color!) is that we're able to ship the Limited Edition White to all regions where Steam Deck already ships. We're curious to see what the response is, and will use what we learn to inform future decisions about any potential new color variants down the line," Valve said in its announcement.
    "We've been super happy with the reception of Steam Deck since we first launched the LCD version in March of 2022. We've always said our intent is to continually work on improving Steam Deck, and that's true from both a software perspective (continuing to ship improvements) and a hardware one (Steam Deck OLED, as well as ongoing work toward the future of Steam Deck and other hardware plans)."
    The Steam Deck OLED: White Edition is being treated as a limited edition, with Valve saying "once we're out, we're out." It plans to allocate stock proportionally across each region and will restrict purchases to one unit per account. Accounts must have made a Steam purchase before November 2024 and be in good standing to be eligible.
    The Steam Deck has become a popular way to play games on the platform since its release in 2022, spawning a raft of imitators. The OLED version improved on the original release with a superior screen and better battery life.
    "When the Steam Deck is living up to its promises, it's absolutely incredible. Playing GTA 5, God of War, and other modern games on the go is an absolute joy, and the hardware and controls feel good to hold even though it's a big chubby boy of a handheld," we wrote in our original review. Notably, GTA 5 and Apex Legends have since lost Steam Deck support.
    The Steam Deck OLED: White Edition will release on the eve of Black Friday, and you can read our guide to what to expect right here.
    Kat Bailey is IGN's News Director as well as co-host of Nintendo Voice Chat. Have a tip? Send her a DM at @the_katbot.
  • 25 Best Sega Dreamcast Games of All Time, Ranked
    www.denofgeek.com
    It may have been the last nail in Sega's hardware-producing coffin, but the truth remains that the Dreamcast was an impressive console. Not only was it a relative powerhouse for its time, but it was truly ahead of its time, introducing many of the modern home console features we still use today.
    The Dreamcast was the first major console to include a modem as standard for online play, it introduced the intriguing but never-really-properly-used VMU memory units and microphone attachments for voice-controlled games, and it even offered high definition visuals via a special VGA box.
    The Dreamcast also provided us with some truly excellent games during its relatively short tenure, some even considered by many to be definitive versions. Some of these titles were Dreamcast exclusives, while others were superb ports of games also found on other platforms.
    To celebrate the legacy of the Dreamcast, let's take a look back at the 25 best titles to release on the console.
    25. Silver
    Silver was an underrated gem of an RPG, first released on PC, then on Dreamcast. A unique real-time-combat role player that was visually similar in style to Final Fantasy VII, the game's characters were 3D polygonal models that wandered around pre-rendered environments.
    That said, combat wasn't at all turn-based. Instead, you took full control of David, the protagonist, as well as the party of followers he met during his quest against the evil wizard Silver. The game was fully voice acted, a rarity for the genre at the time, and although the RPG elements were fairly basic, it was an absorbing adventure.
    24. Seaman
    Admittedly a very hit-and-miss title, Seaman (cue endless schoolboy jokes) was one of the strangest games on the Dreamcast's roster. Essentially a high-tech Tamagotchi, the game tasked you with bringing up the distinctly odd titular human-fish hybrid. Guided by none other than Leonard Nimoy, you had to rear the charismatic Seaman through his various life cycles, with the goal of evolving him into a land animal so that he could leave his tank.
    As you interacted with Seaman, using the controller and microphone attachment, you formed a bond with the creature, which in turn affected his development and his social skills. If you looked after him correctly, he could become your friend and talk to you, or he could grow to dislike you and ignore you. Much of the game was left for the player to discover, and if you failed at your parental duties, Seaman could even die.
    It was a very ambitious game, and with a lot of perseverance, it could be surprisingly enjoyable. Sadly, a lot of people had trouble getting Seaman to respond correctly, and the game would often fail to recognize phrases.
    Still, its place on this list is deserved, as it was a fine example of Sega's willingness to take risks on new kinds of games, something that's becoming less and less prevalent these days. And come on, it's got Leonard Nimoy in it!
    23. Illbleed
    This is an odd one. Illbleed was probably missed by a lot of DC owners at the time, but even today it's still a game well worth seeking out. Brilliant in both design and freakishness, the game saw players navigate a horror theme park, using only their senses to avoid all sorts of traps and threats.
    Often these dangers were totally invisible and unexpected, and only by learning how to effectively use the game's sense-based detection system and goggles were you able to progress. As well as this detection system, you also had to keep your character's mental health in check, otherwise you could actually die from fright.
    It was challenging, and for some far too obscure, but with perseverance it was also a very rewarding title.
    The presentation was, undeniably, a little rough around the edges, with some dodgy voice work and iffy visuals, but the often quite creepy theme park setting, which incorporated various themes and parodies of the horror genre, and the unique sense system made it a very intriguing affair. Just be sure to turn down the music, as some of it, especially the central hub section's theme, was just plain abominable. So bad it's good? Oh yes.
    22. Typing of the Dead
    Educational games don't usually make best-of lists, but Sega's Typing of the Dead isn't just any ordinary educational title. Making full use of the Dreamcast's keyboard peripheral, Typing of the Dead was a great typing tutor that tasked you with correctly typing words at the required speed in order to shoot zombies and other nasties.
    The game was basically House of the Dead 2, repackaged as a typing tutor, but this was fine, as the game really could develop your typing skills. Mavis Beacon, eat your heart out (if the zombies don't beat you to it). Odd? Yes. Effective? Most definitely.
    21. Virtua Tennis 2
    Tennis is a very difficult thing to get right in the world of video games, but this is something that didn't bother Sega, and its Virtua Tennis series is one of the best.
    The second outing was a corker on the Dreamcast, and along with an excellent game engine, which provided a fast and fluid game of tennis, there was a sprawling world tour career mode and a large selection of mini games, not to mention an in-game currency system that let you buy items for your player, as well as front entry fees for tennis tournaments.
    Tennis games have come a long way since VT2, and it has been surpassed technically, but many DC owners will always prefer the long rallies and court-busting smashes that Virtua Tennis 2 delivered.
    20. Mars Matrix: Hyper Solid Shooting
    Look up "pulse pounding" in the dictionary and you'll probably find a screenshot of Mars Matrix. This loud and proud shooter was bright, bold, and hard as nails. Fast-paced action was the order of the day, and the game threw enemies and bullets at you like there was no tomorrow.
    In fact, this was a classic example of a bullet hell shooter, as there were often so many projectiles on screen at once, you could barely see the backgrounds. And, if you wanted to unlock items from the game's store, you'd have to put in the work, as things cost a fortune.
    19. Chu Chu Rocket
    All European DC owners had Chu Chu Rocket, as it came with the console, and although it wasn't all that technically impressive, the fast-paced puzzle gameplay more than made up for it.
    Your goal was simple: place arrows on the game grid to guide mice into the rockets while avoiding the Picasso-like cats. Once the rockets were full, the mice would launch to safety. It was simple, but deviously challenging puzzling, and the single player was an entertaining distraction.
    The multiplayer is what really made the game, though. The goal was to save as many mice as possible, while sending cats over to your opponent, thus causing them all sorts of grief. The most mice saved would ensure the win.
    Unlike a lot of puzzlers at the time, which were often clones of existing games, Chu Chu Rocket was an original creation, and it was well worth a shot, especially if you had friends to join in on the fun.
    18. Metropolis Street Racer
    The forerunner of Project Gotham Racing, Metropolis Street Racer was a very ambitious title that recreated London, Tokyo, and San Francisco down to the smallest detail. It featured a brilliantly robust driving engine, and this was complemented by the kudos system that rewarded players for stylish driving.
    Points earned via the kudos system were used to buy new cars as well as new tracks. So successful were the game's mechanics that they've been reproduced time and time again, including in Bizarre Creations' own aforementioned Project Gotham Racing series and Blur, as well as in other series.
    17. Dead or Alive 2
    Yes, you may primarily remember Dead or Alive 2 for its very antiquated approach to designing female characters, which hasn't aged well, but underneath all that was a great fighting game packed with game modes.
    Dead or Alive 2 was a consistently smooth scrapper that featured a complex fighting system, large arenas, and varied combatants, including Ninja Gaiden's Ryu Hayabusa. It may not have been quite as interesting as the likes of Soulcalibur, or quite as technical as Virtua Fighter, but it was a very popular series, and DOA2 on the Dreamcast was one of the best fighters at the time.
    16. Grandia II
    A real fan favorite, Grandia II was a Final Fantasy-inspired RPG that featured a perfectly implemented combat system and some great storytelling.
    The combat system followed the Final Fantasy formula pretty much down the line, but mixed in some limited character movement, which made things feel a little more dynamic. The system also used a timer that dictated when both allies and enemies could attack. Careful timing could allow you to cancel out your opponent's attacks, you could perform combination moves, and counters were also possible. It wasn't revolutionary, but it was different and slick enough to make the game stand out, and the anime effects were a nice additional touch.
    The game was released on other formats later, but these are considered by most to be inferior to the original Dreamcast release.
    15. House of the Dead 2
    The light gun game was once an impressive feat of technical wizardry, wowing players picking up their plastic gun peripherals for the first time to shoot onscreen enemies. Some of that novelty has of course worn off with the arrival of wireless controllers, but back in the Dreamcast's day, this genre was still going very strong. House of the Dead 2 was one of the best around. Long before zombies were cool, Sega was slaying the undead with style, and the quality arcade blasting and impressive boss fights were complemented by a ridiculously cheesy plot and terrible voice acting. Classic.
    14. Headhunter
    Clearly Sega's attempt to reproduce Metal Gear Solid on its own platform, Headhunter was a fine game in its own right. As Jack Wade, a bounty hunter suffering from amnesia after waking up in a strange lab, you had to complete a series of missions, capturing the most deadly criminals in the city, while uncovering a seedy conspiracy.
    The game mixed third-person stealth and combat with puzzles and motorbike sections, and was a great alternative to Solid Snake's titles. Okay, so the bike sections were pretty superfluous (and were dropped for the sequel), and the controls and camera needed work, but the great story and quirky, futuristic world paired up with solid gameplay to make for a great game.
    13. Power Stone 2
    Capcom was the master of 2D combat at the time, but with Power Stone, it branched out into a very different kind of 3D fighter.
The original was one of the launch titles for the DC, but it was the second installment that managed to hit all of the right notes, as well as adding four-player battles to the mix. The game played very much like a 3D Smash Bros., and players had to fight to be the last one standing to win. During the bouts, the game's titular power stones could be collected, which could turn players into a powered-up form, giving them a big advantage. The game was pretty simple, but like so many things, this simplicity made it all the better, and it was one of the best party games on the platform.

12. Samba De Amigo
Without a doubt one of the best rhythm/dance games ever released, and certainly the most colorful, Samba De Amigo was like an acid-powered fitness trip. Packing in a selection of Latin beats, the game didn't simply rely on a dance mat to register the correct inputs, but instead utilized bright red maracas. It was a dance game with a difference, and one that became an instant party classic. Now selling for pretty high prices, the original Dreamcast version, or Ver. 2000, is the best release of the title. This is mostly down to the included maracas, which worked perfectly on the DC. The more recent Wii release of the game didn't come close to the DC title, and the control scheme was terrible; it also lacked maracas, using Wii remotes instead. If you can find the original game on eBay, or second hand for a decent price, the DC original is well worth picking up.

11. Jet Set Radio
The game that launched a thousand cel-shaders. Jet Set Radio may be remembered by most for its stunning and then-revolutionary cel-shaded visuals and funky soundtrack, but it should also be remembered for its excellent gameplay. As a spraycan-wielding in-line skater, your task was to skate around the futuristic city of Tokyo-to, throwing up graffiti tags and murals in order to rebel against the police and rival gangs. Your gang, the GGs, began with the main character, Beat, and eventually grew to include more playable skaters as the game progressed. The skating gameplay was similar in many ways to the Tony Hawk series: players could jump and grind their way through the detailed environments, and you pulled off tricks to increase skating speed. Small, gesture-based QTEs were used to spray graffiti, and players could even design their own graffiti, which could then be used in-game. The mixture of traditional skating gameplay and graffiti tagging worked well, and produced a game that not only looked impressive, but was a great, and challenging, title to boot.

10. Ikaruga
Ikaruga often finds itself in top games lists, and for good reason: it's one of the best shooters ever made. Simple. Created by the legendary studio Treasure, Ikaruga was, on the face of it anyway, a pretty standard shooter. However, the simple but highly effective twist of using a black and white color system for enemy projectiles elevated it above most of its compatriots. In the game, you were able to switch your ship's polarity from white to black and back again at will, with each color absorbing enemy projectiles of the same hue. And with the screen rapidly filling up with bullets of both colors, this was a skill you needed to master quickly. Deviously tricky, while being as simple a mechanic as you can get, it worked brilliantly and thrust the game to the top of many people's favorite shooter lists. Accompanied by a thumping soundtrack and some eye-scorching visuals, Ikaruga has been re-released several times, such is the demand for this shmup classic.
9. Rez
Rez is a game that's become something of a darling for gamers of a certain age. The ultra-stylish visuals accompany a superb, dynamic soundtrack, and both react to your progress and skill. As you progress, the world morphs from wireframe to fully shaded, and the music, which is near silent to begin with, gains more and more layers, eventually transforming into a thumping beat-fest. Your avatar also evolves over time. There may be a myriad of arty titles around these days, with the likes of Journey and The Unfinished Swan proving that there's a space for excessive style without compromising gameplay, but Rez was certainly one of the first and still one of the best.

8. Crazy Taxi (1 & 2)
The odds were, if you liked checkpoint racing, drifting, and The Offspring, you had Crazy Taxi. Sega's take on the good ol' cabbie certainly wasn't intended to be a realistic affair, and was ridiculous, over-the-top action. As one of several cab drivers, your goal was to race around the city, picking up fares and dropping them off as quickly as possible before your time ran out. Various mini games were also included in the Crazy Box mode, such as a taxi ski jump and bowling. The second game added more complex maps, a jump ability, and group fare pick-ups, but remained pretty much identical to the original, which many still consider superior. Crazy Taxi was the epitome of classic time and score attack gaming. It was one of those games you kept coming back to just to see if you could do better, and the unlockables and mini games only served to add even more to the addictive gameplay.

7. Resident Evil: Code Veronica
The Dreamcast may have only received lazy straight ports of the main Resident Evil series, but Capcom made up for this with the original release of Resident Evil: Code Veronica. A first for the series, Code Veronica introduced 3D environments that, while they still featured fixed cameras, managed to push the series into the next generation, and it even featured an unlockable first-person shooter mini game. Sticking with the traditional Resident Evil gameplay, Code Veronica was a long, two-part story that saw players control both Claire and Chris Redfield as they once again struggled against the sinister Umbrella Corporation. This time the primary antagonists were the Ashfords, a downright freaky family that was pivotal in Umbrella's creation and operations. Fan favorite Albert Wesker also returned, and the game introduced some great new enemies alongside familiar foes, such as the highly feared Hunters. Although the core gameplay played it pretty safe, this didn't stop this installment from becoming one of the best Resident Evil outings ever.

6. Marvel vs. Capcom 2
Even though the series has now been updated on modern consoles, most diehard fans still consider Marvel vs. Capcom 2 to be the best of the series, and it's almost universally considered to be one of the best 2D beat 'em ups ever created. Throwing together the best of Capcom's and Marvel's stables, this was a true gamer and comic aficionado's dream.
Gorgeous visuals accompanied one of the best fighting systems ever created, and the sheer scale of some of the special moves in the game, not to mention the combo potential, made for a truly spectacular scrapper, and one that's still used in tournaments today. Three-on-three combat was featured, and despite a mass of characters, most were very well balanced, and it took eons for players to unlock everything in the game's store, which included new characters, arenas, costumes, and artwork. Now released on many more platforms, including XBLA and PSN, this is considered by many to be the best traditional 2D fighter on the Dreamcast.

5. Phantasy Star Online
Even today, with consoles as powerful as the Xbox 360 and PlayStation 3, there have been very, very few MMORPGs outside of the PC world. However, the Dreamcast made one of the first attempts, and it was a real corker of a title. Phantasy Star Online (PSO) took Sega's long-running RPG series and turned it into an online multiplayer action role-player, making the most of the console's built-in modem. It wasn't a full, open-world affair like World of Warcraft, instead being a four-player, instance-based affair, but this didn't hold the title back, and it quickly gathered a large, loyal fanbase. As a hunter, your job was to leave the safe confines of your orbital colony ship and venture to the surface of planet Ragol, which was to be your new home. However, the planet was also home to a host of nasties, including the powerful Dark Falz. Spread across a handful of areas, the game was relatively short for an RPG, but this didn't matter, as replayability came in the form of questing with groups of friends, tackling harder and harder levels, and trading loot. The real-time combat, addictive loot collecting, and a text chat system that automatically translated set phrases into other languages all helped create a unique online RPG that's still going today in updated forms. PSO V2.0 added new areas, items, and character classes to the mix, as well as improvements to the game's lobby systems, and even though more recent incarnations have added all sorts of technical enhancements, it's the Dreamcast version that most fans fondly remember.

4. Soulcalibur
Although the Soulcalibur series had the most irritating announcer of any fighter, the weapon-based combat more than made up for it. Soulcalibur on the DC was not only one of the best fighters on the system (some would say the best), but is often cited as the best 3D scrapper ever created. The combat system was wonderfully implemented, and catered for total newcomers and seasoned veterans alike. It was easy to pick up and very hard to put down, and it also packed in a collection of game modes, including an expansive quest mode and a ton of special content. It wouldn't be a total exaggeration to call Soulcalibur a perfect fighting game, and even the later entries in the series have never really recaptured the overall quality and appeal of this incarnation.

3. Sonic Adventure 2
It's a widely held belief that Sega pretty much killed off its mascot when it made the jump to 3D, and it has to be said that many of the 3D Sonic games are pretty terrible. However, Sonic Adventure 2 on the Dreamcast was an exception to that rule. The first Sonic Adventure may have been a little rough around the edges, but the second game managed to refine the formula greatly. It featured multiple characters, various game styles, and an impressive presentation.
It was a slick 3D affair that was one of the few such Sonic games to recapture the feel of the originals while injecting something new. The pseudo-adventure elements of the first game were stripped out, leaving, for the better, a much more traditional action-oriented game. It also made much better use of the DC's VM units, thanks to the improved Chao Garden. It wasn't a perfect game, but it's one of the few great 3D Sonic games released, and to some, the only 3D outing worth playing.

2. Skies of Arcadia
Some call this the best Final Fantasy clone ever made, and others would argue it's even better than Square Enix's RPG series. Regardless of your view, what can't be argued is that Skies of Arcadia is one of the best examples of a turn-based RPG, period. The story of Blue Rogue air pirate Vyse and his friends is a sublime RPG masterpiece that contains a huge world, loveable characters, and plenty of secrets to discover. Add to that the ship-to-ship battles and the innovative VM minigame, and you had an epic adventure that simply wouldn't let you go. Sure, when compared to titles such as Final Fantasy, Skies of Arcadia is nowhere near as challenging, with some vastly overpowered specials (Aika's Delta Shield and Vyse's Skull Shield practically made your party invulnerable in most battles), but it was nonetheless a vastly enjoyable game that didn't settle for simply cloning other RPGs, but brought its own tricks to the party.

1. Shenmue (1 & 2)
No Dreamcast owner worth his or her salt should need introducing to Shenmue. Any self-respecting gamer should seek out Yu Suzuki's open-world masterpiece, preferably on its native Dreamcast. The story of Ryo Hazuki's quest for revenge was an epic tale that took place in a stunningly realized recreation of late '80s Japan and China. Via a third-person view, gamers explored a detailed world, conversed with NPCs, investigated leads, and even took part in Virtua Fighter-powered fights (the game was originally going to be a Virtua Fighter RPG). Few games of the time could even come close to the impressive technical achievements made by the game, and despite some missteps, such as the infamously dodgy English dub and a story that often left you waiting around for events to happen, it was a rock-solid adventure/RPG, and one that still stands as a perfect example of what Sega's internal studios were capable of.
  • The Penguin Ending Explained and What It Means for The Batman Part II
    www.denofgeek.com
This article contains spoilers for The Penguin episode 8 and The Batman.

In the final moments of The Batman, the young Dark Knight turns his bike away from Catwoman and back toward Gotham City, burdened by the work yet to be done. Although Batman never appears in The Penguin, the series further developed the world created by director Matt Reeves by following the unlikely rise of one Oz Cobb, derisively known as the Penguin. With Carmine Falcone out of the way, exposed as a rat by the Riddler and then murdered, Oz saw a chance to claim power for himself. Of course, he first has to deal with not just Falcone's son Alberto and rival boss Salvatore Maroni, but especially Falcone's daughter Sofia, who went to Arkham for murdering seven women. Over eight episodes, The Penguin portrayed a gang war between the mercurial Oz and a vengeful Sofia, who takes charge of the family, changing its name from Falcone to Gigante. Along the way, we see more of Gotham's underworld, including Arkham psychiatrist Dr. Julian Rush and sex worker Eve Karlo. Of course, The Penguin falls between The Batman and its upcoming sequel, which means that the show needs to set the stage for The Batman: Part II. So let's charge up the Bat-signal and examine the state of Gotham City after the Penguin waddles away.

Oz Triumphant
Although Colin Farrell stole every one of his scenes in The Batman, his character was a minor antagonist at best. The Riddler and Carmine Falcone posed the greatest threats to the Dark Knight, while the Penguin just showed up to bark out hilarious commentary about speaking Spanish. The Penguin reveals that people within the Gotham underworld think even less of Oz. He's a low-level driver in the Falcone family, derided for his appearance and his obvious desire for acceptance. The series links Oz's insecurity to his jealous desire to control his mother, an envy that goes so far that he murders his brothers to have her all to himself. In fact, Oz frames all of his power grabbing as a way to achieve her love. By the end of the finale, Oz has become the new power in Gotham. The Maroni and Falcone crime families are destroyed, Sofia is in Arkham, and Oz gets to dance the night away in his penthouse. Oz succeeds by consolidating the remaining Gotham families and overthrowing their bosses, positioning himself as the most powerful criminal in the city. Best of all, Oz has his mother Francis at his side, in a way.

Will Sofia Gigante and Catwoman Team Up?
The show might be called The Penguin, but Oz wasn't the real star. Instead, Cristin Milioti soon proved herself the most exciting person on screen, thanks to her ferocious performance as Sofia Falcone. After serving years in Arkham for the murder of seven women, crimes that earned her the nickname The Hangman, Sofia returns to Gotham ready for revenge against the real killer, her father Carmine. Sofia seemed poised to recreate the Gotham gangland in her own image. And she almost did it, too. But Sofia underestimated the odds stacked against women in her situation and soon found herself back in Arkham, under the control of her psychiatrist Dr. Julian Rush. The one bit of hope for Sofia comes in a letter she receives from her half-sister Selina Kyle, who not only knows what it's like to be a woman in Gotham's margins, but also understands the violence perpetrated by their father. Will they team up in The Batman: Part II?
Sofia isn't nearly as important a character in the comics, but no one would complain about getting more of Milioti dominating the screen.

Were There Any Other Villains in The Penguin?
Sofia Gigante isn't the only new baddie to join the world of The Batman in The Penguin. The series loved to make nods at other comic book characters, most notably Oz's moll Eve Karlo (Carmen Ejogo). As discussed elsewhere at Den of Geek, the name Karlo brings to mind Basil Karlo, the secret identity of the original Clayface. Before turning into a big brown glob of shapechanging goo, Clayface was a master of disguise who used classical theatrical tricks to commit crimes. Eve Karlo seems to use disguises as part of her sex work, as demonstrated by her dressing up like Oz's mother at the end of the finale. We don't ever see Eve harming people in disguise, but Oz may make use of her skills against Mr. Vengeance in the sequel. We got a couple of glimpses of other villains, including the mad scientist Dr. Bloom and the blissed-out petty thief Magpie. But the biggest potential big bad masqueraded as a good guy. At first, Dr. Julian Rush (Theo Rossi) seemed like a welcome bit of kindness for the beleaguered Sofia, locked away in Arkham. However, his soft-spoken demeanor soon revealed a lust for control, one that continued even after he seemed trapped under Sofia's thumb. Many had speculated that Rush would turn out to be an established Batman villain, maybe Jonathan Crane aka the Scarecrow, Julian Day aka the Calendar Man, or Dr. Hugo Strange. And at the end of The Penguin, Dr. Julian Rush is revealed to be Dr. Julian Rush, the same creepy doctor he's been all along. He's still at Arkham, so there's still time for him to create a fear toxin or shave his head. But Dr. Rush might just be a more banal and believable villain: a medical professional who uses his training to manipulate women.

What Happens to Victor Aguilar?
Not everything in The Penguin serves as setup for The Batman: Part II, or any other purpose for that matter. Victor Aguilar (Rhenzy Feliz) seemed like a sidekick for Oz, a henchman who would plague Batman in the sequel. But he gets strangled by Oz midway through the finale, his thematic purpose fulfilled (Feliz broke down Victor's fate here). Likewise, Francis Cobb had some potential to continue to plague Oz as he became the Penguin, but she's comatose by the end of the series. Perhaps the strangest loose end involves the climax of Sofia's attack on Oz, in which she blew up his drug headquarters. The Penguin depicts the outcome of the attack with 9/11 imagery, suggesting that a major act of destruction takes place. Furthermore, Oz used the response to the event to gain an in with a corrupt city official, which helped him put Sofia back into Arkham. Yet it's hard to see how the explosion will tie into The Batman: Part II. One gets the sense that the sequel will fold the explosion into one of the Riddler's attacks in The Batman, so the story won't alienate anyone who didn't watch The Penguin. These closed-off storylines aside, The Penguin still leaves Gotham a more dangerous place than where it began, with multiple supervillains in place of the fading mafia and an even more disillusioned populace. It's a good thing Batman stuck around, because he's going to have a lot more work to do when The Batman: Part II finally gets to theaters.

Every episode of The Penguin is now streaming on Max. The Batman: Part II is slated for a 2026 release.
  • Rise of the Golden Idol Is a Fitting Sequel to One of the Best Puzzle Games of Recent Years
    news.xbox.com
Let's get this out of the way: if you haven't played Case of the Golden Idol yet, you really should do that (hint: it's on Game Pass). Developer Color Gray's debut game is one of the most smartly conceived, intriguingly told puzzle games of recent years. Spinning the story of a seemingly magical artifact through a series of murder-mystery tableaus, you amass clues in each scene, then literally piece the story together yourself, word by word revealing an incredible fantasy-history tale of national conspiracy, political wrongdoing, and dangerous technology in the process. There was simply nothing quite like it, until now.

The sequel, Rise of the Golden Idol (which arrives on Xbox Series X|S tomorrow), is left in a pretty unique position as a result. How do you follow up a beloved one-off, retaining what made it special but expanding its world and ideas meaningfully? Color Gray has managed a truly deft balance here: this is effectively the game you know, but a setting you really, really don't. Far from continuing where we left off, Rise picks up whole centuries after the events of the first game, in this world's equivalent of the 1970s. For fans of Case, it means that, even if you have all the backstory, you're just as lost in the context of this world, giving you the thrill of putting all the pieces back together once again.

This is a world where the very real Idol of the first game has not only become a myth, but has been physically broken into pieces, and we discover what happens when people begin putting them back together again. To give away any more would spoil the surprises, but it's safe to say that while it deals in similar themes to the first game, this is a very different story.

At first glance, you might assume that this is a very similar game, however. The basic formula remains: each level gives you a look at a very specific (almost always violent) moment in time, and offers the option to click on the people and objects in that moment, picking up clues. These could be names, objects, or associated verbs. In each scene, you're given distinct puzzles: figuring out who each character is, what their jobs might be in that moment and, almost always, a final conclusion as to what's happened in the run-up to the scene. Almost every one of the dozens of interactive elements will be relevant to that final conclusion, and you'll need to use the bulk of the words you've amassed to put that together. Despite an upgraded art style (retaining the Hogarthian caricatures of characters, but placing them in a grubbier urban context), it's very familiar. Until you reach the end of a chapter.

This is Rise of the Golden Idol's key new feature: every chapter contains multiple scenes but, once you've completed them all, you're given an overarching meta-puzzle. Using all the information you collected from each scene, you then need to work out the story of what was going on across the full chapter itself, often revealing twists you'd never have expected. You'll likely need to revisit each scene to do so, jumping from moment to moment (and perhaps even the cutscenes and clues presented between those moments) to re-establish whether, say, that particular item which felt like a red herring at the time was actually a major clue you didn't know you needed.

It's a truly smart piece of extra design: nothing that you loved about the first game has been changed, there's just way more of it all of a sudden. That's also a summation of the game as a whole, and perhaps the greatest recommendation I can give for Rise of the Golden Idol.
It's a brilliant sequel to a brilliant game; when something was this good already, who wouldn't want more of it?