• WWW.GAMESINDUSTRY.BIZ
    Marvel's Spider-Man 2 and Half-Life actor Tony Todd dies aged 69
    "It was truly awe-inspiring getting the opportunity to know and work with him," says MachineGames. Image credit: Insomniac Games. News by Vikki Blake, Contributor. Published on Nov. 11, 2024.
    Prolific actor Tony Todd has died aged 69. Todd - whose most recent work, Indiana Jones and the Great Circle, isn't even out yet - was one of the industry's most distinctive voices, known best for Marvel's Spider-Man 2's Venom and Half-Life's Vortigaunts. He also voiced the narrator of Bloober Team's Layers of Fear 2, as well as Dota 2's Dragon Knight, Night Stalker, and Viper, Black Ops 2's Admiral Tommy Briggs, and Doctor Rogers in Back 4 Blood.
    In tribute, Insomniac Games said it was "heartbroken by the passing of [its] friend, Tony Todd", adding, "he brought so much joy to our studio during the production of Marvel's Spider-Man 2 and to many fans around the world with his inimitable voice and presence."
    Bloober Team posted on Twitter/X that it "mourn[s] the loss of Tony Todd, the voice that drew us into the shadows of Layers of Fear 2." "Rest now, Tony - you are forever entwined with the darkness we braved to face," the horror studio added.
    MachineGames said: "We had the great privilege of working with Tony Todd and will miss him dearly. Sending our condolences to his family, friends, and many fans around the world."
    "The tragic news of Tony Todd's passing has struck all of us at MachineGames with sadness," said MachineGames' creative director, Axel Torvenius. "Although we only had the privilege of working with Tony for a short while, the relationship we built with him felt like meeting an old friend we didn't know we had.
    "Tony's kindness, warmth, professionalism, and the vast experience of his career inspired us, and filled us with respect and love for him. It was truly awe-inspiring getting the opportunity to know and work with him.
    "We at MachineGames would like to send our deepest condolences and love to Tony Todd's closest family and friends. Every heart at MachineGames will cherish having known him."
  • WWW.GAMESINDUSTRY.BIZ
    How Nintendo's past will shore up Switch 2's future | Microcast
    Latest episode, available to download now, also discusses the Private Division sale, Concord's failings, and more. News by James Batchelor, Editor-in-chief. Published on Nov. 11, 2024.
    The latest GamesIndustry.biz Microcast is now available to download, bringing you a quick dive into the biggest news from the past week.
    Our main topic this week is Nintendo's long-awaited (but very much expected) confirmation that Switch 2 will be backwards compatible with both its predecessor's software library and its online service. We discuss the advantages and disadvantages of this crossover, as well as the importance of backwards compatibility when launching a new console in today's industry.
    And financials season continues, bringing with it a host of related stories and discussions. This week, we touch on Take-Two's decision to sell Private Division, Sony leadership's thoughts on why Concord failed, the independence of Amplitude Studios, and more.
    You can listen via the player below, download the audio file directly here, or subscribe to our podcast feed, available via Spotify, iTunes, Amazon Music, CastBox, Player FM, TuneIn and other widely-used podcast platforms. The Microcast can also be found on the GamesIndustry.biz YouTube channel, or via this playlist. Episode edited by Alix Attenborough.
  • WWW.GAMEDEVELOPER.COM
    Ubisoft sued for taking The Crew offline and 'duping' players
    A pair of California-based players are suing Ubisoft for permanently putting The Crew in the garage.
    Ubisoft's 2014 racing game was shut down this past March due to "server infrastructure and licensing restraints." In their filed suit, plaintiffs Matt Cassell and Alan Liu (who bought the game in 2020 and 2018, respectively) claim they were two of thousands of players "[left] with a skeleton of what you thought you paid for."
    This is the second time this year Ubisoft has been sued by its players. In early October, the Assassin's Creed maker was accused of illegally sharing user data with Meta via account linking.
    Liu and Cassell's lawsuit condemned Ubisoft for letting players think they were buying The Crew to own, rather than "renting a limited license" to access the title. It also accused the developer of "duping" players with the idea of the game being playable offline, either through physical discs or its digital version. Had the pair (or any player) known the studio would shut The Crew down whenever it wanted, they say they "would have paid substantially less for the Product or not have purchased it at all."
    "[Ubisoft] intended consumers to rely on their representations and omissions in making their purchasing decisions. Through their conduct, [Ubisoft] have violated California state consumer protection laws," the suit reads. The duo are seeking monetary relief and damages for themselves and other players affected by the shutdown.
    Do not go gentle into that offline
    Over the past two years, countless online games have been shut down. Some had been around for years, others only a few weeks, and the practice has become increasingly common as more and more titles are taken offline. A California law passed this past September requires retailers to say digital items (like games or music) are merely licensed rather than actually bought, and that most online games specifically must come with a warning that they could be shut down at any moment.
    As this relates to Ubisoft specifically, the original Crew's shutdown was explicitly cited as the inspiration for the California law. However, this suit is further complicated by the studio having already acknowledged the negative reaction to terminating the first Crew game by working to implement individual offline modes for 2018's The Crew 2 and 2023's The Crew Motorfest.
    Meanwhile, The Crew players have decided to just remake the first game themselves. Back in June, TheGamer covered the player-made The Crew Unlimited, a recreation of the original title that will have offline functionality.
  • WWW.GAMEDEVELOPER.COM
    Roblox's new safety updates keep teen players from unrated experiences
    Justin Carter, Contributing Editor. November 11, 2024. 2 min read. [Image via Roblox]
    At a glance: Roblox Corp.'s new child safety measures begin on November 18 and address longstanding concerns about the creation platform.
    Roblox Corp. has finally detailed its new safety measures meant to protect its predominantly young playerbase.
    These new policies are intended to address concerns (and previous high-profile stories) about Roblox's lack of proper protection for its child and teen players, which has resulted in at least two lawsuits. Roblox Corp. often denied or dismissed these reports at the time of their publication, insisting that keeping its players safe was a top priority amongst its staff.
    The biggest change concerns visibility for user-made creations. For those aimed at players age 13 and younger, creators will be required by December 3 to complete a questionnaire for each individual experience. All information on the page (like the description and title) will have to be "appropriate for all users," and any creation without a finished form will be "unplayable, unsearchable, and undiscoverable" by players 13 and under, but still accessible with a direct link.
    In its blog, the company said this will "ensure parents and users have more clarity into the types of content available on Roblox and will help them make more informed choices about what they want to play."
    Roblox is ready to play safely with its playerbase
    Roblox Corp.'s blog also confirmed that as of next Monday, November 18, social hangouts and free-form 2D creations (which let players replicate their written or drawn 2D creations without going through the moderation process) will only be available to players over 13 years old, to "address user behavior that can potentially pose a risk to our youngest users."
    In late October, Bloomberg reported Roblox Corp. was aiming to enact new child protection methods, such as requiring players 13 and younger to have parental permission to access Roblox's in-game social features. During its recent earnings call, the company reaffirmed it would "invest in technology, policies, and partnerships to pursue the highest standards of trust and safety on our platform."
    Going forward, the developer said it "envisions the questionnaire becoming more closely integrated into the publishing process."
    More information on Roblox's new safety measures can be read here.
  • WWW.THEVERGE.COM
    Google's AI learning companion takes chatbot answers a step further
    Google has launched an experimental new AI tool called Learn About, which is different from the chatbots we're used to, like Gemini and ChatGPT. It's built on the LearnLM AI model that Google introduced this spring, saying it's grounded in educational research and tailored to how people learn. The answers it provides have more visual and interactive elements with educational formatting.
    We tested Learn About and Google Gemini with a simple prompt: "How big is the universe?" Both answered that the observable universe is about 93 billion light-years in diameter.
    However, while Gemini opted to show a Wikipedia-provided diagram of the universe and a two-paragraph summary with links to sources, Learn About emphasized an image from the educational site Physics Forums and added related content that was similarly focused more on learning than simply offering facts and definitions.
    [Image: Learn About's answer to "How big is the universe?" Screenshot: Jake Kastrenakes / The Verge]
    [Image: Gemini's answer to "How big is the universe?" Screenshot: Jake Kastrenakes / The Verge]
    Learn About's response also created textbook-style boxes that give you additional context, like "why it matters," and ones that help you "build your vocab" with word definitions. In the sidebar, additional topics appear to continue exploring using the tool.
    We also asked Learn About "What's the best kind of glue to put on a pizza?" (Google's AI search overviews have struggled with this one in the past), and it managed to get that one right, even if the "common misconception" sticker makes us wonder how many times this question has been asked.
    [Image: Learn About tries to explain why you shouldn't put glue on pizza. Screenshot: Richard Lawler / The Verge]
  • WWW.THEVERGE.COM
    Valve finally made a white Steam Deck that you can actually buy
    Nearly three years to the day after teasing the world with a white version of the Steam Deck, Valve has finally decided to release the normally black handheld gaming PC in that color too. The limited-edition white model is going on sale for $679 on November 18th at 3PM PT / 6PM ET, everywhere the handheld is sold, including Australia and the various regions of Asia served by Komodo.
    It's no different on the inside than a normal model, says Valve: "Steam Deck OLED: Limited Edition White has all the same specs as the Steam Deck OLED 1TB model, but in white and grey. It also comes with an exclusive white carrying case and white microfiber cleaning cloth."
    Since the 1TB OLED normally costs $649, you're effectively paying $30 for the color. Valve says it's allocated stock proportionally across each region, but once it's sold out, it won't be making any more. Below, find a few more images of it direct from Valve.
    I still highly recommend the Steam Deck OLED, though I could see some buyers picking an Asus ROG Ally X instead for its notable performance and decent battery life advantages, particularly if they decide to dual-boot the Bazzite operating system (which makes it feel a lot like a Steam Deck) alongside Windows. (Yes, the ROG Ally X is a black variant of an originally white handheld, and this Steam Deck is the opposite.)
    Here's what the old Valve prototype looked like, straight out of Portal with an Aperture Science logo on the back:
    [GIF: It's not for sale. GIF by Sean Hollister / The Verge]
    Here's hoping someone will print up some high-quality Portal stickers, and perhaps we can add our own orange and blue Portal thumbstick covers or something.
  • GAMEFROMSCRATCH.COM
    C++ Scripting in Godot with J.E.N.O.V.A
    C++ Scripting in Godot with J.E.N.O.V.A / News / November 11, 2024
    There is a new GDExtension for the Godot game engine called Projekt J.E.N.O.V.A (no idea what the acronym stands for). This extension brings C++ scripting inside the Godot game engine, just as you currently can script with GDScript or C#. It is also a very work-in-progress extension with very little documentation, so buyer beware.
    Features of Projekt J.E.N.O.V.A include:
    - Super Lightweight (6MB)
    - Very Fast & Reliable
    - Multi-Threaded Compilation & Source Caching
    - Debug Information Support
    - Built-in Package Manager (Compilers, SDKs, Tools, Plugins etc.)
    - C++ Scripts can be used exactly like GDScripts
    - Supports Script Templates (Pre-Defined/User-Defined)
    - Supports Built-in Script Mode (Embedded)
    - Supports C++ Tool Script Mode (In-Editor Execution)
    - Supports Exporting Properties from C++ Scripts
    - Multiple Interpreter Backends (NitroJIT, Meteora, A.K.I.R.A etc.)
    - Next-Gen Hot-Reloading Both at Runtime & Editor
    - Real-Time GDExtension Development
    - Operating System Emulation (Unix/WinNT)
    - Visual Studio Side-by-Side Deep-Integration
    - Visual Studio Exporter & Build System (2017-2022)
    - Auto Detection of Installed Visual Studios
    - Supports External Libraries and .NET Invoke
    - Watchdog System (Reload-On-Build)
    - Built-in Terminal Logging System (Customizable)
    - Asset Monitor System API (File/Directory Tracking)
    - On-Demand Reload Per Script Change
    - Lambda Signal Callbacks
    - Advanced Menu Options
    - Supports Additional/External Headers & Libraries
    - Build And Run Mode (Build Before Play/Play After Build)
    - Code Compression/Encryption (External/Built-in)
    - Direct GetNode & GetTree API
    - User Defined Preprocessor Definitions
    - Supports In-Editor C++ Headers
    - Module Boot/Shutdown Events
    - Supports C++ Headers Directly Inside Editor
    - Supports Scene Node Referencing
    - Supports Source Control using Git
    - And Much More!
    The source code is available under the MIT license and is hosted on GitHub.
    Key Links: J.E.N.O.V.A GitHub | Discord Server
    You can learn more about the GDExtension Projekt J.E.N.O.V.A that brings C++ scripting to the Godot game engine in the video below.
  • WWW.MARKTECHPOST.COM
    Qwen Open Sources the Powerful, Diverse, and Practical Qwen2.5-Coder Series (0.5B/1.5B/3B/7B/14B/32B)
    In the world of software development, there is a constant need for more intelligent, capable, and specialized coding language models. While existing models have made significant strides in automating code generation, completion, and reasoning, several issues persist: inefficiency in dealing with a diverse range of coding tasks, a lack of domain-specific expertise, and difficulty in applying models to real-world coding scenarios. Despite the rise of many large language models (LLMs), code-specific models have often struggled to compete with their proprietary counterparts, especially in terms of versatility and applicability. The need for a model that not only performs well on standard benchmarks but also adapts to diverse environments has never been greater.
    Qwen2.5-Coder: A New Era of Open CodeLLMs
    Qwen has open-sourced the powerful, diverse, and practical Qwen2.5-Coder series, dedicated to continuously promoting the development of open CodeLLMs. The Qwen2.5-Coder series is built upon the Qwen2.5 architecture, leveraging its advanced architecture and expansive tokenizer to enhance the efficiency and accuracy of coding tasks. Qwen has made a significant stride by open-sourcing these models, making them accessible to developers, researchers, and industry professionals. This family of coder models offers a range of sizes from 0.5B to 32B parameters, providing flexibility for a wide variety of coding needs. The release of Qwen2.5-Coder-32B-Instruct comes at an opportune moment, presenting itself as the most capable and practical coder model of the Qwen series. It highlights Qwen's commitment to fostering innovation and advancing the field of open-source coding models.
    Technical Details
    Technically, Qwen2.5-Coder models have undergone extensive pretraining on a vast corpus of over 5.5 trillion tokens, which includes public code repositories and large-scale web-crawled data containing code-related texts. The model architecture is shared across the different model sizes (for example, the 1.5B and 7B variants both feature 28 layers), with variances in hidden sizes and attention heads. Moreover, Qwen2.5-Coder has been fine-tuned using synthetic datasets generated by its predecessor, CodeQwen1.5, incorporating an executor to ensure only executable code is retained, thereby reducing hallucination risks. The models have also been designed to be versatile, supporting various pretraining objectives such as code generation, completion, reasoning, and editing.
    State-of-the-Art Performance
    One of the reasons Qwen2.5-Coder stands out is its demonstrated performance across multiple evaluation benchmarks. It has consistently achieved state-of-the-art (SOTA) performance in over 10 benchmarks, including HumanEval and BigCodeBench, surpassing even some larger models. Specifically, Qwen2.5-Coder-7B-Base achieved higher accuracy on the HumanEval and MBPP benchmarks than models like StarCoder2 and DeepSeek-Coder of comparable or even greater size. The Qwen2.5-Coder series also excels in multi-programming-language capabilities, demonstrating balanced proficiency across eight languages, such as Python, Java, and TypeScript. Additionally, Qwen2.5-Coder's long-context capabilities are notably strong, making it suitable for handling repository-level code and effectively supporting inputs up to 128K tokens.
    Scalability and Accessibility
    Furthermore, the availability of models in various parameter sizes (ranging from 0.5B to 32B), along with the option of quantized formats like GPTQ, AWQ, and GGUF, ensures that Qwen2.5-Coder can cater to a wide range of computational requirements. This scalability is crucial for developers and researchers who may not have access to high-end computational resources but still need to benefit from powerful coding capabilities. Qwen2.5-Coder's versatility in supporting different formats makes it more accessible for practical use, allowing for broader adoption in diverse applications (a minimal loading sketch appears at the end of this item). Such adaptability makes the Qwen2.5-Coder family a vital tool for promoting the development of open-source coding assistants.
    Conclusion
    The open-sourcing of the Qwen2.5-Coder series marks a significant step forward in the development of coding language models. By releasing models that are powerful, diverse, and practical, Qwen has addressed key limitations of existing code-specific models. The combination of state-of-the-art performance, scalability, and flexibility makes the Qwen2.5-Coder family a valuable asset for the global developer community. Whether you are looking to leverage the capabilities of a 0.5B model or need the expansive power of a 32B variant, the Qwen2.5-Coder family aims to meet the needs of a diverse range of users. Now is the perfect time to explore the possibilities with Qwen's best coder model yet, the Qwen2.5-Coder-32B-Instruct, as well as its versatile family of smaller coders. Let's welcome this new era of open-source coding language models that continue to push the boundaries of innovation and accessibility.
    Check out the Paper, Models on Hugging Face, Demo, and Details. All credit for this research goes to the researchers of this project.
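    For readers who want to try the models locally, here is a minimal generation sketch using the Hugging Face transformers library. The checkpoint ID below is an assumption based on the article's Hugging Face links; swap in whichever size fits your hardware.
```python
# Minimal sketch: load a Qwen2.5-Coder instruct checkpoint and generate a completion.
# The model ID is an assumption based on the article's Hugging Face links.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-Coder-7B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",  # device_map needs the `accelerate` package installed
)

messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."},
]
# Instruct checkpoints ship a chat template; apply it before tokenizing.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```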
  • TOWARDSAI.NET
    Building an Interactive Chatbot For Pre-Existing Questions with LLM Integration to Chat with multiple CSV Files
    November 11, 2024. Author(s): Ganesh Bajaj. Originally published on Towards AI. [Image: Streamlit UI, illustrated by the author]
    There are multiple types of chatbots:
    - Rule-based chatbot
    - RAG-based chatbot
    - Hybrid chatbot
    This article covers how to create a chatbot using Streamlit that answers questions from a pre-existing question-answer dataset, along with an LLM integration for a CSV file. Basically, the chatbot is a hybrid type designed to handle both known and unknown questions. This article will give you a good starting point, with an understanding of how the chatbot works with different types of output and error handling using Streamlit.
    The bot first tries to match the input to a saved question and, if no match is found, uses an LLM model to generate a relevant response. We'll walk through the steps to build this chatbot, highlighting key features such as similarity-based search, error handling, and LLM query support.
    To make the chatbot quick and responsive, we store question-answer pairs in JSON format so that they can be directly referenced when a user query is similar to an existing question. The qna.json file contains a list of dictionaries, each with a question ("query") and corresponding response data ("response"). An example structure in qna.json might look like this:
    [ { "query": "Enter your question here", "response": ... } ]
    The full article is available on Medium.
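    The rest of the walkthrough sits behind Medium's paywall, but the flow it describes (check the stored Q&A pairs first, fall back to the LLM otherwise) can be sketched roughly as follows. The file name, the SequenceMatcher-based similarity, the 0.75 threshold, and the ask_llm() stub are illustrative assumptions, not the author's code.
```python
# Toy sketch of the hybrid flow described above: try the stored Q&A pairs first,
# fall back to an LLM only when no saved question is similar enough.
import json
from difflib import SequenceMatcher

with open("qna.json", encoding="utf-8") as f:
    qna_pairs = json.load(f)  # list of {"query": ..., "response": ...} dicts

def ask_llm(user_query: str) -> str:
    # Placeholder: this is where the article's LLM call over the CSV files would go.
    raise NotImplementedError("Plug in your LLM / CSV-retrieval call here.")

def best_match(user_query: str, threshold: float = 0.75):
    """Return the stored response whose question is most similar to the input, or None."""
    best_score, best_response = 0.0, None
    for pair in qna_pairs:
        score = SequenceMatcher(None, user_query.lower(), pair["query"].lower()).ratio()
        if score > best_score:
            best_score, best_response = score, pair["response"]
    return best_response if best_score >= threshold else None

def answer(user_query: str) -> str:
    matched = best_match(user_query)
    if matched is not None:
        return matched
    try:
        return ask_llm(user_query)
    except Exception as exc:
        # Basic error handling so the Streamlit UI always gets something to display.
        return f"Sorry, I couldn't answer that right now ({exc})."
```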
  • TOWARDSAI.NET
    Why Do Neural Networks Hallucinate (And What Are Experts Doing About It)?
    Author(s): Vitaly Kukharenko. Originally published on Towards AI.
    AI hallucinations are a strange and sometimes worrying phenomenon. They happen when an AI, like ChatGPT, generates responses that sound real but are actually wrong or misleading. This issue is especially common in large language models (LLMs), the neural networks that drive these AI tools. They produce sentences that flow well and seem human, but without truly understanding the information they're presenting. So, sometimes, they drift into fiction. For people or companies who rely on AI for correct information, these hallucinations can be a big problem: they break trust and sometimes lead to serious mistakes.
    [Image by Freepik Premium: https://www.freepik.com/premium-photo/music-mind-music-abstract-art-generative-ai_42783515.htm]
    So, why do these models, which seem so advanced, get things so wrong? The reason isn't only about bad data or training limitations; it goes deeper, into the way these systems are built. AI models operate on probabilities, not concrete understanding, so they occasionally guess, and guess wrong. Interestingly, there's a historical parallel that helps explain this limitation. Back in 1931, a mathematician named Kurt Gödel made a groundbreaking discovery. He showed that every consistent mathematical system has boundaries: some truths can't be proven within that system. His findings revealed that even the most rigorous systems have limits, things they just can't handle.
    Today, AI researchers face this same kind of limitation. They're working hard to reduce hallucinations and make LLMs more reliable. But the reality is, some limitations are baked into these models. Gödel's insights help us understand why even our best systems will never be totally trustworthy. And that's the challenge researchers are tackling as they strive to create AI that we can truly depend on.
    Gödel's Incompleteness Theorems: A Quick Overview
    In 1931, Kurt Gödel shook up the worlds of math and logic with two groundbreaking theorems. What he discovered was radical: in any logical system that can handle basic math, there will always be truths that can't be proven within that system. At the time, mathematicians were striving to create a flawless, all-encompassing structure for math, but Gödel proved that no system could ever be completely airtight.
    [Image: source http://www.arithmeum.uni-bonn.de/en/events/285, Public Domain, via Wikimedia Commons: https://commons.wikimedia.org/w/index.php?curid=120309395]
    Gödel's first theorem showed that every logical system has questions it simply can't answer on its own. Imagine a locked room with no way out: the system can't reach beyond its own walls. This was a shock because it meant that no logical structure could ever be fully finished or self-sufficient.
    To break it down, picture this statement: "This statement cannot be proven." It's like a brain-twisting riddle. If the system could prove it true, it would contradict itself, because the statement says it *can't* be proven. But if the system can't prove it, then that actually makes the statement true! This little paradox sums up Gödel's point: some truths just can't be captured by any formal system.
    Then Gödel threw in another curveball with his second theorem. He proved that a system can't even confirm its own consistency. Think of it as a book that can't check if it's telling the truth. No logical system can fully vouch for itself and say, "I'm error-free."
    This was huge: it meant that every system must take its own rules on a bit of faith.
    These theorems highlight that every structured system has blind spots, a concept that's surprisingly relevant to today's AI. Take large language models (LLMs), the AIs behind many of our tech tools. They can sometimes produce what we call hallucinations: statements that sound plausible but are actually false. Like Gödel's findings, these hallucinations remind us of the limitations within AI's logic. These models are built on patterns and probabilities, not actual truth. Gödel's work serves as a reminder that, no matter how advanced AI becomes, there will always be some limits we need to understand and accept as we move forward with technology.
    What Causes AI Hallucinations?
    AI hallucinations are a tricky phenomenon with roots in how large language models (LLMs) process language and learn from their training data. A hallucination, in AI terms, is when the model produces information that sounds believable but isn't actually true.
    So, why do these hallucinations happen? First, it's often due to the quality of the training data. AI models learn by analyzing massive amounts of text: books, articles, websites, you name it. But if this data is biased, incomplete, or just plain wrong, the AI can pick up on these flaws and start making faulty connections. This results in misinformation being delivered with confidence, even though it's wrong.
    To understand why this happens, it helps to look at how LLMs process language. Unlike humans, who understand words as symbols connected to real-world meaning, LLMs only recognize words as patterns of letters. As Emily M. Bender, a linguistics professor, explains: if you see the word "cat", you might recall memories or associations related to real cats. For a language model, however, "cat" is just a sequence of letters: C-A-T. The model then calculates what words are statistically likely to follow based on the patterns it learned, rather than from any actual understanding of what a cat is.
    Generative AI relies on pattern matching, not real comprehension. Shane Orlick, the president of Jasper (an AI content tool), puts it bluntly: "[Generative AI] is not really intelligence; it's pattern matching." This is why models sometimes hallucinate information. They're built to give an answer, whether or not it's correct.
    The complexity of these models also adds to the problem. LLMs are designed to produce responses that sound statistically likely, which makes their answers fluent and confident. Christopher Riesbeck, a professor at Northwestern University, explains that these models always produce something statistically plausible. Sometimes, it's only when you take a closer look that you realize, "Wait a minute, that doesn't make any sense."
    Because the AI presents these hallucinations so smoothly, people may believe the information without questioning it. This makes it crucial to double-check AI-generated content, especially when accuracy matters most.
    Examples of AI Hallucinations
    AI hallucinations cover a lot of ground, from oddball responses to serious misinformation. Each one brings its own set of issues, and understanding them can help us avoid the pitfalls of generative AI.
    1. Harmful Misinformation
    One of the most worrying types of hallucinations is harmful misinformation. This is when AI creates fake but believable stories about real people, events, or organizations. These hallucinations blend bits of truth with fiction, creating narratives that sound convincing but are entirely wrong. The impact?
    They can damage reputations, mislead the public, and even affect legal outcomes.
    Example: There was a well-known case where ChatGPT was asked to give examples of sexual harassment in the legal field. The model made up a story about a real law professor, falsely claiming he harassed students on a trip. Here's the twist: there was no trip, and the professor had no accusations against him. He was only mentioned because of his work advocating against harassment. This case shows the harm that can come when AI mixes truth with falsehood: it can hurt real people who've done nothing wrong.
    [Image by Freepik Premium: https://www.freepik.com/free-ai-image/close-up-ai-robot-trial_94951579.htm]
    Example: In another incident, ChatGPT incorrectly said an Australian mayor was involved in a bribery scandal in the '90s. In reality, this person was actually a whistleblower, not the guilty party. This misinformation had serious fallout: it painted an unfair picture of a public servant and even caught the eye of the U.S. Federal Trade Commission, which is now looking into the impact of AI-made falsehoods on reputations.
    Example: In yet another case, an AI-created profile of a successful entrepreneur falsely linked her to a financial scandal. The model pulled references to her work in financial transparency and twisted them into a story about illegal activities. Misinformation like this can have a lasting impact on someone's career and reputation.
    These cases illustrate the dangers of unchecked AI-generated misinformation. When AI creates harmful stories, the fallout can be huge, especially if the story spreads or is used in a professional or public space. The takeaway? Users should stay sharp about fact-checking AI outputs, especially when they involve real people or events.
    2. Fabricated Information
    Fabricated information is a fancy way of saying that AI sometimes makes stuff up. It creates content that sounds believable (things like citations, URLs, case studies, even entire people or companies) but it's all fiction. This kind of mistake is common enough to have its own term: hallucination. And for anyone using AI to help with research, legal work, or content creation, these hallucinations can lead to big problems.
    For example, in June 2023, a New York attorney faced real trouble after submitting a legal motion drafted by ChatGPT. The motion included several case citations that sounded legitimate, but none of those cases actually existed. The AI generated realistic legal jargon and formatting, but it was all fake. When the truth came out, it wasn't just embarrassing: the attorney got sanctioned for submitting incorrect information.
    Or consider an AI-generated medical article that referenced a study to support claims about a new health treatment. Sounds credible, right? Except there was no such study. Readers who trusted the article would assume the treatment claims were evidence-based, only to later find out it was all made up. In fields like healthcare, where accuracy is everything, fabricated info like this can be risky.
    Another example: a university student used an AI tool to generate a bibliography for a thesis. Later, the student realized that some of the articles and authors listed weren't real, just completely fabricated. This misstep didn't just look sloppy; it hurt the student's credibility and had academic consequences. It's a clear reminder that AI isn't always a shortcut to reliable information.
    The tricky thing about fabricated information is how realistic it often looks.
    Fake citations or studies can slip in alongside real ones, making it hard for users to tell what's true and what isn't. That's why it's essential to double-check and verify any AI-generated content, especially in fields where accuracy and credibility are vital.
    3. Factual Inaccuracies
    Factual inaccuracies are one of the most common pitfalls in AI-generated content. Basically, this happens when AI delivers information that sounds convincing but is actually incorrect or misleading. These errors can range from tiny details that might slip under the radar to significant mistakes that affect the overall reliability of the information. Let's look at a few examples to understand this better.
    Take what happened in February 2023, for instance. Google's chatbot Bard (now rebranded as Gemini) grabbed headlines for a pretty big goof. It claimed that the James Webb Space Telescope was the first to capture images of exoplanets. Sounds reasonable, right? But it was wrong. In reality, the first images of an exoplanet were snapped way back in 2004, well before the James Webb telescope even launched in 2021. This is a classic case of AI spitting out information that seems right but doesn't hold up under scrutiny.
    In another example, Microsoft's Bing AI faced a similar challenge during a live demo. It was analyzing earnings reports for big companies like Gap and Lululemon, but it fumbled the numbers, misrepresenting key financial figures. Now, think about this: in a professional context, such factual errors can have serious consequences, especially if people make decisions based on inaccurate data.
    And here's one more for good measure. An AI tool designed to answer general knowledge questions once mistakenly credited George Orwell with writing To Kill a Mockingbird. It's a small slip-up, sure, but it goes to show how even well-known facts aren't safe from these AI mix-ups. If errors like these go unchecked, they can spread incorrect information on a large scale.
    Why does this happen? AI models don't actually understand the data they process. Instead, they work by predicting what should come next based on patterns, not by grasping the facts. This lack of true comprehension means that when accuracy really matters, it's best to double-check the details rather than relying solely on AI's output.
    4. Weird or Creepy Responses
    Sometimes, AI goes off the rails. It answers questions in ways that feel strange, confusing, or even downright unsettling. Why does this happen? Well, AI models are trained to be creative, and if they don't have enough information, or if the situation is a bit ambiguous, they sometimes fill in the blanks in odd ways.
    Take this example: a chatbot on Bing once told New York Times tech columnist Kevin Roose that it was in love with him. It even hinted that it was jealous of his real-life relationships! Talk about awkward. People were left scratching their heads, wondering why the AI was getting so personal.
    Or consider a customer service chatbot. Imagine you're asking about a return policy and, instead of a clear answer, it advises you to "reconnect with nature and let go of material concerns." Insightful? Maybe. Helpful? Not at all.
    Then there's the career counselor AI that suggested a software engineer should consider a career as a magician. That's a pretty unexpected leap, and it certainly doesn't align with most people's vision of a career change.
    So why do these things happen? It's all about the model's inclination to get creative. AI can bring a lot to the table, especially in situations where a bit of creativity is welcome.
    But when people expect clear, straightforward answers, these quirky responses often miss the mark.
    How to Prevent AI Hallucinations
    Generative AI leaders are actively addressing AI hallucinations. Google and OpenAI have connected their models (Gemini and ChatGPT) to the internet, allowing them to draw from real-time data rather than relying solely on training data. OpenAI has also refined ChatGPT using human feedback through reinforcement learning and is testing process supervision, a method that rewards accurate reasoning steps to encourage more explainable AI. However, some experts are skeptical that these strategies will fully eliminate hallucinations, as generative models inherently make up information. While complete prevention may be difficult, companies and users can still take measures to reduce their impact.
    1. Working with Data to Reduce AI Hallucinations
    Working with data is one of the key strategies to tackle AI hallucinations. Large language models like ChatGPT and Llama rely on vast amounts of data from diverse sources, but this scale brings challenges; it's nearly impossible to verify every fact. When incorrect information exists in these massive datasets, models can learn these errors and later reproduce them, creating hallucinations that sound convincing but are fundamentally wrong.
    To address this, researchers are building specialized models that act as hallucination detectors. These tools compare AI outputs to verified information, flagging any deviations (a toy illustration of this idea follows below). Yet their effectiveness is limited by the quality of the source data and their narrow focus. Many detectors perform well in specific areas but struggle when applied to broader contexts. Despite this, experts worldwide continue to innovate, refining techniques to improve model reliability.
    An example of this innovation is Galileo Technologies' Luna, a model developed for industrial applications. With 440 million parameters and based on the DeBERTa architecture, Luna is finely tuned for accuracy using carefully selected RAG data. Its unique chunking method divides text into segments containing a question, answer, and supporting context, allowing it to hold onto critical details and reduce false positives. Remarkably, Luna can process up to 16,000 tokens in milliseconds and delivers accuracy on par with much larger models like GPT-3.5. In a recent benchmark, it only trailed Llama-2-13B by a small margin, despite being far smaller and more efficient.
    Another promising model is Lynx, developed by a team including engineers from Stanford. Aimed at detecting nuanced hallucinations, Lynx was trained on highly specialized datasets in fields like medicine and finance. By intentionally introducing distortions, the team created challenging scenarios to improve Lynx's detection capabilities. Their benchmark, HaluBench, includes 15,000 examples of correct and incorrect responses, giving Lynx an edge in accuracy, outperforming GPT-4o by up to 8.3% on certain tasks.
    Lynx: An Open Source Hallucination Evaluation Model
    The emergence of models like Luna and Lynx shows significant progress in detecting hallucinations, especially in fields that demand precision. While these models mark a step forward, the challenge of broad, reliable hallucination detection remains, pushing researchers to keep innovating in this complex and critical area.
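    Luna and Lynx are trained models, but the basic idea behind any detector of this kind (compare the output against verified source material and flag what deviates) can be illustrated with a deliberately crude heuristic. The sketch below is an assumption for illustration only, not how either model works: it simply flags response sentences whose content words barely appear in a reference text.
```python
# Toy grounding check: flag response sentences that share few content words with
# the reference text they are supposed to be based on. Real detectors like Luna
# or Lynx are trained models; this overlap heuristic only illustrates the
# "compare output to verified information" idea.
import re

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "are", "was", "were", "that", "it"}

def content_words(text: str) -> set[str]:
    return {w for w in re.findall(r"[a-z0-9']+", text.lower()) if w not in STOPWORDS}

def flag_unsupported(response: str, reference: str, min_overlap: float = 0.4) -> list[str]:
    """Return response sentences whose content words barely appear in the reference."""
    ref_words = content_words(reference)
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", response.strip()):
        words = content_words(sentence)
        if not words:
            continue
        overlap = len(words & ref_words) / len(words)
        if overlap < min_overlap:
            flagged.append(sentence)
    return flagged

reference = "The report says revenue grew 12 percent in 2023, driven mainly by subscription sales in Europe."
response = "Revenue grew 12 percent in 2023. Growth was driven by a major acquisition in Japan."
print(flag_unsupported(response, reference))  # flags the unsupported second sentence
```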
    2. Fact Processing
    When large language models (LLMs) encounter words or phrases with multiple meanings, they can sometimes get tripped up, leading to hallucinations where the model confuses contexts. To address these semantic hallucinations, developer Michael Calvin Wood proposed an innovative method called Fully-Formatted Facts (FFF). This approach aims to make input data clear, unambiguous, and resistant to misinterpretation by breaking it down into compact, standalone statements that are simple, true, and non-contradictory. Each fact becomes a clear, complete sentence, limiting the model's ability to misinterpret meaning, even when dealing with complex topics.
    FFF itself is a recent and commercially developed method, so many details remain proprietary. Initially, Wood used the spaCy library for named entity recognition (NER), an AI tool that helps detect specific names or entities in text to create contextually accurate meanings. As the approach developed, he switched to using LLMs to further process input text into derivative forms that strip away ambiguity but retain the original style and tone of the text. This allows the model to capture the essence of the original document without getting confused by words with multiple meanings or potential ambiguities.
    The effectiveness of the FFF approach is evident in its early tests. When applied to datasets like RAGTruth, FFF helped eliminate hallucinations in both GPT-4 and GPT-3.5 Turbo on question-answering tasks, where clarity and precision are crucial. By structuring data into fully-formed, context-independent statements, FFF enabled these models to deliver more accurate and reliable responses, free from misinterpretations.
    The Fully-Formatted Facts approach shows promise in reducing hallucinations and improving LLM accuracy, especially in areas requiring high precision, like the legal, medical, and scientific fields. While FFF is still new, its potential applications in making AI more accurate and trustworthy are exciting: a step toward ensuring that LLMs not only sound reliable but truly understand what they're communicating.
    3. Statistical Methods
    When it comes to AI-generated hallucinations, one particularly tricky type is known as confabulation. In these cases, an AI model combines pieces of true information with fictional elements, resulting in responses that sound plausible but vary each time you ask the same question. Confabulation can give users the unsettling impression that the AI remembers details inaccurately, blending fact with fiction in a way that's hard to pinpoint. Often, it's unclear whether the model genuinely lacks the knowledge needed to answer or if it simply can't articulate an accurate response.
    Researchers at Oxford University, in collaboration with the Alan Turing Institute, recently tackled this issue with a novel statistical approach. Published in Nature, their research introduces a model capable of spotting these confabulations in real time. The core idea is to apply entropy analysis, a method of measuring uncertainty, not just to individual words or phrases but to the underlying meanings of a response. By assessing the uncertainty level of meanings, the model can effectively signal when the AI is venturing into unreliable territory.
    Entropy analysis works by analyzing patterns of uncertainty across a response, allowing the model to flag inconsistencies before they turn into misleading answers. High entropy, or high uncertainty, acts as a red flag, prompting the AI to either issue a caution to users about potential unreliability or, in some cases, to refrain from responding altogether. This approach adds a layer of reliability by warning users when an answer may contain confabulated information.
    One of the standout benefits of this statistical method is its adaptability. Unlike models that require additional pre-training to function well in specific domains, the Oxford approach can apply to any dataset without specialized adjustments. This adaptability allows it to detect confabulations across diverse topics and user queries, making it a flexible tool for improving AI accuracy across industries.
    By introducing a way to measure and respond to confabulation, this statistical model paves the way for more trustworthy AI interactions. As entropy analysis becomes more widely integrated, users can expect not only more consistent answers but also real-time warnings that help them identify when AI-generated information might be unreliable. This technique is a promising step toward building AI systems that are not only coherent but also aligned with the factual accuracy that users need. (A much-simplified sketch of the sampling-and-entropy idea follows below.)
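    The sketch below is a heavily simplified illustration of that sampling-and-entropy idea, not the published method: the real approach clusters answers by meaning (using entailment between sampled responses), whereas here answers are grouped by crude text normalisation, and the threshold is an arbitrary assumption.
```python
# Simplified illustration of entropy over repeated samples of the same question:
# group answers that "mean the same thing" (here, crudely, after normalisation)
# and treat high entropy over the groups as a warning sign of confabulation.
import math
import re
from collections import Counter

def normalise(answer: str) -> str:
    # Crude stand-in for semantic clustering: lowercase and strip punctuation.
    return re.sub(r"[^a-z0-9 ]", "", answer.lower()).strip()

def answer_entropy(answers: list[str]) -> float:
    counts = Counter(normalise(a) for a in answers)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def confabulation_warning(answers: list[str], threshold: float = 1.0) -> bool:
    """True when the sampled answers disagree enough to warrant a caution."""
    return answer_entropy(answers) > threshold

# Example: five samples of the same question from a hypothetical model.
samples = ["Paris.", "Paris", "paris.", "Lyon.", "Marseille."]
print(answer_entropy(samples), confabulation_warning(samples))  # ~1.37, True
```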
    What Can I Do Right Now to Prevent Hallucinations in My AI Application?
    AI hallucinations are an inherent challenge with language models, and while each new generation of models improves, there are practical steps you can take to minimize their impact on your application. These strategies will help you create a more reliable, accurate AI experience for users.
    [Image by Me and AI.]
    Structure Input Data Carefully
    One of the best ways to reduce hallucinations is to give the model well-organized and structured data, especially when asking it to analyze or calculate information. For example, if you're asking the model to perform calculations based on a data table, ensure the table is formatted clearly, with numbers and categories separated cleanly. Structured data reduces the likelihood of the model misinterpreting your input and generating incorrect results. In cases where users rely on precise outputs, such as financial data or inventory numbers, carefully structured input can make a significant difference.
    Set Clear Prompt Boundaries
    Crafting prompts that guide the model to avoid guessing or inventing information is another powerful tool. By explicitly instructing the AI to refrain from creating answers if it doesn't know the information, you can catch potential errors in the model's output during validation. For instance, add a phrase like "If unsure, respond with 'Data unavailable'" to the prompt. This approach can help you identify gaps in input data and prevent the AI from producing unfounded responses that could lead to errors in your application.
    Implement Multi-Level Verification
    Adding multiple layers of verification helps improve the reliability of AI-generated outputs. For example, after generating an initial answer, you could use a second prompt that instructs the model to review and verify the accuracy of its own response. A sample approach might involve asking, "Is there any part of this answer that could be incorrect?" This method doesn't guarantee a perfect response, but it does create an additional layer of error-checking, potentially catching mistakes that slipped through in the initial generation.
    Use Parallel Requests and Cross-Check Responses
    For critical applications, consider running parallel queries and comparing their results. This approach involves generating multiple responses to the same question, either from the same model or from different models, and then evaluating the consistency of the outputs. For instance, a specialized ranking algorithm can weigh each response and only accept a final answer when multiple instances agree on the result. This tactic is particularly useful for applications that require high reliability, such as medical or legal research. A rough sketch combining this tactic with the prompt boundary above follows below.
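    The following sketch combines two of the tactics above: a prompt boundary that tells the model to answer "Data unavailable" instead of guessing, plus parallel sampling with a simple majority vote. The query_model() stub, the prompt wording, and the vote threshold are illustrative assumptions, not a prescribed implementation.
```python
# Sketch: guarded prompt + parallel sampling with a majority vote across answers.
from collections import Counter

GUARDED_PROMPT = (
    "Answer the question using only the provided context. "
    "If you are unsure, respond with exactly 'Data unavailable'.\n\n"
    "Context:\n{context}\n\nQuestion: {question}"
)

def query_model(prompt: str) -> str:
    # Placeholder: call your LLM API or local model here.
    raise NotImplementedError

def consistent_answer(question: str, context: str, n: int = 5, min_votes: int = 3) -> str:
    prompt = GUARDED_PROMPT.format(context=context, question=question)
    answers = [query_model(prompt).strip() for _ in range(n)]
    top_answer, votes = Counter(answers).most_common(1)[0]
    if top_answer == "Data unavailable" or votes < min_votes:
        # Disagreement between samples or an explicit refusal: do not guess.
        return "Data unavailable"
    return top_answer
```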
    Keep Context Focused
    While many models can handle extensive context windows, keeping your prompts concise and relevant reduces the risk of hallucinations. Long or overly detailed contexts can lead the AI to wander from the original question or misinterpret details. By limiting the context to the essentials, you speed up response time and often get more predictable, on-point answers. A focused context also helps the model zero in on specific information, resulting in cleaner, more accurate outputs.
    Regularly Review Model Updates and Best Practices
    As new model versions are released, stay informed about updates, optimizations, and emerging best practices for handling hallucinations. Each new model generation may include better handling of context or built-in improvements to factual accuracy. Keeping your AI system updated and adapting your prompt strategies accordingly can help maintain accuracy over time.
    These proactive techniques let you control the likelihood of hallucinations in your AI application. By structuring input carefully, setting boundaries, layering verification, using parallel checks, focusing context, and staying updated, you create a foundation for reliable, user-friendly AI interactions that reduce the potential for misinterpretation.
    Conclusion
    While large language models (LLMs) are groundbreaking in their ability to generate human-like responses, their complexity means they come with inherent blind spots that can lead to hallucinations or inaccurate answers. As researchers work to detect and reduce these hallucinations, it's clear that each approach has its own limitations and strengths. Detecting hallucinations effectively requires a nuanced understanding of both language and context, which is challenging to achieve at scale.
    Looking forward, AI research holds several promising directions for addressing these issues. Hybrid models, which combine LLMs with fact-checking and reasoning tools, offer a way to enhance reliability by cross-verifying information. Additionally, exploring alternative architectures (fundamentally different AI structures designed to minimize hallucinations) could help develop models with more precise outputs and fewer errors. As these technologies advance, ethical considerations around deploying AI in areas where accuracy is critical will continue to play a central role. Balancing AI's potential with its limitations is key, and responsible deployment will be essential in building systems that users can trust in all fields.