  • WWW.GAMESPOT.COM
    Spider-Verse 4K Collector's Edition Drops To Best Price Yet In Amazon's Black Friday Sale
    Spider-Man fans can save big on the awesome Spider-Verse 2-Movie Collector's Edition at Amazon for Black Friday. Originally released last year for $120, Amazon's discount drops the price more than 50%, all the way down to $53. With this deal, you're getting the 4K Collector's Edition for only seven bucks more than the Amazon-exclusive combo pack without the extras. This box set comes with 4K Blu-ray, 1080p Blu-ray, and digital editions of Into the Spider-Verse and Across the Spider-Verse, two of the best superhero and animated movies of all time.

    The Spider-Verse 2-Movie Collector's Edition is just one of many great Blu-ray box set deals up for grabs in Amazon's Black Friday sale. You can pair your new Spider-Verse collection with other Spider-Man box set deals, too. Sam Raimi's Spider-Man Trilogy starring Tobey Maguire is discounted to only $36 on 4K Blu-ray, and the three most recent Tom Holland-led live-action Spidey flicks are available in a 3-Movie Blu-ray collection for just $23. For more great offers, check out GameSpot's roundup of Blu-ray, TV, and anime deals at Amazon. Continue Reading at GameSpot
  • WWW.GAMESPOT.COM
    Best Black Friday Blu-Ray Box Set Deals - Movies, TV, And Anime
    The winter season is perfect for long movie nights, and Black Friday is the best opportunity to load up your shelves with new things to watch. Amazon's Black Friday sale is stacked with big discounts on Blu-ray box sets for popular sagas like The Lord of the Rings, The Godfather, and more. Plus, you'll find price cuts on TV shows and anime.

    Be sure to check out the lists below for more of the best movies, TV series, and anime Blu-rays on sale in Amazon's Black Friday Event. And once you've browsed all the Blu-ray discounts, be sure to head over to GameSpot's Black Friday hub to see the latest deals on video games, Lego sets, electronics, and more.

    Amazon Black Friday Movie Box Set Deals
    The Dark Knight Trilogy, Fast & Furious 10-Movie Collection

    The movie box set deals are some of the biggest highlights in Amazon's sale. These multi-film collections are a great way to pick up full movie series in one package. Christopher Nolan's The Dark Knight Trilogy is down to $30 (was $71), while the Fast & Furious 10-Movie 4K Collection is just $33 (was $135). You can also pick up the entire James Bond 24-Film Collection for $44 (was $55). Continue Reading at GameSpot
  • GAMERANT.COM
    Twitch Has Banned Dan Saltman
    Twitch has banned Dan Saltman, an outspoken critic of the livestreaming platform and host of the Anything Else? podcast. The content creator has openly accused Twitch of antisemitism and has taken particular offense at Hasan "Hasanabi" Piker not being held accountable by the platform for his anti-Israel livestreams.
  • GAMERANT.COM
    Mortal Kombat 1 Mod Gives Ghostface a New Mask
    A Mortal Kombat 1 player has created a new mod for Ghostface that adds the famous "Wazzup" mask to the game. The Mortal Kombat franchise has an impressive history of guest characters in previous games, and it has reached new heights in Mortal Kombat 1.
  • GAMEDEV.NET
    Feedback on Horror Music
    Hello, I've been creating music for many years now, but I have decided that I would like to start scoring video games. I would like to know if my style of music is something that you could imagine working well with a horror game. Here is a link to a song of mine: https://adamklimt.bandcamp.com/track/eternal
    Please let me know what you think. I'm open to any criticisms.
  • GAMEDEV.NET
    Drag or tap?
    What do you prefer if you play on a mobile phone? Or do you not care? I do care: I prefer tapping. But I also understand that it may be less convenient for other people.
  • BLOGS.NVIDIA.COM
    What Is Retrieval-Augmented Generation, aka RAG?
    Editor's note: This article, originally published on November 15, 2023, has been updated.

    To understand the latest advance in generative AI, imagine a courtroom.

    Judges hear and decide cases based on their general understanding of the law. Sometimes a case, like a malpractice suit or a labor dispute, requires special expertise, so judges send court clerks to a law library, looking for precedents and specific cases they can cite.

    Like a good judge, large language models (LLMs) can respond to a wide variety of human queries. But to deliver authoritative answers that cite sources, the model needs an assistant to do some research.

    The court clerk of AI is a process called retrieval-augmented generation, or RAG for short.

    How It Got Named RAG

    Patrick Lewis, lead author of the 2020 paper that coined the term, apologized for the unflattering acronym that now describes a growing family of methods across hundreds of papers and dozens of commercial services he believes represent the future of generative AI.

    "We definitely would have put more thought into the name had we known our work would become so widespread," Lewis said in an interview from Singapore, where he was sharing his ideas with a regional conference of database developers.

    "We always planned to have a nicer-sounding name, but when it came time to write the paper, no one had a better idea," said Lewis, who now leads a RAG team at AI startup Cohere.

    So, What Is Retrieval-Augmented Generation (RAG)?

    Retrieval-augmented generation (RAG) is a technique for enhancing the accuracy and reliability of generative AI models with facts fetched from external sources.

    In other words, it fills a gap in how LLMs work. Under the hood, LLMs are neural networks, typically measured by how many parameters they contain. An LLM's parameters essentially represent the general patterns of how humans use words to form sentences.

    That deep understanding, sometimes called parameterized knowledge, makes LLMs useful in responding to general prompts at light speed. However, it does not serve users who want a deeper dive into a current or more specific topic.

    Combining Internal, External Resources

    Lewis and colleagues developed retrieval-augmented generation to link generative AI services to external resources, especially ones rich in the latest technical details.

    The paper, with coauthors from the former Facebook AI Research (now Meta AI), University College London and New York University, called RAG "a general-purpose fine-tuning recipe" because it can be used by nearly any LLM to connect with practically any external resource.

    Building User Trust

    Retrieval-augmented generation gives models sources they can cite, like footnotes in a research paper, so users can check any claims. That builds trust.

    What's more, the technique can help models clear up ambiguity in a user query. It also reduces the possibility a model will make a wrong guess, a phenomenon sometimes called hallucination.

    Another great advantage of RAG is that it's relatively easy to implement. A blog by Lewis and three of the paper's coauthors said developers can implement the process with as few as five lines of code.

    That makes the method faster and less expensive than retraining a model with additional datasets. And it lets users hot-swap new sources on the fly.
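    The blog does not reproduce those five lines, but the core loop is short enough to sketch. In the hypothetical Python below, embed, index.search and llm.generate are stand-ins for whatever embedding model, vector store and LLM client a developer actually wires in; the point is only the shape of the retrieve-augment-generate loop, not any specific API.

    ```python
    # A minimal RAG loop. embed(), index.search() and llm.generate() are
    # hypothetical placeholders; any real embedding model, vector store
    # and LLM client can fill these roles.
    def rag_answer(query, index, llm, k=3):
        hits = index.search(embed(query), k)                  # retrieve the k nearest documents
        context = "\n".join(doc.text for doc in hits)         # stitch them into a context block
        prompt = f"Context:\n{context}\n\nQuestion: {query}"  # augment the user's prompt
        return llm.generate(prompt)                           # generate a grounded answer
    ```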
    How People Are Using RAG

    With retrieval-augmented generation, users can essentially have conversations with data repositories, opening up new kinds of experiences. This means the applications for RAG could be multiple times the number of available datasets.

    For example, a generative AI model supplemented with a medical index could be a great assistant for a doctor or nurse. Financial analysts would benefit from an assistant linked to market data.

    In fact, almost any business can turn its technical or policy manuals, videos or logs into resources called knowledge bases that can enhance LLMs. These sources can enable use cases such as customer or field support, employee training and developer productivity.

    The broad potential is why companies including AWS, IBM, Glean, Google, Microsoft, NVIDIA, Oracle and Pinecone are adopting RAG.

    Getting Started With Retrieval-Augmented Generation

    To help users get started, NVIDIA developed an AI Blueprint for building virtual assistants. Organizations can use this reference architecture to quickly scale their customer service operations with generative AI and RAG, or get started building a new customer-centric solution.

    The blueprint uses some of the latest AI-building methodologies and NVIDIA NeMo Retriever, a collection of easy-to-use NVIDIA NIM microservices for large-scale information retrieval. NIM eases the deployment of secure, high-performance AI model inferencing across clouds, data centers and workstations.

    These components are all part of NVIDIA AI Enterprise, a software platform that accelerates the development and deployment of production-ready AI with the security, support and stability businesses need.

    There is also a free hands-on NVIDIA LaunchPad lab for developing AI chatbots using RAG, so developers and IT teams can quickly and accurately generate responses based on enterprise data.

    Getting the best performance for RAG workflows requires massive amounts of memory and compute to move and process data. The NVIDIA GH200 Grace Hopper Superchip, with its 288GB of fast HBM3e memory and 8 petaflops of compute, is ideal: it can deliver a 150x speedup over using a CPU.

    Once companies get familiar with RAG, they can combine a variety of off-the-shelf or custom LLMs with internal or external knowledge bases to create a wide range of assistants that help their employees and customers.

    RAG doesn't require a data center. LLMs are debuting on Windows PCs, thanks to NVIDIA software that enables all sorts of applications users can access even on their laptops.

    [Image: An example application for RAG on a PC.]

    PCs equipped with NVIDIA RTX GPUs can now run some AI models locally. By using RAG on a PC, users can link to a private knowledge source, whether that be emails, notes or articles, to improve responses. The user can then feel confident that their data source, prompts and response all remain private and secure.

    A recent blog provides an example of RAG accelerated by TensorRT-LLM for Windows to get better results fast.

    The History of RAG

    The roots of the technique go back at least to the early 1970s. That's when researchers in information retrieval prototyped what they called question-answering systems, apps that use natural language processing (NLP) to access text, initially in narrow topics such as baseball.

    The concepts behind this kind of text mining have remained fairly constant over the years. But the machine learning engines driving them have grown significantly, increasing their usefulness and popularity.

    In the mid-1990s, the Ask Jeeves service, now Ask.com, popularized question answering with its mascot of a well-dressed valet. IBM's Watson became a TV celebrity in 2011 when it handily beat two human champions on the Jeopardy! game show.

    Today, LLMs are taking question-answering systems to a whole new level.

    Insights From a London Lab

    The seminal 2020 paper arrived as Lewis was pursuing a doctorate in NLP at University College London and working for Meta at a new London AI lab. The team was searching for ways to pack more knowledge into an LLM's parameters, and it was using a benchmark it developed to measure its progress.

    Building on earlier methods and inspired by a paper from Google researchers, the group "had this compelling vision of a trained system that had a retrieval index in the middle of it, so it could learn and generate any text output you wanted," Lewis recalled.

    [Image: The IBM Watson question-answering system became a celebrity when it won big on the TV game show Jeopardy!]

    When Lewis plugged into the work in progress a promising retrieval system from another Meta team, the first results were unexpectedly impressive.

    "I showed my supervisor and he said, 'Whoa, take the win.' This sort of thing doesn't happen very often, because these workflows can be hard to set up correctly the first time," he said.

    Lewis also credits major contributions from team members Ethan Perez and Douwe Kiela, then of New York University and Facebook AI Research, respectively.

    When complete, the work, which ran on a cluster of NVIDIA GPUs, showed how to make generative AI models more authoritative and trustworthy. It's since been cited by hundreds of papers that amplified and extended the concepts in what continues to be an active area of research.

    How Retrieval-Augmented Generation Works

    At a high level, here's how an NVIDIA technical brief describes the RAG process.

    When users ask an LLM a question, the AI model sends the query to another model that converts it into a numeric format so machines can read it. The numeric version of the query is sometimes called an embedding or a vector.

    [Image: Retrieval-augmented generation combines LLMs with embedding models and vector databases.]

    The embedding model then compares these numeric values to vectors in a machine-readable index of an available knowledge base. When it finds a match or multiple matches, it retrieves the related data, converts it to human-readable words and passes it back to the LLM.

    Finally, the LLM combines the retrieved words and its own response to the query into a final answer it presents to the user, potentially citing sources the embedding model found.
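    To make those steps concrete, here is a small, self-contained sketch in Python under stated assumptions: it uses the sentence-transformers package as the embedding model and a plain in-memory NumPy array in place of a real vector database, and it leaves the final LLM call as a hypothetical placeholder, since the brief doesn't prescribe one.

    ```python
    # Sketch of the RAG flow described above: embed the query, compare it to
    # an indexed knowledge base, and hand the retrieved text to an LLM.
    # Assumes: pip install sentence-transformers numpy
    import numpy as np
    from sentence_transformers import SentenceTransformer

    embedder = SentenceTransformer("all-MiniLM-L6-v2")  # the embedding model

    # A toy knowledge base; in production this index lives in a vector database.
    knowledge_base = [
        "RAG grounds LLM answers in facts fetched from external sources.",
        "Patrick Lewis was lead author of the 2020 paper that coined the term RAG.",
        "Vector databases store embeddings for fast similarity search.",
    ]
    index = embedder.encode(knowledge_base, normalize_embeddings=True)

    # Step 1: convert the user's query into a numeric vector (an embedding).
    query = "Who coined the term RAG?"
    query_vec = embedder.encode([query], normalize_embeddings=True)[0]

    # Step 2: compare the query vector to the index and fetch the best matches.
    scores = index @ query_vec                 # cosine similarity on unit vectors
    top_matches = np.argsort(scores)[::-1][:2]
    retrieved = [knowledge_base[i] for i in top_matches]

    # Step 3: pass the retrieved text back to the LLM along with the question.
    prompt = "Context:\n" + "\n".join(retrieved) + f"\n\nQuestion: {query}"
    # answer = llm.generate(prompt)  # hypothetical client; any LLM fits here
    print(prompt)
    ```

    In a production system, the index, the similarity search and the index-update logic would all live inside a dedicated vector database rather than a NumPy array, which is exactly the maintenance job the next section describes.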
    Keeping Sources Current

    In the background, the embedding model continuously creates and updates machine-readable indices, sometimes called vector databases, for new and updated knowledge bases as they become available.

    Many developers find LangChain, an open-source library, can be particularly useful in chaining together LLMs, embedding models and knowledge bases. NVIDIA uses LangChain in its reference architecture for retrieval-augmented generation.

    The LangChain community provides its own description of a RAG process.
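    As one hedged illustration of that chaining, the sketch below wires an embedding model and a small knowledge base into a retriever using LangChain's community FAISS wrapper. It assumes the langchain-community, faiss-cpu and sentence-transformers packages are installed; import paths move between LangChain releases, so treat this as one plausible layout rather than the canonical recipe.

    ```python
    # Chaining an embedding model and a knowledge base into a LangChain retriever.
    # Assumes: pip install langchain-community faiss-cpu sentence-transformers
    from langchain_community.embeddings import HuggingFaceEmbeddings
    from langchain_community.vectorstores import FAISS

    texts = [
        "RAG grounds LLM answers in facts fetched from external sources.",
        "LangChain chains LLMs, embedding models and knowledge bases together.",
    ]
    embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
    vector_store = FAISS.from_texts(texts, embeddings)  # the indexed knowledge base

    retriever = vector_store.as_retriever(search_kwargs={"k": 1})
    docs = retriever.invoke("What does LangChain chain together?")
    context = "\n".join(doc.page_content for doc in docs)
    # The retrieved context would then be passed to whichever LLM you chain in.
    print(context)
    ```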
    Looking forward, the future of generative AI lies in creatively chaining all sorts of LLMs and knowledge bases together to create new kinds of assistants that deliver authoritative results users can verify.

    Explore generative AI sessions and experiences at NVIDIA GTC, the global conference on AI and accelerated computing, running March 18-21 in San Jose, Calif., and online.