• Microsoft says ID@Xbox has paid $5B to date to indie game devs
    venturebeat.com
    Microsoft said that its ID@Xbox indie gaming label has paid out $5 billion to date to independent game developers.
  • Orion Security emerges from stealth using LLMs to track your enterprise's data flow and stop it from leaking out
    venturebeat.com
    If you pay attention at all to cybersecurity news, there's a strong chance you've heard scary reports of firms hiring remote contractors that turn out to be hackers, or of North Korean spies making off with sensitive, proprietary data.

    But even without that cloak-and-dagger, international-espionage veneer, the truth is that all organizations have reasons to be concerned about their data security and the prospect of exfiltration, or the movement of data without authorization. IBM's 2024 Cost of a Data Breach Report found that incidents involving data exfiltration are on the rise, and breaches involving extortion now average around $5.21 million per incident. (Credit: IBM, Cost of a Data Breach Report 2024)

    In an age when data has never been more important or valuable to an organization, yet moves between silos more than ever before, how can enterprises best protect their sensitive information without breaking the bank?

    A new firm, Orion Security, believes generative AI large language models are the key. Today, the company announced its emergence from stealth with $6 million in seed funding led by Pico Partners and FXP, with participation from Underscore VC and prominent cybersecurity leaders, such as the founders of Perimeter 81 and the CISO of Elastic. Orion Security, founded by Nitay Milner (CEO) and Yonatan Kreiner (CTO), is already working with leading technology companies to help them safeguard sensitive business data from insider threats, according to an interview VentureBeat conducted with Milner over video call last week.

    Orion's co-founders Nitay Milner (CEO) and Yonatan Kreiner (CTO).

    "I spent a lot of years as a product leader in several companies solving very complicated challenges around observability and security in cloud environments, helping T-Mobile and BlackRock to get ahold of, and better understand, their very complex system stacks," Milner said. "I experienced firsthand that the main problem in data security is understanding the business context of how sensitive data is being used in a company."

    AI-powered Contextual Data Protection (AI CDP)

    Unlike traditional data protection tools that rely on rigid rules and manual policies, Orion Security's platform dynamically learns and maps an organization's business processes. By understanding how data typically moves within an organization, Orion can distinguish between legitimate workflows and potential threats, whether intentional or accidental.

    "Orion revolutionizes data protection by understanding business processes and data flows in the company and automating data loss prevention with the power of AI," Milner explains.

    This approach is a departure from conventional manual, policy-based security models, which Milner believes are fundamentally flawed. "Most security solutions rely on manual policies, but policies don't scale. There are new applications and workflows that make them obsolete pretty often." He further emphasized how security teams struggle with outdated methods: "Security teams are stuck writing endless policies over and over again, getting hit by false positives, and still, data keeps leaking from enterprises. It's a really bad situation."

    Orion Security employs a combination of proprietary AI models and fine-tuned open-source LLMs to automate data protection. "All our AI is something that we developed; we're not using a third party, like ChatGPT or something like that. We developed our AI internally, so it's all our IP," he told VentureBeat.

    The platform relies on two core models: one for classification, which identifies how sensitive data is based on context, and another for business reasoning, which assesses user roles, workflows, and typical data movement to detect anomalies. Orion's AI is further fine-tuned on industry-specific and organization-specific data to improve accuracy, ensuring it adapts to each company's unique operations. While they leverage fine-tuned open-source LLMs, Milner notes their surprising effectiveness even without extensive pre-training: "LLMs that are open source have a lot of context, and you wouldn't believe the level they give you just by throwing sensitive data on them."

    How Orion's solution works

    The platform connects to an organization's cloud services, browsers, and devices to map data flows comprehensively. At the core of its detection capabilities is its Indicators of Leakage (IOL) engine, which leverages proprietary reasoning models and large language model (LLM) classification to analyze data movement patterns.

    Key features include:
    - Real-time risk assessment: The platform continuously evaluates business processes, assigning risk scores based on observed behavior.
    - Sensitive data detection: Orion identifies and classifies data types, including personally identifiable information (PII), trade secrets, payroll details, and intellectual property (IP).
    - Minimal manual configuration: Unlike traditional DLP tools that require extensive setup, Orion automates detection and response with minimal user intervention.
    - Reduced false alerts: By incorporating business context, Orion ensures that security teams are only alerted to genuinely suspicious activity, cutting down on noise and unnecessary investigations.

    Milner compares Orion's approach to endpoint detection and response (EDR) solutions, but for data protection: "We act as an EDR for data; think of it like a CrowdStrike for your data. If something anomalous happens, we catch and prevent it in real time, even if there wasn't a predefined policy."

    Beyond catching malicious insiders, Orion also distinguishes between human errors and external attackers. "The three main vectors for data leaks are malicious insiders, human errors, and external attackers. We detect and differentiate between all of them," Milner says.

    Enterprise leaders can see the flow of their firm's data at a glance

    Orion Security provides users with a dashboard-driven experience, offering real-time insights into business data flows. The interface categorizes risk by severity, allowing security teams to quickly identify and address high-risk activities.

    Some notable elements of Orion's UI include:
    - Top Data Types Monitored: The system classifies and tracks PII, marketing materials, product-related data, and source code.
    - Risk Score Distribution: A visual breakdown of critical, high, medium, and low-risk activities helps prioritize security responses.
    - Top Outbound Sources: Displays the most common platforms where data is being transferred, helping security teams detect unusual exfiltration patterns.
    - Business Flow Risk Scores: Each monitored business process is assigned a risk score, with specific activities (e.g., engineering teams moving data before leaving the company) flagged based on severity.

    This intuitive approach to data security allows security teams to quickly assess potential threats and take immediate action when necessary. Milner described the platform's visibility capabilities this way: "Imagine having a dynamic map of all the sensitive data movement in your company, between people, devices, and applications, and making sure it doesn't leave your organization."

    High investor confidence

    Backing from cybersecurity veterans further reinforces Orion's approach. Gil Zimmermann, partner at FXP, who previously co-founded CloudLock (acquired by Cisco), sees Orion's technology as a long-overdue evolution in data protection. "AI is creating a watershed moment for data protection, and Orion Security is at the forefront of this transformation," he wrote in a prepared statement in a press release provided to VentureBeat. "Orion's AI-powered approach solves the core challenges we faced for years: the lack of business context and overwhelming manual work. This is the future of data security we envisioned but which couldn't be built a decade ago."

    Beyond detection, Orion offers flexibility in response mechanisms, letting companies customize their approach. "Some companies want us to block data exfiltration in real time, while others prefer just getting notifications or educating employees on security policies. We let them decide how aggressive the approach should be," Milner said.

    What's next for Orion Security and its tech?

    Orion Security is already working with leading technology companies (whose names are confidential due to business agreements) and plans to further refine its AI models to stay ahead of evolving insider threats. The company's onboarding process ensures customers see immediate value. "We take three months of historical data when onboarding a new customer, so our AI delivers value from day one," Milner explains.

    Additionally, Orion emphasizes a privacy-first security architecture. "We don't store any sensitive data, only metadata. If a company prefers, they can even install our classifier in their own environment so nothing leaves their systems," Milner says.

    With an AI-driven approach that reduces manual workload, false positives, and security blind spots, Orion Security is well positioned to shape the next generation of context-aware data protection solutions.
  • The technical magic behind Doctor Strange's portals in Marvel Rivals
    www.gamedeveloper.com
    Boasting an expansive roster of playable heroes and villains from across the Marvel multiverse, many with bespoke mechanics, Marvel Rivals presented developer-publisher NetEase Games with myriad technical challenges to overcome, especially when optimizing the hero shooter for consoles.

    Tieyi Zhang, a senior graphics engineer at NetEase Games, hosted a talk at GDC this year to demystify the process behind one specific mechanic: Doctor Strange's portals. If you haven't seen these portals in action in Marvel Rivals, Doctor Strange can open a portal from where he's standing to anywhere on the map within a certain distance. Once placed, that portal will (in most circumstances) show where players will end up after passing through. They work and look very similar to those seen in the recent Doctor Strange movies.

    Although teleporting a player from one point of three-dimensional space to another already presents technical challenges for the device rendering the environment, the talk mostly focused on a greater challenge: how NetEase managed to render multiple angles of the same environment at the same time with minimal frame loss. Zhang walked the audience through the many steps of the rendering techniques used to produce the effect, discussed the inspirations behind it, and showed the differences in frame rate between various iterations of the technique.

    He started the talk by showing how the mechanic functioned when using a tool baked into Unreal Engine called Scene Capture 2D, which works like a virtual camera without any optimization changes. He introduced the tool, saying, "For those of you who are familiar with [Unreal Engine 5], the go-to solution for portals is the Scene Capture 2D component. As a built-in tool, it is perfectly documented and supported by many tutorials," before going on to praise the tool's "consistent visual quality" in creating captured content.

    Despite the tool's apparent reliability, the frame rate dropped drastically as Doctor Strange passed through the portal in the videos shown in Zhang's slideshow. The example shown onstage exhibited a doubled required rendering time, causing the game's frame rate to plummet to roughly half, though this could vary in extremity based on the complexity of lighting in the scene. Regardless, given Marvel Rivals' competitive bent, dropped frames during a match can be fatal, especially in a landscape where less than one fifth of active PC players play new titles.

    It Takes Two offered a guiding light

    Then, the engineer dredged up a surprising point of inspiration for his solution to this frame rate issue: 2021's widely acclaimed co-op hit It Takes Two. Hazelight's use of Unreal Engine to render two simultaneous views without drawing an overwhelming amount of power from the computer or console running it led Zhang to utilize a similar rendering technique for the portals. Zhang joked about It Takes Two's largely airtight frame rate, quipping, "Does this game use some kind of magic? I don't know!"

    Using lagless split-screen rendering techniques pioneered by Hazelight and EA's heavy hitter, Zhang worked out an optimized, frame-efficient solution that took advantage of tech that's usually reserved for local split-screen co-op, essentially placing another player into the game in order to render the view from the other side. This approach allowed NetEase to use the GPU on board the device much more efficiently, drastically decreasing dropped frames. But NetEase pushed things even further by skipping roughly 5-10 frames per second within the portal view, conserving the computer's power even more.

    Doctor Strange's portals have proven to be one of Marvel Rivals' most beloved mechanics, with players finding clever placements and strategies that highlight just how versatile this tech can be, including a now-patched exploit that allowed players to essentially trap their opponents inside of a map's geometry.
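NetEase's actual implementation lives inside Unreal Engine, but the frame-skipping idea the talk describes (re-rendering the expensive portal view only on some frames and reusing the cached result in between) can be sketched in a few lines of illustrative Python. The cadence constant and function names below are hypothetical stand-ins, not code from the talk:

```python
# Illustrative sketch only: re-render the costly portal view every Nth frame,
# reusing the cached texture on the frames in between.
RENDER_EVERY_N = 4  # hypothetical cadence, chosen for the example
portal_renders = 0
cached_view = None

def render_portal_view(frame):
    """Stand-in for the expensive scene-capture render of the portal's far side."""
    global portal_renders
    portal_renders += 1
    return f"portal view as of frame {frame}"

def tick(frame):
    """Per-frame update: only occasionally pay for a full portal re-render."""
    global cached_view
    if cached_view is None or frame % RENDER_EVERY_N == 0:
        cached_view = render_portal_view(frame)  # full re-render
    return cached_view  # cheap cached copy on the other frames

for frame in range(60):  # one second at 60 fps
    tick(frame)

print(portal_renders)  # 15 full portal renders instead of 60
```

The trade-off is exactly the one the article describes: the image inside the portal lags slightly behind the live scene, in exchange for a large reduction in per-frame GPU cost.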
  • The comedic cheat sheet that helped build Tactical Breach Wizards
    www.gamedeveloper.com
    It all started with one joke: what if Call of Duty was filled with wizards? Suspicious Developments' creative director Tom Francis barely even remembers where the joke originally started, but it helped him and his team build out the foundation for one of 2024's success stories: Tactical Breach Wizards.

    The joke was combined with a simple idea (what if there was an XCOM alternative that was easier to parse, so that casual players could jump in smoothly?) to create the core design philosophy behind Tactical Breach Wizards, a game that earned 98% positive reviews on Steam and doubled the revenue of Suspicious Developments' previous game. Francis held a short talk at the Game Developers Conference about the variety of "cheats" he used to turn one joke into a 15-hour tactical adventure.

    Cheats for jokes

    The core premise for Tactical Breach Wizards was so simple that Francis and team put their heads together and combined "tactical" and "wizardry" words to create ideas for characters. It was a process (and their first "cheat") they called their joke engine: a system that came up with jokes quickly and easily. They came up with and illustrated characters like a "Riot Priest" that would eventually become full-on characters in their game, alongside the likes of the Traffic Warlock and the Necro Medic.

    Francis's cheats detailed how several well-known writing methods worked for Tactical Breach Wizards: dialogue trees that put the player's sense of humor first, word-by-word text that helped timing-based jokes hit in stride, character-based comedy that doubled as both jokes and context, and the idea that jokes should be told with the lowest amount of friction possible. Don't put a ton of work into a single joke (or take your player on a detour), to avoid letting them down in a big way.

    Francis detailed one joke, in which two characters are hugging a wall as they prepare to breach a room, where one character starts trash-talking someone she noticed inside. The character on the other side of the door could hear her, and a speech bubble popped up as he said, "wow, Jen."

    "We got a ton of value out of that at very little cost," Francis said. "It doesn't matter if it bombed, because we didn't take much effort to get there."

    Moving beyond the joke

    While Tactical Breach Wizards may be a game built off a joke, Francis emphasized that you can't stop there. Some games that have nothing but jokes exist and do well, but others that want to inspire empathy and emotion have to dig deeper to work with the player. This led to another cheat: the idea that parody doesn't give you a license to repeat a genre's mistakes.

    "I like military action games, but their views don't align with mine," Francis said, adding that he couldn't set his game within a different genre, because it wouldn't align with the premise of Tactical Breach Wizards. "We wanted a Tom Clancy-like universe with wizards, so we needed it to look like a Tom Clancy universe."

    Francis felt that the majority of military games shared the idea that "killing people is cool, as long as they are foreigners," and wanted to give players something vastly different. Tactical Breach Wizards doesn't revolve around a national military fighting foreign enemies or terrorists; it has a group of criminals fighting a theocracy. It focuses on punching up.

    Telling a story with cheats

    The process of choosing how to tell a story in a game is just as complicated, if not far more complicated, than whatever story you choose to tell. Games offer a massive variety of tools to deliver a story to the player, which is why Francis provided a third and final set of cheats during his talk: the idea that comedy and magic can let you get away with nearly anything, plus two methods to effectively deliver context and dialogue to players: breach conversations and conspiracy boards.

    A special type of mission exists throughout Tactical Breach Wizards called an anxiety dream, and it is exactly what it sounds like. Each character has a dream mission where they talk to and fight alongside another version of themselves. It gives the player deep insight into that character's thoughts and feelings that wouldn't be possible in any other mission in the game; there would be no reason for those characters to share those feelings elsewhere.

    "A glimpse of each character's private life goes a long way," Francis said, admitting that the mere idea of an anxiety dream felt ludicrous to him. It was accepted because magic gives developers a license to get away with almost any ridiculous idea.

    After breaking down many of the writing methods that he and his team used to flesh out Tactical Breach Wizards, Francis also detailed two production design ideas that made the delivery of that writing much smoother.

    The first was the game's main setting: breaches. Throughout the game, characters follow the same mission and dialogue structure: they line up right outside a door, share a bit of dialogue, and then breach the room. That happens several times within a single mission, giving characters multiple opportunities to share quick and witty bits of dialogue. It's a major reason why many players found the game's jokes natural and funny.

    Francis ended his talk by showcasing the Tactical Breach Wizards conspiracy board, which laid out the game's plot like an evidence board with pins and string in a police station.
It was a simple way for the players to recap the story, cement the ideas the story presented, and reinforce that the story was taking place in a larger world. He encouraged the developers of any game with a story to have a similar mechanic that would help visual learners keep track of what was happening while they were playing.
  • Our five favorite dunks from Drake's label over his "Not Like Us" lawsuit
    www.theverge.com
    Universal Music Group has finally responded to Drake's claims that the label damaged his reputation with Kendrick Lamar's diss track "Not Like Us," and there are some spicy tidbits in there.

    UMG, which represents both artists, broadly argues that the court should dismiss Drake's lawsuit because he's just the sore loser of an ugly rap battle and can't back up any of his claims. "Instead of accepting the loss like the unbothered rap artist he often claims to be, he has sued his own record label in a misguided attempt to salve his wounds," UMG says in the filing.

    But that's only the start of UMG's response. Here are a few points that stuck out.

    Drake previously agreed prosecutors shouldn't use lyrics against rappers

    Though Drake is now suing UMG for defamation, the artist previously agreed that rappers shouldn't be criminalized because of their lyrics. In 2022, Drake, along with several other prominent artists, signed a letter in support of Young Thug, a rapper whose lyrics were used against him at trial. "The trend of prosecutors using artists' creative expression against them is happening with troubling frequency," the letter said.

    That irony isn't lost on UMG: "As Drake recognized, when it comes to rap, '[t]he final work is a product of the artist's vision and imagination.' Drake was right then and is wrong now."

    Everyone expected a big reaction from Lamar

    UMG says Drake can't claim that "Not Like Us" is defamatory, as the broader context surrounding the song meant the audience was anticipating the use of aggressive lyrics. It cites the seven preceding tracks in which Drake and Lamar hurled increasingly vitriolic allegations at each other, including claims that Lamar's son isn't his and that he'd abused his fiancée. "If ever there was circumstance for the audience to anticipate the use of epithets, fiery rhetoric or hyperbole, this is it," UMG says.

    Drake used fiery lyrics, too

    As stated above, Drake is no stranger to rapping similarly vitriolic lyrics. UMG claims it engaged in the same conduct when it distributed Drake's song "Family Matters," which is "a scathing attack on Lamar, laden with hyperbolic slurs."

    The label goes on to refute allegations that "Not Like Us" issued a call to violence, as Drake's security guard was shot outside the rapper's home days after the song's release. UMG claims that Drake "attempts to contort violent metaphors in the lyrics into incitement." It adds that fiery lyrics are par for the course in rap music, especially on diss tracks. "Rappers know that their lyrics are exaggerated and nonfactual; that is part of the craft," the label argues. "Drake's own diss tracks employed imagery at least as violent, such as gunshot sounds."

    Drake acknowledged the controversies in "Not Like Us"

    UMG claims that the controversies mentioned in Lamar's diss track are well known, saying that facts and criticism concerning Drake's relationships with minors predate "Not Like Us" and have been widely reported. The label also says that Drake acknowledged and perpetuated these allegations in his song "Taylor Made Freestyle," which features an AI-generated version of Tupac's voice suggesting Lamar should talk about "[Drake] likin young girls."

    Drake also affirmed that he understood Lamar's statements in "Not Like Us" to refer to the Millie Bobby Brown controversy, stating, "This Epstein angle was the shit I expected" and "Only fuckin with Whitneys, not Millie Bobby Browns, I'd never look twice at no teenager." Clearly, Drake himself understands that Lamar's lyrics refer to well-known issues.

    UMG says Drake doesn't have evidence to back up his bots and payola claims

    UMG pushes back on Drake's accusations that the label artificially inflated streams of "Not Like Us" by using bots and payola. The label claims Drake based his bots theory on an allegation espoused by an anonymous individual on Twitch, who claimed Lamar's label paid him to boost the diss track's streams on Spotify. However, this already-dubious source later claimed that he was specifically hired by Lamar's manager, not UMG or its subsidiary Interscope. "To be clear, UMG disputes the contention that anyone paid for or otherwise used bots to inflate streams of 'Not Like Us,' as there is no evidence of any such stream manipulation," UMG says. "But the specific claim that someone affiliated with UMG did so is entirely unsupported by the very source Drake cites."

    UMG goes on to say that Drake's pay-for-play allegations are made "on information and belief without stating the basis therefor." It also refutes Drake's claims of injury and causation. Drake's theory "that every time the Recording was played, Drake lost the opportunity for one of his songs to be played, is wildly speculative and not cognizable," the filing says.

    Drake's lawyer, Mike Gottlieb, isn't backing down from the artist's initial claims. "UMG wants to pretend that this is about a rap battle in order to distract its shareholders, artists and the public from a simple truth: a greedy company is finally being held responsible for profiting from dangerous misinformation that has already resulted in multiple acts of violence," Gottlieb told NBC. "This motion is a desperate ploy by UMG to avoid accountability, but we have every confidence that this case will proceed and continue to uncover UMG's long history of endangering, abusing and taking advantage of its artists."
  • GM taps Nvidia to boost its embattled self-driving projects
    www.theverge.com
    At Nvidia's annual GTC conference in San Jose, Calif., today, the chipmaker announced it was teaming up with General Motors to develop next-generation cars, robots, and factories. GM says it will apply several of Nvidia's products to its business, such as the Omniverse 3D graphics platform, which will run simulations on virtual assembly lines with an eye on reducing downtime and improving efficiency. The automaker also plans to equip its next-generation vehicles with Nvidia's AI "brain" for advanced driver assistance and autonomous driving. And it will employ the chipmaker's AI training software to make its vehicle assembly line robots better at certain tasks, like precision welding and material handling.

    GM already uses Nvidia's GPUs to train its AI software for simulation and validation. Today's announcement was about expanding those use cases into improving its manufacturing operations and autonomous vehicles, GM CEO Mary Barra said in a statement. (Barra is joining Nvidia CEO Jensen Huang for a fireside chat during the GTC conference today.)

    Image: Nvidia

    "AI not only optimizes manufacturing processes and accelerates virtual testing but also helps us build smarter vehicles while empowering our workforce to focus on craftsmanship," Barra said. "By merging technology with human ingenuity, we unlock new levels of innovation in vehicle manufacturing and beyond."

    GM will adopt Nvidia's in-car software products to build next-gen vehicles with autonomous driving capabilities. That includes the company's Drive AGX system-on-a-chip (SoC), similar to Tesla's Full Self-Driving chip or Intel's Mobileye EyeQ. The SoC runs the safety-certified DriveOS operating system, built on the Blackwell GPU architecture, which is capable of delivering 1,000 trillion operations per second (TOPS) of high-performance compute, the company says.

    Like most automakers, GM has sunk billions of dollars into the development of fully autonomous vehicles, with mixed results. The company's advanced driver-assist feature, Super Cruise, is considered one of the safest and most capable on the market today. But its work to deploy fully autonomous vehicles has been less successful. Last year, GM pulled funding for its Cruise robotaxi company after a number of safety lapses cast doubt on the operation's future.

    GM will use Nvidia's AI software to run factory improvement simulations. Image: GM

    Before it was shuttered, Cruise was exploring developing its own chips to reduce costs for its parent company. The robotaxi startup had been using Nvidia's in-car computers to power its autonomous vehicles, which executives complained were too expensive. GM hopes to improve its self-driving fortunes by selling passenger vehicles with autonomous driving capabilities, though it hasn't said when, or using what technology.

    In a briefing with reporters, Ali Kani, Nvidia's vice president and general manager of automotive, described the chipmaker's automotive business as still in its infancy, with the expectation that it will only bring in $5 billion this year. (Nvidia reported over $130 billion in revenue in 2024 across all its divisions.) Nvidia's chips are in less than 1 percent of the billions of cars on the road today, he added. But the future looks promising: the company is also announcing a deal with Tier 1 auto supplier Magna, which helped build Sony's Afeela concept, to use Drive AGX in the supplier's next-generation advanced driver-assist software. "We believe automotive is a trillion dollar opportunity for Nvidia," Kani said.

    GM is the latest car company to strike a deal with Nvidia. The chipmaker has made serious inroads in the auto industry in recent years, including partnerships with Jaguar Land Rover, Volvo, Mercedes-Benz, Hyundai, Lucid, Toyota, Zoox, and a host of Chinese EV startups.
  • Blender 4.4 Released
    gamefromscratch.com
    Blender 4.4 Released / News / March 18, 2025

    Alongside the long-awaited GIMP 3 release, we have another open-source icon making a major release: Blender 4.4. The release is heavily focused on improving and refining the core experience of using Blender, paying down old technical debt, fixing bugs, and more. That isn't to say there aren't new features here, like the new slotted Action system, a vastly improved video editor, and new sculpting brush and modeling tools.

    Details of the release focus, from the Blender 4.4 release notes:

    "Blender 4.4 is all about stability. During the 2024-2025 northern hemisphere winter, Blender developers doubled down on quality and stability in a group effort called Winter of Quality. In just a few months, developers fixed over 700 reported issues, revisited old bug reports, and addressed unreported problems. Alongside bug fixes, Winter of Quality also included tackling technical debt and improving documentation."

    [Chart: number of high-severity bugs since January 1st, 2025, annotated with the Winter of Quality period]

    Key Links:
    - Blender 4.4 Release Notes
    - Winter of Quality Blog Post
    - Blender Homepage

    You can learn more about the Blender 4.4 release in the video below.
  • Building a Retrieval-Augmented Generation (RAG) System with FAISS and Open-Source LLMs
    www.marktechpost.com
    Retrieval-augmented generation (RAG) has emerged as a powerful paradigm for enhancing the capabilities of large language models (LLMs). By combining LLMs creative generation abilities with retrieval systems factual accuracy, RAG offers a solution to one of LLMs most persistent challenges: hallucination.In this tutorial, well build a complete RAG system using:FAISS (Facebook AI Similarity Search), as our vector databaseSentence Transformers for creating high-quality embeddingsAn open-source LLM from Hugging Face (well use a lightweight model compatible with CPU)A custom knowledge base that well createBy the end of this tutorial, youll have a functioning RAG system that can answer questions based on your documents with improved accuracy and relevance. This approach is valuable for building domain-specific assistants, customer support systems, or any application where grounding LLM responses in specific documents is important.Let us get started.Step 1: Setting Up Our EnvironmentFirst, we need to install all the required libraries. For this tutorial, well use Google Colab.# Install required packages!pip install -q transformers==4.34.0!pip install -q sentence-transformers==2.2.2!pip install -q faiss-cpu==1.7.4!pip install -q accelerate==0.23.0!pip install -q einops==0.7.0!pip install -q langchain==0.0.312!pip install -q langchain_community!pip install -q pypdf==3.15.1Lets also check if we have access to a GPU, which will speed up our model inference:import torch# Check if GPU is availableprint(f"GPU available: {torch.cuda.is_available()}")if torch.cuda.is_available(): print(f"GPU name: {torch.cuda.get_device_name(0)}")else: print("Running on CPU. We'll use a CPU-compatible model.")Step 2: Creating Our Knowledge BaseFor this tutorial, well create a simple knowledge base about AI concepts. 
In a real-world scenario, you can instead import PDF documents, web pages, or database content.

```python
import os
import tempfile

# Create a temporary directory for our documents
docs_dir = tempfile.mkdtemp()
print(f"Created temporary directory at {docs_dir}")

# Create sample documents about AI concepts
documents = {
    "vector_databases.txt": """
    Vector databases are specialized database systems designed to store, manage,
    and search vector embeddings efficiently. They are crucial for machine learning
    applications, particularly those involving natural language processing and
    image recognition.

    Key features of vector databases include:
    1. Fast similarity search using algorithms like HNSW, IVF, or exact search
    2. Support for various distance metrics (cosine, euclidean, dot product)
    3. Scalability for handling billions of vectors
    4. Often support for metadata filtering alongside vector search

    Popular vector databases include FAISS (Facebook AI Similarity Search),
    Pinecone, Weaviate, Milvus, and Chroma. FAISS specifically was developed by
    Facebook AI Research and is an open-source library for efficient similarity search.
    """,
    "embeddings.txt": """
    Embeddings are dense vector representations of data in a continuous vector space.
    They capture semantic meaning and relationships between entities by positioning
    similar items closer together in the vector space.

    Types of embeddings include:
    1. Word embeddings (Word2Vec, GloVe)
    2. Sentence embeddings (Universal Sentence Encoder, SBERT)
    3. Document embeddings
    4. Image embeddings
    5. Audio embeddings

    Embeddings are created through various techniques, including neural networks
    trained on specific tasks. Modern embedding models like those from OpenAI,
    Cohere, or Sentence Transformers can capture nuanced semantic relationships.

    The dimensionality of embeddings typically ranges from 100 to 1536 dimensions,
    with higher dimensions often capturing more information but requiring more
    storage and computation.
    """,
    "rag_systems.txt": """
    Retrieval-Augmented Generation (RAG) is an AI architecture that combines
    information retrieval with text generation.

    The RAG process typically works as follows:
    1. User query is converted into an embedding vector
    2. Similar documents or passages are retrieved from a knowledge base using vector similarity
    3. Retrieved content is provided as context to the language model
    4. The language model generates a response informed by both its parameters and the retrieved information

    Benefits of RAG include:
    1. Reduced hallucination compared to pure generative approaches
    2. Up-to-date information without model retraining
    3. Attribution of information sources
    4. Lower computation costs than increasing model size

    RAG systems can be enhanced through techniques like reranking, query
    reformulation, and hybrid search approaches.
    """
}

# Write documents to files
for filename, content in documents.items():
    with open(os.path.join(docs_dir, filename), 'w') as f:
        f.write(content)

print(f"Created {len(documents)} documents in {docs_dir}")
```

Step 3: Loading and Processing Documents

Now, let's load these documents and process them for our RAG system:

```python
from langchain_community.document_loaders import TextLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter

# Initialize a list to store our documents
all_documents = []

# Load each text file
for filename in documents.keys():
    file_path = os.path.join(docs_dir, filename)
    loader = TextLoader(file_path)
    loaded_docs = loader.load()
    all_documents.extend(loaded_docs)

print(f"Loaded {len(all_documents)} documents")

# Split documents into chunks
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=500,
    chunk_overlap=50,
    separators=["\n\n", "\n", ".", " ", ""]
)
document_chunks = text_splitter.split_documents(all_documents)
print(f"Created {len(document_chunks)} document chunks")

# Let's look at a sample chunk
print("\nSample chunk content:")
print(document_chunks[0].page_content)
print(f"Source: {document_chunks[0].metadata}")
```

Step 4: Creating Embeddings

Now, let's convert our document chunks into vector embeddings:

```python
from sentence_transformers import SentenceTransformer
import numpy as np

# Initialize the embedding model
model_name = "sentence-transformers/all-MiniLM-L6-v2"  # A good balance of speed and quality
embedding_model = SentenceTransformer(model_name)
print(f"Loaded embedding model: {model_name}")
print(f"Embedding dimension: {embedding_model.get_sentence_embedding_dimension()}")

# Create embeddings for all document chunks
texts = [doc.page_content for doc in document_chunks]
embeddings = embedding_model.encode(texts)
print(f"Created {len(embeddings)} embeddings with shape {embeddings.shape}")
```

Step 5: Building the FAISS Index

Now we'll build our FAISS index with these embeddings:

```python
import faiss

# Get the dimensionality of our embeddings
dimension = embeddings.shape[1]

# Create a FAISS index - we'll use a simple flat L2 index for demonstration
# For larger datasets, consider indexes like IVF or HNSW for better performance
index = faiss.IndexFlatL2(dimension)  # L2 is Euclidean distance

# Add our vectors to the index
index.add(embeddings.astype(np.float32))  # FAISS requires float32
print(f"Created FAISS index with {index.ntotal} vectors")

# Create a mapping from index position to document chunk for retrieval
index_to_doc_chunk = {i: doc for i, doc in enumerate(document_chunks)}
```

Step 6: Loading a Language Model

Now let's load an open-source language model from Hugging Face.
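As a brief aside before we load the model: the flat index in Step 5 ranks results by Euclidean (L2) distance, but if you L2-normalize the embeddings first, that ranking is identical to ranking by cosine similarity, since for unit vectors ||a - b||^2 = 2 - 2*cos(a, b). This is my own numpy check of that equivalence, not part of the tutorial's pipeline:

```python
import numpy as np

# Random stand-ins for document and query embeddings
rng = np.random.default_rng(0)
vecs = rng.normal(size=(5, 8)).astype(np.float32)
query = rng.normal(size=(8,)).astype(np.float32)

# L2-normalize rows and the query so Euclidean ranking matches cosine ranking
unit = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
q = query / np.linalg.norm(query)

l2_order = np.argsort(((unit - q) ** 2).sum(axis=1))  # ascending distance
cos_order = np.argsort(-(unit @ q))                   # descending similarity
print(np.array_equal(l2_order, cos_order))            # prints True
```

In practice this is why you often see normalized embeddings paired with either an L2 index or an inner-product index: on unit vectors the two orderings agree.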
We'll use a smaller model that works well on CPU:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# We'll use a smaller model that works on CPU
model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float32,  # Use float32 for CPU compatibility
    device_map="auto"           # Will use CPU if a GPU is not available
)
print(f"Successfully loaded {model_id}")
```

Step 7: Creating Our RAG Pipeline

Let's create a function that combines retrieval and generation:

```python
def rag_response(query, index, embedding_model, llm_model, llm_tokenizer,
                 index_to_doc_map, top_k=3):
    """
    Generate a response using the RAG pattern.

    Args:
        query: The user's question
        index: FAISS index
        embedding_model: Model to create embeddings
        llm_model: Language model for generation
        llm_tokenizer: Tokenizer for the language model
        index_to_doc_map: Mapping from index positions to document chunks
        top_k: Number of documents to retrieve

    Returns:
        response: The generated response
        sources: The source documents used
    """
    # Step 1: Convert the query to an embedding
    query_embedding = embedding_model.encode([query])
    query_embedding = query_embedding.astype(np.float32)  # Convert to float32 for FAISS

    # Step 2: Search for similar documents
    distances, indices = index.search(query_embedding, top_k)

    # Step 3: Retrieve the actual document chunks
    retrieved_docs = [index_to_doc_map[idx] for idx in indices[0]]

    # Create context from retrieved documents
    context = "\n\n".join([doc.page_content for doc in retrieved_docs])

    # Step 4: Create the prompt for the LLM (TinyLlama chat format)
    prompt = f"""<|system|>
You are a helpful AI assistant. Answer the question based only on the provided context.
If you don't know the answer based on the context, say "I don't have enough information to answer this question."

Context:
{context}
<|user|>
{query}
<|assistant|>"""

    # Step 5: Generate a response from the LLM
    input_ids = llm_tokenizer(prompt, return_tensors="pt").input_ids.to(llm_model.device)
    generation_config = {
        "max_new_tokens": 256,
        "temperature": 0.7,
        "top_p": 0.95,
        "do_sample": True
    }

    # Generate the output
    with torch.no_grad():
        output = llm_model.generate(input_ids=input_ids, **generation_config)

    # Decode the output
    generated_text = llm_tokenizer.decode(output[0], skip_special_tokens=True)

    # Extract the assistant's response (remove the prompt)
    response = generated_text.split("<|assistant|>")[-1].strip()

    # Return both the response and the sources
    sources = [(doc.page_content, doc.metadata) for doc in retrieved_docs]
    return response, sources
```

Step 8: Testing Our RAG System

Let's test our system with some questions:

```python
# Define some test questions
test_questions = [
    "What is FAISS and what is it used for?",
    "How do embeddings capture semantic meaning?",
    "What are the benefits of RAG systems?",
    "How does vector search work?"
]

# Test our RAG pipeline
for question in test_questions:
    print(f"\n\n{'='*50}")
    print(f"Question: {question}")
    print(f"{'='*50}\n")

    response, sources = rag_response(
        query=question,
        index=index,
        embedding_model=embedding_model,
        llm_model=model,
        llm_tokenizer=tokenizer,
        index_to_doc_map=index_to_doc_chunk,
        top_k=2  # Retrieve the top 2 most relevant chunks
    )

    print(f"Response: {response}\n")
    print("Sources:")
    for i, (content, metadata) in enumerate(sources):
        print(f"\nSource {i+1}:")
        print(f"Metadata: {metadata}")
        print(f"Content snippet: {content[:100]}...")
```

Step 9: Evaluating and Improving Our RAG System

Let's implement a simple evaluation function to assess the performance of our RAG system:

```python
def evaluate_rag_response(question, response, retrieved_sources, ground_truth_sources=None):
    """
    Simple evaluation of RAG response quality.

    Args:
        question: The query
        response: Generated response
        retrieved_sources: Sources used for generation
        ground_truth_sources: (Optional) Known correct sources

    Returns:
        evaluation metrics
    """
    # Basic metrics
    response_length = len(response.split())
    num_sources = len(retrieved_sources)

    # Simple relevance score - we'd use better methods in production
    source_relevance = []
    for content, _ in retrieved_sources:
        # Count overlapping words between the question and the source
        q_words = set(question.lower().split())
        s_words = set(content.lower().split())
        overlap = len(q_words.intersection(s_words))
        source_relevance.append(overlap / len(q_words) if q_words else 0)

    avg_relevance = sum(source_relevance) / len(source_relevance) if source_relevance else 0

    return {
        "response_length": response_length,
        "num_sources": num_sources,
        "source_relevance_scores": source_relevance,
        "avg_relevance": avg_relevance
    }

# Evaluate one of our previous responses
question = test_questions[0]
response, sources = rag_response(
    query=question,
    index=index,
    embedding_model=embedding_model,
    llm_model=model,
    llm_tokenizer=tokenizer,
    index_to_doc_map=index_to_doc_chunk,
    top_k=2
)

# Run the evaluation
eval_results = evaluate_rag_response(question, response, sources)
print(f"\nEvaluation results for question: '{question}'")
for metric, value in eval_results.items():
    print(f"{metric}: {value}")
```

Step 10: Advanced RAG Techniques: Query Expansion

Let's implement query expansion to improve retrieval:

```python
# Here's the implementation of the expand_query function:
def expand_query(original_query, llm_model, llm_tokenizer):
    """
    Generate multiple search queries from an original query to improve retrieval.

    Args:
        original_query: The user's original question
        llm_model: The language model for generating variations
        llm_tokenizer: Tokenizer for the language model

    Returns:
        List of query variations including the original
    """
    # Create a prompt for query expansion
    prompt = f"""<|system|>
You are a helpful assistant. Generate two alternative versions of the given search query.
The goal is to create variations that might help retrieve relevant information.
Only list the alternative queries, one per line. Do not include any explanations, numbering, or other text.
<|user|>
Generate alternative versions of this search query: "{original_query}"
<|assistant|>"""

    # Generate variations
    input_ids = llm_tokenizer(prompt, return_tensors="pt").input_ids.to(llm_model.device)
    with torch.no_grad():
        output = llm_model.generate(
            input_ids=input_ids,
            max_new_tokens=100,
            temperature=0.7,
            do_sample=True
        )

    # Decode the output
    generated_text = llm_tokenizer.decode(output[0], skip_special_tokens=True)

    # Extract the generated variations
    response_part = generated_text.split("<|assistant|>")[-1].strip()

    # Split the response by lines to get individual variations
    variations = [line.strip() for line in response_part.split('\n') if line.strip()]

    # Ensure we have at least some variations
    if not variations:
        variations = [original_query]

    # Add the original query and return the list with duplicates removed
    all_queries = [original_query] + variations
    return list(dict.fromkeys(all_queries))  # Remove duplicates while preserving order
```

Step 11: Testing Our expand_query Function

Let's try the expand_query function on an example query:

```python
# Example usage of the expand_query function
test_query = "How does FAISS help with vector search?"

# Generate query variations
expanded_queries = expand_query(
    original_query=test_query,
    llm_model=model,
    llm_tokenizer=tokenizer
)

print(f"Original Query: {test_query}")
print("Expanded Queries:")
for i, query in enumerate(expanded_queries):
    print(f"  {i+1}. {query}")
```

Now we retrieve documents for every query variation and merge the results, scoring each document by 1/(1+distance) and keeping the maximum score when a document is retrieved by more than one variation:

```python
# Enhanced RAG with query expansion
all_scores = {}

# Retrieve documents for each query variation
for query in expanded_queries:
    # Get the query embedding
    query_embedding = embedding_model.encode([query]).astype(np.float32)

    # Search the FAISS index
    distances, indices = index.search(query_embedding, 3)

    # Track document scores across queries (using 1/(1+distance) as the score)
    for idx, dist in zip(indices[0], distances[0]):
        score = 1.0 / (1.0 + dist)
        if idx in all_scores:
            # Take the max score if a document is retrieved by multiple query variations
            all_scores[idx] = max(all_scores[idx], score)
        else:
            all_scores[idx] = score

# Get the top documents based on scores
top_indices = sorted(all_scores.keys(), key=lambda idx: all_scores[idx], reverse=True)[:3]
expanded_retrieved_docs = [index_to_doc_chunk[idx] for idx in top_indices]

print("\nRetrieved documents using query expansion:")
for i, doc in enumerate(expanded_retrieved_docs):
    print(f"\nResult {i+1}:")
    print(f"Source: {doc.metadata['source']}")
    print(f"Content snippet: {doc.page_content[:150]}...")

# Now use these documents with the LLM to generate a response
context = "\n\n".join([doc.page_content for doc in expanded_retrieved_docs])

# Create the prompt for the LLM
prompt = f"""<|system|>
You are a helpful AI assistant. Answer the question based only on the provided context.
If you don't know the answer based on the context, say "I don't have enough information to answer this question."

Context:
{context}
<|user|>
{test_query}
<|assistant|>"""

# Generate the response
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
with torch.no_grad():
    output = model.generate(
        input_ids=input_ids,
        max_new_tokens=256,
        temperature=0.7,
        top_p=0.95,
        do_sample=True
    )

# Extract the response
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
response = generated_text.split("<|assistant|>")[-1].strip()
print("\nFinal RAG Response with Query Expansion:")
print(response)
```

Output: "FAISS can handle a wide range of vector types, including text, image, and audio, and can be integrated with popular machine learning frameworks such as TensorFlow, PyTorch, and Sklearn."

Conclusion

In this tutorial, we have built a complete RAG system using FAISS as our vector database and an open-source LLM. We implemented document processing, embedding generation, and vector indexing, and integrated these components with query expansion to improve retrieval quality. Further, we can consider:
- Implementing query reranking with cross-encoders
- Creating a web interface using Gradio or Streamlit
- Adding metadata filtering capabilities
- Experimenting with different embedding models
- Scaling the solution with more efficient FAISS indexes (HNSW, IVF)
- Fine-tuning the LLM on your domain-specific data

Useful resources: Here is the Colab Notebook.

Mohammad Asjad: Asjad is an intern consultant at Marktechpost. He is pursuing a B.Tech in mechanical engineering at the Indian Institute of Technology, Kharagpur.
Asjad is a machine learning and deep learning enthusiast who is always researching the applications of machine learning in healthcare.
  • Ballerina: Exclusive Photos From the John Wick Spin-Off
    www.ign.com
IGN can exclusively reveal four new photos from the upcoming John Wick universe movie Ballerina featuring cast members Ana de Armas, Ian McShane, Norman Reedus, and Anjelica Huston. Our exclusive comes ahead of the next Ballerina trailer dropping tomorrow, March 19, at 7am PT. The new trailer will be revealed as part of a livestream fan event across the film's and Lionsgate's social platforms.

In Ballerina, de Armas stars as Eve Macarro, an assassin training in the traditions of the Ruska Roma, one of the organizations that has a seat at the High Table. The Ruska Roma are led by Huston's character, The Director, and operate out of the Tarkovsky Theater. The Director is also John Wick's adoptive mother, as was revealed in John Wick: Chapter 3 - Parabellum.

"Ballerina takes place in the John Wick world after the end of John Wick Chapter 3 and before the start of John Wick Chapter 4," John Wick star Keanu Reeves, who cameos in Ballerina, explained in a statement to IGN. "It was so great to have the chance to play John Wick again and discover more of the John Wick world."

Scroll through the gallery below for exclusive images from Ballerina.

From the World of John Wick: Ballerina

In an e-mail interview with IGN, de Armas sang the praises of both Reeves and the John Wick franchise. "I have had the pleasure of working with Keanu three times already, and Ballerina was particularly special because it was me stepping into his world. The world of John Wick, which is so loved by the fans. I respect and appreciate Keanu as an artist, as a person and as a friend; he's a leader and force on set, and always a gentleman," de Armas wrote. "His creativity and input during the rehearsal is always so inspiring; he knows these films better than anyone and I loved having him by my side. When I saw him walking on set dressed as John, it was pretty surreal. I'll never forget that day. Keanu has always been a huge influence on me, since the day we met.
I never imagined I was going to work with him again; and here we are, living a really cool full circle moment."

Eve has been trained to be the best among the best, by the same people that trained John Wick.

So who exactly is the Ballerina? "Eve Macarro was recruited and trained by the Ruska Roma to be a ballerina and assassin. She had a very traumatic experience as a child when she witnessed some men killing her father," de Armas wrote. "The details about these men have been kept from her for years, but Eve cannot let it go, so she embarks on a journey to find them and take vengeance, violating pacts and rules that will bring very bad consequences for her."

De Armas continued: "Eve has been trained to be the best among the best, by the same people that trained John Wick, but she's a woman, smaller than most men in the film, so I wanted to find a way of building the fights embracing that quality, using it to my advantage. I wanted to be realistic with the efforts she makes to stay alive and the pain she's feeling every step of the way. Because she's a ballerina, she's very good with close combat fights; she's fast and always three steps ahead, using real-life objects as weapons, turning simple things into something deadly, which is a quality that I love. This also allowed us a sort of freedom when creating fun and interesting fights; every set piece was our playground."

Ballerina is the first movie in the John Wick franchise not to be directed by Chad Stahelski, with Len Wiseman (the Underworld series, Live Free or Die Hard) stepping behind the camera this time. IGN recently asked Wiseman about the challenge of adopting the distinctive aesthetic of such a popular existing franchise while also attempting to put his own directorial stamp on it: The tone and aesthetic of John Wick is largely what I love and respect about the franchise.
So my approach with Ballerina was multi-layered; I wanted it to feel familiar and connected in ways that fans like myself would hope it to be, while bringing in many new and creative layers of my own as a filmmaker. It was a challenging balance, but one I was excited to tackle. I had fun with it.

Ballerina hits theaters June 6th.
  • Assassin's Creed Shadows Review
    www.ign.com
It's wild that it took almost 20 years and dozens of games for the biggest stealth action series around to finally bend towards feudal Japan. Assassin's Creed Shadows makes the most of that theme, with a great pair of shinobi and samurai heroes sharing center stage who are well-written and fun to skulk through giant castles or wade into vicious battles with. Besides the setting, the bulk of the changes this time focus on making smaller tweaks to well-established systems, such as less cluttered maps and skill trees, while also doubling down on things that really worked in 2023's Assassin's Creed Mirage, like the more focused and tougher combat that accompanies its better-paced main quests. It's not a perfect reset, as imbalances and missed opportunities abound, but I feel more confident than ever that Assassin's Creed could be back and here to stay.

Like a river in the rainy season, Shadows' story overflows with cliches that are signature to fiction set in this era. Warriors wander the land to bring honor to themselves and their masters. Absent rulers let wealthy bureaucrats exploit the poor. Bandits hold the countryside in the cold grip of fear. If you're a fan of James Clavell's Shogun or the excellent movies of Akira Kurosawa, you have certainly seen the bulk of what protagonists Yasuke and Naoe are made to navigate. This isn't a bad thing, and morally complex intersecting plots still keep the intrigue high, which is the same trick that made Assassin's Creed Valhalla's stories work when they did. I don't think I was particularly wowed by the writing on a regular basis, but there are some standout moments of tense reflection and curious happenings sprinkled throughout. The typical Assassin's Creed conspiracy woven into it fits perfectly within the war-torn Sengoku period of Japan, too, like a hidden blade snugly in its wrist sheath.

Assassin's Creed Shadows Review Screenshots

The leads themselves are wonderful.
You spend a lot of the early game with the sharp-witted and broody Naoe, who is among the last shinobi warriors of the Iga clan, a role thrust upon her by tragedy. That tragedy befell her in part by the hands of the charismatic hulk Yasuke, who is a tireless warrior for justice and peace. When they begin working together, they are frequently each other's most reliable counsel, with sound and often different perspectives on the events going on around them. In other words, they truly do balance one another, and while I don't think either one would win popularity contests against other series stars like Ezio or Edward, together they serve as the bright light in the center of a largely dark tale of revenge.

The story is organized in a way that can be enjoyed in pieces and at your leisure without getting too lost between plot points.

The story overall is paced similarly to Valhalla, where the cardinal reason to be in each of the nine regions of the map is to play through a mostly self-contained chapter. That said, Shadows does a better job of making sure at least some story elements and characters don't just completely vanish when you leave a region the way they did in its predecessor. Not every new lord or businessman you meet becomes completely irrelevant after you've solved their problems. I also found these sections, and the overall time it took to move from chapter to chapter, to be more brisk and less filled with frustrating filler than past games. It's still a bit more full of "go here, do that" as bridges between major moments than I'd prefer, but it's organized in a way that can be enjoyed in pieces and at your leisure without getting too lost between plot points, almost like how one might read a good book.

Most of the missions in Shadows start on the objective board, a bigger and more elaborate chart of people that need assistance and targets that need eliminating, adapted from Assassin's Creed Mirage.
Thematically, this approach matches the tone of using all the information you gather to identify hidden members of the secret society trying to plunge Japan into chaos. Functionally, the way it organizes outstanding tasks and the people involved is far more useful than the old bulleted quest lists. It does trade some of the magic of exploration away as a cost of this efficacy, though. More than once I organically stumbled across a jerk that couldn't be talked down, just to kill them and find not only his crossed-out profile tacked to my board, but also the exact number of remaining silhouettes of the gang I had no idea they were a part of until right then. But it's a trade I would make every time.

Selecting a quest gave me a short list of clues to help discern where the objective was, which is easier to figure out depending on how well I'd searched that part of the map already. Past games have given hints to identify targets like this before, hoping to create some friction between you and the effort to find your quarry, but Shadows is the first one that I felt constantly made me look at my map and actually deduce where the spot in question might be by using those clues and some educated guesses. I could use scouts, one of the assets you can develop at your hideout, to assist in the narrowing process, pinging an area on the map and highlighting unidentified objectives in the zone. This doesn't reveal hidden locations or features of the map outside of just a marker, though, so it's a bad way to clear fog of war from a distance.
It will also cost one scout whether they find something or not, and scouts are replenished in very few ways, so scouting can be a real risk if you're trying to make progress in the main story, especially early on.

I felt compelled to just ride through the countryside and genuinely explore.

Rather than lighting up your map with a galaxy of tooltips, Shadows mostly relies on sparse point-of-interest icons to push you towards the areas you'll need to see the finer details of in person. Even when you climb up to the signature highpoints to take a good long look at your surroundings, what you'll see is a bevy of nondescript icons that tell you that something is out there, but you're gonna have to hop down from that perch and go check them out for yourself to know what. I love this: I could feel my brain starting to detangle the checklist conditioning that years of these games had instilled in me. Not only did I feel compelled to just ride through the countryside and genuinely explore stuff without much expectation of grand rewards, I also felt no nagging compulsion to check off every possible thing to do in a region inorganically.

Jarrett Green's Top 10 Assassin's Creed Games

Most of these undiscovered locations fall into one of a number of reliable categories, like castles you can infiltrate and attempt to steal special gear from, or any of the many villages scattered across Japan, but you can't be sure unless you take it in for yourself. A common thing I would always stop to handle whenever I came across them were world activities: these are smaller locations and events that, when completed, add knowledge points to your characters, increasing their knowledge levels and adding new options to their skill trees. Not all of these events are exciting, with running around temples to find missing scroll pages being my least favorite, but they often don't take too long and the points are worth it in the end.
And in the case of something like the horse archery challenges, they can add an interesting distraction from the action for a short spell.

I was absolutely flooded by the cosmetics I unlocked just in the natural course of completing tasks and looting.

Between outings, I spent some time at the hideout, this iteration of Valhalla's Ravensthorpe settlement. After collecting minerals, crops, and wood out in the world, you can use those resources to build and upgrade important buildings here that give you access to new assets. I spent the majority of my time at the forge managing my equipment, while other important buildings provide more passive additions or have features that can be managed in places outside of the hideout, like the new summoning ability from the dojo, which let me call in help from certain allies I met during my adventure. I'm glad I didn't have to dote on this place very much, as I personally can't be bothered to decorate a homestead, but for those interested in that sort of thing, I was absolutely flooded by the cosmetics I unlocked just in the natural course of completing tasks and looting, so you'll never be starved for options to spice the place up.

What We Said About Assassin's Creed Mirage

Assassin's Creed Mirage's return to the stealthy style that launched this series doesn't do everything right, but everything it does feels like it was done with purpose. This means a shorter game with a smaller map, fewer collectibles, smaller scope in combat, and a limited selection of gear to play with, all of which I found refreshing relative to the arguably bloated scale of 100-hour games like Odyssey and Valhalla. It also means an overly simplistic plot with mostly forgettable characters, but what the story lacks in depth it makes up for with its straightforward quest progression and fast pacing.
Though there's no big standout wow moment, Baghdad is a beautiful location in its own right, and the world's detail is focused inward, making every alley and hovel feel well traveled and full of detail and history. I'd recommend Mirage to anyone who's lapsed on Assassin's Creed, as its back-to-basics approach is a successful first step in returning the feeling that the earlier industry-defining games gave me so long ago. - Jarrett Green, October 4, 2023. Score: 8. Read the full Assassin's Creed Mirage review.