UNITY.COM
The Game Kitchen on 3 technical challenges making The Stone of Madness

Earlier this year, The Game Kitchen launched The Stone of Madness, a tactical RPG where players help five inmates escape from an inquisitorial prison. In this guest post, three devs from the studio share how they tackled rendering, UI, and testing challenges during development.

We're The Game Kitchen, and we recently released The Stone of Madness on PC and consoles. We want to share some of the most pressing challenges we faced during the development of our latest project, approaching them from a technical perspective with practical examples. In this collaborative article, our programming team breaks down key solutions we implemented in Unity to optimize both performance and development efficiency.

First, Adrián de la Torre (graphics programmer) will explain how we designed and rendered the game's art pipeline to achieve its distinctive visual style. Next, Alberto Martín (UI programmer) will detail how we leveraged Noesis to streamline UI development, enhancing the workflow with UX improvements based on user feedback. Finally, Raúl Martón (gameplay programmer) will showcase how we externalized and automated tests for complex in-game actions on a server, ensuring that multiple corner cases were handled without disrupting integration.

Making madness look good: A look at the custom render pipeline
Adrián de la Torre, Graphics Programmer, The Game Kitchen

The Stone of Madness combines 2D visuals with 3D gameplay mechanics, which presents a unique technical challenge. While players see a 2D world, the game's underlying systems operate in three-dimensional space, creating a distinctive duality in its design.

To address this challenge, our development team created a custom rendering pipeline that effectively bridges the gap between 3D gameplay information and 2D visual representation. This solution implements multiple rendering passes and specialized techniques to maintain visual consistency while preserving the intended gameplay depth, allowing for seamless translation of 3D elements into the game's distinctive 2D art style.

In The Stone of Madness, there are two main scenarios that contribute to the rendering of a frame. The first scenario, which we call the Proxy Scenario, is comprised of geometric primitives that calculate the lighting of the final frame. The second scenario is the Canvas Scenario, which consists of sprites that match the Proxy geometry's shape and position. The Canvas is arranged in layers to simulate 3D space and achieve proper Z-sorting with moving game elements.

The following section details each step in our graphics pipeline for frame rendering.

1. Cone of vision

Whenever a cone of vision or game ability is enabled, it initiates the first step in the pipeline. We position a camera at the NPC's point of view (PoV) to render the depth of proxies within its field of view (FoV). Then, in another render texture, the camera outputs a gradient of the distance from the player's origin in the B channel, which is used for skill area effects. Using the NPC's PoV render texture, the cone of vision camera renders a cone over the previous texture in the R and G channels with information about obstacles and distance. The final pass renders sound waves in the Alpha channel.

This is the final texture created in this step, which will be used in the Canvas Camera step to render the scene's sprites.

2. Canvas Render ID Camera

Each proxy in our project has an associated Render ID (a float value). The proxy and its related sprite share the same Render ID. In this step, we render the Render ID float value into a render texture. In the subsequent step, we use this texture to match the lighting information calculated in the Proxy Scenario with the sprites in the Canvas Scenario.
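Purely as an illustration of that matching step (the real pass is a Unity shader; the arrays, shapes, and ID values below are invented for the example), here is a CPU-side sketch in Python/NumPy of the idea: lighting computed from the Proxy Scenario is applied to a sprite only where the Render ID texture matches that sprite's proxy.

```python
import numpy as np

# Illustrative only: the real matching happens in a Unity shader pass.
# id_texture:     HxW float Render IDs written by the Canvas Render ID camera
# proxy_lighting: HxW lighting values computed from the Proxy Scenario

H, W = 4, 4
id_texture = np.array([[1.0, 1.0, 2.0, 2.0]] * H)            # two proxies side by side
proxy_lighting = np.linspace(0.2, 1.0, H * W).reshape(H, W)   # fake lighting buffer

def lighting_for_sprite(sprite_id: float) -> np.ndarray:
    """Keep proxy lighting only where the screen pixel belongs to this sprite's proxy."""
    mask = np.isclose(id_texture, sprite_id)
    return np.where(mask, proxy_lighting, 0.0)

print(lighting_for_sprite(1.0))  # lighting that would shade the sprite with Render ID 1.0
```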
3. Lighting

The lighting in our game consists of:

Baked lighting: Natural lights that remain permanently active, such as exterior lighting
Mixed lighting: Static lights in the scene that can be toggled on and off, such as candles
Real-time lighting: Light that moves throughout the scene and can be toggled on and off (we implemented this in only one instance, Alfredo's oil lamp)

Using the Render ID texture, we create a render texture containing the lighting information from the proxy scene.

4. Canvas Camera

After creating all render textures, a camera begins rendering the sprites with information about lighting, skill areas of effect, cones of vision, and noise waves.

5. Post-processing

Color grading, vignetting, and other effects are applied in a post-processing pass.

6. UI

Finally, the UI is overlaid.

Madness in the HUD: Speeding up UI processes
Alberto Martín, UI Programmer, The Game Kitchen

The final release version of The Stone of Madness features over 50 user interfaces. The reason behind that number is that this game has a lot of data to show the user. Our UI work was very time consuming, especially with how small the team was at the start, and so we were continuously optimizing our processes to ensure we were achieving good results in as little time as possible.

Our UI work spanned the whole project, so it was important that our UI/UX designers clearly understood all the features we needed to implement. To ensure that our game provided a good user experience and was fun to play, we were careful to keep an open line of communication between the programming and design teams. To create the best versions of all of our UI components, we needed to remove the silos between our technical teams and our creative/research teams so everyone was actively involved in the game's development. Here's how we approached this two-part workflow.

Research and creative's role in UI design

Our UI/UX designers are responsible for defining how UI elements will look in the final game, and ensuring we deliver a satisfying user experience. With this in mind, they began by creating each element with minimal technical load and validating it with potential users. That process looked like this:

Requisites: Understanding the player's needs and creating a list of the game's needs and user goals
Investigation: Looking at other games to see how they handled similar problems
Wireframes: Working on the schematics and the structure (no final art at this point)
Mock-up: At this point, we mount the almost fully designed interface with previously created elements (buttons, scrolls, frames, etc.), allowing us to iterate without much effort
Prototype: We build a prototype in Figma using our mock-up, simulating interactions with gamepads and keyboard/mouse to show how it will work in a real environment
User test: Using our previously created prototype, we run a user test, validating the needs and goals we identified in Step 1
Iteration phase: If the user test meets expectations, the design is passed on to the technical implementation process; otherwise we make more iterations or perform further testing if needed

Technical UI implementation

As mentioned previously, the number of UI elements in The Stone of Madness is huge.
Developing a UI engine is expensive, so we needed to use a framework that was easy to learn with decent tools and workflows. After evaluating a range of middleware, we chose Noesis GUI, which follows the Model-View-ViewModel (MVVM) pattern.

We chose Noesis because it's based on WPF (Windows Presentation Foundation) and follows the MVVM model in such a way that we can reuse most documentation, bibliography, forum entries, and so on to troubleshoot the majority of issues. This framework has been around for a while – it's now 18 years since its first release – and is familiar to a large number of UI devs, which gives our studio the option to hire from a comparatively larger talent pool to implement interfaces and tools for our projects. Another important thing about Noesis is that we can use the same tools from WPF.

With XAML, our UI creative team was involved in layout work and polishing all the elements with minimal technical involvement. Thanks to the MVVM approach, our technical UI programmers could focus on functionality and provide support to the creative teams in certain areas when necessary.

Testing (or, how not to go mad creating a game with a systemic design)
Raul Martón, Gameplay Programmer, Teku Studios

Gameplay in The Stone of Madness is based on three fundamental pillars: player skills, NPC AI, and scene interactions. These three systems are fundamentally intertwined, which exponentially increases the number of situations the player needs to control – and the number of scenarios we need to test.

As soon as we started the project, we realized that a traditional QA system was going to be insufficient. There were simply too many scenarios that depended on several pieces interacting with each other in a particular way, creating an uncontrolled situation. Moreover, these situations could well occur in a window of time that's just too small for a QA team to test comfortably.

To solve these problems we created a suite of automatic tests. The idea was that all the possible scenarios/situations that could occur to our development team in relation to a particular system could be accounted for and automatically tested much more efficiently in a simulated game environment.

To provide an example, one of The Stone of Madness's lead characters, Amelia Exposito, has a pickpocket ability. While implementing this skill, we initiated a series of tests to ensure:

The basic functioning of the skill was correct: When stealing from an NPC, the pickpocketing mini-game opens and the game pauses until it's over.
Less common situations are also covered: If you try to steal from an NPC while another NPC (like a guard) is watching you, or if the NPC is running, the action is impossible.

Creating an integration test

Each integration test we created required setup based on the following requirements:

1. A scene specially prepared to create this particular situation

To test the pickpocket skill, we created a scene with two guards and one player. We positioned each character so they're facing in the direction needed for the situation to be tested accurately (remember, the player can't use pickpocket if they're within the FoV of a guard). Additionally, the scene should only include the minimum components necessary to test the scenario, as extraneous elements can add noise to the measurement. This is why our example scene has no HUD, manual input system, sound effects, and so on. This step requires that the game structure is well compartmentalized, which can take some effort, but, once achieved, is well worth it! 😉

2. A test code capable of forcing the situation to be tested

Many of the situations we needed to test can be difficult and time consuming to create manually and need a code push to initiate. For example, if we want to create a test scenario to ensure our NPCs never step on mousetraps unless the NPC is moving, the chain of instructions would be:

Launch the scene
Wait one second
Spawn a mousetrap under the NPC
Wait another second
Command the NPC to start walking in any direction

This part of the project is very sensitive to any changes during development (dependent on factors like changing game specs and various unexpected scenarios), so it's critical that both the test code and resulting feedback are as clear as possible. There's nothing worse than a test that fails without giving any clear information about what's actually going wrong.

3. A reliable way of knowing whether the scenario is working as intended, or whether the test has detected an error in logic

Automated testing still requires oversight. Increasing numbers of tests with greater specificity on what's being tested can become difficult to monitor, or scenarios end up not being tested for long enough to be statistically significant. To get around these problems, we created custom tools. For example, some of our tests involved combined interactions between several NPCs in a scene. To monitor these cases properly, we created a system to log the different AI states that NPCs cycle through during the test. We also needed a good API that would give us visibility into the current game state (has an NPC been knocked unconscious? Has an NPC entered a routed state? How many times? Which player character has been captured? And so on).

4. A system to be able to launch all these tests quickly

Unlike unit tests, automated tests must be conducted with the game running in real time, which can make running them very slow. In these circumstances, we're able to take advantage of the fact that our game does not use Unity's standard update system. Instead, all of our components use a Tick() function, which simulates Unity updates but is launched in a controlled way by our game engine. This helped us achieve a couple of different goals with our tests.

First, we could speed up their execution with a forcing function that runs several frames of code for every frame of the game. Second, because these tests are conducted in real time, they're very susceptible to variations caused by the frame rate of the computer running the testing scenario. By converting them to a controlled frame rate, we avoid this variance: if a test passes on one machine, it will pass on all machines, and vice versa.
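To make that idea concrete, here is a minimal, language-agnostic sketch in Python (the game itself runs on Unity/C#, and every class and function name below is invented rather than the studio's actual code) of a controlled tick loop: a fixed timestep keeps results identical across machines, and running several ticks per rendered frame is what speeds the tests up.

```python
# Illustrative sketch of a controlled tick loop; World and tick() are stand-ins.

FIXED_DT = 1.0 / 60.0   # simulate at a fixed 60 Hz regardless of the host machine

class World:
    def __init__(self):
        self.time = 0.0

    def tick(self, dt: float) -> None:
        # Every gameplay component would advance here instead of in Unity's Update().
        self.time += dt

def run_test(world: World, seconds: float, ticks_per_frame: int = 8) -> None:
    """Advance the simulation deterministically, several ticks per rendered frame."""
    total_ticks = round(seconds / FIXED_DT)
    done = 0
    while done < total_ticks:
        for _ in range(min(ticks_per_frame, total_ticks - done)):
            world.tick(FIXED_DT)   # same dt on every machine -> same result everywhere
            done += 1
        # here the real runner would render one frame and poll test assertions

world = World()
run_test(world, seconds=2.0)
assert abs(world.time - 2.0) < 1e-6   # simulated time is exact and reproducible
```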
How secure testing helps us avoid broken builds

With the creation of this test suite, we also needed to implement a safeguard that would automatically interrupt the merge of a branch if it contained bugs. To ensure this, we created an automatic merge script that launches every time a change is committed to the main project branch. This script launches all these tests and monitors their results. If any test fails, the script reports the error and interrupts the merge. With this system, we can avoid situations where a change in an apparently isolated system breaks other mechanics it interacts with.
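The article doesn't detail how that merge script is built, but as a rough sketch, assuming the suite can be driven through Unity's command-line test runner (the editor path, project path, and results path below are illustrative), the gate can be as simple as running the tests in batch mode and failing the CI job on a non-zero exit code.

```python
#!/usr/bin/env python3
"""Minimal sketch of a pre-merge test gate.
Assumption: the test suite can be launched through Unity's command-line test runner;
the paths below are placeholders, not the studio's actual setup."""
import subprocess
import sys

UNITY = "/opt/unity/Editor/Unity"       # hypothetical editor path
PROJECT = "/builds/stone-of-madness"    # hypothetical project path
RESULTS = "/builds/test-results.xml"

def run_suite() -> int:
    cmd = [
        UNITY, "-batchmode", "-projectPath", PROJECT,
        "-runTests", "-testPlatform", "PlayMode",
        "-testResults", RESULTS,
    ]
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    code = run_suite()
    if code != 0:
        print(f"Tests failed (exit code {code}), see {RESULTS}. Blocking the merge.")
        sys.exit(1)   # non-zero exit makes the CI job (and the merge) fail
    print("All tests passed. Merge allowed.")
```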
Thank you to The Game Kitchen for sharing this behind-the-scenes look at The Stone of Madness's development. Explore more Made With Unity games on our Steam Curator page and get more developer insights on Unity's Resources page.
-
TECHCRUNCH.COM
OpenAI may be developing its own social platform, but who's it for?

OpenAI is reportedly building its own X-like social network. The project is still in the early stages, but there's an internal prototype focused on ChatGPT's image generation that contains a social feed, The Verge reports. A social app would give OpenAI its own unique, real-time data that X and Meta already use to help train their AI models.
-
VENTUREBEAT.COM
Ethically trained AI startup Pleias releases new small reasoning models optimized for RAG with built-in citations

Pleias emphasizes the models' suitability for integration into search-augmented assistants, educational tools, and user support systems.
-
WWW.THEVERGE.COM
Motorola's new Razr Ultra brings the wood back panel back

Motorola is kicking off hot foldable summer in style — and a little early this year. This time around it's offering not two but three Razr models with the introduction of the premium Razr Ultra. It comes with some notable upgrades, and all three phones get some quality-of-life improvements like a sturdier hinge. But the hardware updates are otherwise minimal while Moto leans hard into eye-catching colors and finishes for its fun flip phone.

Let's just say it up front: the Ultra costs $1,299. That's like, Samsung Galaxy Ultra territory, which is just a lot of money for a phone. For that price you get a generous 16GB of RAM and 512GB of storage in the base model and a Snapdragon 8 Elite chipset, which is the Android flagship processor du jour. The Ultra offers up to 30W wireless charging and 68W wired charging. It also has a slightly larger 7-inch inner screen than the Razr and Razr Plus, which gets a little brighter at up to 4,500 nits compared to 3,000 nits. Oh, and there's a dedicated AI button, which… more on that in a sec.

C'mon, this rules.

The Ultra shares the same 4-inch outer screen size as the Razr Plus, but the Razr Ultra has some differentiating hardware in the camera department. The main, ultrawide, and selfie cameras each have 50 megapixel sensors. The Razr and Razr Plus also offer 50-megapixel main cameras, but the Ultra's sensor uses bigger pixels: 2.0μm binned "quad pixels" versus 0.8μm on the other two.

Here's the thing though: wood grain back panel. The Ultra gets some fun new finishes, including a real wood back panel that's giving Moto X. More wood gadgets, please. There's also something called Alcantara that you may have seen once upon a time on a Microsoft Surface. It's a synthetic fabric with kind of a suede feel and comes in a dark green that I really dig, too. The Ultra also comes in a textured deep red finish and a magenta-ish pink that looks at home on the Razr line.

All three phones come with an updated hinge that uses titanium rather than stainless steel, and Motorola says it's four times stronger than the previous design. It also reduces the inner screen crease when the phone is flipped open. I never found the crease too bothersome to start with, but it practically disappears on these new phones, and you can barely feel it under your finger. It's kind of spooky.

You only see the crease when you really go looking for it.

All three Razr models come with an IP48 rating this time, like Samsung's Z Flip 6 foldables. That means they're fully water-resistant, but dust is still a concern. That "4" rating means the phones are protected against particles bigger than 1mm, but dust is smaller than that, so you'll still want to be careful if you bring your Razr to the beach. Otherwise, the standard Razr and Razr Plus are pretty minor refreshes. The Razr Plus still uses a Snapdragon 8S Gen 3 chipset like last year's model, though the Razr gets a bump up from a Dimensity 7300X to a 7400X chipset.

Remember that AI button on the Ultra? Well, Motorola is formally introducing several AI features across the Razr lineup, most of which we've seen previously in beta. They're all housed in an interface called Moto AI, which is available to all three Razrs — though only the Ultra has a physical shortcut button to get there. And honestly they look more promising than the usual AI gimmicks — though there are AI gimmicks here, too. I'm particularly interested to see how "Remember this" works. It's designed in the same vein as Pixel Screenshots and Nothing's Essential Space, offering a place to keep screenshots, photos, and notes handy without having to go find them in different apps.

You can ask Moto AI to "remember" something and then you can ask about it later. You can also prompt it with "Catch me up" to summarize recent notifications that came in while you were busy. Could be handy! Motorola has also announced a partnership with Perplexity, which you can access through Moto AI, and also helps power some predictive suggestions. But as with all AI features right now, we need to see if it actually does what it's supposed to before getting too excited.

Photo: Allison Johnson / The Verge

There's no wood grain option on the regular Razr or Razr Plus, boooo, but there are plenty of new colors and finishes to choose from. Last year's mocha mousse is back on the Razr Plus, and the standard Razr comes in a fun minty green that kept catching my eye at Moto's launch event.

The non-Ultra Razrs cost the same as their predecessors — $999 for the 2025 Razr Plus and $699 for the Razr. All three phones will be available for preorder in the US on May 7th and go on sale May 15th.

Photography by Allison Johnson / The Verge
-
TOWARDSDATASCIENCE.COM
Predicting the NBA Champion with Machine Learning

Every NBA season, 30 teams compete for something only one will achieve: the legacy of a championship. From power rankings to trade deadline chaos and injuries, fans and analysts alike speculate endlessly about who will raise the Larry O'Brien Trophy. But what if we could go beyond the hot takes and predictions, and use data and Machine Learning to, at the end of the regular season, forecast the NBA Champion?

In this article, I'll walk through this process — from gathering and preparing the data, to training and evaluating the model, and finally using it to make predictions for the upcoming 2024–25 Playoffs. Along the way, I'll highlight some of the most surprising insights that emerged from the analysis. All the code and data used are available on GitHub.

Understanding the problem

Before diving into model training, the most important step in any machine learning project is understanding the problem: what question are we trying to answer, and what data (and model) can help us get there? In this case, the question is simple: Who is going to be the NBA Champion?

A natural first idea is to frame this as a classification problem: each team in each season is labeled as either Champion or Not Champion. But there's a catch. There's only one champion per year (obviously). So if we pull data from the last 40 seasons, we'd have 40 positive examples… and hundreds of negative ones. That lack of positive samples makes it extremely hard for a model to learn meaningful patterns, especially considering that winning an NBA title is such a rare event that we simply don't have enough historical data — we're not working with 20,000 seasons. That scarcity makes it extremely difficult for any classification model to truly understand what separates champions from the rest. We need a smarter way to frame the problem.

To help the model understand what makes a champion, it's useful to also teach it what makes an almost champion — and how that differs from a team that was knocked out in the first round. In other words, we want the model to learn degrees of success in the playoffs, rather than a simple yes/no outcome. This led me to the concept of Champion Share — the proportion of playoff wins a team achieved out of the total needed to win the title. From 2003 onward, it takes 16 wins to become an NBA Champion. However, between 1984 and 2002, the first round was a best-of-five series, so during that period the total required was 15 wins. A team that loses in the first round might have 0 or 1 win (Champion Share = 1/16), while a team that makes the Finals but loses might have 14 wins (Champion Share = 14/16). The Champion has a full share of 1.0.

[Embedded tweet from @NBA, June 17, 2022: "The @warriors take home the NBA title. A final look at the bracket for the 2021-22 #NBAPlayoffs presented by Google Pixel."]
Example of playoff bracket from the 2021–22 Playoffs

This reframes the task as a regression problem, where the model predicts a continuous value between 0 and 1 — representing how close each team came to winning it all. In this setup, the team with the highest predicted value is our model's pick for the NBA Champion. This is a similar approach to the MVP prediction from my previous article, Predicting the NBA MVP with Machine Learning.
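As a minimal sketch of that target definition (based only on the description above, not on the article's actual repository):

```python
def champion_share(playoff_wins: int, season: int) -> float:
    """Champion Share: playoff wins divided by the wins needed for the title.
    From 2003 onward 16 wins are needed; from 1984 to 2002 the first round
    was best-of-five, so 15 wins were enough."""
    wins_needed = 16 if season >= 2003 else 15
    return playoff_wins / wins_needed

champion_share(16, 2022)  # champion                 -> 1.0
champion_share(14, 2022)  # lost the Finals 4-2      -> 0.875
champion_share(1, 2022)   # knocked out in round one -> 0.0625
```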
Data

Basketball — and the NBA in particular — is one of the most exciting sports to work with in data science, thanks to the volume of freely available statistics. For this project, I gathered data from Basketball Reference using my Python package BRScraper, which allows easy access to player and team data. All data collection was done in accordance with the website's guidelines and rate limits. The data used includes team-level statistics, final regular season standings (e.g., win percentage, seeding), as well as player-level statistics for each team (limited to players who appeared in at least 30 games) and historical playoff performance indicators.

However, it's important to be cautious when working with raw, absolute values. For example, the average points per game (PPG) in the 2023–24 season was 114.2, while in 2000–01 it was 94.8 — an increase of nearly 20%. This is due to a series of factors, but the fact is that the game has changed significantly over the years, and so have the metrics derived from it.

Evolution of some per-game NBA statistics (Image by Author)

To account for this shift, the approach here avoids using absolute statistics directly, opting instead for normalized, relative metrics. For example: instead of a team's PPG, you can use their ranking in that season. Instead of counting how many players average 20+ PPG, you can consider how many are in the top 10 in scoring, and so on. This enables the model to capture relative dominance within each era, making comparisons across decades more meaningful and thus permitting the inclusion of older seasons to enrich the dataset. Data from the 1984 to 2024 seasons were used to train and test the model, totaling 40 seasons, with a total of 70 variables.

Before diving into the model itself, some interesting patterns emerge from an exploratory analysis when comparing championship teams to all playoff teams as a whole:

Comparison of teams: Champions vs Rest of Playoff teams (Image by Author)

Champions tend to come from the top seeds and with higher winning percentages, unsurprisingly. The team with the worst regular season record to win it all in this period was the 1994–95 Houston Rockets, led by Hakeem Olajuwon, finishing 47–35 (.573) and entering the playoffs as only the 10th best overall team (6th in the West). Another notable trend is that champions tend to have a slightly higher average age, suggesting that experience plays a crucial role once the playoffs begin. The youngest championship team in the database, with an average of 26.6 years, is the 1990–91 Chicago Bulls, and the oldest is the 1997–98 Chicago Bulls, with 31.2 years — the first and last titles of the Michael Jordan dynasty. Similarly, teams with coaches who have been with the franchise longer also tend to find more success in the postseason.

Modeling

The model used was LightGBM, a tree-based algorithm widely recognized as one of the most effective methods for tabular data, alongside others like XGBoost. A grid search was done to identify the best hyperparameters for this specific problem. The model's performance was evaluated using the root mean squared error (RMSE) and the coefficient of determination (R²). You can find the formula and explanation of each metric in my previous MVP article. The seasons used for training and testing were randomly selected, with the constraint of reserving the last three seasons for the test set in order to better assess the model's performance on more recent data. Importantly, all teams were included in the dataset — not just those that qualified for the playoffs — allowing the model to learn patterns without relying on prior knowledge of postseason qualification.
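As a rough sketch of what this setup might look like in Python, assuming a table with one row per team-season: the column names, the tuning grid, and the simplified last-three-seasons holdout below are illustrative stand-ins for the article's real 70-variable pipeline and its randomized split.

```python
import numpy as np
import pandas as pd
import lightgbm as lgb
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import mean_squared_error, r2_score

# Hypothetical input: one row per team-season with raw stats and the champion_share target.
df = pd.read_csv("team_seasons.csv")

# Era-proof, relative features: rank within each season instead of raw values.
df["ppg_rank"] = df.groupby("season")["ppg"].rank(ascending=False)
df["net_rtg_rank"] = df.groupby("season")["net_rtg"].rank(ascending=False)

features = ["seed", "win_pct", "ppg_rank", "net_rtg_rank", "avg_age"]
train = df[df["season"] <= 2021]   # simplified: last three seasons held out for testing
test = df[df["season"] >= 2022]

# LightGBM regressor tuned with a small, illustrative grid search.
grid = GridSearchCV(
    lgb.LGBMRegressor(random_state=42),
    param_grid={"num_leaves": [15, 31],
                "learning_rate": [0.05, 0.1],
                "n_estimators": [200, 500]},
    scoring="neg_root_mean_squared_error",
    cv=5,
)
grid.fit(train[features], train["champion_share"])

pred = grid.predict(test[features])
print("RMSE:", np.sqrt(mean_squared_error(test["champion_share"], pred)))
print("R^2 :", r2_score(test["champion_share"], pred))
```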
Results

Here we can see a comparison between the "distributions" of both the predictions and the real values. While it's technically a histogram — since we're dealing with a regression problem — it still works as a visual distribution because the target values range from 0 to 1. Additionally, we also display the distribution of the residual error for each prediction.

(Image by Author)

As we can see, the predictions and the real values follow a similar pattern, both concentrated near zero — as most teams do not achieve high playoff success. This is further supported by the distribution of the residual errors, which is centered around zero and resembles a normal distribution. This suggests that the model is able to capture and reproduce the underlying patterns present in the data. In terms of performance metrics, the best model achieved an RMSE of 0.184 and an R² score of 0.537 on the test dataset.

An effective approach for visualizing the key variables influencing the model's predictions is through SHAP values, a technique that provides a reasonable explanation of how each feature impacts the model's predictions. Again, a deeper explanation of SHAP and how to interpret its chart can be found in Predicting the NBA MVP with Machine Learning.

SHAP chart (Image by Author)

From the SHAP chart, several important insights emerge:

Seed and W/L% rank among the top three most impactful features, highlighting the importance of team performance in the regular season.
Team-level stats such as Net Rating (NRtg), Opponent Points Per Game (PA/G), Margin of Victory (MOV) and Adjusted Offensive Rating (ORtg/A) also play a significant role in shaping playoff success.
On the player side, advanced metrics stand out: the number of players in the top 30 for Box Plus/Minus (BPM) and top 3 for Win Shares per 48 Minutes (WS/48) are among the most influential.
Interestingly, the model also captures broader trends — teams with a higher average age tend to perform better in the playoffs, and a strong showing in the previous postseason often correlates with future success. Both patterns point again to experience as a valuable asset in the pursuit of a championship.
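For reference, producing a summary chart like this with the SHAP library takes only a few lines; the sketch below continues the hypothetical model and feature names from the earlier training example rather than the article's actual code.

```python
import shap
import matplotlib.pyplot as plt

# Explain the fitted LightGBM model from the sketch above (grid, test, features are hypothetical).
model = grid.best_estimator_
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(test[features])

# Beeswarm summary: one dot per team-season, features ranked by overall impact.
shap.summary_plot(shap_values, test[features], show=False)
plt.tight_layout()
plt.savefig("shap_summary.png")
```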
Let's now take a closer look at how the model performed in predicting the last three NBA champions:

Predictions for the last three years (Image by Author)

The model correctly predicted two of the last three NBA champions. The only miss was in 2023, when it favored the Milwaukee Bucks. That season, Milwaukee had the best regular-season record at 58–24 (.707), but an injury to Giannis Antetokounmpo hurt their playoff run. The Bucks were eliminated 4–1 in the first round by the Miami Heat, who went on to reach the Finals — a surprising and disappointing postseason exit for Milwaukee, who had claimed the championship just two years earlier.

2025 Playoffs Predictions

For the upcoming 2025 playoffs, the model is predicting the Boston Celtics to go back-to-back, with OKC and Cleveland close behind. Given their strong regular season (61–21, 2nd seed in the East) and the fact that they're the reigning champions, I tend to agree. They combine current performance with recent playoff success. Still, as we all know, anything can happen in sports — and we'll only get the real answer by the end of June.

(Photo by Richard Burlton on Unsplash)

Conclusions

This project demonstrates how machine learning can be applied to complex, dynamic environments like sports. Using a dataset spanning four decades of basketball history, the model was able to uncover meaningful patterns in what drives playoff success. Beyond prediction, tools like SHAP allowed us to interpret the model's decisions and better understand the factors that contribute to postseason success.

One of the biggest challenges in this problem is accounting for injuries. They can completely reshape the playoff landscape — particularly when they affect star players during the playoffs or late in the regular season. Ideally, we could incorporate injury histories and availability data to better account for this. Unfortunately, consistent and structured open data on this matter — especially at the granularity needed for modeling — is hard to come by. As a result, this remains one of the model's blind spots: it treats all teams as being at full strength, which is often not the case.

While no model can perfectly predict the chaos and unpredictability of sports, this analysis shows that data-driven approaches can get close. As the 2025 playoffs unfold, it will be exciting to see how the predictions hold up — and what surprises the game still has in store.

(Photo by Tim Hart on Unsplash)

I'm always available on my channels (LinkedIn and GitHub). Thanks for your attention!

Gabriel Speranza Pastorello

The post Predicting the NBA Champion with Machine Learning appeared first on Towards Data Science.
-
WWW.GAMESPOT.COM
Elder Scrolls IV: Oblivion Remastered Review

Yes, the original version of Oblivion did not have any scruff in sight. No beards in the character creator and not a single mustache can be found in the enormous province of Cyrodiil. Adding beards to a handful of NPCs throughout the world doesn't change Oblivion's core experience. In fact, even with the facial hair and improved graphics, half of the characters I met during my adventure still looked unsettling. To some, this may be off-putting--especially when juxtaposed with the remaster's otherwise astounding visuals--but for me, Oblivion isn't Oblivion without some truly uncomfortable character models. It's all part of that "charm" that game director Todd Howard mentioned in the reveal stream.

The folks at Virtuous seem to understand that trademark Oblivion "charm," too, because the remaster keeps the best of the Bethesda jank intact while gently reworking some of Oblivion's more dated mechanics. Purists will certainly find things to nitpick, and first-timers may scratch their heads at some of the jank that was left in, but Oblivion Remastered feels like the most logical compromise. The visuals have been entirely recreated to take advantage of Unreal Engine 5, but the characters still don't look quite right. The attack animations have been redone, but the combat is still generally bad. The streamlined leveling mechanics retain the class system, but it's much harder to get soft-locked. The UI and menus have been consolidated and refreshed, but Oblivion's iconic map screen is identical to the original. For the most part, Oblivion Remastered manages to walk that thin line of familiarity and freshness.

The biggest surprise is its presentation. Oblivion Remastered looks stunning. Virtuous and Bethesda Game Studios have taken advantage of Unreal Engine 5 and it is without a doubt the most technically impressive game Bethesda Game Studios has ever released. The dynamic lighting, vibrant skyboxes, broader color palette, and hyper-realistic textures give the remaster that current-gen AAA sheen that players expect. These enhancements extend to the character models as well, as NPCs are lavishly detailed. You can see the strands of hair on their freshly grown beards and the pores on their faces, but they're still a little uncanny. In most cases, the NPCs look even stranger when they open their mouths. There's a bizarre disconnect between the hyper-realistic visuals and the weird faces and dated facial animations. The thing is, that awkwardness is part of what makes Oblivion so special, and there's plenty of it in this remaster.

Continue Reading at GameSpot
-
GAMERANT.COM
Best Tabletop RPGs for Mecha Fans Who Love Giant Robot Action

Even though Dungeons & Dragons and its medieval‑fantasy setting stand as the main reference for tabletop RPGs, other genres and systems offer equally striking experiences, letting players explore different worlds and live unique adventures. One such setting is Mecha, able to provide unmatched freedom in building and customizing giant robots, ensuring epic battles across worlds ravaged by technological wars.
-
WWW.POLYGON.COM
Vilverin dungeon puzzle solutions for Oblivion Remastered

Vilverin is one of the first dungeons you can go to in The Elder Scrolls 4: Oblivion Remastered. It's an early game dungeon, but if you know what to look for and how to complete it, you can exit this location with some high-value items like the Varla Stone as well as a starting set of armor. This Oblivion Remastered guide will show you how to solve all the puzzles in the Vilverin dungeon, plus show you some of the loot you can get.

Vilverin dungeon location in Oblivion Remastered

Vilverin is located just northeast of the Imperial City. You know you've found it when you find scattered ruins and a door with a glowing blue sigil. Walk up to the door, and the game will allow you to pass through the "Stone Door to Vilverin" and enter the dungeon.

All Vilverin dungeon puzzle solutions in Oblivion Remastered

There are a handful of puzzles (and other moments where finding the right key will help you) where it's not always clear what you have to do. Below we explain how to solve the puzzles and make your way through the dungeon.

Vilverin puzzle #1 solution

The first little puzzle in this dungeon is a secret door located in the Vilverin Canosel section of the dungeon. The stone door to Vilverin Canosel will take you to a room with hallways that branch north, south, east, and west from a central point. Walk forward from where you entered and turn left to arrive at the hallway with the secret door in it. If you are confused at any point, pull up your dungeon map and it shows you the path to the next room (even if you haven't discovered the door). The door will automatically open once you walk far enough into the room.

Vilverin puzzle #2 solution

The second puzzle is a block puzzle in Vilverin Wendesel. In the southern part of the Vilverin Wendesel section of the dungeon, there is a puzzle with six blocks inside a room with six doors. Each block opens and closes a certain door in the room. Pressing the upper right block will open the door with the Vilverin chamber key (the key you need to proceed). Two of the other blocks will open doors to rooms that contain chests, but be prepared to fight some undead.

There is also a special switch you need to use to get a rare item called a Varla Stone, a valuable item worth 1,000 gold. When in the far south room of Vilverin Wendesel, you will see a "Varla Stone Cage" hanging in the center. Defeat the undead and walk up the stairs that lead to the upper level of the room. There is a cube-shaped switch there. Press it and the cage containing the stone will lower so you can go collect it.

Vilverin dungeon loot in Oblivion Remastered

Other than the Varla Stone, this dungeon is a good opportunity to get some starter armor for your journey and other items like the Ayleid statue and Welkynd Stones. You can collect loads of Welkynd Stones as you explore the dungeon; they are the glowing blue stones that help light the building. Here is how to get everything else.

Just as you enter the dungeon, walk down the spiral staircase to the first room. Defeat the bandits so you're safe and look around. Near the center of the room there is a crate with a chest and other items on top of it. The chest contains 11 gold. Here is what you can grab sitting on the crate:

Iron cuirass
Iron boots
Iron claymore

If you look to the right of the crate, you'll see some sleeping bags on the ground. You can also pick up a set of iron greaves that are sitting on the ground.

The other item you will want to grab is the Ayleid statue located in Sel Sancremathi. It's sitting right in the open, but it's easy to walk past if you aren't looking for it. Just walk up to it to collect it. It's worth 250 gold.