• GAMINGBOLT.COM
    Dynasty Warriors: Origins Looks Set to Surprise Gamers With Its Crazy and Over-the-Top Action
    Remember Dynasty Warriors 9? Sadly, we do. The Dynasty Warriors franchise has never been a paragon of quality, but up to a certain point, it had never failed to deliver at least a certain level of unabashed enjoyment. In 2018, however, when Dynasty Warriors 9 came along, the Koei Tecmo series hit its lowest point ever, emerging as a widely criticized title for everything from its dull combat to its bland and shoehorned open world design.

    As anyone who played the game will tell you - including the staunchest of Dynasty Warriors fans - all of that criticism was well earned. At times, you definitely see games falling victim to a bandwagon mentality that takes criticism farther than a game might deserve, but when Dynasty Warriors 9 is described as one of the worst high-profile releases in recent memory, it's hard to argue with that. What should have been an open world revitalization of an aging franchise instead turned out to be its worst mainline outing ever, to the extent that it almost seemed to entirely miss the point of what makes Dynasty Warriors great most of the time.

    Sadly, that's been our lingering memory of the series for a while now (2022's Dynasty Warriors 9: Empires didn't do much to rectify things), but thankfully, at long last, it looks like the beloved franchise might finally be on the comeback trail. In May of 2024, Koei Tecmo announced Dynasty Warriors: Origins, a game billed as a return-to-the-roots outing for the long-running series, and each time we have seen more of it as its release draws closer, it has looked like an increasingly exciting prospect - especially for Dynasty Warriors fans who are longing for a return to the series' golden days.

    Eschewing its predecessor's open world structure, Dynasty Warriors: Origins is instead going to adopt a world map structure, with players being allowed to travel to and mess about in maps of varying sizes, both big and small, both linear and more open-ended. It's a structure that will be way more familiar to fans of the series compared to Dynasty Warriors 9's bland take on open world design, and that in and of itself is enough to excite fans of the franchise. But of course, that's not the only reason Origins has piqued our curiosity.

    Dynasty Warriors' combat has obviously always been its biggest draw, and in this department, Origins is looking like it's hitting all the right notes. It probably makes sense to lead with developer Omega Force's ambition to finally be able to display as many as 10,000 enemies on screen at once with Dynasty Warriors: Origins. Even its less busy scenes will, in fact, see players going up against much larger groups of foes than in previous titles. Coming up against massive hordes of enemies and mowing them down like an unstoppable force of nature has always been one of Dynasty Warriors' USPs, so it's encouraging to see Origins focusing so much of its energies on that area.

    In fact, if hands-on impressions for the title and the extensive gameplay footage revealed for it so far are anything to go by, it also looks like Origins is going to bring back some of the relatively stiffer challenge of older Dynasty Warriors games.
    Obviously, this has always been a series about delivering strong power fantasies to players, so you're not going to find FromSoftware levels of difficulty in a Dynasty Warriors title by any means, but with time, the series has lost more and more of its edge, to the extent that, even though combat has almost always remained fun, after a certain point, it felt like it had lost almost all of its bite. With Dynasty Warriors: Origins, though players will still have the option to stick with lower difficulties and plow through endless hordes of foes with reckless abandon, ramping up the difficulty will result in an experience that pushes back. Based on what has been revealed so far, it seems like several enemy types won't be mere pushovers anymore, and actually making use of tactical abilities, managing the morale of your army, and smartly directing the soldiers under your command will be a big part of the gameplay loop. There's obviously inherent appeal to Dynasty Warriors' brand of mindless button-mashing action, but it's good to know that Origins is going to be a little less one-dimensional than that - for those who wish it to be, at least.

    Meanwhile, the game's different take on the series' tried-and-true narrative formula is also looking interesting. Dynasty Warriors games are, of course, always retellings of the well-worn Romance of the Three Kingdoms story, and that much obviously isn't going to change with Origins. What is changing is how much of that story the game is covering, with Omega Force deciding to cover only a part of the Three Kingdoms saga rather than all of it. Now more focused on telling the story of the rise of the eponymous three kingdoms rather than covering the entire saga, the game's much more zoomed-in tale will be able to dive deeper into characters and events in a way that past games weren't able to do. Meanwhile, Origins will also feature younger versions of plenty of well-known characters, which means fans of the series (or really, of that period of time as a whole) will be able to see familiar personalities through a different lens, which always has potential to be interesting - when done right, at least.

    Another area where the game is looking impressive is its visuals. No, maybe it hasn't taken our breath away with spectacular next-gen graphics, but we're obviously not going to hold every single game that releases to that standard. On its own merits, Dynasty Warriors: Origins touts great graphics, and that's doubly true if you're familiar with what the series' standards have traditionally been in this department. Don't get me wrong, it's not like Dynasty Warriors games were consistently ugly or anything along those lines, but without a shadow of a doubt, Origins is promising to be the series' best-looking outing by a country mile. From the character models to the detailed and varied environments to the enemies, the large armies you take on, and the way the explosive action is presented on-screen, the game is looking impressive in more ways than one.

    Whether Dynasty Warriors: Origins can touch the highest highs of the franchise - from 4 to 5 to 8, or whichever one strikes your fancy - remains to be seen, and with a series that has had as many instalments as this one, it can definitely be difficult to keep hitting the highest levels of quality after a certain point - because of diminishing returns, if nothing else.
    But after the disastrous misstep that was Dynasty Warriors 9 and a few years of rest and recuperation afterwards, the series is back with what's looking like exactly the sort of comeback it needed. Dynasty Warriors: Origins could still end up disappointing in ways that we haven't been able to foresee based on its pre-launch coverage, but for now, it's looking like that classic, adrenaline-fueled musou action game that fans have been starved of for so long - and that's incredibly exciting.

    Note: The views expressed in this article are those of the author and do not necessarily represent the views of, and should not be attributed to, GamingBolt as an organization.
  • GAMINGBOLT.COM
    Why We Can't Wait for Sniper Elite: Resistance
    The Sniper Elite franchise has become something of a personal favourite of mine over the years, and I'm sure I'm not alone in that. Rebellion's historical stealth shooter franchise hasn't shaken any foundations or pulled up any trees with any of its outings, and has largely been happy to sail around in familiar and charted waters. But it has also consistently delivered solid, well-made games that fans of the genre can't help but eat up - which is why every time a new one gets announced, if you enjoy stealth games, chances are you're going to be paying attention to whatever it is that Sniper Elite is cooking up next.

    Sniper Elite 5 launched in 2022 and turned out to be a predictably solid game, but when the series returns for its next outing imminently, it won't do so with its next numbered entry. No, instead, Rebellion is set to release Sniper Elite: Resistance, which, on paper, is a standalone spinoff - though honestly, looking at it, it may just as well be a full new Sniper Elite game, based on everything that it's promising to deliver.

    There are some ways where its nature as a standalone spinoff is very apparent. For starters, it's clearly making use of the work that Rebellion did for Sniper Elite 5 - which, honestly, we need to be seeing more of from the games industry. Asset reuse gets a bad rap, and of course, there are plenty of situations where it is far from desirable, but there are also cases where developers manage to save a great deal of time and money by reusing previous work without having to compromise on actual quality. Like a Dragon does it time and time again, and we're not opposed to Sniper Elite doing the same.

    There's also the fact that Sniper Elite: Resistance is being billed very much as a side story. For the first time in the series' history, players won't be playing as American soldier Karl Fairburne, with British Special Operations Executive agent Harry Hawker instead stepping in as the protagonist, in a story that will see him going after a mysterious and deadly new Nazi war machine in occupied France, running parallel with the events of Sniper Elite 5. Then again, that's an intriguing enough premise in its own right, and more importantly, story has never really been what most people come to Sniper Elite for anyway - so Resistance being billed as a narrative sidequel shouldn't end up being much of a factor.

    In the areas that matter (at least in the context of this specific series), the game is making all the right promises. Something that Sniper Elite games have become increasingly better at with time is letting players loose in large, open-ended stealth sandboxes and letting them accomplish a variety of objectives however they wish. Delivering Hitman-style gameplay (though obviously not quite as expansive), the series loves to reward players for being creative, and emphasizes player agency and freedom above all else. That is combined with strongly designed maps that are also densely packed with compelling optional content to take on. All of these are boxes that Sniper Elite: Resistance looks set to check.
    Of course, things could always go wrong, and we haven't actually played the game ourselves yet, but everything that has been shown of it so far (which is a fair bit of gameplay) has looked incredibly promising, while hands-on previews by various outlets have also suggested that the game is likely to deliver another dose of that open-ended historical stealth sandbox experience that the series has effectively got down to a science by now.

    There are other reasons to be excited about Sniper Elite: Resistance as well. The Nazi-killing genre has obviously always been a classic, and we're never going to say no to more (even though we just got a good, healthy taste of it not too long ago in the form of Indiana Jones and the Great Circle). Blasting Nazis in the most hilariously gruesome, creative, and brutal ways possible, with X-ray kill cams and everything, is what many of us come to Sniper Elite for, and it looks like Resistance won't disappoint in that area.

    You'll likely have noticed that much of what we've spoken about so far isn't necessarily unique to Sniper Elite: Resistance. No, in fact, "more of the same" is a very accurate way to describe this game, based on everything that we've seen of it so far. It's not a numbered sequel, and in this sense, it's definitely treating itself that way - though then again, even Sniper Elite 5 didn't necessarily change things up from its own predecessor that much either. So yes, the upcoming Resistance is definitely looking like it's going to be a very familiar experience for anyone who's played any of the series' last couple of outings - but there's strong appeal to something like that. You know exactly what Sniper Elite: Resistance is going to be, and chances are, it's going to be good.

    Given the fact that this is a standalone sidequel, there will probably be some who'll be wondering just how much bang for their buck they're going to get from the game, especially with it being sold as a full-priced $60 game. Based on what Rebellion is saying, however, it doesn't look like that should be much of a worry. According to Rebellion, the game's campaign will be about as long as Sniper Elite 5's was, which, as per HowLongToBeat, can be anywhere from 10 to 40 hours, depending on how thoroughly you're playing it. Even if you're only looking to stick with its campaign, then, it will have plenty of meat on the bone.

    Beyond its campaign, meanwhile, the game will also feature a co-op survival mode, as well as a PvP multiplayer component, the ability to play the entire campaign in co-op, and an Invasions mechanic for the campaign. Again, there's not much here that, in terms of the broad strokes, won't be familiar to anyone who played Sniper Elite 5 - that's definitely a running theme with this game - but if there's one thing that it seems like we can be sure of, it's that Sniper Elite: Resistance is at least going to have plenty of content on offer, right out of the gate. Will Rebellion add to it with post-launch support? The studio has been known to do that with pretty much all of its games, but this isn't technically a numbered, flagship release, so it remains to be seen how heavily it will be supported. Perhaps that will depend entirely on how it sells.

    Either way, as a fan of the stealth genre - and honestly, after the last few years, of the Sniper Elite series - I'm all aboard the hype train for Resistance. No, it's not the sort of shiver-inducing hype that you get with the biggest of the biggest releases - Sniper Elite never pretends to be anything more than a AA series, after all.
    But as a series that consistently delivers well-made and engaging stealth sandbox experiences, even if they never really revolutionize anything, Sniper Elite deserves the recognition it gets, and then some. Hopefully, it will continue to attract eyeballs with Resistance's looming launch.

    Note: The views expressed in this article are those of the author and do not necessarily represent the views of, and should not be attributed to, GamingBolt as an organization.
  • GAMINGBOLT.COM
    HBO's The Last of Us Season 2 is Coming in April
    The Last of Us Season 2 was previously confirmed to be coming sometime this spring, and HBO has now narrowed that release window down further still, confirming that the highly anticipated next batch of episodes of the acclaimed adaptation will arrive in April. A brief new teaser has also been released. You can view it below.

    The teaser shows glimpses of a number of scenes and locations that those who've played The Last of Us Part 2 will find familiar, including the hospital in Seattle that Ellie finds herself in on her hunt for Abby. Speaking of whom, Abby - played by Kaitlyn Dever - also takes center stage in the trailer.

    As previously confirmed, The Last of Us' second season will consist of seven episodes, making it shorter than the first season. It will cover only part of The Last of Us Part 2. In addition to the likes of Pedro Pascal (Joel), Bella Ramsey (Ellie), Gabriel Luna (Tommy), and Rutina Wesley (Maria) reprising their roles, it will also introduce a larger cast, including the aforementioned Dever's Abby, Young Mazino's Jesse, Isabela Merced's Dina, and more. Jeffrey Wright will also play Isaac, a role he played in the game as well.
  • WWW.THEVERGE.COM
    Sam Altman's sister files sexual abuse lawsuit against him; his family says it's 'utterly untrue'
    Ann Altman's attorney says the family is trying to "divert attention away from the harm that they caused."

    By Jacob Kastrenakes, The Verge's executive editor. He has covered tech, policy, and online creators for over a decade. Jan 8, 2025, 1:55 AM UTC

    Image: Cath Virginia / The Verge; Getty Images

    Ann Altman has filed a lawsuit against her brother, OpenAI CEO Sam Altman, alleging that he sexually abused her throughout childhood over a period of nearly a decade. The rest of the Altman family immediately pushed back on the lawsuit, saying the allegations are "utterly untrue" and stem from mental health challenges that Ann has faced for years. "Annie has made deeply hurtful and entirely untrue claims about our family, and especially Sam," write Sam, his mother, and his two brothers in a statement that Sam released on X.

    Ann's lawsuit alleges that Sam abused her from 1997 through 2006, beginning when Ann was three and Sam was 12 and continuing until Sam was a legal adult. The lawsuit, filed in a federal court in Missouri, says that Ann suffered severe emotional distress and has been unable to live a normal life as a result of Sam's alleged abuse.

    An attorney for Ann described the Altman family's statement as an attempt to "divert attention away from the harm that they caused." The attorney, Ryan J. Mahoney, said that sexual abuse can cause mental health outcomes such as "persistent PTSD, depression, and anxiety." He also said of Ann specifically that there is no evidence that her own mental health has contributed to her allegations.

    The Altman family's statement alleges that Ann has made conspiratorial claims over the years about various family members while demanding money from them. The family members say they have offered financial support and asked her to receive medical help, but that she refuses conventional treatment. "This situation causes immense pain to our entire family," the family statement says. The family says they have chosen not to respond publicly when Ann has made similar claims in the past, but that they feel they have no choice but to address this now that she has filed a lawsuit.
  • WWW.THEVERGE.COM
    Delta's giving its in-flight screens a major 4K HDR upgrade
    Your in-flight movie is about to look a lot better.

    By Andrew J. Hawkins, transportation editor with 10+ years of experience who covers EVs, public transportation, and aviation. His work has appeared in The New York Daily News and City & State. Jan 8, 2025, 1:10 AM UTC

    Image: Delta

    Delta Air Lines announced plans to install new 4K HDR QLED screens in its commercial airplanes, so passengers can experience ultra high-definition entertainment at ultra-high altitudes. The news came as part of Delta's CES keynote at the Sphere in Las Vegas, where it also planned to celebrate its centennial with a musical performance by Lenny Kravitz. The airline announced a raft of new features for air travelers, including new partnerships with YouTube and Uber as well as a new AI-powered chatbot for customer service. But the decision to add 4K screens to its airplanes is one that's sure to tickle the fancy of any air traveler who's ever balked at the middling quality of the current crop of seat-back displays.

    Delta says it's working with Thales Avionics, an in-flight technology company that is also helping to install high-definition screens in Emirates' Airbus A350-900s. But don't go looking for the new screens just yet: Delta says it won't start delivering the upgrades in aircraft until 2026.

    Who actually gets access to the screens, though, will answer the question of whether Delta sees this as technology for all passengers or just the ones in first class. A spokesperson for Delta did not immediately respond to questions about access.

    Delta has also been testing out Bluetooth connectivity for its in-flight entertainment for several years and has even started quietly rolling it out to some planes, as discovered by a TikTok user. Now, the airline says it plans to offer Bluetooth in all cabins so travelers can pair their personal wireless devices, though it didn't offer any specifics beyond that.

    Delta's in-flight entertainment will also feature an advanced recommendation engine tailored to each passenger's unique taste. Again, we're lacking details about what's powering this engine and how it will know your particular taste. But in late 2025, we're getting improved connectivity through a partnership with Wi-Fi provider Hughes, which replaced Intelsat in 2023. Delta says this will allow for multi-network connectivity for more reliable and stable in-flight internet. That surely will help when streaming YouTube, which SkyMiles members will be able to do ad-free, thanks to the platform's new partnership with Delta. And a new Do Not Disturb mode for the seat-back screen will ensure passengers can sleep without disturbance.

    Delta is revamping its app to include an AI-powered Concierge chatbot as well as a multi-modal feature that will include Uber and, eventually, air taxis from Joby. The Concierge feature will use the traveler's location and arrival and departure information to suggest more efficient routes, and will notify users about upcoming passport expirations or visa requirements. And in the years to come, Delta says the feature will be able to make more specific recommendations around packing and weather planning. Delta is also offering SkyMiles customers the ability to link their Uber account to earn miles and other perks.
    These include: SkyMiles members can earn 1 mile per dollar spent on UberX rides to and from airports, 2 miles per dollar on premium rides, and 3 miles per dollar on Uber Reserve rides, plus 1 mile per dollar spent on eligible restaurant and grocery orders.

    And lastly, Delta says it's working with Airbus to design more fuel-efficient airplanes. Delta has said its goal is for sustainable aviation fuel to make up at least 95 percent of its fuel consumption by 2050. But achieving net-zero emissions will be a tall task for an airline and will require rethinking every part of the business. Now, Delta says it will work with Airbus to scale the use of sustainable aviation fuel, which is mostly biofuels made from plant or animal material. And the two companies will collaborate on hydrogen-powered flight projects as well as new designs, like more fuel-efficient wings or new formations to drive "wake energy retrieval," Delta says.
  • WWW.THEVERGE.COM
    Google's new Pixel 4A update is going to lower battery life for some owners
    Google has announced that it is shipping an unexpected update to Pixel 4A phones this week. According to Ars Technica, the company emailed Pixel 4A owners to tell them the update will address battery "performance stability," but that their batteries may not last as long after it's applied. Google repeats that in a new help page titled "Pixel 4a Battery Performance Program," where it writes that it had noticed issues with some Pixel 4A phones:

    "From January 8, 2025, Pixel 4a devices will receive an automatic software update to Android 13. After the software update is downloaded, your device will restart automatically to apply the update. For some devices ("Impacted Devices"), the update includes new battery management features to improve the stability of your battery's performance, so the battery may last for shorter periods between charges. Users of Impacted Devices may also notice other changes, like reduced charging performance or changes to how the battery-level indicator on your phone shows your battery capacity.

    We want our customers to have the best possible experience with their products, so users of these Impacted Devices are eligible for an appeasement from Google. Not all Pixel 4a devices are impacted by the reduction in battery capacity and charging performance, therefore if your device is not impacted the battery will perform the same as before, and you will not be eligible for an appeasement."

    Besides having less runtime, the update could mean reduced charging performance or change how the phone shows battery capacity. Google hasn't been specific about what's behind the issue, but the circumstances are similar to Apple's iPhone "batterygate" mess in 2017. Apple said its software slowed down iPhones with aging batteries to prevent accidental shutdowns, but it didn't inform customers about why their devices had reduced performance, and it ended up with hundreds of millions in court settlement payments.

    In this case, Google is also offering owners of affected 4A devices their choice of compensation: they can opt for a free battery swap, a $50 payday, or a $100 credit toward a new Pixel phone from its online store. 4A owners can enter their IMEI number on this page to find out if theirs is affected. Google didn't immediately respond to our questions about why the 4A, which hasn't been updated since late 2023, needs this attention now.
  • WWW.MARKTECHPOST.COM
    HBI V2: A Flexible AI Framework that Elevates Video-Language Learning with a Multivariate Co-Operative Game
    Video-Language Representation Learning is a crucial subfield of multi-modal representation learning that focuses on the relationship between videos and their associated textual descriptions. Its applications are explored in numerous areas, from question answering and text retrieval to summarization. In this regard, contrastive learning has emerged as a powerful technique that elevates video-language learning by enabling networks to learn discriminative representations. Here, global semantic interactions between predefined video-text pairs are utilized for learning.

    One big issue with this method is that it undermines the model's quality on downstream tasks. These models typically use video-text semantics to perform coarse-grained feature alignment, and contrastive video models are, therefore, unable to align fine-grained annotations that capture the subtleties and interpretability of the video. The naive approach to solving this problem of fine-grained annotation would be to create a massive dataset of high-quality annotations, which is unfortunately unavailable, especially for vision-language models. This article discusses the latest research that solves the problem of fine-grained alignment through a game.

    Peking University and Pengcheng Laboratory researchers introduced a hierarchical Banzhaf Interaction approach to solve alignment issues in general video-language representation learning by modeling it as a multivariate cooperative game. The authors designed this game with video and text formulated as players. For this purpose, they grouped the collection of multiple representations as a coalition and used Banzhaf Interaction, a game-theoretic interaction index, to measure the degree of cooperation between coalition members.

    The research team extends their earlier conference paper, which proposed a learning framework built on Hierarchical Banzhaf Interaction and leveraged cross-modality semantic measurement as the functional characteristics of players in the video-text cooperative game. In this paper, the authors propose HBI V2, which leverages single-modal and cross-modal representations to mitigate the biases in the Banzhaf Index and enhance video-language learning. In HBI V2, the authors reconstruct the representations for the game by integrating single- and cross-modal representations, which are dynamically weighted to ensure fine granularity from individual representations while preserving cross-modal interactions.

    Regarding impact, HBI V2 surpasses HBI with its capability to perform various downstream tasks, from text-video retrieval to VideoQA and video captioning. To achieve this, the authors modified their previous structure into a flexible encoder-decoder framework, where the decoder is adapted for specific tasks. The HBI V2 framework is divided into three submodules: Representation Reconstruction, the HBI Module, and Task-Specific Prediction Heads. The first module facilitates the fusion of single- and cross-modal components; the research team used CLIP to generate both representations, and for video input, frame sequences are encoded into embeddings with ViT. This component integration helped overcome the problems of dynamically encoding video while preserving inherent granularity and adaptability. For the HBI module, the authors modeled video and text as players in a multivariate cooperative game to handle the uncertainty during fine-grained interactions.
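    To make the game-theoretic ingredient concrete, here is a toy illustration of the Banzhaf interaction index itself, assuming NumPy. The players, the tiny random embeddings, and the cosine-similarity value function below are made up for illustration and are not the paper's actual formulation, which operates on CLIP features inside the full framework.

```python
from itertools import combinations

import numpy as np


def banzhaf_interaction(i, j, players, value):
    """Average synergy v(S+i+j) - v(S+i) - v(S+j) + v(S) over coalitions S."""
    rest = [p for p in players if p not in (i, j)]
    total = 0.0
    for r in range(len(rest) + 1):
        for subset in combinations(rest, r):
            s = set(subset)
            total += (value(s | {i, j}) - value(s | {i})
                      - value(s | {j}) + value(s))
    return total / 2 ** len(rest)


# Hypothetical players: two video frames and two text words, as embeddings.
rng = np.random.default_rng(0)
video = {f"frame{k}": rng.normal(size=8) for k in range(2)}
text = {f"word{k}": rng.normal(size=8) for k in range(2)}
feats = {**video, **text}


def value(coalition):
    """Toy value function: cosine similarity of pooled video vs. text members."""
    v = [feats[p] for p in coalition if p in video]
    t = [feats[p] for p in coalition if p in text]
    if not v or not t:
        return 0.0  # a coalition needs both modalities to score anything
    v, t = np.mean(v, axis=0), np.mean(t, axis=0)
    return float(v @ t / (np.linalg.norm(v) * np.linalg.norm(t) + 1e-8))


# Degree of cooperation between one frame "player" and one word "player".
print(banzhaf_interaction("frame0", "word0", list(feats), value))
```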
    The first two modules provide flexibility to the framework, enabling the third module to be tailored for a given task without requiring sophisticated multi-modal fusion or reasoning stages.

    In the paper, HBI V2 was evaluated on various text-video retrieval, video QA, and video captioning datasets, with multiple suitable metrics for each. The proposed method outperformed its predecessor and all other compared methods on all the downstream tasks. Additionally, the framework achieved notable advancements over HBI on the MSVD-QA and ActivityNet-QA datasets, which assessed its question-answering abilities. Regarding reproducibility and inference, the inference time was 1 second for the whole test set.

    Conclusion: The proposed method uniquely and effectively utilized Banzhaf Interaction to provide fine-grained labels for video-text relationships without manual annotations. HBI V2 extended the preceding HBI to infuse the granularities of single representations into cross-modal representations. The framework exhibited superiority and the flexibility to perform various downstream tasks.

    Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project.

    Adeeba Alam Ansari is currently pursuing her Dual Degree at the Indian Institute of Technology (IIT) Kharagpur, earning a B.Tech in Industrial Engineering and an M.Tech in Financial Engineering. With a keen interest in machine learning and artificial intelligence, she is an avid reader and an inquisitive individual. Adeeba firmly believes in the power of technology to empower society and promote welfare through innovative solutions driven by empathy and a deep understanding of real-world challenges.
  • TOWARDSAI.NET
    Going Beyond the 1000-Layer Convolution Network
    Author(s): Bartosz Ludwiczuk. Originally published on Towards AI.

    Contents: Introduction · Vanishing gradient issue · Mitigation of the vanishing gradient issue · Training a 1000-layer network · Training component analysis · Diving Deeper into Skip Connections · 10,000-layer network

    Mean gradient for the 1st layer in all experiments (header image)

    Introduction

    One of the largest Convolutional Networks, ConvNext-XXLarge[1] from OpenCLIP[2], boasts approximately 850 million parameters and 120 layers (counting all convolutional and linear layers). This is a dramatic increase compared to the 8 layers of AlexNet[3], but still fewer than the 1001-layer experiment introduced in the PreResNet[4] paper. Interestingly, about a decade ago, training networks with more than 100 layers was considered nearly impossible due to the vanishing gradient problem. However, advancements such as improved activation functions, normalization layers, and skip connections have significantly mitigated this issue - or so it seems. But is the problem truly solved?

    In this blog post, I will explore:
    What components enable training neural networks with more than 1,000 layers?
    Is it possible to train a 10,000-layer Convolutional Neural Network successfully?

    Vanishing gradient issue

    Before diving into experiments, let's briefly revisit the vanishing gradient problem, a challenge that many sources have already explored in detail. The vanishing gradient problem occurs when the gradients in the early layers of a neural network become extremely small, effectively halting their ability to learn useful features. This issue arises due to the chain rule used during backpropagation, where the gradient is propagated backward from the final layer to the first. If the gradient in any layer is close to zero, the gradients for preceding layers shrink exponentially. A major cause of this behavior is the saturation of activation functions.

    To illustrate this, I trained a simple 5-layer network using the sigmoid activation function, which is particularly prone to saturation. You can find the code for this experiment on GitHub. The goal was to observe how the gradient norms of the network's weights evolve over time.

    Gradient Norms Per Layer (Vanishing Gradient Issue). FC5 is the top layer, FC1 is the first layer. Image by author

    The plot above shows the gradient norms for each linear layer over several training iterations. FC5 represents the final layer, while FC1 represents the first.

    Vanishing Gradient Problem:
    In the first training iteration, there's a huge difference in gradient norms between FC5 and FC4, with FC4 being approximately 10x smaller.
    By the time we reach FC1, the gradient is reduced by a factor of ~10,000 compared to FC5, leaving almost nothing of the original gradient to update the weights.

    This is a textbook example of the vanishing gradient problem, primarily driven by activation function saturation.

    Sigmoid activation function and its gradient. The plot shows pre-activation values against activation/gradient values. Image by author

    Let's delve deeper into the root cause: the sigmoid activation function. To understand its impact, I analyzed the first layer's pre-activation values (inputs to the sigmoid); a minimal stand-in for this kind of probe is sketched below, and the findings follow it.
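    The snippet below is not the blog's actual GitHub code, just a minimal stand-in assuming PyTorch: a 5-layer sigmoid MLP, one backward pass on random data, the per-layer gradient norms, and the first layer's pre-activation statistics captured with a forward hook. The sizes and synthetic data are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
layers = nn.ModuleList([nn.Linear(64, 64) for _ in range(5)])

# Capture the first layer's pre-activations (inputs to the sigmoid).
preacts = []
layers[0].register_forward_hook(lambda mod, inp, out: preacts.append(out.detach()))

x, target = torch.randn(256, 64), torch.randint(0, 64, (256,))
h = x
for layer in layers:
    h = torch.sigmoid(layer(h))  # saturating activation after every layer
nn.functional.cross_entropy(h, target).backward()

# Gradient norms from the top layer (FC5) down to the first (FC1).
for idx in reversed(range(5)):
    print(f"FC{idx + 1} grad norm: {layers[idx].weight.grad.norm().item():.2e}")

# Share of pre-activations sitting in the flat regions of the sigmoid
# (|z| > 2 means the local sigmoid gradient is already around 0.1 or less).
z = preacts[0]
flat_share = (z.abs() > 2).float().mean().item()
print(f"first-layer pre-activations in flat region: {flat_share:.0%}")
```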
    The findings:
    Most pre-activation values lie in the flat regions of the sigmoid curve, resulting in activations close to 0 or 1.
    In these regions, the sigmoid gradient is nearly zero, as shown in the plot above.
    This means that any gradient passed backward through these layers is severely diminished, effectively disappearing by the time it reaches the first layers.

    The maximum gradient of the sigmoid function is 0.25, achieved at the midpoint of the curve. Even under ideal conditions, with 5 layers, the maximum gradient diminishes to 0.25^5 ≈ 1e-3. This reduction becomes catastrophic for networks with 1,000 layers, rendering the first layers' gradients negligible.

    Skip connection. Source: Deep Residual Learning for Image Recognition, Kaiming He

    Mitigation of the vanishing gradient issue

    Several advancements have been instrumental in addressing the vanishing gradient problem, making it possible to train very deep neural networks. The key components that contribute to this solution are:

    1. Activation Functions (e.g., Tanh, ReLU, GeLU)
    Modern activation functions have been designed to mitigate vanishing gradients by offering higher maximum gradient values and reducing regions where the gradient is zero. For example:
    ReLU (Rectified Linear Unit) has a maximum gradient of 1.0 and eliminates the saturation problem for positive inputs. This ensures gradients remain significant during backpropagation.
    Other functions, such as GeLU[5] and Swish[6], smooth out the gradient landscape, further improving training stability.

    2. Normalization Techniques (e.g., BatchNorm[7], LayerNorm[8])
    Normalization layers play a crucial role by adjusting pre-activation values to have a mean close to zero and a consistent variance. This helps in two significant ways:
    It reduces the likelihood of pre-activation values entering the saturation regions of activation functions, where gradients are nearly zero.
    Normalization ensures more stable training by keeping the activations well-distributed across layers.
    For instance, BatchNorm[7] normalizes the input to each layer based on batch statistics during training, while LayerNorm[8] normalizes across features for each sample, making it more effective in some scenarios.

    3. Skip Connections (Residual Connections)
    Skip connections, introduced in architectures like ResNet[9], allow input signals to bypass one or more intermediate layers by directly adding the input to the layer's output. This mechanism addresses the vanishing gradient problem by:
    Providing a direct pathway for gradients to flow back to earlier layers without being multiplied by small derivatives or passed through saturating activation functions.
    Preserving gradients even in very deep networks, ensuring effective learning for earlier layers.
    By avoiding multiplications or transformations in the skip path, gradients remain intact, making skip connections a simple yet powerful tool for training ultra-deep networks.

    Skip connection equation: y = x + F(x). Image by author

    Training a 1000-layer network

    For this experiment, all training was conducted on the CIFAR-10[10] dataset. The baseline architecture was ConvNext[1], chosen for its scalability and effectiveness in modern vision tasks. To define successful convergence, I used a validation accuracy of >50% (compared to the 10% accuracy of random guessing). The source code is on GitHub, and all runs are available on Wandb.
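    For reference, this is the kind of block being stacked in these experiments: a rough, simplified sketch of a ConvNext-style residual block (depthwise convolution, LayerNorm, a pointwise MLP with GELU, and a skip connection), assuming PyTorch. Details such as stochastic depth, the stage layout, and the learnable branch scaling (LayerScale, covered in a later section) are omitted here; the exact blocks used are in the linked repository.

```python
import torch
import torch.nn as nn


class Block(nn.Module):
    """Simplified ConvNext-style block: y = x + F(x)."""

    def __init__(self, dim: int):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.norm = nn.LayerNorm(dim)
        self.pwconv1 = nn.Linear(dim, 4 * dim)   # pointwise conv as a Linear
        self.act = nn.GELU()
        self.pwconv2 = nn.Linear(4 * dim, dim)

    def forward(self, x):                        # x: (N, C, H, W)
        residual = x
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 1)                # channels-last for LayerNorm
        x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
        x = x.permute(0, 3, 1, 2)
        return residual + x                      # the skip connection


# Stacking depth is just a loop; resources are the real limit.
net = nn.Sequential(*[Block(64) for _ in range(100)])
out = net(torch.randn(1, 64, 8, 8))
```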
    The following parameters were used across all experiments:
    Batch size: 64
    Optimizer: AdamW[11]
    Learning rate scheduler: OneCycleLR

    My primary objective was to replicate the findings of the PreResNet paper and investigate how adding more layers impacts training. Starting with a 26-layer network as the baseline, I gradually increased the number of layers, ultimately reaching 1,004 layers. Throughout the training process, I collected statistics on the mean absolute gradient of the first convolutional layer. This allowed me to evaluate how effectively gradients propagated back through the network as the depth increased.

    Training 1k-layer experiments. Image by author

    Gradient plot for all experiments. Despite the depth, gradients at the first layer are similar in each run. Image by author

    Key observations:
    Despite increasing the depth to 1,000 layers, the networks successfully converged, consistently achieving the validation accuracy threshold (>50%).
    The mean absolute gradient of the first layer remained sufficiently large across all tested depths, indicating effective gradient propagation even in the deepest networks.
    The scores of ~94% are weak, as SOTA is ~99%. I couldn't get better scores, leaving space for further investigation.

    Training component analysis

    Before diving deeper into ultra-deep networks, it's crucial to identify which components most significantly impact the ability to train a 1000-layer network. The candidates are:
    Activation functions
    Normalization layers
    Skip connections

    Training component analysis experiments. Image by author

    Gradient plot for training component analysis experiments. Image by author

    Skip Connections: The Clear Winner
    Among all components, skip connections stand out as the most critical factor. Without skip connections, no other modifications - advanced activation functions or normalization techniques - can sustain training for such deep networks. This confirms that skip connections are the cornerstone of vanishing gradient mitigation.

    Activation Functions: Sigmoid and Tanh Still Competitive
    Surprisingly, the Sigmoid and Tanh activation functions were competitive with modern alternatives like GeLU when accompanied by a normalization layer, and even without LayerNorm, Sigmoid scored competitively against GeLU without LayerNorm. The mean gradient for all experiments is quite similar, with Tanh without LayerNorm having the highest mean value but, at the same time, the lowest accuracy.

    Mean Gradient Values
    The mean gradient values are relatively consistent across experiments, but the gradient trajectories differ. In experiments with LayerNorm, gradients initially rise to approximately 0.5 early in training before steadily declining. In contrast, experiments without LayerNorm exhibit a nearly constant gradient throughout the training process. Importantly, the gradient remains present in all cases, with no evidence of vanishing gradients in the network's first layer.

    Diving Deeper into Skip Connections

    Skip connections can be implemented in various ways, with the main difference being how the raw input and the transformed output are merged, often controlled by a learnable scaling factor α.
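    As a sketch of what that merge can look like, here is a generic residual wrapper with a learnable per-channel scale on the transformed branch, assuming PyTorch. With init_value=1e-6 this matches the LayerScale setup described next, while init_value=1.0 switches the branch fully on from the start. The wrapper and its names are illustrative, not the repository's exact code.

```python
import torch
import torch.nn as nn


class ScaledResidual(nn.Module):
    """y = x + gamma * branch(x), with gamma learnable per channel."""

    def __init__(self, branch: nn.Module, dim: int, init_value: float = 1e-6):
        super().__init__()
        self.branch = branch
        self.gamma = nn.Parameter(init_value * torch.ones(dim))

    def forward(self, x):                  # x: (..., dim), channels-last
        return x + self.gamma * self.branch(x)


# At init_value=1e-6 the block is almost a pure identity: gradients reach the
# input essentially untouched, which is why early training stays stable.
block = ScaledResidual(
    nn.Sequential(nn.LayerNorm(32), nn.Linear(32, 32), nn.GELU()), dim=32
)
out = block(torch.randn(4, 10, 32))
```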
    In ConvNext, for instance, the LayerScale[12] trick is employed, where the transformed data is scaled by a small learnable α, initialized to 1e-6. This setup has a profound implication: during the initial training stages, most information flows through the skip connections, as the contribution from the transformation branch (via matrix multiplication and activation functions) is minimal. As a result, the vanishing gradient issue is effectively bypassed.

    Skip connection in ConvNext, with the α symbol. Image by author

    Experiment: Varying LayerScale Initialization

    To test whether the initialization of α plays a critical role, I experimented with different starting values for LayerScale. Below is a diagram of a typical skip connection and a table summarizing the results:

    Skip connection scale analysis experiments. Image by author

    The results show that even with α initialized to 1 (effectively turning on the full transformation branch from the start), training a 1000-layer network remained stable. This suggests that while different versions of skip connections may vary slightly in their implementation, all are equally effective at mitigating the vanishing gradient problem.

    >1000-layer network

    Since we've established that skip connections are the key to training very deep networks, let's push the limits further by experimenting with even deeper architectures. Deeper networks require significantly more computational resources, so I decided to fit the largest possible network that can run on an RTX 4090 with 24 GB of memory.

    Fitting the biggest possible network in 24 GB. Image by author

    The 1607-layer ConvNext was the biggest one I could fit into GPU memory. There was still no issue with convergence, and the CIFAR-10 results are the same.

    Summary

    To sum up the key findings:
    The skip connection is the main vanishing-gradient mitigation tool.
    Tanh/Sigmoid are competitive with GELU when used with skip connections and LayerNorm. Despite their flat gradient regions, Tanh/Sigmoid work well when accompanied by skip connections and LayerNorm.
    With skip connections, you can try any depth you want - only resources constrain you, no matter which activation function you choose.

    If anybody disagrees with that thesis during a recruitment process, send them the link to this blog post - my experiments show clear evidence!

    [1] A ConvNet for the 2020s, Zhuang Liu, CVPR 2022
    [3] ImageNet Classification with Deep Convolutional Neural Networks, Alex Krizhevsky, NIPS 2012
    [4] Identity Mappings in Deep Residual Networks, Kaiming He, ECCV 2016
    [5] Gaussian Error Linear Units, Dan Hendrycks, 2016
    [6] Searching for Activation Functions, Prajit Ramachandran, ICLR 2018
    [7] Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, Sergey Ioffe, ICML 2015
    [8] Layer Normalization, Jimmy Lei Ba, 2016
    [9] Deep Residual Learning for Image Recognition, Kaiming He, CVPR 2016
    [10] Learning Multiple Layers of Features from Tiny Images, Alex Krizhevsky, 2009
    [11] Decoupled Weight Decay Regularization, Ilya Loshchilov, ICLR 2019
    [12] Going Deeper with Image Transformers, Hugo Touvron, ICCV 2021

    Published via Towards AI
  • WWW.IGN.COM
    Nvidia GeForce RTX 5080: Specs, Release Date, and What We Know So Far
    After months of agonizing anticipation, Nvidia has finally announced the RTX 5080, along with the rest of the Blackwell lineup, including the RTX 5090, RTX 5070 Ti, and RTX 5070, at CES 2025. We'll finally be able to get our hands on the next-generation graphics card on January 30. Until then, Nvidia has revealed the full specs of the card, so we can get a rough idea of what to expect when it makes its way into a gaming PC near you.

    Nvidia RTX 5080 release date

    The Nvidia GeForce RTX 5080 launches January 30, 2025, along with its bigger sibling, the RTX 5090. Nvidia has also announced the RTX 5070 and RTX 5070 Ti, though those don't have a definite release date yet; we can expect them by March. As for the laptop version of the RTX 5080, Nvidia claims availability will 'start in March', though that is going to largely depend on the laptop manufacturers. It could be April before we see the likes of Alienware, MSI, and Asus work the RTX 5080 into their next-generation laptops.

    Nvidia RTX 5080 price

    When Nvidia unveiled the RTX 5080, it revealed a starting price of $999, with third-party cards likely being much more expensive, depending on how fancy their coolers and features are. While I don't know how likely it'll be to get an RTX 5080 for $999 when it hits the street, it is a significantly lower launch price than the RTX 4080, which launched for $1,199 back in 2022. That's surprising when you consider the RTX 5090 saw a price jump from $1,599 to $1,999 - also a starting price. As for the lower-tier cards, the RTX 5070 Ti will start at $749, with the RTX 5070 starting at $549.

    Getting a gaming laptop with an RTX 5080 is going to be quite a bit more expensive, of course, as you're buying an entire system instead of a single component. During the keynote at CES, Nvidia claimed systems will start at $2,199, with more premium systems likely getting a substantial price bump. With these gaming laptops, though, keep in mind that they'll be much less performant than the equivalent desktop GPU. My general rule of thumb, without seeing testing, is that the laptop GPU is the equivalent of two tiers down. So, for instance, the RTX 5090 mobile will likely perform at the level of the desktop RTX 5070, with the laptop RTX 5080 likely matching a desktop RTX 5060 - even if that hasn't even been announced yet.

    Nvidia RTX 5080 specs

    The Nvidia GeForce RTX 5080 is built on the Blackwell architecture that Nvidia's been using to power supercomputers for the past year or so. While I'm not lucky enough to test a data center GPU in Cyberpunk, Nvidia is making some lofty claims about the performance of this architecture, especially when it comes to AI performance, which is important for upscaling in modern PC games.

    This graphics card sports 10,752 CUDA cores across 84 Streaming Multiprocessors (SMs). That's a raw increase over the RTX 4080, which only sported 9,728 shaders. Assuming each Blackwell-based CUDA core has a significant IPC improvement over its last-gen counterpart, this increase in cores could mean significantly better performance.

    Of course, each SM has more than just CUDA cores. Nvidia hasn't released the chip layout, but assuming Blackwell has a similar layout to Ada Lovelace, each SM should have 4 Tensor Cores, which would make for a total of 336 Tensor Cores. Each SM also features an RT Core, which powers ray tracing. Nvidia is claiming a theoretical 1,801 TOPS of AI performance through the Tensor Cores and 171 teraflops of ray tracing performance through the RT Cores.
    Finally, the RTX 5080 sports 16GB of GDDR7 memory on a 256-bit bus. Because the RTX 5000 series are the first graphics cards to ever use GDDR7, I have no idea what impact this will have on performance, but it should be much faster than the GDDR6X on the RTX 4080 - though only time will tell.

    Nvidia RTX 5080 performance

    When Jensen Huang took the stage at CES 2025 with his flashy new jacket, he made some lofty claims about RTX 5090 performance, and even claimed that the RTX 5070 would match the RTX 4090. He supported these claims with benchmarks using the new DLSS 4, which coincidentally won't run on RTX 4000 cards, so you should take them with a grain of salt. The truth of the matter is that I have no idea how fast these graphics cards are, and I won't have a clear picture until I get them in the lab to actually test them in a controlled setting. Nvidia also made really lofty claims of gen-on-gen performance when it launched the RTX 4080, and that didn't turn out so well for Team Green. Luckily, with the RTX 5080 launching on January 30, we won't have to wait long to see what they have in store.

    Jackie Thomas is the Hardware and Buying Guides Editor at IGN and the PC components queen. You can follow her @Jackiecobra
  • WWW.IGN.COM
    Hands-On with the Lenovo Legion Go S: CES 2025
    When Lenovo announced the Legion Go S earlier today at CES, I thought it was just a lightweight version of the existing behemoth of a gaming handheld, and in many ways, the Windows 11 version is exactly that. However, the Lenovo Legion Go S is also available with SteamOS, which makes it $100 cheaper and so much easier to use. It gives us a glimpse of SteamOS's future and how it could become a serious threat to Windows, especially for handheld gaming PCs and gaming laptops.

    Lenovo Legion Go S Hands-on Photos

    Design

    From the images I saw in the press release, I thought the Lenovo Legion Go S would be much smaller than the original Legion Go. I was wrong. The Lenovo Legion Go S feels about the same in my hands as the Asus ROG Ally X, which is currently the best handheld gaming PC on the market. Even though it sports a big screen, it still feels comfortable, especially without the knobs and dials of the original device. Instead, the sides of the Lenovo Legion Go S are smooth and rounded, contouring nicely in the hand, and the hatched texture on the grips will probably help prevent accidental drops. The rear side of those grips hides the only "extra" buttons on the device: two paddle-like buttons, one on either side.

    That's a stark contrast from the Lenovo Legion Go, which had a ton of extra buttons and dials, as its removable controllers were supposed to double as a stand-in for a mouse. Luckily, the Lenovo Legion Go S retains the touchpad on the front of the device, even if it shrinks it down considerably. On the Windows 11 version of the device, it allows you to navigate the OS easily, though it was disabled on the SteamOS version that I played around with. A Valve representative told me that a fix is in the works, and the little trackpad should be functional when the handhelds make it to market later this month.

    Also on the front of the device, of course, are the face buttons present on any handheld gaming PC. These all feel nice and tactile, and the analog sticks also have RGB lighting surrounding them - another thing Valve had to build support for in SteamOS for the Go S. But the menu buttons are surprisingly the star of the show. Like with any other handheld out there, there are four menu buttons in total, two on each side of the display. The top button on each side functions as the start button on the right and the select button on the left. Beneath those are the menu buttons that call up either Steam or a quick settings panel. Unlike other handhelds, though, it was incredibly smooth and responsive, with the menus coming up immediately, where something like the ROG Ally might make me wait a second to bring up Armoury Crate - if it even opens Armoury Crate to begin with.

    Lenovo Legion Go S Images

    On the top of the device, you'll find an exhaust vent that spits out hot air, stretching between the two triggers. Luckily, the vent doesn't take up the entire height of the device, with half of that stretch dedicated to the power button, headphone jack, and two USB-C ports. The Lenovo Legion Go S display is an 8-inch 1200p LCD panel with a 120Hz refresh rate, and it is gorgeous. It's big enough that you'll clearly see anything you're playing and bright enough to use - at least in the brightly lit demo room at CES 2025.
    It marks probably the biggest improvement over the Steam Deck, as Valve's handheld is still limited to an 800p display.

    Performance

    Both the Windows 11 and SteamOS versions of the device are powered by either the recently announced Z2 Go or the current-generation Z1 Extreme. Obviously, I'll need to test it through a suite of games to get a clear picture of its gaming performance, but the games I did play on it ran at a high frame rate (admittedly, Lenovo didn't exactly stock the thing with the most demanding games). Beyond the APU at the core of the device, the Lenovo Legion Go S also sports 'up to' 32GB of LPDDR5X RAM and 'up to' a 1TB SSD - though I'm not sure how much memory or storage was in the device I actually used at the event. Again, that's another reason to wait for full reviews before you commit to this handheld. While I don't yet have a clear picture of the real-world performance of the Lenovo Legion Go S, I'm optimistic, especially given the affordable $599 price.

    Price and Availability

    As for the Lenovo Legion Go S release date, there are currently two launch windows: the high-end spec with the Z1 Extreme will be available later in January running Windows 11 for $729, and the version with the Z2 Go will be available in May, costing $599 for the Windows 11 version and $499 for the SteamOS version.

    Jackie Thomas is the Hardware and Buying Guides Editor at IGN and the PC components queen. You can follow her @Jackiecobra