• www.facebook.com
    In just a few years' time, the galleries inside the Toledo Museum of Art will look different. Michael Maltzan Architecture, Studio Zewde, and Once-Future Office have been selected by TMA officials to carry out a comprehensive reinstallation.
  • In Seoul, South Korea, Odong Public Library by UNSANGDONG Architects offers a master class in timber construction
    www.facebook.com
    A public library designed by UNSANGDONG Architects in South Korea offers a fresh case study in bringing natural light into deep spaces, and in wood construction more broadly. Odong Public Library is tucked away in the woods of Seoul's Odong Park forest.
  • Xbox Boss Says He Doesn't Want Manipulative Expansions
    gamingbolt.com
    With Starfield: Shattered Space and Diablo 4: Vessel of Hatred, we've seen a couple of major first-party Microsoft games getting expansions this year, so clearly, adding to its games with sizeable post-release expansions is something the company isn't against. But is it something that we can expect to see on a regular basis?

    In an interview with Game File, Microsoft Gaming CEO Phil Spencer touched on the topic, stating that developing post-launch expansions is by no means a top-down mandate for all first-party studios, and that above all, he doesn't want expansions that feel manipulative, preferring those that add actual value to the larger experience.

    "It's really left to the creators [regarding] what plan they have for their stories," he said. "I think it's a great way for us to reengage players who may be lapsed. I don't like expansions that are manipulative. I want it to have a unique point of view. I don't want it to be, like, the third level that you cut before you launched."

    Spencer went on to add that every release continues to be a learning experience even now, like the aforementioned Shattered Space, which released earlier this year to divisive responses, with criticism directed at its lack of new features. Interestingly, as per Spencer, those criticisms led director Todd Howard to wonder whether pairing the expansion with the release of buggies (which were added to the game for free earlier in the year) would have made it seem more substantial.

    "But we're always learning," he said. "Todd and I were talking about Shattered Space. Starfield is a game I put a ton of hours into and really love, but they've had this thing where they've added features throughout the year and then they had an expansion. I think some of the feedback on the expansion is: 'We wanted more features.' And he's like, 'Well, should we have waited to put buggies out?'"

    Ultimately, however, Spencer says that not every first-party Xbox title is necessarily going to do expansions. "I think you're trying to tune both development effort and the impact of the expansion. And I think there will always be a balance to managing the game month to month. But not every game will do expansions."
  • Payday 3 Will See Significantly Lower Level of Investment from Starbreeze Going Forward
    gamingbolt.com
    Payday 3 has been nothing short of a disaster for developer Starbreeze Studios. As big of a success as its predecessor was, since its launch last year, the multiplayer shooter has consistently been on the receiving end of criticism from its ever-dwindling player base. Starbreeze itself has stated repeatedly that Payday 3 has been performing below the company's internal expectations, and that continued underperformance is now yielding predictable consequences.

    In its recently published quarterly fiscal report, the developer announced plans to significantly decrease investment in Payday 3 and its post-launch plans, which doesn't inspire a lot of confidence in the game's future.

    "The level of investment during Payday 3's first year on the market, both through launched DLCs and Operation Medic Bag, has been at an elevated level," Starbreeze says. "Ahead of year two, we are confident in being able to continue delivering amounts of value to our players with a significantly lower level of investment."

    Payday 3 has failed to maintain a healthy player base since its release, so Starbreeze's decision to pull back support for the shooter doesn't come as a complete surprise, though it should be interesting to see what this means for the studio's long-term plans for the game, and for the Payday franchise in general. Payday 3 is available on PS5, Xbox Series X/S, and PC.
  • Call of Duty
    www.facebook.com
    Call of Duty #GamesMix | #CallOfDuty | #BlackOps6
  • www.facebook.com
    #GamesMix | #Steam | #Windows7
  • Trump names Brendan Carr as his FCC leader
    www.theverge.com
    President-elect Donald Trump said on Sunday that he intends to name Brendan Carr as chairman of the Federal Communications Commission. Carr, a commissioner at the FCC since 2017, has made a name for himself by threatening to use the commission's powers to regulate speech online and over the airwaves.

    Carr authored Project 2025's section on the FCC, using it to propose restrictions on social media platforms meant to bolster conservative speech. He proposed limiting the legal shield that gives websites wide latitude to host and moderate user-generated content. He also suggested putting regulations on tech companies that would limit their ability to block and prioritize that content as they choose.

    In the lead-up to the election, Carr threatened to use the commission's powers to punish companies for speech he doesn't like. Just this month, he floated revoking NBC's broadcast license after SNL featured Kamala Harris. As commissioner, he voted to repeal net neutrality rules in 2017 and later voted against restoring net neutrality earlier this year.

    In an exceptionally vague statement, Trump says Carr will end "the regulatory onslaught crippling America's Job Creators and Innovators." He also says that Carr will ensure that the FCC delivers for rural America.
  • Sometimes you just need a straightforward, old-school RPG
    www.theverge.com
    We're in a pretty good moment for sprawling, complex role-playing games. New releases like Metaphor: ReFantazio and Dragon Age Absolution, along with older titles like Elden Ring (including this year's expansion), Cyberpunk 2077, and Baldur's Gate 3, have sucked millions of people into their expansive worlds. They can be all-consuming experiences, offering players all kinds of freedom to explore their worlds and characters. But honestly? Sometimes I don't want to fuss with conversation wheels or make difficult, narrative-altering choices. I just want to go on a big adventure and slowly turn into an overpowered hero fighting monsters, and that's where the excellent new remake of Dragon Quest 3 comes in.

    The game originally came out way back in 1988 (it was initially called Dragon Warrior 3 in North America), and it has a refreshingly straightforward premise: you're on a quest to kill a great evil that your father failed to destroy many years before. There are a few colorful characters, but there isn't a whole lot else to it beyond a quest for revenge. This isn't a game you play for the story. Instead, it's about going on adventures, exploring strange and dangerous locations, and killing lots of cute blue slimes.

    That simplicity has stood the test of time rather well, with systems that are easy to grasp while still offering a challenge (even if the regular battles can eventually get tedious). This is a good thing, because the remake doesn't change all that much. The exploration and turn-based battles all still feel largely as they did. This means that the game follows a largely predictable path, as you go from one dungeon to the next, visiting towns in between to rest and gear up for the next challenge. There are some additions, like a new character class that lets you collect monsters, along with additional story scenes to flesh out the barebones narrative.

    Though it's mostly the same as the original, there are major changes to the presentation and some notable quality-of-life features. To start, this version of Dragon Quest 3 looks incredible. It uses the same visual style as games like Octopath Traveler and Triangle Strategy, which Square Enix awkwardly calls HD-2D. Basically, these are still games with pixel art characters, but they explore incredibly detailed worlds. The developers then throw in a tilt-shift effect that gives the whole thing a diorama-like appearance. The result is a game that looks decidedly old-school, but in a slick, modern way. I especially love the adorable monster animations in battle. There's also an updated orchestral score performed by the Tokyo Metropolitan Symphony, along with all-new voice acting.

    It's more than just an aesthetic overhaul, though. The Dragon Quest 3 remake also makes some very smart tweaks that often make it less frustrating to play. These include a more useful map that makes it easier to find where you're headed, an option to speed up battles so you can grind quicker, and a handful of difficulty options. These may sound small, but they're the kind of modern conveniences whose absence can make many older games difficult to play, especially ones built around repetitive combat like a classic RPG. Smoothing out those few rough edges has a big impact on the overall feel of a game like Dragon Quest 3.

    Even though it's technically the third game of the series, this version of Dragon Quest is also a great place for newcomers to see what the series is all about (and really, things haven't changed all that much over the years). It's a pretty, approachable way to experience an epic quest without getting overwhelmed by story or features. It's just you, some swords and magic, and a whole lot of monsters to defeat. Sometimes, that's all you really need.

    Dragon Quest 3 HD-2D is available now on the Switch, PlayStation, Xbox, and PC.
  • Kinetix: An Open-Ended Universe of Physics-based Tasks for Reinforcement Learning
    www.marktechpost.com
    Self-supervised learning on offline datasets has permitted large models to reach remarkable capabilities in both text and image domains. Still, analogous generalization for agents acting sequentially in decision-making problems is difficult to attain. The environments of classical Reinforcement Learning (RL) are mostly narrow and homogeneous and, consequently, hard to generalize from.

    Current RL methods often train agents on fixed tasks, limiting their ability to generalize to new environments. Platforms like MuJoCo and OpenAI Gym focus on specific scenarios, restricting agent adaptability. RL is based on Markov Decision Processes (MDPs), where agents maximize cumulative rewards by interacting with environments. Unsupervised Environment Design (UED) addresses these limitations by introducing a teacher-student framework, where the teacher designs tasks to challenge the agent and promote efficient learning; certain metrics ensure tasks are neither too easy nor impossible. Tools like JAX enable faster GPU-based RL training through parallelization, while transformers, using attention mechanisms, enhance agent performance by modeling complex relationships in sequential or unordered data.

    To address these limitations, a team of researchers from Oxford University has developed Kinetix, an open-ended space of physics-based RL environments. Kinetix can represent tasks ranging from robotic locomotion and grasping to video games and classic RL environments. Kinetix uses a novel hardware-accelerated physics engine, Jax2D, that allows for the cheap simulation of billions of environment steps during training. The trained agent exhibits strong physical reasoning capabilities, being able to zero-shot solve unseen human-designed environments. Furthermore, fine-tuning this general agent on tasks of interest shows significantly stronger performance than training an RL agent tabula rasa.
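    The JAX parallelization pattern mentioned above can be sketched roughly as follows. This is a minimal illustration of vectorizing environment steps with `jax.vmap` and compiling them with `jax.jit`; the toy `env_step` is an assumption for demonstration, not Jax2D's actual API.

```python
import jax
import jax.numpy as jnp

# Hypothetical toy environment: the state is a 2D position and the action
# nudges it; the reward penalizes distance from the origin.
def env_step(state, action):
    new_state = state + 0.1 * action
    reward = -jnp.sum(new_state ** 2)
    return new_state, reward

# vmap vectorizes one environment step across a batch of parallel
# environments; jit compiles the batched step for the accelerator.
batched_step = jax.jit(jax.vmap(env_step))

num_envs = 1024
states = jnp.zeros((num_envs, 2))
actions = jnp.ones((num_envs, 2))
states, rewards = batched_step(states, actions)
print(states.shape, rewards.shape)  # (1024, 2) (1024,)
```

    The same transform composes with a policy network, which is how engines like Jax2D can step very large numbers of environments per second on a single GPU.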
    Jax2D applies discrete Euler steps for rotational and positional velocities, and uses impulses and higher-order corrections to resolve instantaneous constraints, enabling efficient simulation of diverse physical tasks. Kinetix supports multi-discrete and continuous action spaces and a wide array of RL tasks.

    The researchers trained a general RL agent on tens of millions of procedurally generated 2D physics-based tasks. The agent exhibited strong physical reasoning capabilities, being able to zero-shot solve unseen human-designed environments. Fine-tuning it demonstrates the feasibility of large-scale, mixed-quality pre-training for online RL.

    In conclusion, Kinetix addresses the limitations of traditional RL environments by providing a diverse and open-ended space for training, leading to improved generalization and performance of RL agents. This work can serve as a foundation for future research in large-scale online pre-training of general RL agents and in unsupervised environment design.

    Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project.
  • [AI/ML] Keswani's Algorithm for 2-player Non-Convex Min-Max Optimization
    towardsai.net
    Author(s): Shashwat Gupta. Originally published on Towards AI.

    Keswani's Algorithm introduces a novel approach to solving two-player non-convex min-max optimization problems, particularly in differentiable sequential games where the sequence of player actions is crucial. This blog explores how Keswani's method addresses common challenges in min-max scenarios, with applications in areas of modern Machine Learning such as GANs, adversarial training, and distributed computing, providing a robust alternative to traditional algorithms like Gradient Descent Ascent (GDA).

    Problem Setting:

    We consider differentiable sequential games with two players: a leader who can commit to an action, and a follower who responds after observing the leader's action. In particular, we focus on the zero-sum case of this problem, also known as minimax optimization, i.e.,

    min_{x ∈ R^n} max_{y ∈ R^m} f(x, y).

    Unlike simultaneous games, many practical machine learning algorithms, including generative adversarial networks (GANs) [2] [3], adversarial training [4], and primal-dual reinforcement learning [5], explicitly specify the order of moves between players, and which player acts first is crucial for the problem. In particular, min-max optimization is crucial for GANs [2], statistics, online learning [6], deep learning, and distributed computing [7].

    Figure 1: Non-convex function visualization (Source: https://www.offconvex.org/2020/06/24/equilibrium-min-max/)

    Therefore, the classical notion of local Nash equilibrium from simultaneous games may not be a proper definition of local optima for sequential games, since minimax is in general not equal to maximin. Instead, we consider the notion of local minimax [8], which takes into account the sequential structure of minimax optimization.

    Models and Methods:

    The vanilla algorithm for solving sequential minimax optimization is gradient descent-ascent (GDA), where both players take a gradient update simultaneously.
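    As a toy illustration of the simultaneous GDA update (this sketches plain GDA, not Keswani's algorithm), consider the standard bilinear objective f(x, y) = x·y, whose unique equilibrium is the origin and on which simultaneous GDA is known to rotate rather than converge:

```python
import numpy as np

# Simultaneous gradient descent-ascent on f(x, y) = x * y:
# the min player x descends df/dx = y, the max player y ascends df/dy = x.
def gda(x0, y0, lr=0.1, steps=100):
    x, y = x0, y0
    for _ in range(steps):
        gx, gy = y, x                       # gradients of f(x, y) = x*y
        x, y = x - lr * gx, y + lr * gy     # simultaneous update
    return x, y

x, y = gda(1.0, 1.0)
print(np.hypot(x, y))  # distance from the equilibrium grows: GDA spirals outward
```

    Each update multiplies the distance to the origin by sqrt(1 + lr²) > 1, which is exactly the rotation/divergence behavior that motivates very small learning rates for GDA and alternatives like Keswani's algorithm.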
    However, GDA is known to suffer from two drawbacks:

    1. It has undesirable convergence properties: it fails to converge to some local minimax points, and can converge to fixed points that are not local minimax [9] [10].
    2. GDA exhibits strong rotation around fixed points, which requires using very small learning rates [11] [12] to converge.

    Figure 2: A visualization of GDA (Source: https://medium.com/common-notes/gradient-ascent-e23738464a19)

    Recently, there has been deep interest in min-max problems, due to [9] and other subsequent works. Jin et al. [8] provide great insights into the problem.

    Keswani's Algorithm:

    The algorithm essentially makes the response function max_{y ∈ R^m} f(·, y) tractable by selecting y-updates (for the max player) in a greedy manner, restricting the selection of updated points (x, y) to the sets P(x, y), defined as the sets of endpoints of paths along which f(x, ·) is non-decreasing. The algorithm does two new things to make computation feasible:

    1. Replace P(x, y) with a restricted variant P'(x, y), the endpoints of paths along which f(x, ·) increases at some rate greater than 0, which makes updates to y by any greedy algorithm (such as Algorithm 2) feasible.
    2. Introduce a soft probabilistic condition to account for discontinuous functions.

    Experimental Efficacy:

    A study [16] done at EPFL (by Shashwat et al.) confirmed the efficacy of Keswani's Algorithm in addressing key limitations of traditional methods like GDA (Gradient Descent Ascent) and OMD (Online Mirror Descent), especially in avoiding non-convergent cycling. The study proposed the following future research avenues:

    1. Explore stricter bounds for improved efficiency.
    2. Incorporate broader function categories to generalize findings.
    3. Test alternative optimizers to refine the algorithm's robustness.

    The full study for different functions is available as "Keswani's Algorithm for non-convex 2-player min-max optimisation" on www.slideshare.net.

    References:

    [1] V. Keswani, O. Mangoubi, S. Sachdeva, and N. K. Vishnoi, "A convergent and dimension-independent first-order algorithm for min-max optimization," arXiv preprint arXiv:2006.12376, 2020.
    [2] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial networks," Communications of the ACM, vol. 63, no. 11, pp. 139–144, 2020.
    [3] M. Arjovsky, S. Chintala, and L. Bottou, "Wasserstein generative adversarial networks," pp. 214–223, 2017.
    [4] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, "Towards deep learning models resistant to adversarial attacks," arXiv preprint arXiv:1706.06083, 2017.
    [5] W. S. Cho and M. Wang, "Deep primal-dual reinforcement learning: Accelerating actor-critic using Bellman duality," arXiv preprint arXiv:1712.02467, 2017.
    [6] N. Cesa-Bianchi and G. Lugosi, Prediction, Learning, and Games. Cambridge University Press, 2006.
    [7] J. Shamma, Cooperative Control of Distributed Multi-Agent Systems. John Wiley & Sons, 2008.
    [8] C. Jin, P. Netrapalli, and M. Jordan, "What is local optimality in nonconvex-nonconcave minimax optimization?" pp. 4880–4889, 2020.
    [9] Y. Wang, G. Zhang, and J. Ba, "On solving minimax optimization locally: A follow-the-ridge approach," arXiv preprint arXiv:1910.07512, 2019.
    [10] C. Daskalakis and I. Panageas, "The limit points of (optimistic) gradient descent in min-max optimization," Advances in Neural Information Processing Systems, vol. 31, 2018.
    [11] L. Mescheder, S. Nowozin, and A. Geiger, "The numerics of GANs," Advances in Neural Information Processing Systems, vol. 30, 2017.
    [12] D. Balduzzi, S. Racaniere, J. Martens, J. Foerster, K. Tuyls, and T. Graepel, "The mechanics of n-player differentiable games," pp. 354–363, 2018.
    [13] D. M. Ostrovskii, B. Barazandeh, and M. Razaviyayn, "Nonconvex-nonconcave min-max optimization with a small maximization domain," arXiv preprint arXiv:2110.03950, 2021.
    [14] J. Yang, N. Kiyavash, and N. He, "Global convergence and variance reduction for a class of nonconvex-nonconcave minimax problems," Advances in Neural Information Processing Systems, vol. 33, pp. 1153–1165, 2020.
    [15] G. Zhang, Y. Wang, L. Lessard, and R. B. Grosse, "Near-optimal local convergence of alternating gradient descent-ascent for minimax optimization," pp. 7659–7679, 2022.
    [16] S. Gupta, S. Breguel, M. Jaggi, and N. Flammarion, "Non-convex min-max optimisation," https://vixra.org/pdf/2312.0151v1.pdf

    Published via Towards AI