• Most gamers spend almost half their time playing the same 10 games
    metro.co.uk
Adam Starkey | February 26, 2025

Marvel Rivals is the latest live service hit, but for how long? (NetEase Games)

A new report has shed light on the games industry's increasingly dire situation, as decades-old live service games continue to dominate.

The games industry has been in a state of upheaval over the last few years, between the pandemic, growing development costs, and widespread layoffs.

Another key issue is the enduring popularity of several key live service games, like Fortnite and Minecraft, which are accounting for a larger and larger chunk of people's playing habits, stealing attention away from new titles.

This trend has been reinforced in new data from Circana, which highlights how the chances of success for new games are becoming increasingly slim.

According to Circana's executive director and games industry analyst Mat Piscatella, over 70% of active PlayStation 5 and Xbox Series X/S players in the US played at least one of the top 10 live service games in January. This includes Call Of Duty, Fortnite, Marvel Rivals, Roblox, and Minecraft.

While this isn't too concerning in itself, Piscatella notes that over 40% of time spent playing on the PlayStation 5 and Xbox Series X/S went to those same top 10 live service games, so people are playing those games, and only those games, for a significant chunk of time.

Piscatella highlights how total video game spending and total hours played in the US peaked in 2021, but the pace of games being released hasn't slowed since then. As such, without a growing audience of players, these live service titles are pulling a limited pool of players away from playing new games.

"Over 70% of US active PS5/XBS players played at least 1 of the top 10 live service games of the month during January. More than 40% of all time spent playing on PS5/XBS in the US during January went to those same top 10 live service games. Source: Circana Player Engagement Tracker" (Mat Piscatella, @matpiscatella.bsky.social, 2025-02-26)

"Used to be that players would jump from big game to big game to some other games but they were most often moving to something new," Piscatella wrote on Bluesky. "Now, the live service games suck out a ton of available time, and it's hard to beat free if it's good. So. Here we are."

A big part of the problem is companies like Sony and Ubisoft, who have continued to chase the live service trend at the expense of funding new games which will actually last beyond two weeks (ahem, Concord).

As such, we're now in a situation where there's a lack of big name new games (on the near horizon at least) to potentially turn the tide, with Ghost Of Yōtei being the only major title in Sony's stable confirmed for the PlayStation 5 this year.

There is some hope for 2025 though, with GTA 6 and the Nintendo Switch 2 both slated for this year, at a time when the industry desperately needs a shot in the arm.

If either disappoints, though, or is unexpectedly delayed, then there's going to be increasingly little to drag casual gamers away from the same small number of games they've always played.
There's a lot riding on the Switch 2 (Nintendo)
• New Valve VR headset launching in 2025 and you won't believe the price
    metro.co.uk
Adam Starkey | February 26, 2025

The successor to the Index might be nearly here (Valve)

A reliable leaker has claimed Valve will launch a new VR headset later this year, although it will come at a significant cost.

Valve's main moneymaker might be its Steam storefront, but the company has released several pieces of hardware over the years.

The most popular piece of kit from the company is the Steam Deck, even if its popularity is overplayed in comparison to the Nintendo Switch and other game consoles. Valve also launched its own VR headset, the Valve Index, in 2019, and previously tried to launch a line of console-like devices with the Steam Machine concept in 2015.

While Valve's hardware ventures have never sold in much volume, it seems the company isn't giving up on virtual reality, with a prominent leaker claiming a new wireless VR headset is set to launch later this year.

A new standalone VR headset codenamed Deckard (a Blade Runner reference) has long been rumoured, but according to prominent Valve leaker Gabe Follower, the device will launch by the end of 2025 at the eye-watering price of $1,200 (£945).

This price is apparently for the full bundle, which includes some in-house games (or demos) that are already done. While there's no indication of what these games are, they could be in a similar vein to The Lab, which launched alongside the HTC Vive headset.

This rumoured price isn't too surprising though, considering the most expensive bundle for the Valve Index sells for £919 on Steam, which includes controllers, base stations, and Half-Life: Alyx.

According to Gabe Follower, Valve wants to give the user "the best possible experience without cutting any costs" with its Deckard headset, and even at the $1,200 price, it will be sold at a loss.

It's claimed the headset will utilise the Steam operating system from the Steam Deck, albeit adapted for virtual reality. The leaker further states that a core feature of the headset is the ability to play "flat screen game[s] that are already playable on Steam Deck, but in VR on a big screen without a PC".

In November last year, 3D models of the Deckard's controllers were discovered in SteamVR driver files, codenamed Roy (as in Roy Batty). A reference to a new Steam Controller, codenamed Ibex, was also discovered, which was reportedly being tooled for a mass production goal in Valve's factories.

This all suggests Valve is preparing to announce a whole new slate of hardware in the coming months, although the company is notoriously elusive, so there's every chance we don't hear anything at all. As for when a new headset could be revealed, Valve previously announced hardware during the Game Developers Conference (GDC) in 2015, so it's possible something could be shown at this year's event next month.

Are these the Deckard controllers? (X/GabeFollower)
  • How artificial intelligence can make board games better
    www.economist.com
    It can iron out glitches in the rules before they go on the market
  • The skyrocketing demand for minerals will require new technologies
    www.economist.com
    Flexible drills, distributed power systems and, of course, artificial intelligence
  • Gundam GQuuuuuuX Beginning Is a Fascinating Re-Imagination of an Anime Classic
    gizmodo.com
Mobile Suit Gundam has spent 45 years in almost constant conversation with itself. From the successor series that built on the story of the 1979 anime, to myriad alternate universes that extrapolate on its series and themes, the venerable mecha franchise has always, in some ways, existed in a state of trying to evolve and live up to the bold ideas that made its original self so compelling in the first place. But for all those continuations and conversations, it's taken four and a half decades for the franchise to deliver one of its simplest, yet most fascinating twists on that original idea: and in that idea, Gundam GQuuuuuuX offers itself up to a huge amount of intriguing potential.

Warning: This review contains spoilers for the overall premise of Gundam GQuuuuuuX. If you've kept up with news of the show following Beginning's debut in Japan last month, none of this is new, but if you haven't, well...

Mobile Suit Gundam GQuuuuuuX Beginning, a collaboration between Neon Genesis Evangelion studio Khara and Gundam creator Bandai Filmworks, arriving in international theaters this week after its debut in Japan last month, is two very different prospects smashed into a single 80-odd-minute runtime. The latter half of the film is a condensed compilation of the first few episodes of the upcoming TV series, following the stories of three young teens, Amate "Machu" Yuzuriha (Tomoyo Kurosawa), Nyaan (Yui Ishikawa), and Shuji Ito (Shimba Tsuchiya), as they live their lives on a far-flung space colony five years after the conclusion of a devastating interstellar war, their paths crossing over the return of one of the legendary giant mecha from that conflict, the first Gundam-type mobile suit, and the ruling powers desperate to find it.

But the first half of it is the most shocking prospect: a prequel that details the events of that interstellar war, revealing that it is none other than an alternate version of the infamous One Year War from the original 1979 Gundam anime. One where the antagonistic forces of the secessionist space colony Zeon, led by Char Aznable (Yuuki Shin, taking on the mantle of Gundam's most famous character from original voice actor Shuichi Ikeda), successfully steal the prototype Gundam from the Earth Federation and ultimately win the conflict, changing history as Gundam fans have known it for 45 years.

Gundam fans of all stripes are more than familiar with alternate universes; the franchise has existed for so long not just by continuing the story of its prime timeline, the Universal Century, but by creating original series set in their own distinct universes. But this scenario, depicting an alternate history of the Universal Century, is one that has usually been consigned to non-canonical side stories and spinoffs, rather than being the underpinning for a brand new mainline entry, and GQuuuuuuX is clearly eager to mine the potential of it, and all the fannish questions it raises, from the get-go.

It makes Beginning and its bifurcated structure an interesting film to watch, albeit one that might initially seem somewhat overbearing for a complete newcomer to the franchise. While the movie is unabashed in laying out its connection to, and vision of, an alternate version of the 1979 anime (and does just enough to introduce unfamiliar audiences to that world before its second half picks up with the series' primary cast of characters six years later), the first half of Beginning is equally unabashed in being pure fan service for fans of the original Gundam.
The film has two distinct art styles (and does a surprisingly good job of making that feel less jarring than it sounds), with the first half emulating the modern-retro vision of the 1979 anime envisioned by original Gundam character designer Yoshikazu "Yas" Yasuhiko, seen in series like Gundam: The Origin and the 2022 movie Cucuruz Doan's Island. To anyone even remotely familiar with the first Gundam, it's a joyous experience: Beginning uses the original show's brilliant soundtrack, and it recreates exact shots and lines of dialogue from its opening episodes to a tee, twisting and reframing them in fascinating ways to fit its alternative rendition of events, where Char is suddenly the protagonist of this familiar narrative.

There's of course a thematic parallel to be had here with GQuuuuuuX director Kazuya Tsurumaki's work on the Rebuild of Evangelion film series, which itself took another icon of mecha anime and lovingly recreated moments of it before spinning off into its own unique vision of that original narrative. But, at least in this opening half of Beginning, GQuuuuuuX is interested in the immediate contrast of having it both ways at once, constantly calling back to iconic moments from the original show while ceaselessly prodding at them and twisting them to cast them in new light.

If this was all GQuuuuuuX was, it would be a loving, intriguing tribute to one of the most influential anime of all time, one befitting an anniversary celebration. But as the retelling of the One Year War gives way to the actual meat of the show itself in the second half of Beginning (set five years after the conclusion of the war, in 0085), the film (and the series itself) proves it is more than willing to move beyond the fan service of its recreation of the original Gundam, and marinate itself in the world it has built out of that recreation. While some characters carry over from the first half (mainly Challia Bull, played by Shinji Kawada, a minor but significant character from the 1979 anime who becomes a major player in GQuuuuuuX), the second half establishes a premise that has more parallels with a series like Zeta Gundam, imagining what life looks like in a post-war world for the generation that came of age after its conclusion.

Beyond the unique art style it's given here (stylized through the vision of famous Pokémon character designer Take, giving GQuuuuuuX a boldly colorful, poppy sci-fi aesthetic), the back half of the film, and the opening episodes of the series it's pulling from, display a great deal of promise. Machu, Nyaan, and Shuji all feel grounded and earnestly real as young people trying to navigate a world that has disenfranchised them each in myriad ways, tackling typical Gundam themes like the oppression of a police state, systemic inequality, and astropolitical divides, all the while thrusting them into the front lines of conflict. But the ways they interact with the classic Gundam worldbuilding elements beyond those themes (especially the idea of the Newtype, an evolutionary path for space-living humans that sees them develop enhanced psionic awareness and communicative skills) place them in an incredibly intriguing parallel to the fan service of Beginning's first half.

All of that (as well as, of course, some sumptuous mecha action in both halves of the story) synthesizes Gundam GQuuuuuuX, both in this film and whatever's to come in the TV anime, into something uniquely fascinating and positively vibrating with potential.
Marrying a loving vision of the original Gundam with a bold new vision for it, Beginning represents a pan-generational celebration of Gundam as a whole, one that's sure to have diehard fans' heads spinning at all the questions and ramifications it raises, while equally introducing newcomers to a new cast of compelling characters who likewise have to navigate those ramifications themselves alongside the audience.

Mobile Suit Gundam GQuuuuuuX Beginning is set to hit US theaters in a limited capacity February 28, with early screenings starting today. The TV anime will begin broadcasting in Japan April 8.
  • Watch This Startup Pitch Dystopian Sweatshop-Monitoring Software
    gizmodo.com
Following intense backlash, startup accelerator Y Combinator quietly pulled a video from its X account demonstrating a new startup's AI-powered worker monitoring software. The startup, called Optifye, says on its website that it is developing "AI line optimization for manual assembly" that can boost efficiency by up to 30%. That sounds anodyne enough until you watch the video.

"Thirty-seven percent line efficiency? That's bad," the video starts, as a young man looks at a dashboard showing the supposed performance metrics of a specific worker on a manufacturing line. The man calls his supervisor, who looks at a dashboard filled with red and begins haranguing the worker, whom he refers to not by name but only as "Workspace 17," over a video feed pointing down at the worker's station. The worker pleads that he has been working all day, only for the manager to look at another dashboard and retort, "You haven't even hit your hourly output once today, and you had 11.4% efficiency." How that efficiency number is calculated, or what it would even mean to a line worker, is unclear.

"It's just been a rough day," the worker adds, only for the manager to say, "Rough day? More like a rough month."

Y Combinator is considered the premier boot camp for new startups to get off the ground and provides accepted companies with $500,000 in initial funding.

There are many things one could say about this video. It, of course, comes off as cold and inhumane. But what is perhaps most funny is that, despite claiming it can increase assembly line efficiency, in the demo video itself Optifye's software has zero impact other than to harass the worker. The so-called managers do not take any tangible steps to resolve the issue other than yelling at the man. How exactly the software can improve efficiency, other than encouraging managers to berate their reports, is unclear. Optifye's website leans on the idea that only what is measured can be improved.

Maybe the video garnered such a visceral reaction from people across Silicon Valley due to an underlying PTSD from the way in which software engineers are already monitored through tracking software like Jira. But the few defenders out there have pointed out that the founders of Optifye appear to be from India, and dubiously argue that work ethic in the country is much less reliable than one can expect in the United States. Optifye is likely targeting the Indian manufacturing base, where different, and more, accountability tools may be necessary. However, the poor productivity may be due in part to a bad managerial class in the country, where a 2022 report found 45% of workers dreaded going to work due to poor treatment by a supervisor. And needless to say, video monitoring is not an accepted practice in most of the world and is never received well when it is identified.

Another argument that has been made defending the video is that critics are hypocritical to complain about sweatshop practices while using devices, like iPhones, made using cheap foreign labor. But it is tough to avoid these products today due to the complex global supply chain and the glacial pace at which change can be made. One can still denounce these types of surveillance practices, not endorse or support them, without being a hypocrite. No matter where you come down on the subject, especially considering the cultural nuance, the video was quite tone-deaf considering it was published on the X account of a U.S.-based investment firm.
How nobody at the company recognized what type of feedback the video would receive is damning.

Gizmodo reached out to Y Combinator for comment.
• New Churches St. Luke & St. Matthew / Meixner Schlüter Wendt
    www.archdaily.com
Architects: Meixner Schlüter Wendt
Area: 685 m²
Year: 2024
Photographs: Christoph Kraneburg
Manufacturers: JUNG, Sto, Duravit, FSB Franz Schneider Brakel, Bauwerk Parkett, Création Baumann, Knauf, SLV

Text description provided by the architects. The existing ensemble, consisting of a childcare facility and the Lukaskirche (Church of St. Luke), is being expanded to feature a new church square, a new parish hall, and the new outer church. The new and enlarged ensemble, with its combination of old and new builds, reflects the fact that in the course of advancing existing structures, the users, by having to give up old builds in exchange for the new, frequently have to bid farewell to the past and try to make the new their own. The site extends, as if along a timeline, from west to east, starting with the childcare center and the existing church and moving into the new parish hall, creating a kind of "building history" which is open-ended thanks to the outer church. The latter, as a constructed intermediate state, can be read ambivalently as half-open/half-closed, using construction to depict a process. As a realm of possibilities and as an open space in both senses, it points to the future.

The expansion work is divided into three sequential zones: the church square, the parish hall, and the outer church. This continues eastwards the existing sequence of childcare facility and church on the west side of the site. To the west, the new church square connects the parish hall with the existing church. It is a public square, allowing visitors to the church and parish hall to disperse easily, and it encourages social gatherings such as events and festivities. Facing Brunnenweg to the south, the "street altar" opens onto the street and the urban environment, highlighting the ensemble's ecclesiastical function. The parish hall has a simple, rectangular footprint but stands out for its lengthwise orientation. To the south, a skylight running the entire width of the building embeds it in the urban fabric. This is the church hall: the place where the parish activities and events take place, including the church services. The edifice flattens out at an angle towards the back, in line with the use of the side rooms. The hall boasts two generous floor-to-ceiling windows located opposite each other, forging a link between the square, the parish hall, and the outer church. This line of vision allows users to experience the sequencing concept for themselves.

To the east, the new outer church provides an additional outdoor space that, unlike the church square, is sheltered and introspective, with an unimpeded view upwards exclusively of trees and the sky. Like a traditional church spire, a skylight sloping towards Gersprenzweg acts as an urban highlight. The skylight creates atmospheric lighting inside the outer church, lending church services and other events a spiritual touch. Under the skylight, the altar, the ambo and the "stairway to heaven" echo the structure of the outer church on the scale of furniture. Since both use fair-face concrete, the fit-out and the building meld.
The white cross on the outside wall catches the eye thanks to its special color and materials.

Project location: Offenbach, Germany
• Krill or anchovy? Baleen whale songs may indicate what's on the menu
    www.popsci.com
For humpback whales in the Pacific, their songs may be a solid indicator of the type of food that is swimming nearby. By listening in on their songs from year to year, a team of scientists found variations in their tunes based on the species that are available for them to eat. The findings are detailed in a study published February 26 in the journal PLOS One.

The Pacific Ocean is the largest body of water on Earth, and monitoring populations of large marine mammals like whales can be challenging. According to NOAA, some humpback whales in the Northern Pacific migrate roughly 3,000 miles from Alaska, where they feed, to Hawaii, where they breed. They can even complete this journey in only 28 days. Depending on the species, whale songs can also travel thousands of miles.

To monitor and track these baleen whales, scientists deploy underwater microphones called hydrophones. In the new study, the team monitored songs from blue, fin, and humpback whales. They followed the pods traveling off the West Coast of the United States for six years, looking to see what song data might say about the health of their ecosystem.

They found large variations in how often the whales were vocalizing over the six years of study. The humpback whale songs continually increased: they were detected on 34 percent of days at the beginning of the study and eventually rose to 76 percent of days after six years. This increase in song also consistently tracked alongside improved foraging conditions for humpback whales. Initially, there was a large increase in krill, shortly followed by a leap in anchovy abundance.

Example photo of a humpback whale fluke, from which identification of individuals is enabled through distinction of fluke shape and coloration. This photo by T. Cheeseman is of the individual most frequently identified in the Monterey Bay region during the study period, Fran, who was killed by a ship strike in August 2022. CREDIT: Ryan et al., 2025, PLOS One.

By comparison, blue and fin whale song rose primarily during the years when krill was more plentiful. The increase in humpback whale songs is also consistent with their ability to switch between dominant prey. Skin biopsy samples confirmed that changes had occurred in the whales' diets.

Other factors that may have contributed to the patterns include whether or not other whales were vocalizing nearby. However, the changes in foraging conditions were the most consistent factor in this study.

[Related: We finally know how baleen whales make noise.]

According to the team, these findings indicate that seasonal and yearly changes in the amount of baleen whale song might mirror changes in the local food web. A better understanding of the relationship between whale song detection and the availability of krill, anchovy, and other fish may help researchers better interpret hydrophone data.

"Surprisingly, the acoustic behavior of baleen whales provides insights about which species can better adapt to changing ocean conditions," John Ryan, a study co-author and biological oceanographer at the Monterey Bay Aquarium Research Institute, said in a statement. "Our findings can help resource managers and policymakers better protect endangered whales."
  • Morning lark or night owl? Prevailing ideas of mammal activity are outdated
    www.popsci.com
Squirrels are plentiful when the sun's out, while rats are a more common sight after dusk. Often, the former are described as diurnal, and the latter nocturnal. But for many animals, those labels are due for a revision. Most mammals' daily activity cycles aren't quite so cut and dried, according to a new study of global wildlife camera data. The rhythms of many species are more varied and flexible than previously thought, and those rhythms are shifting in response to humans.

The research, published February 26 in the journal Science Advances, combines nearly 9 million recorded observations into one of the most complete analyses ever of the timing of mammalian activity. It upends many commonly held assumptions about wildlife habits, and has consequences for conservation.

"You see all these terms in the literature. This species is nocturnal, this one is diurnal. It sounds very authoritative, but we weren't so sure based on our collective field experience," Kadambari Devarajan, an ecologist and conservation biologist who co-led the study as part of her post-doctoral fellowship at the University of Rhode Island, tells Popular Science. Anecdotally, it seemed like there was a lot more plasticity, particularly in response to urbanization and human development. So, Devarajan and her collaborators took on the mammoth task of checking for themselves.

The four core team members worked with hundreds of other researchers around the world to compile data from 200 camera trap projects across 38 countries. They organized and assessed this data based on factors like time of day, species, location, and daylight length given the date and geography. Then, they searched previously published research for mentions of each species' established activity pattern: nocturnal, diurnal, crepuscular (most active at dawn and dusk), or cathemeral (active both day and night).

Comparing the observational analysis with the research review revealed a significant mismatch. Just 39 percent of the 445 species captured on camera were accurately described in the previously published research. Common raccoons, for instance, are cataloged as nocturnal in the scientific literature, but hundreds of observations from around the Americas showed instances of raccoon activity in the day, night, and twilight. In many locations, the furry masked bandits tended towards nighttime movement, but in others they spanned the 24-hour schedule.

Wider-ranging species tended to show more varied patterns. Between 60 and 73 percent of the species recorded in more than one project showed signs of at least two types of daily rhythms: some mixture of daytime, night, and twilight activity. Geography of any given population also had an influence: the farther animals were from the equator, the more likely they were to be active during the day. Smaller species also tended to be more nocturnal overall, and increased daylight hours were correlated with more daytime animal activity.

A fosa active during the daytime in Madagascar; a species that was observed to be sometimes diurnal, sometimes nocturnal, and sometimes cathemeral. CREDIT: Erin Wampole, Washington Department of Fish and Wildlife.

Amazingly, not one species included in the dataset showed an exclusively crepuscular pattern of dawn and dusk activity, as defined in the study, despite many animals being characterized as such in prior research.
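To make the categories concrete, here is a minimal sketch of how camera-trap detection times for one species might be binned into day, night, and twilight and then given one of these labels. The helper name, the one-hour twilight window, and the 50 percent dominance threshold are all illustrative assumptions, not the study's actual definitions, which account for location and date-specific daylight length.

```python
from datetime import datetime, timedelta

def classify_diel_pattern(detection_times, sunrise, sunset, twilight=timedelta(hours=1)):
    """Bin detections into day/night/twilight and label the overall pattern.

    Thresholds here are illustrative only, not the study's definitions.
    """
    day = night = twi = 0
    for t in detection_times:
        if abs(t - sunrise) <= twilight or abs(t - sunset) <= twilight:
            twi += 1                      # within an hour of sunrise or sunset
        elif sunrise < t < sunset:
            day += 1
        else:
            night += 1
    total = day + night + twi
    frac = {"diurnal": day / total, "nocturnal": night / total, "crepuscular": twi / total}
    # Call it cathemeral when no single period clearly dominates.
    label = max(frac, key=frac.get) if max(frac.values()) > 0.5 else "cathemeral"
    return label, frac

# Toy example: detections spread across day, night, and twilight read as cathemeral.
base = datetime(2024, 6, 1)
sunrise, sunset = base.replace(hour=6), base.replace(hour=20)
detections = [base.replace(hour=h) for h in (2, 3, 7, 12, 13, 19, 23)]
print(classify_diel_pattern(detections, sunrise, sunset))
```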
For example, white-tailed deer are generally described as twilight specialists, but according to the new photo evidence, they tend more towards intermittent bouts of both daytime and nighttime activity across much of their range. In some places, they seem to be active only during the day or only at night.

Finally, where human environmental impacts like habitat loss and urbanization were more pronounced, animals were more likely to shift their schedules and display greater flexibility. In most instances, increased human influence tended to push wildlife towards nighttime activity, in agreement with past research. Striped skunks, gray foxes, and American porcupines, along with more than a dozen other mammals, were more likely to be nocturnal in highly developed areas. A handful of animals, like the common tapeti (a species of South American rabbit), demonstrated the opposite effect, becoming more active during the day. All of the study results can be explored in detail via a publicly shared online tool.

"The ambition of what they did in terms of the number of observations and also how they set out to test these typical classifications is novel and important," says Cole Burton, a conservation biologist and associate professor at the University of British Columbia who wasn't involved in the new research.

Burton has previously conducted large camera trap studies of wildlife, including one assessing how animals responded to human activity amid pandemic lockdowns. Though narrower in scope, his own work also showed that many mammals are flexible about their activity timing.

In light of the new research, he wonders if there's any point in strictly defining mammalian patterns at all. Maybe we should be moving to a more continuous understanding, a spectrum, rather than trying to put the animals in these neat little boxes. Individual wildlife populations might be better understood via timetable than a label, Burton suggests.

The idea goes beyond just terminology. Decades of previous scientific research have incorporated these apparently oversimplified animal activity categories into ecology models, to help scientists grasp animals' niches and interactions with each other. The new study implies species' previously understood ecological roles may not be so concrete and contained.

"Based on this camera trap data set, those old classifications often don't hold," says Burton. "That is a really important result for people to be aware of, especially those who are using those kinds of traits in their models."

Then, there are the ramifications for wildlife itself. A lot of conservation studies have focused on spatial distribution patterns, but the temporal component may be just as important, says Devarajan. When an animal is active and when it needs access to certain resources could be critical for protecting a species.

As the study title says, it's not just where the wild things are, but when. Everything from train schedules, traffic patterns, trash collection, and public park hours could either lead to human-wildlife conflict, or be adjusted with conservation in mind. Getting a better grasp on wildlife habits could allow land managers to preserve time slots as well as spaces for species to thrive undisturbed.

Yet first, more research is needed. Though the new study is sweeping and impressive, both Devarajan and Burton note it has limitations. The researchers didn't look directly at human activity to determine its influence on animals, using development and urban density as an imperfect proxy instead.
The study is one of the first times scientists have tried to quantify each type of animal tendency, but in doing so, their definitions may have been overly restrictive. Their camera trap dataset is incomplete; it doesn't account for every region around the world and is biased towards dry season observations because of the difficulty of maintaining a camera through monsoons. These gaps mean they didn't directly model the effect of seasonal change for any of their data, despite the fact that animal activity is known to shift throughout the year.

Finally, the study doesn't offer any insight into how these apparent shifts in animal activity are impacting wildlife. Follow-up research comparing specific populations of a species over time will be needed to understand whether changing rhythms are a useful adaptation or detrimental in the long term. "What we really need to know is, is this working out for them," Burton says. "Or is it too much of a cost?"

Despite mammals being relatively well studied, "we need to maintain a certain humility. There's a lot we don't know," says Burton. "I feel like animals are working hard to adapt to us and how we've changed the environment. Being aware of that and trying to understand that helps us help them be successful in those changes, so they actually lead to coexistence."
  • More brainlike computers could change AI for the better
    www.sciencenews.org
The tiny worm Caenorhabditis elegans has a brain just about the width of a human hair. Yet this animal's itty-bitty organ coordinates and computes complex movements as the worm forages for food. "When I look at [C. elegans] and consider its brain, I'm really struck by the profound elegance and efficiency," says Daniela Rus, a computer scientist at MIT. Rus is so enamored with the worm's brain that she cofounded a company, Liquid AI, to build a new type of artificial intelligence inspired by it.

Rus is part of a wave of researchers who think that making traditional AI more brainlike could create leaner, nimbler and perhaps smarter technology. "To improve AI truly, we need to incorporate insights from neuroscience," says Kanaka Rajan, a computational neuroscientist at Harvard University.

Such neuromorphic technology probably won't completely replace regular computers or traditional AI models, says Mike Davies, who directs the Neuromorphic Computing Lab at Intel in Santa Clara, Calif. Rather, he sees a future in which many types of systems coexist.

The tiny worm C. elegans is inspiration for a new type of artificial intelligence. Hakan Kvarnstrom/Science Source

Imitating brains isn't a new idea. In the 1950s, neurobiologist Frank Rosenblatt devised the perceptron. The machine was a highly simplified model of the way a brain's nerve cells communicate, with a single layer of interconnected artificial neurons, each performing a single mathematical function.

Decades later, the perceptron's basic design helped inspire deep learning, a computing technique that recognizes complex patterns in data using layer upon layer of nested artificial neurons. These neurons pass input data along, manipulating it to produce an output. But this approach can't match a brain's ability to adapt nimbly to new situations or learn from a single experience. Instead, most of today's AI models devour massive amounts of data and energy to learn to perform impressive tasks, such as guiding a self-driving car.

"It's just bigger, bigger, bigger," says Subutai Ahmad, chief technology officer of Numenta, a company looking to human brain networks for efficiency. "Traditional AI models are so brute force and inefficient."

In January, the Trump administration announced Stargate, a plan to funnel $500 billion into new data centers to support energy-hungry AI models. But a model released by the Chinese company DeepSeek is bucking that trend, duplicating chatbots' capabilities with less data and energy. Whether brute force or efficiency will win out is unclear.

Meanwhile, neuromorphic computing experts have been making hardware, architecture and algorithms ever more brainlike. "People are bringing out new concepts and new hardware implementations all the time," says computer scientist Catherine Schuman of the University of Tennessee, Knoxville. These advances mainly help with biological brain research and sensor development and haven't been a part of mainstream AI. At least, not yet.

Here are four neuromorphic systems that hold potential for improving AI.

Making artificial neurons more lifelike

Real neurons are complex living cells with many parts. They are constantly receiving signals from the environment, with their electric charge fluctuating until it crosses a specific threshold and fires. This activity sends an electrical impulse across the cell and to neighboring neurons. Neuromorphic computing engineers have managed to mimic this pattern in artificial neurons.
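That threshold-and-fire behavior is commonly captured with a leaky integrate-and-fire model. The sketch below is a generic, minimal version of that textbook model, not the neuron circuit of any chip described in this article; the function name and parameter values are illustrative.

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Simulate one leaky integrate-and-fire neuron over a sequence of input values.

    The membrane potential leaks toward v_rest, integrates the input current,
    and emits a discrete spike (then resets) whenever it crosses v_thresh.
    """
    v = v_rest
    spike_times = []
    for t, i_in in enumerate(input_current):
        v += dt * (-(v - v_rest) + i_in) / tau   # leaky integration step
        if v >= v_thresh:                        # threshold crossing
            spike_times.append(t)                # record the spike event
            v = v_reset                          # reset after firing
    return spike_times

# No drive for 50 steps (silence), then a constant drive above threshold:
# the neuron emits regularly spaced spikes only while it is being driven.
drive = np.concatenate([np.zeros(50), 1.5 * np.ones(200)])
print(lif_neuron(drive))
```

With no input the model stays silent, and with a steady drive it fires at regular intervals; that event-driven silence is the sparseness that spiking hardware exploits.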
These neurons, part of spiking neural networks, simulate the signals of an actual brain, creating discrete spikes that carry information through the network. Such a network may be modeled in software or built in hardware.

Spikes are not modeled in traditional AI's deep learning networks. Instead, in those models, each artificial neuron is a little ball with one type of information processing, says Mihai Petrovici, a neuromorphic computing researcher at the University of Bern in Switzerland. Each of these little balls links to the others through connections called parameters. Usually, every input into the network triggers every parameter to activate at once, which is inefficient. DeepSeek divides traditional AI's deep learning network into smaller sections that can activate separately, which is more efficient.

But real brains and artificial spiking networks achieve efficiency a bit differently. Each neuron is not connected to every other one. Also, only if electrical signals reach a specific threshold does a neuron fire and send information to its connections. The network activates sparsely rather than all at once.

Comparing networks: Typical deep learning networks are dense, with interconnections among all their identical neurons. Brain networks are sparse, and their neurons can take on different roles. Neuroscientists are still working out how complex brain networks are actually organized. J.D. Monaco, K. Rajan and G.M. Hwang

Importantly, brains and spiking networks combine memory and processing. "The connections that represent the memory are also the elements that do the computation," Petrovici says. Mainstream computer hardware, which runs most AI, separates memory and processing. AI processing usually happens in a graphical processing unit, or GPU. A different hardware component, such as random access memory, or RAM, handles storage. This makes for simpler computer architecture. But zipping data back and forth among these components eats up energy and slows down computation.

The neuromorphic computer chip BrainScaleS-2 combines these efficient features. It contains sparsely connected spiking neurons physically built into hardware, and the neural connections store memories and perform computation.

BrainScaleS-2 was developed as part of the Human Brain Project, a 10-year effort to understand the human brain by modeling it in a computer. But some researchers looked at how the tech developed from the project might make AI more efficient. For example, Petrovici trained different AIs to play the video game Pong. A spiking network running on the BrainScaleS-2 hardware used a thousandth of the energy of a simulation of the same network running on a CPU. But the real test was to compare the neuromorphic setup with a deep learning network running on a GPU. Training the spiking system to recognize handwriting used a hundredth the energy of the typical system, the team found.

For spiking neural network hardware to be a real player in the AI realm, it has to be scaled up and distributed. Then, it could be useful to computation more broadly, Schuman says.

Connecting billions of spiking neurons

The academic teams working on BrainScaleS-2 currently have no plans to scale up the chip, but some of the world's biggest tech companies, like Intel and IBM, do.

In 2023, IBM introduced its NorthPole neuromorphic chip, which combines memory and processing to save energy.
And in 2024, Intel announced the launch of Hala Point, "the largest neuromorphic system in the world right now," says computer scientist Craig Vineyard of Sandia National Laboratories in New Mexico.

Despite that impressive superlative, there's nothing about the system that visually stands out, Vineyard says. Hala Point fits into a luggage-sized box. Yet it contains 1,152 of Intel's Loihi 2 neuromorphic chips for a record-setting total of 1.15 billion electronic neurons, roughly the same number of neurons as in an owl brain.

Like BrainScaleS-2, each Loihi 2 chip contains a hardware version of a spiking neural network. The physical spiking network also uses sparsity and combines memory and processing. This neuromorphic computer has fundamentally different computational characteristics than a regular digital machine, Schuman says.

This BrainScaleS-2 computer chip was built to work like a brain. It contains 512 simulated neurons connected with up to 212,000 synapses. Heidelberg Univ.

These features improve Hala Point's efficiency compared with that of typical computer hardware. "The realized efficiency we get is definitely significantly beyond what you can achieve with GPU technology," Davies says.

In 2024, Davies and a team of researchers showed that the Loihi 2 hardware can save energy even while running typical deep learning algorithms. The researchers took several audio and video processing tasks and modified their deep learning algorithms so they could run on the new spiking hardware. This process introduces sparsity in the activity of the network, Davies says.

A deep learning network running on a regular digital computer processes every single frame of audio or video as something completely new. But spiking hardware maintains some knowledge of what it saw before, Davies says. When part of the audio or video stream stays the same from one frame to the next, the system doesn't have to start over from scratch. It can keep the network idle as much as possible when nothing interesting is changing. On one video task the team tested, a Loihi 2 chip running a sparsified version of a deep learning algorithm used 1/150th the energy of a GPU running the regular version of the algorithm.

The audio and video test showed that one type of architecture can do a good job running a deep learning algorithm. But developers can reconfigure the spiking neural networks within Loihi 2 and BrainScaleS-2 in numerous ways, coming up with new architectures that use the hardware differently. They can also implement different kinds of algorithms using these architectures.

It's not yet clear what algorithms and architectures would make the best use of this hardware or offer the highest energy savings. But researchers are making headway. A January 2025 paper introduced a new way to model neurons in a spiking network, including both the shape of a spike and its timing. This approach makes it possible for an energy-efficient spiking system to use one of the learning techniques that has made mainstream AI so successful.

Neuromorphic hardware may be best suited to algorithms that haven't even been invented yet. "That's actually the most exciting thing," says neuroscientist James Aimone, also of Sandia National Labs. The technology has a lot of potential, he says. "It could make the future of computing energy efficient and more capable."

Designing an adaptable brain

Neuroscientists agree that one of the most important features of a living brain is the ability to learn on the go. And it doesn't take a large brain to do this.
C. elegans, one of the first animals to have its brain completely mapped, has 302 neurons and around 7,000 synapses that allow it to learn continuously and efficiently as it explores its world.

Ramin Hasani studied how C. elegans learns as part of his graduate work in 2017 and was working to model what scientists knew about the worm's brain in computer software. Rus found out about this work while out for a run with Hasani's adviser at an academic conference. At the time, she was training AI models with hundreds of thousands of artificial neurons and half a million parameters to operate self-driving cars.

A C. elegans brain (its neurons are colored by type in this reconstruction) learns constantly and is a model for building more efficient AI. D. Witvliet et al/bioRxiv.org 2020

If a worm doesn't need a huge network to learn, Rus realized, maybe AI models could make do with smaller ones, too.

She invited Hasani and one of his colleagues to move to MIT. Together, the researchers worked on a series of projects to give self-driving cars and drones more wormlike brains, ones that are small and adaptable. The end result was an AI algorithm that the team calls a liquid neural network.

"You can think of this like a new flavor of AI," says Rajan, the Harvard neuroscientist.

Standard deep learning networks, despite their impressive size, learn only during a training phase of development. When training is complete, the networks' parameters can't change. "The model stays frozen," Rus says. Liquid neural networks, as the name suggests, are more fluid. Though they incorporate many of the same techniques as standard deep learning, these new networks can shift and change their parameters over time. Rus says that they learn and adapt based on the inputs they see, much like biological systems.

To design this new algorithm, Hasani and his team wrote mathematical equations that mimic how a worm's neurons activate in response to information that changes over time. These equations govern the liquid neural network's behavior. Such equations are notoriously difficult to solve, but the team found a way to approximate a solution, making it possible to run the network in real time. This solution is remarkable, Rajan says.

In 2023, Rus, Hasani and their colleagues showed that liquid neural networks could adapt to new situations better than much larger typical AI models. The team trained two types of liquid neural networks and four types of typical deep learning networks to pilot a drone toward different objects in the woods. When training was complete, they put one of the training objects, a red chair, into completely different environments, including a patio and a lawn beside a building. The smallest liquid network, containing just 34 artificial neurons and around 12,000 parameters, outperformed the largest standard AI network they tested, which contained around 250,000 parameters.

The team started the company Liquid AI around the same time and has worked with the U.S. military's Defense Advanced Research Projects Agency to test their model flying an actual aircraft.

The company has also scaled up its models to compete directly with regular deep learning. In January, it announced LFM-7B, a 7-billion-parameter liquid neural network that generates answers to prompts. The team reports that the network outperforms typical language models of the same size.

"I'm excited about Liquid AI because I believe it could transform the future of AI and computing," Rus says.

This approach won't necessarily use less energy than mainstream AI.
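To give a flavor of what equations that shift with the input can look like, here is a rough, minimal sketch of a liquid time-constant style update step. It is a toy formulation: the function name, weights, and parameters are made up, it uses a plain Euler step rather than the more careful approximation the team developed, and it is not Liquid AI's production model.

```python
import numpy as np

def ltc_step(x, u, dt, tau, W, b, A):
    """One explicit-Euler update of a liquid time-constant style hidden layer.

    The gate f is recomputed from the current state x and input u at every
    step, so each unit's effective time constant changes as the input stream
    changes. Toy formulation with illustrative parameters only.
    """
    f = np.tanh(W @ np.concatenate([x, u]) + b)   # input- and state-dependent gate
    dxdt = -(1.0 / tau + f) * x + f * A           # leak plus pull toward target state A
    return x + dt * dxdt

# Tiny demo: 3 hidden units driven by a 2-dimensional, time-varying signal.
rng = np.random.default_rng(0)
n_hidden, n_in = 3, 2
W = rng.normal(scale=0.5, size=(n_hidden, n_hidden + n_in))
b = np.zeros(n_hidden)
A = np.ones(n_hidden)   # asymptotic state each unit is pulled toward when its gate opens
x = np.zeros(n_hidden)
for t in range(100):
    u = np.array([np.sin(0.1 * t), np.cos(0.05 * t)])
    x = ltc_step(x, u, dt=0.1, tau=1.0, W=W, b=b, A=A)
print(x)
```

The point of the sketch is that the gate is re-evaluated from the live input at every step, so the dynamics keep adjusting to the data stream, unlike a standard network whose behavior is fixed once training ends.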
"Its constant adaptation makes it computationally intensive," Rajan says. But the approach represents a significant step towards more realistic AI that more closely mimics the brain.

Building on human brain structure

While Rus is working off the blueprint of the worm brain, others are taking inspiration from a very specific region of the human brain: the neocortex, a wrinkly sheet of tissue that covers the brain's surface.

"The neocortex is the brain's powerhouse for higher-order thinking," Rajan says. "It's where sensory information, decision-making and abstract reasoning converge."

This part of the brain contains six thin horizontal layers of cells, organized into tens of thousands of vertical structures called cortical columns. Each column contains around 50,000 to 100,000 neurons arranged in several hundred vertical minicolumns.

These minicolumns are the primary drivers of intelligence, neuroscientist and computer scientist Jeff Hawkins argues. In other parts of the brain, grid and place cells help an animal sense its position in space. Hawkins theorizes that such cells also exist in minicolumns, where they track and model all our sensations and ideas. For example, as a fingertip moves, he says, these columns make a model of what it's touching. It's the same with our eyes and what we see, Hawkins explains in his 2021 book A Thousand Brains.

It's a bold idea, Rajan says. Current neuroscience holds that intelligence involves the interaction of many different brain systems, not just these mapping cells, she says.

Though Hawkins' theory hasn't reached widespread acceptance in the neuroscience community, it's generating a lot of interest, she says. That includes excitement about its potential uses for neuromorphic computing.

Hawkins developed his theory at Numenta, a company he cofounded in 2005. The company's Thousand Brains Project, announced in 2024, is a plan for pairing computing architecture with new algorithms.

In some early testing for the project a few years ago, the team described an architecture that included seven cortical columns and hundreds of minicolumns but spanned just three layers, rather than the six in the human neocortex. The team also developed a new AI algorithm that uses the column structure to analyze input data. Simulations showed that each column could learn to recognize hundreds of complex objects.

The practical effectiveness of this system still needs to be tested. But the idea is that it will be capable of learning about the world in real time, similar to the algorithms of Liquid AI.

For now, Numenta, based in Redwood, Calif., is using regular digital computer hardware to test these ideas. But in the future, custom hardware could implement physical versions of spiking neurons organized into cortical columns, Ahmad says.

Using hardware designed for this architecture could make the whole system more efficient and effective. "How the hardware works is going to influence how your algorithm works," Schuman says. "It requires this codesign process."

A new idea in computing can take off only with the right combination of algorithm, architecture and hardware. For example, DeepSeek's engineers noted that they achieved their gains in efficiency by codesigning algorithms, frameworks and hardware.

When one of these isn't ready or isn't available, a good idea could languish, notes Sara Hooker, a computer scientist at the research lab Cohere in San Francisco and author of an influential 2021 paper titled "The Hardware Lottery."
This already happened with deep learning: the algorithms to do it were developed back in the 1980s, but the technology didn't find success until computer scientists began using GPU hardware for AI processing in the early 2010s.

"Too often success depends on luck," Hooker said in a 2021 Association for Computing Machinery video. But if researchers spend more time considering new combinations of neuromorphic hardware, architectures and algorithms, they could open up new and intriguing possibilities for both AI and computing.