• WWW.FACEBOOK.COM
    Did you hear?
    Did you hear? #RedGiantUniverse now has over 100 plugins with the addition of these awesome dither and palette tools! Retro the HECK out of your next project! https://maxonvfx.com/40SIJPl
  • WWW.SMITHSONIANMAG.COM
    Fat Cells Retain a 'Memory' of Obesity, Making It Hard to Lose Weight and Keep It Off, Study Suggests
    [Image: Fat tissue, as seen here under a scanning electron micrograph, maintains a "memory" of obesity, new research suggests. Steve Gschmeissner / Science Photo Library via Getty Images]

    Fat cells have a "memory" of obesity, which may help explain why it's so difficult to maintain weight loss, according to a new study published Monday in the journal Nature.

    Individuals who have lost weight often later gain the weight back, in a phenomenon known as the yo-yo effect. Now, the new research suggests changes at the cellular level may be partially responsible for the body's tendency to revert to obesity after weight loss.

    Obesity leads to epigenetic changes, or chemical alterations to DNA that affect gene activity. The new paper suggests that in fat cells, these changes linger even after a person loses weight. And the cells, beyond simply remembering their prior state of obesity, likely aim to return to this state, says study co-author Ferdinand von Meyenn, an epigeneticist at ETH Zürich, to the Guardian's Ian Sample.

    Scientists studied body fat, also known as adipose tissue, from two groups of participants: One group had never been obese, while the other group had experienced severe obesity. When the researchers compared fat cells between the two groups, they found differences in gene activity. Certain genes in the fat cells of participants with obesity were more active, and others were less active, compared to the control group, reports Nature News' Traci Watson. The genes that were more active play a role in the formation of thick, scar-like tissue (called fibrosis), as well as inflammation. The genes that were less active are responsible for helping the fat cells function normally.

    These gene activity patterns remained constant, even after the individuals with severe obesity had undergone weight-loss surgeries. Though the participants had lost weight, the genes in their fat cells still behaved as if they were obese.

    "The new results show what's happening at the molecular level, and that's really cool," Hyun Cheol Roh, an epigenome specialist at Indiana University School of Medicine who was not involved with the research, tells Nature News.

    Next, researchers found similar epigenetic changes to fat cells in mice. In another experiment, they put obese mice on a diet. Once the mice had lost weight, researchers fed them a high-fat diet for one month; they also fed the same high-fat diet to mice that had never been obese. The mice that had never been obese gained an average of 5 grams, while the previously obese mice gained an average of 14 grams, writes New Scientist's Carissa Wong. When grown in a lab dish, the fat cells from the previously obese mice also absorbed more sugar and fat.

    From an evolutionary perspective, this makes sense, says study co-author Laura C. Hinte, an epigeneticist at ETH Zürich, to the Guardian. Humans and other animals have adapted to defend their body weight rather than lose it, as food scarcity was historically a common challenge.

    For now, researchers haven't proved that the epigenetic changes to fat cells cause weight gain; they've only shown a correlation. In addition, epigenetic changes likely aren't solely responsible for weight gain. Other factors are probably at play as well, such as the difficulty of maintaining a low-calorie diet for a long period of time.

    Scientists are also not sure whether the obesity-linked epigenetic changes are permanent. And, if these DNA changes are reversible, researchers don't know how long they last. But the findings suggest preventing obesity in the first place is likely easier than trying to lose weight and keep it off.

    The knowledge that fat cells "remember" obesity could help doctors and public health experts design more effective weight-loss programs. Pharmaceutical companies could also one day develop new drugs that reverse the obesity-linked epigenetic changes, reports El País' Jessica Mouzo.

    More broadly, the findings could help reduce some of the stigma surrounding obesity. "This is not just a lack of willingness or a lack of willpower, there's really a molecular mechanism which fights against this weight loss," von Meyenn says to Bloomberg's Naomi Kresge.

    Moving forward, the team wants to study other types of tissue, such as in the pancreas, liver and brain, to see whether their cells also have a "memory" of obesity. They also want to explore whether exercise or weight-loss drugs like semaglutide can affect the epigenetic changes linked with obesity.
  • WWW.SMITHSONIANMAG.COM
    A Rare Atlas of Astronomy From the Dutch Golden Age Goes on Display in England
    [Image: The book recently underwent a three-month conservation process. Clare Prince]

    A newly restored 17th-century map of the stars and planets is going on display for the first time in England. As one of only 20 surviving copies of the Dutch mapmaker Andreas Cellarius' Harmonia Macrocosmica, the atlas is a revealing relic of the Netherlands' golden age of cartography.

    Known as the "Star Atlas," this copy of Harmonia Macrocosmica is owned by the United Kingdom's National Trust. The book recently underwent an extensive conservation, and it's now set to be displayed at Blickling Estate in Norfolk, England.

    Harmonia Macrocosmica, printed in Amsterdam in 1661, contains 29 charts that illustrate the astronomical theories of historical thinkers like Claudius Ptolemy of ancient Egypt, Nicolaus Copernicus of Renaissance Poland and Tycho Brahe of Renaissance Denmark. Running over 400 pages long, Harmonia includes text alongside Baroque depictions of the sun, moon, planets, and classical and biblical constellations.

    [Image: Librarian Rebecca Feakes studies the Harmonia Macrocosmica. National Trust Images / Paul Bailey]

    "This large folio was meant to be displayed and celebrated for its size and opulence," says Blickling librarian Rebecca Feakes in a statement. "Owning it told the world about your status and intelligence."

    During the 1600s, the Netherlands was home to Europe's most prominent mapmakers. The city of Antwerp had become a prominent hub of map printing in the late 1500s, and by the 1630s, Amsterdam was the world capital of cartographic publishing. Cellarius, a German-born schoolteacher, had written only history and architecture books before creating the Star Atlas at the suggestion of his publisher, Johannes Janssonius.

    [Image: Harmonia Macrocosmica's third plate. National Trust Images / Paul Bailey]

    Cellarius' Harmonia exhibits Dutch mapmakers' typical, highly decorative style, as well as contemporary shifts in space science. At the time the Star Atlas was published, societies had begun to accept the once-heretical theory of Copernicus: that Earth and the other planets revolve around the sun, and that Earth is not the center of the universe.

    "It was aimed at wealthy, learned collectors who valued it as a reference work, beautifully produced," says Feakes in the statement. "The gold-tooled bindings and hand-coloured plates are spectacular."

    [Image: The title page of Andreas Cellarius' star atlas. National Trust Images / Paul Bailey]

    Blickling Estate has hosted this golden-bound copy of Harmonia Macrocosmica since 1742. Because of its fragility, the book hasn't been publicly displayed since the 1940s, when the National Trust acquired the mansion and its contents, including a vast library. The estate's atlas collection is currently the subject of a research project about light's effects on book preservation, which prompted the recent conservation of Harmonia.

    [Image: Harmonia Macrocosmica's 27th plate. National Trust Images / Paul Bailey]

    "The parchment on the spine of the atlas was extremely dry and fractured, with large areas of loss, leaving it almost impossible to handle," says book conservation expert Clare Prince in the statement. "Many of the pages within were torn and crumpled and in need of repair. Beautiful, hand-coloured, engraved plates had become loose and were at risk of further damage."

    [Image: Before conservation, the book's binding was split and fragile. Clare Prince]

    Prince spent three months repairing the Star Atlas: dismantling the spine and lining it with padding paper, resewing its endbands, repairing its pages and reattaching its engraved plates. Per the statement, the book will be displayed open, alongside prints of some of its remarkably unfaded, often whimsical artwork depicting the Milky Way's sun, stars and planets.

    As Feakes says, "Some of the ideas in the book seem strange to us now, but the stunning illustrations leave no doubt that Cellarius and his contemporaries were just as awestruck by the night sky as we are today."
  • WWW.FACEBOOK.COM
    Photos from CGarchitect.com's post
    Viz Pro of the Week! Congratulations to SaltVision (@saltvision_bv) for winning this week's "Viz Pro of the Week" on CGarchitect.com! Their images for a bridge competition beautifully showcase technical prowess and artistic sensitivity, transforming a massive structure into one that appears light and graceful! https://tinyurl.com/vizpro10november2024 Don't forget to check out their profile on CGarchitect.com to explore their full portfolio. #VizProOfTheWeek #CGarchitect #ArchViz #Inspiration #DesignInspiration #ArchitecturalVisualization #3Drendering #3D #Architecture #3dvisualization #render #3drender #3dmodeling #vray #coronarender #unreal #sketchup #3dsmax
  • VENTUREBEAT.COM
    OpenScholar: The open-source AI that's outperforming GPT-4o in scientific research
    Scientists are drowning in data. With millions of research papers published every year, even the most dedicated experts struggle to stay updated on the latest findings in their fields.

    A new artificial intelligence system, called OpenScholar, is promising to rewrite the rules for how researchers access, evaluate, and synthesize scientific literature. Built by the Allen Institute for AI (Ai2) and the University of Washington, OpenScholar combines cutting-edge retrieval systems with a fine-tuned language model to deliver citation-backed, comprehensive answers to complex research questions.

    "Scientific progress depends on researchers' ability to synthesize the growing body of literature," the OpenScholar researchers wrote in their paper. But that ability is increasingly constrained by the sheer volume of information. OpenScholar, they argue, offers a path forward: one that not only helps researchers navigate the deluge of papers but also challenges the dominance of proprietary AI systems like OpenAI's GPT-4o.

    How OpenScholar's AI brain processes 45 million research papers in seconds

    At OpenScholar's core is a retrieval-augmented language model that taps into a datastore of more than 45 million open-access academic papers. When a researcher asks a question, OpenScholar doesn't merely generate a response from pre-trained knowledge, as models like GPT-4o often do. Instead, it actively retrieves relevant papers, synthesizes their findings, and generates an answer grounded in those sources.

    This ability to stay grounded in real literature is a major differentiator. In tests using a new benchmark called ScholarQABench, designed specifically to evaluate AI systems on open-ended scientific questions, OpenScholar excelled. The system demonstrated superior performance on factuality and citation accuracy, even outperforming much larger proprietary models like GPT-4o.

    One particularly damning finding involved GPT-4o's tendency to generate fabricated citations (hallucinations, in AI parlance). When tasked with answering biomedical research questions, GPT-4o cited nonexistent papers in more than 90% of cases. OpenScholar, by contrast, remained firmly anchored in verifiable sources.

    The grounding in real, retrieved papers is fundamental. The system uses what the researchers describe as their "self-feedback inference loop" and "iteratively refines its outputs through natural language feedback, which improves quality and adaptively incorporates supplementary information."

    The implications for researchers, policy-makers, and business leaders are significant. OpenScholar could become an essential tool for accelerating scientific discovery, enabling experts to synthesize knowledge faster and with greater confidence.

    [Image: How OpenScholar works: The system begins by searching 45 million research papers (left), uses AI to retrieve and rank relevant passages, generates an initial response, and then refines it through an iterative feedback loop before verifying citations. This process allows OpenScholar to provide accurate, citation-backed answers to complex scientific questions. Source: Allen Institute for AI and University of Washington]

    Inside the David vs. Goliath battle: Can open source AI compete with Big Tech?

    OpenScholar's debut comes at a time when the AI ecosystem is increasingly dominated by closed, proprietary systems. Models like OpenAI's GPT-4o and Anthropic's Claude offer impressive capabilities, but they are expensive, opaque, and inaccessible to many researchers. OpenScholar flips this model on its head by being fully open-source.

    The OpenScholar team has released not only the code for the language model but also the entire retrieval pipeline, a specialized 8-billion-parameter model fine-tuned for scientific tasks, and a datastore of scientific papers. "To our knowledge, this is the first open release of a complete pipeline for a scientific assistant LM, from data to training recipes to model checkpoints," the researchers wrote in their blog post announcing the system.

    This openness is not just a philosophical stance; it's also a practical advantage. OpenScholar's smaller size and streamlined architecture make it far more cost-efficient than proprietary systems. For example, the researchers estimate that OpenScholar-8B is 100 times cheaper to operate than PaperQA2, a concurrent system built on GPT-4o.

    This cost-efficiency could democratize access to powerful AI tools for smaller institutions, underfunded labs, and researchers in developing countries. Still, OpenScholar is not without limitations. Its datastore is restricted to open-access papers, leaving out paywalled research that dominates some fields. This constraint, while legally necessary, means the system might miss critical findings in areas like medicine or engineering. The researchers acknowledge this gap and hope future iterations can responsibly incorporate closed-access content.

    [Image: How OpenScholar performs: Expert evaluations show OpenScholar (OS-GPT4o and OS-8B) competing favorably with both human experts and GPT-4o across four key metrics: organization, coverage, relevance and usefulness. Notably, both OpenScholar versions were rated as more useful than human-written responses. Source: Allen Institute for AI and University of Washington]

    The new scientific method: When AI becomes your research partner

    The OpenScholar project raises important questions about the role of AI in science. While the system's ability to synthesize literature is impressive, it is not infallible. In expert evaluations, OpenScholar's answers were preferred over human-written responses 70% of the time, but the remaining 30% highlighted areas where the model fell short, such as failing to cite foundational papers or selecting less representative studies.

    These limitations underscore a broader truth: AI tools like OpenScholar are meant to augment, not replace, human expertise. The system is designed to assist researchers by handling the time-consuming task of literature synthesis, allowing them to focus on interpretation and advancing knowledge.

    Critics may point out that OpenScholar's reliance on open-access papers limits its immediate utility in high-stakes fields like pharmaceuticals, where much of the research is locked behind paywalls. Others argue that the system's performance, while strong, still depends heavily on the quality of the retrieved data. If the retrieval step fails, the entire pipeline risks producing suboptimal results.

    But even with its limitations, OpenScholar represents a watershed moment in scientific computing. While earlier AI models impressed with their ability to engage in conversation, OpenScholar demonstrates something more fundamental: the capacity to process, understand, and synthesize scientific literature with near-human accuracy.

    The numbers tell a compelling story. OpenScholar's 8-billion-parameter model outperforms GPT-4o while being orders of magnitude smaller. It matches human experts in citation accuracy where other AIs fail 90% of the time. And perhaps most tellingly, experts prefer its answers to those written by their peers.

    These achievements suggest we're entering a new era of AI-assisted research, where the bottleneck in scientific progress may no longer be our ability to process existing knowledge, but rather our capacity to ask the right questions.

    The researchers have released everything (code, models, data, and tools), betting that openness will accelerate progress more than keeping their breakthroughs behind closed doors. In doing so, they've answered one of the most pressing questions in AI development: Can open-source solutions compete with Big Tech's black boxes?

    The answer, it seems, is hiding in plain sight among 45 million papers.
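    In outline, the pipeline the article describes (retrieve papers, generate a cited answer, then iteratively check and refine it) can be sketched as follows. Everything here, from the function names to the keyword-overlap retriever and the citation check, is an illustrative assumption rather than OpenScholar's actual code, which relies on a trained dense retriever and an 8-billion-parameter language model:

```python
import re

# Toy "datastore" standing in for OpenScholar's 45 million open-access papers.
CORPUS = {
    "Smith2021": "retrieval augmented generation grounds answers in sources",
    "Lee2022": "citation accuracy of language models on biomedical questions",
    "Chan2023": "protein folding advances",
}

def retrieve(query, k=2):
    # Rank papers by naive keyword overlap; the real system uses dense retrieval.
    overlap = lambda text: len(set(query.split()) & set(text.split()))
    ranked = sorted(CORPUS, key=lambda pid: overlap(CORPUS[pid]), reverse=True)
    return ranked[:k]

def generate(query, paper_ids):
    # Stand-in for the fine-tuned LM: produce an answer citing retrieved papers.
    cites = " ".join(f"[{pid}]" for pid in paper_ids)
    return f"Synthesized answer to '{query}' {cites}"

def citations_verified(answer, paper_ids):
    # Feedback check: every cited ID must correspond to a retrieved paper,
    # which is how fabricated citations would be caught.
    cited = set(re.findall(r"\[([^\]]+)\]", answer))
    return cited <= set(paper_ids)

def answer_query(query, max_rounds=3):
    paper_ids = retrieve(query)
    answer = generate(query, paper_ids)
    # Self-feedback loop: refine the draft until its citations check out.
    for _ in range(max_rounds):
        if citations_verified(answer, paper_ids):
            break
        answer = generate(query, paper_ids)
    return answer, paper_ids
```

    The point of the sketch is the shape of the loop, not the components: generation is always conditioned on retrieved sources, and the output is re-checked against those sources before it is returned.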
  • VENTUREBEAT.COM
    DeepSeek's first reasoning model R1-Lite-Preview turns heads, beating OpenAI o1 performance
    DeepSeek, an AI offshoot of Chinese quantitative hedge fund High-Flyer Capital Management focused on releasing high-performance open-source tech, has unveiled the R1-Lite-Preview, its latest reasoning-focused large language model (LLM), available for now exclusively through DeepSeek Chat, its web-based AI chatbot.

    Known for its innovative contributions to the open-source AI ecosystem, DeepSeek's new release aims to bring high-level reasoning capabilities to the public while maintaining its commitment to accessible and transparent AI. And the R1-Lite-Preview, despite only being available through the chat application for now, is already turning heads by offering performance nearing and in some cases exceeding OpenAI's vaunted o1-preview model.

    Like that model, released in Sept. 2024, DeepSeek-R1-Lite-Preview exhibits "chain-of-thought" reasoning, showing the user the different chains or "trains of thought" it goes down to respond to their queries and inputs, documenting the process by explaining what it is doing and why. While some of the chains/trains of thought may appear nonsensical or even erroneous to humans, DeepSeek-R1-Lite-Preview appears on the whole to be strikingly accurate, even answering "trick" questions that have tripped up other, older, yet powerful AI models such as GPT-4o and Anthropic's Claude family, including "how many letter Rs are in the word 'strawberry'?" and "which is larger, 9.11 or 9.9?" See screenshots below of my tests of these prompts on DeepSeek Chat.

    A new approach to AI reasoning

    DeepSeek-R1-Lite-Preview is designed to excel in tasks requiring logical inference, mathematical reasoning, and real-time problem-solving. According to DeepSeek, the model exceeds OpenAI o1-preview-level performance on established benchmarks such as AIME (American Invitational Mathematics Examination) and MATH.

    [Image: DeepSeek-R1-Lite-Preview benchmark results posted on X.]

    Its reasoning capabilities are enhanced by its transparent "thought" process, allowing users to follow along as the model tackles complex challenges step by step. DeepSeek has also published scaling data, showcasing steady accuracy improvements when the model is given more time, or "thought tokens," to solve problems. Performance graphs highlight its proficiency in achieving higher scores on benchmarks such as AIME as thought depth increases.

    Benchmarks and Real-World Applications

    DeepSeek-R1-Lite-Preview has performed competitively on key benchmarks. The company's published results highlight its ability to handle a wide range of tasks, from complex mathematics to logic-based scenarios, earning performance scores that rival top-tier models in reasoning benchmarks like GPQA and Codeforces.

    The transparency of its reasoning process further sets it apart. Users can observe the model's logical steps in real time, adding an element of accountability and trust that many proprietary AI systems lack.

    However, DeepSeek has not yet released the full code for independent third-party analysis or benchmarking, nor has it yet made DeepSeek-R1-Lite-Preview available through an API that would allow the same kind of independent tests. In addition, the company has not yet published a blog post nor a technical paper explaining how DeepSeek-R1-Lite-Preview was trained or architected, leaving many question marks about its underlying origins.

    Accessibility and Open-Source Plans

    The R1-Lite-Preview is now accessible through DeepSeek Chat at chat.deepseek.com. While free for public use, the model's advanced "Deep Think" mode has a daily limit of 50 messages, offering ample opportunity for users to experience its capabilities. Looking ahead, DeepSeek plans to release open-source versions of its R1 series models and related APIs, according to the company's posts on X.

    This move aligns with the company's history of supporting the open-source AI community. Its previous release, DeepSeek-V2.5, earned praise for combining general language processing and advanced coding capabilities, making it one of the most powerful open-source AI models at the time.

    Building on a Legacy

    DeepSeek is continuing its tradition of pushing boundaries in open-source AI. Earlier models like DeepSeek-V2.5 and DeepSeek Coder demonstrated impressive capabilities across language and coding tasks, with benchmarks placing them as leaders in the field. The release of R1-Lite-Preview adds a new dimension, focusing on transparent reasoning and scalability.

    As businesses and researchers explore applications for reasoning-intensive AI, DeepSeek's commitment to openness ensures that its models remain a vital resource for development and innovation. By combining high performance, transparent operations, and open-source accessibility, DeepSeek is not just advancing AI but also reshaping how it is shared and used.

    The R1-Lite-Preview is available now for public testing. Open-source models and APIs are expected to follow, further solidifying DeepSeek's position as a leader in accessible, advanced AI technologies.
  • WWW.GAMESINDUSTRY.BIZ
    FromSoftware parent Kadokawa confirms buyout interest from Sony
    Sony sent the firm a letter of intent to acquire its shares, but "no decision has been made at this time," CEO says

    Image credit: FromSoftware | News by Marie Dealessandri, Deputy Editor | Published on Nov. 20, 2024

    FromSoftware and Spike Chunsoft's parent company Kadokawa has confirmed that Sony has issued a letter of intent to acquire the firm's shares, but clarified that "no decision has been made at this time."

    The statement was posted on the firm's website and is signed by CEO Takeshi Natsuno, following a Reuters report earlier this week that Sony was in talks to acquire Kadokawa.

    "There are some articles on the acquisition of Kadokawa Corporation (hereinafter 'the Company') by Sony Group Inc," the statement read. "However, this information is not announced by the company. The company has received an initial letter of intent to acquire the company's shares, but no decision has been made at this time. If there are any facts that should be announced in the future, we will make an announcement in a timely and appropriate manner."

    Reuters reported yesterday that acquisition talks were ongoing between the two firms, according to two anonymous sources the publication talked to. Following the story, Kadokawa's shares soared, reaching an all-time high.

    In addition to FromSoftware and Spike Chunsoft, Kadokawa also owns Octopath Traveler co-developer Acquire and RPG Maker firm Gotcha Gotcha Games.

    Sony already owns around 14% of FromSoftware, which is owned by Kadokawa at 70% and by Tencent at 16.5%.
  • WWW.GAMESINDUSTRY.BIZ
    Final Fantasy 14 coming to mobile
    The Square Enix title will be developed by Lightspeed Studios

    Image credit: Lightspeed Studios/Square Enix | News by Sophie McEvoy, Staff Writer | Published on Nov. 20, 2024

    Square Enix has announced Final Fantasy 14 is being developed for mobile by Lightspeed Studios.

    A launch date has yet to be set, and the game will be available to users in mainland China before its global release.

    Final Fantasy 14 Mobile will launch with nine duties from the main game, but it's not clear what other pieces of content will be included in this version.

    "It has been 11 years since the rebirth of Final Fantasy 14, and it's been a remarkable journey," said game director Naoki Yoshida. "This is our latest MMORPG title specifically tailored for the mobile platform. Despite the adjustments, Lightspeed Studios is working with tremendous enthusiasm and dedication to faithfully recreate the story, duties, battle content, and other aspects of the game."

    Lightspeed Studios is a Tencent subsidiary, and is also responsible for developing PUBG Mobile. Lightspeed Studios recently launched a new Japan studio led by Capcom veteran Hideaki Itsuno. The developer will focus on creating original AAA action games.
  • WWW.GAMEDEVELOPER.COM
    Square Enix and Lightspeed team up to bring Final Fantasy XIV to phones
    Justin Carter, Contributing Editor | November 20, 2024 | 2 Min Read

    Image via Square Enix.

    At a Glance: Publishers are bringing their biggest games to phones, and Final Fantasy XIV is the newest title to join the mobile market.

    Final Fantasy XIV is expanding to phones, courtesy of Lightspeed Studios and key developers, including director and producer Naoki Yoshida. Square Enix's hit free-to-play MMO is coming to mobile devices, building on the game's arrival on Xbox Series X|S earlier this year. As with the console version, phone players will take on the role of a Warrior of Light and interact with others while going through Final Fantasy XIV's core and post-launch stories.

    Yoshida described the new version as "a sister to FFXIV, aiming to recreate the grandeur of the original's story and combat mechanics" for a new audience. Square Enix has previously ported several mainline and spinoff Final Fantasy entries to phones, sometimes with concessions to the titles' graphics or gameplay.

    The game will release in China first through playtests, then have a global launch "soon after." At time of writing, it is unclear if the two versions of Final Fantasy XIV will feature any type of cross-save or cross-play.

    Big games on mobile screens

    In recent years, major console and PC franchises have made their way to phones to draw in more players and revenue. Just last year, Call of Duty released a new mobile version of its popular battle royale Warzone, and Ubisoft is aiming to do the same with the Tom Clancy series Rainbow Six and The Division.

    Fellow live-service games Warframe and Destiny 2 have or will soon follow suit. In the latter's case, its mobile title Destiny: Rising will be a spinoff of the series rather than a simple port.

    However, not all big games are built for phones: last year, EA killed the mobile version of Apex Legends and another in development for its Battlefield shooter series. Similarly, iPhone ports of Capcom's 2023 remake of Resident Evil 4 and Ubisoft's Assassin's Creed Mirage reportedly failed to find much of an audience when they launched over the summer.

    About the Author: A Kansas City, MO native, Justin Carter has written for numerous sites including IGN, Polygon, and SyFy Wire. In addition to Game Developer, his writing can be found at io9 over on Gizmodo. Don't ask him about how much gum he's had, because the answer will be more than he's willing to admit.
  • WWW.GAMEDEVELOPER.COM
    Niantic's new AI model may have been built by unaware Pokémon Go players
    Justin Carter, Contributing Editor | November 20, 2024 | 3 Min Read

    Image via Niantic.

    At a Glance: The AR dev has spent half a decade using its games and positioning tech to build a "detailed understanding" of the planet.

    Niantic recently announced a Large Geospatial Model (LGM), which uses machine learning to "understand a scene and connect it to millions of other scenes globally." But according to 404 Media, the LGM was possibly made by conscripting unaware players into doing the studio's work for years through games like Pokémon Go.

    In its blog, Niantic expressed hope the LGM would "implement a shared understanding of geographic locations, and comprehending places yet to be fully scanned." It's built on the studio's Visual Positioning System (VPS), which lets players "position themselves in the world with centimeter-level accuracy" and see or place digital content in their exact location, even after they've left.

    Both Pokémon Go and Ingress are augmented reality (AR) titles, and collect geolocated images as players explore locations to find Pokémon or player-made art. By Niantic's admission, its VPS has over 10 million locations scanned globally from the past five years, and "receive[s] about 1 million fresh scans each week," which are collected from its games. One of those games has been quite popular for nearly a decade, meaning players may have unknowingly participated in helping build its LGM.

    Similar to a Large Language Model (LLM), Niantic's model scrapes data from real-world locations, and it hopes to use the technology to "enable computers not only to perceive and understand physical spaces, but also to interact with them in new ways." And what makes its data more substantial than something like Google Maps or Street View is the point of view: as Niantic notes, the data is "taken from a pedestrian perspective and includes places inaccessible to cars."

    As of July 15, 2024, Niantic's privacy policy confirms it uses geospatial technology and player recordings to "build a 3D understanding of real-world places, with the goal of offering new types of AR experiences to our users." The feature is opt-in and can be disabled by players at any time, but appears to be a "critical component" of Niantic's goals for AR and its model.

    "The path from LLMs to LGMs is another step in AI's evolution," its blog concluded. "The world's future operating system will depend on the blending of physical and digital realities to create a system for spatial computing that will put people at the center."

    The games industry wants to go in on genAI, for better or worse

    Companies like NVIDIA and OpenAI have previously come under criticism for using whatever or whomever they could to build their AI technology. In August, 404 published a report claiming NVIDIA did extensive scraping of copyrighted material across YouTube for its AI tools, and would reportedly dismiss concerns about the practice or say it was "in full compliance with the letter and spirit of copyright law."

    Breaching consent has been a major point of criticism regarding generative AI and similar technology for some time: voice actors have talked about how their voices have been used by modders making content for popular games, and expressed similar worry that audio companies are forcing them into letting their performances be used to train AI models.

    Some developers have used genAI tech in non-voice-acting ways as a means of lightening the development load and finding new fixes to old problems. But top-level executives think the technology can also be put toward concepting new game ideas or other creative means.

    Game Developer has reached out to Niantic for clarification on how its geospatial data was obtained, and the transparency of that data's use. We will update when a response is given.
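    The crowd-sourcing mechanism described above, many players' geolocated scans accumulating into per-location map data, can be sketched minimally. The record schema, field names, and grid-cell bucketing below are illustrative assumptions, not Niantic's actual (non-public) data format or VPS algorithm:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Scan:
    """One geolocated player scan (hypothetical schema)."""
    lat: float
    lon: float
    image_id: str

def location_cell(scan, cells_per_deg=1000):
    # Bucket scans into a coarse lat/lon grid (~100 m cells); the real VPS
    # localizes to centimeters rather than coarse cells.
    return (int(scan.lat * cells_per_deg), int(scan.lon * cells_per_deg))

def build_location_index(scans):
    """Group scans by cell, mimicking how per-location scan sets accumulate."""
    index = defaultdict(list)
    for s in scans:
        index[location_cell(s)].append(s.image_id)
    return index

# Two players scanning the same landmark, one scanning elsewhere.
scans = [
    Scan(40.7583, -73.9852, "img1"),
    Scan(40.7586, -73.9857, "img2"),
    Scan(48.8584, 2.2945, "img3"),
]
index = build_location_index(scans)
```

    Even this toy version shows why scale matters: each cell's scan set grows as more players visit, which is how millions of weekly scans could compound into a model of many locations.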