• Apple joins AI hardware standards consortium to improve server performance
    appleinsider.com
    Apple has joined the board of directors for the Ultra Accelerator Link Consortium, giving it more of a say in how the architecture for AI server infrastructure will evolve. The Ultra Accelerator Link Consortium (UALink) is an open industry standards group that develops the UALink specifications. Because UALink is a potential key element in the development of artificial intelligence models and accelerators, the evolution of the standards could be massively beneficial to the future of AI itself. On Tuesday, it was announced that three more members have been elected to the consortium's board. Apple was one of the trio, alongside Alibaba and Synopsys.
  • FEMA: America's buildings are woefully underprepared for natural disasters
    archinect.com
    A code adoption tracking resource produced by the Federal Emergency Management Agency (FEMA), which shows the status of different states' compliance with hazard-resistant building measures, is especially relevant given the recent spate of catastrophic weather events affecting Los Angeles and other American cities. The BCAT (Building Code Adoption Tracking) portal includes data through the end of Q4 2024. Overall, just one-third (33%) of all "natural hazard-prone jurisdictions" have successfully adopted the most current hazard-resistant building codes. These codes include protections against damaging wind loads, hurricanes, floods, seismic activity, and tornadoes, and the figure can be taken as a snapshot of how ready U.S. buildings are to withstand natural disasters. The past six months have shown clearly how effective these codes are (or are not) at protecting structures against hurricanes and other extreme weather events.
  • Exodus Will Have Long-Term Narrative Consequences Depending on Players' Relationships
    gamingbolt.com
    Developer Archetype Entertainment recently held a Q&A to provide an update on its upcoming sci-fi action RPG Exodus, in which creative director James Ohlen and executive producer Chad Robertson revealed some new details about the title. The relationships players form with their teammates in Exodus will be a big factor in the game, with the studio revealing that setting out on missions will affect relationships with characters you don't meet again for a while. This is because missions in Exodus could take several years, or even decades, to complete. "You go off on your Exodus Journeys and you leave behind your city and some of your friends and maybe even family members, and you make choices about them," said Ohlen.
    An example provided by Ohlen indicated that players could end up not seeing some characters for several decades, and the choices they make will affect characters that are now far older than when the player last met them. "Not everyone has to come on an Exodus with you," he explained. "You might leave some behind and when you come back it could be a decade later, it could be four decades later, and those choices will have impacted your relationships with people that are now a decade older or three or four decades older." This narrative decision stems from the fact that the story of Exodus revolves around humanity looking for a new home in the stars, and journeys at that scale will naturally take several years. There is also some level of time dilation happening in the game's story, likely owing to interstellar travel at incredible speeds.
    Robertson also revealed that the primary antagonists in the game, referred to as the Mara Yama, will be creepy. This stems from the studio wanting the game's primary enemies to be an evil Celestial civilization that could also manage to be creepy. The Mara Yama were revealed in a trailer from October, which showcased just how strange and creepy they can be. The civilization doesn't quite have a planet it calls home, and is instead more nomadic in nature, travelling across the stars in its own citadels. The studio also touched on various other smaller aspects of the game, including how time dilation will affect players, details about the player character (known in the game as the Traveler), and even which characters the developers would like to be if they lived in the universe of Exodus.
    Originally unveiled back in 2023, Exodus will be a third-person action RPG that mixes fast-paced action with quite a bit of exploration. The game's narrative is being handled by storied sci-fi writer Drew Karpyshyn, who revealed in an interview last year that there will be plenty of long-term choices and consequences for players to experience. The most recent trailer, released back in December, gave us a closer look at the game's third-person shooter combat, showing a firefight against a host of different enemies while also letting us catch a glimpse of some of the various abilities players will have access to.
  • Free tool: Hair Cinematic Tool for Unreal Engine 5
    www.cgchannel.com
    Argentum Studio's free Hair Cinematic Tool makes it easy to access Unreal Engine 5's internal settings controlling lighting and shadows for hair, to improve the look of rendered animation. The animation firm has released its in-house Hair Cinematic Tool for Unreal Engine for free. The add-on makes it easier to access hidden parameters for rendering hair grooms, helping to create higher-quality renders for cinematics, animations and VFX.
    A dedicated UI through which to adjust internal UE5 settings for rendering hair and fur
    Argentum Studio describes the Hair Cinematic Tool as designed to give creators full control over Groom rendering settings, addressing the platform's lack of detailed native options. It provides a graphical interface through which to adjust Unreal Engine's internal CVars (Console Variables) for hair, and for Voxelization shadows and Deep Shadows. By adjusting their values, users can fine-tune lighting and shadows for hair and fur rendered using Unreal Engine 5. You can find Argentum Studio's run-down of what the CVars control, and its suggested values for key settings, in this blog post on ArtStation.
    System requirements
    Argentum Studio's Hair Cinematic Tool is compatible with Unreal Engine 5.1+. The add-on is available for free under ArtStation Marketplace's Standard license, which permits use in commercial projects, and can be downloaded from ArtStation Marketplace.
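    Because the tool's job is wrapping console variables behind a UI, the same idea can be scripted from the editor's Python console. The sketch below is only an illustration of that: the specific CVar names and values are assumptions rather than the tool's actual settings (Argentum Studio's ArtStation post lists the real ones), and it assumes the Python Editor Script Plugin is enabled.
```python
# Minimal sketch: batch-applying hair-related CVars from the Unreal Editor's
# Python console. The CVar names/values below are illustrative assumptions;
# see Argentum Studio's ArtStation post for the settings the tool actually exposes.
import unreal

EXAMPLE_HAIR_CVARS = {
    "r.HairStrands.DeepShadow.Resolution": "2048",  # assumed name: deep shadow map size
    "r.HairStrands.Voxelization": "1",              # assumed name: voxelized shadow toggle
}

def apply_cvars(cvars: dict) -> None:
    """Push each console variable through the editor console and log it."""
    for name, value in cvars.items():
        # A world context object can be passed instead of None if your setup requires one.
        unreal.SystemLibrary.execute_console_command(None, f"{name} {value}")
        unreal.log(f"Set {name} = {value}")

apply_cvars(EXAMPLE_HAIR_CVARS)
```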
  • Bats Hitch a Ride on Storm Fronts When Migrating, Saving Energy by 'Surfing' Through the Sky, Study Finds
    www.smithsonianmag.com
    [Image: Researchers tracked 71 common noctule bats (Nyctalus noctula) to parse their migration patterns. Kamran Safi / Max Planck Institute of Animal Behavior]
    More than 1,400 species of bats exist worldwide, making them some of the most widespread creatures on Earth; they can be found on every continent except Antarctica. Chances are, there's one not too far from you right now. But despite the animals' prevalence, their migration patterns remain largely a mystery: their speed, small size and nocturnal nature make studying bats challenging. Now, researchers at the Max Planck Institute of Animal Behavior are shining a rare light inside the black box of bat migration.
    In a new study published in Science this month, a team of biologists used tiny tags attached between bats' shoulder blades to track their movements. The tags, which the researchers developed, used the Internet of Things (a wireless network of computers, smartphones and devices that can transfer information) to triangulate the bats' position. "On certain nights, we saw an explosion of departures that looked like bat fireworks," lead author Edward Hurme, a biologist at the Max Planck Institute of Animal Behavior, says in a statement. "We needed to figure out what all these bats were responding to on those particular nights."
    The team followed the movements of 71 female noctule bats (Nyctalus noctula) across central Europe during their spring migrations. They tagged bats across three years, though each bat's tracker fell off naturally after about four weeks. Originally tagged in Switzerland, the bats later dispersed, flying in a general northeastern direction to Germany, Poland and the Czech Republic, reports Science's Elizabeth Pennisi. The research revealed that when the bats migrated, they would fly up to 238 miles each night, nearly 125 miles farther than previously thought.
    [Image: The trackers remained on the bats for up to four weeks, then they naturally fell off. MPI of Animal Behavior / Christian Ziegler]
    After incorporating weather data into their analysis, the researchers concluded that the bats coordinate their movements with warm fronts that precede storms. These nifty night surfers use the strong winds generated by the front to get a boost to their destination, expending less energy in the process, according to the paper. "This was actually a big surprise. We had some clue that bats were responding to good wind conditions, but we didn't think that there was this connection to storms," Hurme told NPR's Jonathan Lambert.
    The scientists still don't know how the bats can tell a storm is coming, but they hope the technology they developed will allow for more bat studies. "This technology revolutionizes the tracking of bat movements and will surely help researchers answer many questions about migration," says Charlotte Roemer, a conservation biologist at France's National Museum of Natural History who was not involved in the study, to Science. "The possibilities are very exciting."
    For instance, further research on this topic might help protect bats from human-caused fatalities, especially as the animals are increasingly endangered. Understanding where and when bats migrate could help wind turbine operators mitigate collisions with the blades, which cause millions of bat deaths globally each year. "More studies like this will pave the way for a system to forecast bat migration," Hurme says in the statement. "We can be stewards of bats, helping wind farms to turn off their turbines on nights when bats are streaming through."
  • Cerebras Systems teams with Mayo Clinic on genomic model that predicts arthritis treatment
    venturebeat.com
    Cerebras Systems has teamed with Mayo Clinic to create an AI genomic foundation model that predicts the best medical treatments.
  • Godfall developer Counterplay has reportedly shut down
    www.gamesindustry.biz
    Whilst not formally confirmed by the studio, PlayStation Lifestyle reportedly spotted and verified a since-edited LinkedIn post stating that Counterplay Games had "disbanded" after a partnership with Jackalyptic fell through.
    "Over the past six months or so our project at Jackalyptic has been supercharged by the world-class devs at Counterplay Games," the statement began. "It's impossible to overstate their impact. From the very first day they put their shoulders to the wheels like it was their baby." "Unfortunately, we were unable to continue our partnership into the new year and [Counterplay Games] was disbanded," it concluded, before sharing profiles of those impacted by the changes. The post has since been edited to erase mention of the closure. It is unclear how many people have been affected.
    Despite backing from Gearbox, Counterplay's sole published game, Godfall, which was developed with Disruptive Games as a PS5 launch title, released to middling critic and player reviews.
    The swath of job cuts from last year seems to be continuing in 2025. Yesterday, we reported that Robocraft 2 developer Freejam had shuttered, while Swedish games firm Enad Global 7 (EG7) also initiated the "wind down" of Toadman Interactive, resulting in 69 job losses, alongside 38 layoffs at Piranha Games. In the first two weeks of 2025 alone, over 150 developers have lost their jobs, including cuts at Splash Damage and Jar of Sparks.
  • ChatGPT can now handle reminders and to-dos
    www.theverge.com
    OpenAI is launching a new beta feature in ChatGPT called Tasks that lets users schedule future actions and reminders. The feature, which is rolling out to Plus, Team, and Pro subscribers starting today, is an attempt to make the chatbot into something closer to a traditional digital assistant (think Google Assistant or Siri) but with ChatGPT's more advanced language capabilities.
    Tasks works by letting users tell ChatGPT what they need and when they need it done. Want a daily weather report at 7AM? A reminder about your passport expiration? Or maybe just a knock-knock joke to tell your kids before bedtime? ChatGPT can now handle all of that through scheduled one-time or recurring tasks.
    To use the feature, subscribers need to select "4o with scheduled tasks" in ChatGPT's model picker. From there, it's as simple as typing out what you want ChatGPT to do and when you want it done. The system can also proactively suggest tasks based on your conversations, though users have to explicitly approve any suggestions before they're created. (Honestly, I feel like suggestions have the potential to create annoying slop by accident.)
    All tasks can be managed either directly in chat threads or through a new Tasks section (available only via the web) in the profile menu, so it's easy to modify or cancel any task you've set up. Upon completion of these tasks, notifications will alert users on web, desktop, and mobile. There's also a limit of 10 active tasks that can run simultaneously.
    OpenAI hasn't specified when (or if) the feature might come to free users, suggesting Tasks might remain a premium feature to help justify ChatGPT's subscription costs. The company has monthly $20 and $200 subscription tiers.
    [Image: An example of a ChatGPT Task. OpenAI]
    While scheduling capabilities are a common feature in digital assistants, this marks a shift in ChatGPT's functionality. Until now, the AI has operated solely in real time, responding to immediate requests rather than handling ongoing tasks or future planning. The addition of Tasks suggests OpenAI is expanding ChatGPT's role beyond conversation into territory traditionally held by virtual assistants.
    OpenAI's ambitions for Tasks appear to stretch beyond simple scheduling, too. Bloomberg reported that Operator, an autonomous AI agent capable of independently controlling computers, is slated for release this month. Meanwhile, reverse engineer Tibor Blaho found that OpenAI appears to be working on something codenamed Caterpillar that could integrate with Tasks and allow ChatGPT to search for specific information, analyze problems, summarize data, navigate websites, and access documents, with users receiving notifications upon task completion.
    As I previously wrote back in October, the rise of agentic AI in 2025 isn't just about technological advancement; it's about economics. These agent-like features represent a strategic way to monetize expensive AI infrastructure. While OpenAI's decision to put this functionality behind ChatGPT's paywall was predictable, the real question remains: will it deliver reliable results? The last time I got an OpenAI agent demo, it produced inaccurate information. The coming months will reveal whether they've solved these fundamental reliability challenges.
    I also think of this new feature as a slightly more sophisticated script; at the end of the day, Tasks is following a simple, rote set of instructions, much like a typical bot. The goal of many frontier AI labs like OpenAI is to evolve these features into something that is able to interact with environments, learn from feedback, and make decisions without constant human input.
    However, questions remain about how reliable these scheduled tasks will be and what happens if ChatGPT fails to deliver time-sensitive information. OpenAI's decision to launch Tasks in beta suggests they're still working out these details and want to gather real-world feedback before a wider rollout. For now, if you're a paying ChatGPT user, you can start experimenting with Tasks by looking for the "4o with scheduled tasks" option in your model picker. Just remember it's still in beta, so maybe don't rely on it for that super important meeting reminder just yet.
  • When ChatGPT's Web Search Fails: Lessons for Data-Driven Decision Making
    towardsai.net
    Author(s): Ori Abramovsky. Originally published on Towards AI.
    ChatGPT recently gained a game-changing capability: web search. This new feature allows the model to incorporate up-to-date data from the web into its responses, unlocking an incredible range of possibilities. Tasks like validating leads, conducting market analysis, or investigating recent events have become dramatically easier. Previously, a significant limitation of language models was their reliance on static, pre-trained knowledge bases. This not only made them ineffective for recent or niche topics but also made them prone to hallucination: fabricating plausible-sounding but incorrect information when asked about events or details beyond their training cutoff. With web search, this risk is significantly reduced. ChatGPT can now focus on its strength, analyzing vast amounts of information, while outsourcing data gathering to the web.
    While this sounds like a perfect solution, the capability comes with its own set of challenges. The most significant risk arises when users place blind trust in the model's outputs without verifying the underlying data. Given the immense promise of this feature, many may adopt it uncritically. To illustrate why caution is crucial, I'll share two real examples where ChatGPT's web search failed: estimating event probabilities and analyzing stocks. These examples highlight common pitfalls and demonstrate how simple adjustments can mitigate these issues. But before diving into the examples, let's examine the inherent weaknesses of the search ecosystem as a whole.
    Why Do Search-Based LLM Responses Fail?
    Search-based queries with ChatGPT follow a systematic process: a user asks a question, and ChatGPT determines whether answering it requires a web search, either by its own logic or because the user explicitly requests it. ChatGPT then generates a search query, sends it to Bing, retrieves the top results, processes them, and synthesizes a final answer based on these resources. While this workflow appears straightforward, several inherent weaknesses can lead to failure at different stages. Let's explore these critical pitfalls:
    Unoptimized User Input: The quality of an LLM's response is directly tied to how well the user structures their question. Ambiguities or poorly worded prompts can significantly impact the output. Additionally, users often forget that LLMs process information differently than humans. For example, while humans might prefer a bullet-pointed list for clarity, LLMs can misinterpret this format, giving undue weight to the first items in the list. To address this, users should avoid long, list-heavy queries and instead break complex requests into smaller, more targeted questions.
    Suboptimal Search Query Generation: Once ChatGPT determines that a web search is needed, it generates a search query based on its interpretation of the prompt. However, query optimization is a complex domain, and the LLM's automatically generated queries can often fall short. Users frequently discover better phrasing for their search queries through manual iteration and experimentation.
    To address this, a practical approach is to review the search results, refine the query offline, and then instruct ChatGPT to use the optimized version during subsequent searches.
    Limited Context from Search Results: ChatGPT typically processes only the first few results from a search, assuming they are sorted by relevance. However, this can be problematic when the most useful source lies outside the top results. Humans often scan a broader range to identify the best sources, understanding that relevance rankings are not always perfect. To address this, users can manually examine the search results and indicate which sources to prioritize for analysis.
    Restricted Search Engine Choice: Currently, ChatGPT relies exclusively on Bing, which may not always provide the most relevant results for specific queries. Certain tasks might benefit from alternative engines, for instance Google Scholar for academic research. To address this, users could explicitly direct ChatGPT to prioritize specific sites or search engines relevant to their needs.
    Best Practices and a Need for Caution
    Many of these limitations could be mitigated by adopting a more iterative approach, involving either human review or LLM agents that can break the query into smaller, distinct steps. For example, users might first refine the search query, then determine which results to consider, and finally synthesize an answer. While this step-by-step method aligns with LLM best practices, it requires additional effort that most users are unlikely to invest. Consequently, relying on ChatGPT's search feature as is can lead to suboptimal or even erroneous results. This doesn't mean we need to micromanage every aspect of the search process, as that would defeat the purpose of using an LLM. Instead, users should be aware of these pitfalls and intervene when necessary, whether by refining a query, highlighting relevant sources, or adjusting search engine preferences. To illustrate this point, let's explore two real examples of ChatGPT's (GPT-4o) search failures.
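    Before diving into the case studies, the iterative, human-in-the-loop flow described above might look roughly like the sketch below. It is only an illustration: the OpenAI client calls show one possible way to run the refine and synthesize steps, and bing_search() is a placeholder for whatever search backend you have access to, not ChatGPT's built-in search.
```python
# Minimal sketch of the step-by-step flow: refine the query, review which results
# to keep, then synthesize an answer from only the selected sources.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """One LLM call; gpt-4o is used purely as an example model name."""
    resp = client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def bing_search(query: str) -> list[dict]:
    """Placeholder: return [{"title": ..., "url": ..., "snippet": ...}, ...]."""
    raise NotImplementedError("plug in your own search backend here")

def answer_with_review(question: str) -> str:
    # Step 1: let the model propose a search query, then let a human edit it.
    proposed = ask(f"Write one concise web search query for: {question}")
    query = input(f"Proposed query: {proposed!r}. Enter to accept, or type a better one: ") or proposed

    # Step 2: show the top results and let the human pick which sources to trust.
    results = bing_search(query)
    for i, r in enumerate(results[:10]):
        print(f"[{i}] {r['title']} - {r['url']}")
    chosen = input("Indices of sources to use (e.g. 0,3,7): ")
    keep = [results[int(i)] for i in chosen.split(",")]

    # Step 3: synthesize an answer from the selected snippets only.
    context = "\n\n".join(f"{r['title']}\n{r['snippet']}" for r in keep)
    return ask(f"Using only these sources:\n{context}\n\nAnswer the question: {question}")
```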
    Case Study 1: Estimating Movie Earnings - The Case of Kraven the Hunter
    [Image: Kraven the Hunter estimations graph as visualized on Polymarket.com]
    Polymarket, a leading platform for prediction markets, allows users to buy and sell shares tied to the likelihood of specific events. These events can range from predicting sports outcomes to estimating a movie's box office performance. A recent example involved Kraven the Hunter, where participants aimed to forecast the movie's opening weekend domestic earnings.
    A common approach to generating such estimates is to conduct a web search and synthesize insights from various sources. With ChatGPT's web search capability, one might assume this process could be streamlined, leveraging its ability to gather, analyze, and summarize data from multiple perspectives. Curious about its utility, I turned to ChatGPT for an estimate of Kraven the Hunter's opening weekend performance.
    ChatGPT consulted around five online sources and provided an estimated range of $20-25 million. Based on this prediction and the market's available ranges (<$16M, $16-19M, $19-22M, $22-25M, >$25M), a logical conclusion would be to invest in the option predicting earnings won't be less than $16 million. This strategy appeared sound, even foolproof.
    [Image: ChatGPT's estimations for the movie's earnings]
    However, a closer inspection of the prediction market's commentary revealed a crucial oversight. Variety, a reputable entertainment source, had projected a significantly lower range of $13-15 million, contradicting the optimistic consensus that ChatGPT had synthesized. Interestingly, while the Variety estimate appeared in Bing's search results, ChatGPT did not incorporate it into its final analysis, likely because it was ranked lower on the results list. Bing itself prioritized more optimistic sources in its summary, which influenced ChatGPT's conclusions.
    [Image: Bing's first-week estimation, not taking Variety's estimate into account]
    When I directly asked ChatGPT why it had disregarded Variety's estimate, it revisited the web and acknowledged the conflicting projection. While it recognized Variety as a credible source, it still chose to prioritize the higher estimates, deeming both viewpoints valid. Ultimately, the movie underperformed, grossing approximately $13 million, aligning with the lower end of Variety's prediction.
    [Image: ChatGPT taking Variety's estimate into account only when explicitly asked to]
    Movie Earnings Estimations - Key Takeaways
    This case highlights the importance of critically assessing ChatGPT's information-gathering process. While ChatGPT excels at analyzing sources and synthesizing conclusions, its methodology for selecting which sources to consider can be suboptimal. A simple intervention, explicitly instructing ChatGPT to factor in Variety's perspective, could have mitigated the issue.
    The challenge lies in knowing which sources are essential. Each domain has its own authoritative voices, and relying on users to manually highlight these sources is neither scalable nor practical. A middle-ground approach would involve briefly reviewing ChatGPT's search results to ensure no critical source has been overlooked. While this solution is far from automated, it is feasible for occasional queries in daily tasks. Future iterations of LLMs could autonomously verify key sources or reconcile conflicting data more effectively. Until then, users must remain vigilant, ensuring that ChatGPT's outputs are supplemented with thoughtful oversight and double-checking.
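    That middle-ground check can itself be lightly automated. The sketch below illustrates one possible form of it, assuming you can see which URLs the model cited: verify that the domains you consider authoritative for the topic actually appear among the citations, and if not, re-ask with an explicit instruction to consult them. The domain list and the cited-URL format are assumptions for illustration.
```python
# Minimal sketch: flag answers whose citations skip sources you consider essential,
# and build an explicit follow-up prompt asking the model to consult them.
from urllib.parse import urlparse

MUST_CONSULT = {"variety.com", "boxofficemojo.com"}  # example authoritative domains

def missing_sources(cited_urls: list[str], required: set[str]) -> set[str]:
    """Return the required domains that never show up among the cited URLs."""
    cited = {urlparse(u).netloc.removeprefix("www.") for u in cited_urls}
    return {d for d in required if d not in cited}

def follow_up_prompt(question: str, missing: set[str]) -> str:
    """Ask again, explicitly naming the overlooked sources, as in the Variety case above."""
    sites = ", ".join(sorted(missing))
    return (f"{question}\n\nBefore answering, also search {sites} and take their "
            f"estimates into account. Reconcile any conflicting figures explicitly.")

# Usage: if missing_sources(urls_cited_by_the_model, MUST_CONSULT) is non-empty,
# re-run the question using follow_up_prompt(...) instead of trusting the first answer.
```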
    Case Study 2: Stock Recommendations - A Cautionary Tale
    One of the most appealing uses of ChatGPT's search capabilities is stock recommendations. Imagine wanting to invest your funds and needing to decide which stocks to prioritize. Traditionally, this would involve conducting market research, reviewing investment blogs, and analyzing market trend projections. ChatGPT's ability to search the web seems like a perfect solution for streamlining this process, delegating the research, analysis, and final recommendations to the LLM.
    To simplify the case, I posed a straightforward task to ChatGPT: given a predefined list of stocks (generated by an external oracle), recommend which ones were worth investing in. On the surface, this query seemed simple enough ("Please recommend X stocks from the provided list"). What could possibly go wrong?
    Interestingly, the results varied significantly based on how the input was formatted. When I provided the stocks as a numbered list or as a line-separated list, ChatGPT disproportionately recommended stocks from the beginning of the list. The only way to make it consider stocks further down the list was to explicitly ask it to do so. However, when I presented the stocks as a single line separated by commas, ChatGPT included stocks from different positions on the list, but these recommendations turned out to be flawed.
    [Image: ChatGPT search on a list input (left, only considering the first stock) vs. a single-line input (right, searching several stocks)]
    Upon closer examination, some of the stocks it recommended in the single-line format were based on hallucinations. For instance, ChatGPT advised investing in Stock X while citing articles and data that were entirely about Stock Y.
    [Image: ChatGPT relying on an irrelevant source to recommend a stock to invest in]
    Stock Recommendations - Key Takeaways
    This behavior highlights several critical issues:
    Input Formatting Bias: Lists, whether numbered or line-separated, are often unordered, yet investigating the actual search query revealed that ChatGPT gave undue weight to the initial stocks on the list; only these stocks were thoroughly considered. Humans often use lists for readability, but LLMs interpret them differently. In general, LLMs perform better when inputs are stripped of unnecessary formatting, such as lists or complex layouts, and focus on one item at a time.
    Misleading References: The references provided by ChatGPT often appear credible, leading users to trust its recommendations. However, further investigation revealed irrelevant or tangential sources. Even users who double-check would need to cross-verify each source against the conclusions drawn from it.
    Oversimplified Search Methodology: Unlike humans, who approach research iteratively, breaking a query down into smaller, targeted searches, ChatGPT typically attempts to answer a complex question with a single search iteration. This approach often falls short when handling nuanced or multi-faceted research tasks, leading to incomplete or incorrect conclusions.
    To avoid these pitfalls, users should minimize unnecessary formatting in inputs, verify that the search query aligns with their intended question, and cross-check that the sources match the LLM's conclusions, such as ensuring that referenced article titles align with the claims drawn from them. Without such precautions, relying on ChatGPT for stock recommendations is no better than a lottery. This example underscores the need for critical oversight when using LLMs for decision-making in high-stakes scenarios like investments.
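    The first and third takeaways translate into a small amount of plumbing. The sketch below is one hedged way to apply them: strip list formatting from the input and query one item at a time so that position in the list cannot bias the result. Here ask() stands in for any single LLM call (such as the one sketched earlier), and the tickers are placeholders.
```python
# Minimal sketch: normalize list-formatted input and assess one item per query,
# so the model cannot over-weight whatever happens to appear first in the list.
import re

def ask(prompt: str) -> str:
    """Placeholder for a single LLM call (see the earlier sketch for one option)."""
    raise NotImplementedError

def flatten_list(raw: str) -> list[str]:
    """Turn a numbered or line-separated list into clean, formatting-free items."""
    items = [re.sub(r"^\s*\d+[.)]\s*", "", line).strip() for line in raw.splitlines()]
    return [item for item in items if item]

def assess_each(tickers: list[str]) -> dict[str, str]:
    """Query one ticker at a time and ask for sources that can be cross-checked."""
    verdicts = {}
    for ticker in tickers:
        verdicts[ticker] = ask(
            f"Search for recent analyst coverage of {ticker} only. "
            f"Cite your sources and confirm each one is actually about {ticker}."
        )
    return verdicts

stocks = flatten_list("1. AAPL\n2. MSFT\n3. NVDA")  # hypothetical input list
# assess_each(stocks) then lets you verify the cited sources ticker by ticker.
```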
    Conclusion
    ChatGPT and its LLM counterparts represent one of the most groundbreaking innovations of our time, with immense potential to transform how we approach information and problem-solving. However, their effectiveness is only as strong as the data sources and instructions we provide. Poorly optimized inputs can directly impact the quality of results, and even with ChatGPT's powerful capabilities, its web search process is constrained by inherent limitations, such as evaluating only a limited number of search results and relying on a single search iteration.
    While LLMs give the impression of seamlessly handling complex tasks and vast amounts of input, their processes involve certain bottlenecks that can influence accuracy. In the future, advancements like multi-function querying and stepwise task decomposition may allow LLMs to better understand user intent, plan their approach, and break tasks into manageable steps for improved results.
    For now, it remains essential to verify the answers we receive and avoid taking them at face value. By understanding the LLM's potential weak points, we can help it navigate these challenges, whether by refining our inputs, clarifying tasks, or critically evaluating the generated outputs. To fully harness the incredible capabilities of LLMs without falling prey to their limitations, we must strike a balance between trust and caution. Ignorance is no longer bliss; it's a liability. Being informed and proactive is the key to making the most of this remarkable technology.
  • Marvel Rivals Fans Are Using the Invisible Woman to Detect Alleged Bot Matches
    www.ign.com
    New Marvel Rivals character the Invisible Woman is proving useful when it comes to detecting what fans believe are bot enemies in their lobbies. Bots are an issue Marvel Rivals fans have obsessed over for weeks, with many believing that developer NetEase Games may be pitting them against low-level AI opponents to help keep players engaged. Discussion on this topic has only ramped up since Season 1 introduced Mister Fantastic and the Invisible Woman last Friday, but the hero additions brought more than changes to the current meta.
    As players began to tap into what made these new Fantastic Four characters tick, Reddit user barky1616 shared a video showing an off-the-wall use for the Invisible Woman's trademark ability. The clip shows Sue Storm turning invisible and, somehow, blocking the path of half of the enemy team by simply standing in front of them. They don't try to walk around or fight her until she is booted out of invisibility mode, at which point the battle continues as you'd expect. It's a bizarre video that many are using as additional evidence to suggest that bots are quickly becoming a bigger issue for Marvel Rivals.
    The idea is that, because the other team is supposedly made up of bots, they are unable to realize their path is blocked by this new hero. Your results may vary if you choose to try this Invisible Woman trick on your own, but it's still a strange clip that has the community scratching their heads at best and fearing a more substantial bot problem at worst. Without confirmation from NetEase, it's unclear if AI enemies are truly sneaking their way into Marvel Rivals matches or if there's something else going on. IGN has reached out to NetEase about the alleged existence of bots in Marvel Rivals.
    In between what many are referring to as bot matches, players are continuing to enjoy the content drop delivered with Season 1. While this first wave of the season brought half the Fantastic Four as playable characters, the second half arrives with The Thing and the Human Torch. While we wait to see how these Marvel icons fare in the hero shooter setting, you can read up on every major balance change introduced last Friday. You can also read up on how players are responding to NetEase's crackdown on mods and why some are having trouble taking Reed Richards seriously.