• The rise of on-device AI is reshaping the future of PCs and smartphones
    www.techspot.com
    The big picture: While everything related to generative AI (GenAI) seems to be evolving at breakneck speed, one area is advancing even faster than the rest: running AI-based foundation models directly on devices like PCs and smartphones. Even just a year ago, the general thinking was that most advanced AI applications would need to run in the cloud for some time to come. Recently, however, several major developments strongly suggest that on-device AI, particularly for advanced inferencing-based applications, is becoming a reality starting this year.
    The implications of this shift are huge and will likely have an enormous impact on everything from the types of AI models deployed to the kinds of applications created, how those applications are architected, the types of silicon being used, the requirements for connectivity, how and where data is stored, and much more.
    The first signs of this shift arguably started appearing about 18 months ago with the emergence of small language models (SLMs) such as Microsoft's Phi, Meta's Llama 8B, and others. These SLMs were intentionally designed to fit within the smaller memory footprint and more limited processing power of client devices while still offering impressive capabilities. While they weren't meant to replicate the capabilities of massive cloud-based datacenters running models like OpenAI's GPT-4, these small models performed remarkably well, particularly for focused applications.
    As a result, they are already having a real-world impact. Microsoft, for example, will be bringing its Phi models to Copilot+ PCs later this year, a release that I believe will ultimately prove to be significantly more important and impactful than the Recall feature the company initially touted for these devices. Copilot+ PCs with the Phi models will not only generate high-quality text and images without an internet connection but will also do so in a uniquely customized manner. The reason? Because they will run locally on the device and have access (with appropriate permissions, of course) to files already on the machine. This means fine-tuning and personalization capabilities should be significantly easier than with current methods. More importantly, this local access will allow them to create content in the user's voice and style. Additionally, AI agents based on these models should have easier access to calendars, correspondence, preferences, and other local data, enabling them to become more effective digital assistants.
    Beyond SLMs, the recent explosion of interest around DeepSeek has triggered wider recognition of the potential to bring even larger models onto devices through a process known as model distillation. The core concept behind distillation is that AI developers can create a new model that extracts and condenses the most critical learnings from a significantly larger large language model (LLM) into a smaller version.
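    To make the distillation idea a bit more concrete, here is a minimal, hypothetical sketch of the standard soft-target training loss used for this kind of teacher-student setup. It assumes a PyTorch environment, and the temperature, weighting, and tensor shapes are purely illustrative rather than anything described in the column.

    # Minimal knowledge-distillation sketch (illustrative; assumes PyTorch is installed).
    # A small "student" model is trained to match the softened output distribution of a
    # larger "teacher" model, in addition to the usual hard-label loss.
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
        # Soft targets: the student mimics the teacher's softened probabilities.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)
        # Hard targets: ordinary cross-entropy against the ground-truth labels.
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard

    # Toy usage with random tensors standing in for real model outputs.
    student_logits = torch.randn(4, 32000)   # batch of 4, 32k-token vocabulary
    teacher_logits = torch.randn(4, 32000)
    labels = torch.randint(0, 32000, (4,))
    print(distillation_loss(student_logits, teacher_logits, labels).item())

    In practice, the smaller student model sees vastly less compute and memory at inference time, which is what makes it a candidate for running on a PC or phone.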
    The result is models small enough to fit on devices while still retaining the broad general-purpose knowledge of their larger counterparts.
    In real-world terms, this means much of the power of even the largest and most advanced cloud-based models, including those using chain-of-thought (CoT) and other reasoning-focused technologies, will soon be able to run locally on PCs and smartphones. Combining these general-purpose models with more specialized small language models suddenly expands the range of possibilities for on-device AI in astonishing ways (a point that Qualcomm recently explored in a newly released white paper).
    Of course, as promising as this shift is, several challenges and practical realities must be considered. First, developments are happening so quickly that it's difficult for anyone to keep up and fully grasp what's possible. To be clear, I have no doubt that thousands of brilliant minds are working right now to bring these capabilities to life, but it will take time before they translate into intuitive, useful tools. Additionally, many of these tools will likely require users to rethink how they interact with their devices. And as we all know, habits are hard to break and slow to change. Even now, for example, many people continue to rely on traditional search engines rather than tapping into the typically more intuitive, comprehensive, and better-organized results that applications such as ChatGPT, Gemini, and Perplexity can offer. Changing how we use technology takes time.
    Furthermore, while our devices are becoming more powerful, that doesn't mean the capabilities of the most advanced cloud-based LLMs will become obsolete anytime soon. The most significant advancements in AI-based tools will almost certainly continue to emerge in the cloud first, ensuring ongoing demand for cloud-based models and applications. However, what remains uncertain is exactly how these two sets of capabilities, advanced cloud-based AI and powerful on-device AI, will coexist.
    As I wrote last fall in a column titled "How Hybrid AI is Going to Change Everything," the most logical outcome is some form of hybrid AI environment that leverages the best of both worlds. Achieving this, however, will require serious work in creating hybridized, distributed computing architectures and, more importantly, developing applications that can intelligently leverage these distributed computing resources. In theory, distributed computing has always sounded like an excellent idea, but in practice, making it work has proven far more challenging than expected.
    On top of these challenges, there are a few more practical concerns. On-device, for instance, balancing computing resources across multiple AI models running simultaneously won't be easy. From a memory perspective, the simple solution would be to double the RAM capacity of all devices, but that isn't realistically going to happen anytime soon. Instead, clever mechanisms and new memory architectures for efficiently moving models in and out of memory will be essential.
    In the case of distributed applications that utilize both cloud and on-device compute, the demand for always-on connectivity will be greater than ever. Without reliable connections, hybrid AI applications won't function effectively. In other words, there has never been a stronger argument for 5G-equipped PCs than in a hybrid AI-driven world.
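    As a purely illustrative sketch of what such a hybrid application might look like, the snippet below routes a request to a local small model when connectivity is missing or the task is simple, and to a cloud model otherwise. The function names, request fields, and routing policy are hypothetical, not anything described in the column.

    # Hypothetical hybrid-AI dispatcher: prefer the on-device model, fall back to the
    # cloud only for heavy requests when a connection is available. Illustrative only.
    from dataclasses import dataclass

    @dataclass
    class Request:
        prompt: str
        needs_deep_reasoning: bool  # e.g., long chain-of-thought style tasks

    def run_local_slm(prompt: str) -> str:
        return f"[local SLM] {prompt[:40]}..."   # stand-in for an on-device model call

    def run_cloud_llm(prompt: str) -> str:
        return f"[cloud LLM] {prompt[:40]}..."   # stand-in for a cloud API call

    def route(request: Request, online: bool) -> str:
        # Simple policy: use the cloud only when the task is heavy AND a connection exists.
        if request.needs_deep_reasoning and online:
            return run_cloud_llm(request.prompt)
        return run_local_slm(request.prompt)

    print(route(Request("Summarize my meeting notes", False), online=True))
    print(route(Request("Plan a multi-step research project", True), online=False))

    Real hybrid applications would need far more nuanced policies (latency, privacy, battery, and cost all matter), but the basic split between local and cloud execution is the architectural question the column is pointing at.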
    Even in on-device computing architectures, critical new developments are on the horizon. Yes, the integration of NPUs into the latest generation of devices was intended to enhance AI capabilities. However, given the enormous diversity in current NPU architectures and the need to rewrite or refactor applications for each of them, we may see more focus on running AI applications on local GPUs and CPUs in the near term. Over time, as more efficient methods are developed for writing code that abstracts away the differences in NPU architectures, this challenge will be resolved, but it may take longer than many initially expected.
    There is no doubt that the ability to run impressively capable AI models and applications directly on our devices is an exciting and transformative shift. However, it comes with important implications that must be carefully considered and adapted to. One thing is certain: how we think about our devices and what we can do with them is about to change forever.
    Bob O'Donnell is the founder and chief analyst of TECHnalysis Research, LLC, a technology consulting firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on Twitter @bobodtech.
    Masthead credit: Solen Feyissa
  • Asus Thunderbolt 5 eGPU to go on sale next week
    www.digitaltrends.com
    Asus' latest external graphics unit, the ROG XG Mobile 2025, will be available starting February 25, as reported by IT Home. Showcased at CES 2025, the latest iteration comes with a sleek design and features Thunderbolt 5 connectivity, marking a major shift from the proprietary XG Mobile connector used in previous versions.
    The original XG Mobile eGPU, launched alongside the ROG Flow X13 in 2021, came equipped with RTX 3080, 3070, and Radeon RX 6850M XT options. The biggest issue with the XG Mobile was its PCIe 3.0 x8 interface, which limited the bandwidth to 63 Gbps, far below what Thunderbolt 5 offers. Additionally, the previous proprietary XG Mobile port limited compatibility to select ROG laptops and the ROG Ally handheld. The ROG XG Mobile 2023, which included RTX 4090 and RTX 4080 laptop GPUs, kept the same proprietary connector, further restricting its use outside of Asus' ecosystem.
    With the introduction of Thunderbolt 5, the latest XG Mobile eGPU provides up to 80 Gbps of data transfer, a significant jump over Thunderbolt 4's 40 Gbps, potentially reducing performance bottlenecks in external GPU setups. Unlike the older models, which only worked with specific ROG Flow laptops, the 2025 version will be usable with any device that supports Thunderbolt 5, Thunderbolt 4, or USB4.
    The ROG XG Mobile 2025 will be available with the latest RTX 50-series laptop GPUs in two configurations: an RTX 5070 Ti model starting at $1,199 and an RTX 5090 variant for $2,199. The RTX 5090 variant features 24GB of GDDR7 VRAM and runs at a 150W power limit, lower than the RTX 5090 desktop variant, which has 32GB VRAM and 575W power draw. The RTX 5070 Ti model, while more affordable, is aimed at mid-range users looking for an external GPU setup.
    In terms of design, the 2025 XG Mobile gets a new translucent finish and is slightly smaller than its predecessor. It also weighs under 1kg, making it more portable than traditional external GPU enclosures, which often require a separate power brick. Like previous versions, it also doubles as a docking station, featuring DisplayPort 2.1, HDMI 2.1, USB-A ports, an SD card reader, and Ethernet connectivity.
    By adopting Thunderbolt 5, Asus has addressed one of the biggest drawbacks of its previous XG Mobile lineup: limited compatibility. While the price remains steep, especially for the RTX 5090 model, this version is finally an option for a broader range of laptop users, not just those with ROG Flow devices. However, it remains to be seen whether the performance over Thunderbolt 5 can match the direct PCIe connection of previous models.
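    As a rough sanity check of the bandwidth figures quoted above: PCIe 3.0 signals at 8 GT/s per lane with 128b/130b encoding, so an x8 link works out to roughly 63 Gbps of usable throughput. The short sketch below just reproduces that arithmetic; the exact numbers are the commonly cited interface specs, not figures from Asus.

    # Back-of-the-envelope check of the interface bandwidths mentioned above.
    pcie3_gts_per_lane = 8            # PCIe 3.0 signalling rate, GT/s per lane
    lanes = 8                         # x8 link used by the original XG Mobile
    encoding_efficiency = 128 / 130   # PCIe 3.0 uses 128b/130b encoding

    pcie3_x8_gbps = pcie3_gts_per_lane * lanes * encoding_efficiency
    print(f"PCIe 3.0 x8: ~{pcie3_x8_gbps:.0f} Gbps")   # ~63 Gbps
    print("Thunderbolt 4: 40 Gbps; Thunderbolt 5: up to 80 Gbps (per the article)")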
  • Samsung starts bringing Galaxy S25 camera tricks to Galaxy S24 phones
    www.digitaltrends.com
    Samsung's previous-gen flagship phones are yet to get a taste of Android 15, but the adventurous few who signed up for beta testing finally have some consolation coming their way. Samsung has started the rollout of One UI 7's fourth beta update for the Galaxy S24 series smartphones.
    Multiple users on X, Samsung's Community Forum, and YouTube have shared news of the latest beta update arriving on their Galaxy S24 series phones in India and Korea. The update also brings the February Android security patch, but the most notable addition is a key camera trick that Samsung introduced with the Galaxy S25 series earlier this year.
    "Galaxy S24 Beta 4 (ZYBA) changelog: Added AI Filter in Camera. Added Samsung Log in Camera (S24 Ultra). Fixed UI error of lock screen and AOD. Fixed quick panel UI error. Fixed grouping alarm error. Fixed stuttering when releasing fingerprint. Fixed disappearing..." Tarun Vats (@tarunvats33), February 19, 2025
    The feature in question is LOG video capture. Apple introduced this pro feature with the iPhone 15 Pro, and this year, Samsung finally caught up. LOG, in the simplest terms, is flat footage that is ideal for color grading and post-processing to get the desired effect.
    LOG videos look greyish after capture, but they retain a lot more detail. Most importantly, they are untouched by algorithmic color correction, especially with dynamic range data. Essentially, what you get is pristine sensor data in your video, ready for grading and chroma correction.
    These videos need specialized software for editing, such as Adobe Premiere Pro or DaVinci Resolve, and a lot of skill, too. To enable this feature on your compatible Galaxy smartphone, you will have to enable the Log toggle from within the Advanced video options in the camera settings.
    Thankfully, Samsung's Gallery app gives users an option called Correct Color that automatically fixes the color situation in LOG videos without having to launch a pro-grade video editing app on a computer.
    In addition to LOG video capture support, which is exclusive to the Galaxy S24 Ultra, the entire Galaxy S24 series is also getting support for the new AI filters. These filters take an approach similar to photographic styles on the current-generation iPhones, but add more granular controls for adjusting values such as temperature, contrast, and saturation.
    The most notable element is that, apart from the new crop of filters, users can create their own unique filters by simply picking any picture from the phone's gallery. The onboard AI will copy its signature look and color profile, and will create a new camera filter based on it.
  • Tech, Media & Telecom Roundup: Market Talk
    www.wsj.com
    Find insight on Alphabet, STMicroelectronics, Delivery Hero, and more in the latest Market Talks covering Technology, Media and Telecom.
  • Microsoft's new interactive AI world model still has a long way to go
    arstechnica.com
    Despite improvements, Microsoft's new model is still mainly useful for low-res prototypes.
    Kyle Orland, Feb 19, 2025
    Adding a character using WHAM is as simple as dropping an image into existing footage. Credit: Microsoft / Nature
    For a while now, many AI researchers have been working to integrate a so-called "world model" into their systems. Ideally, these models could infer a simulated understanding of how in-game objects and characters should behave based on video footage alone, then create fully interactive video that instantly simulates new playable worlds based on that understanding.
    Microsoft Research's new World and Human Action Model (WHAM), revealed today in a paper published in the journal Nature, shows how quickly those models have advanced in a short time. But it also shows how much further we have to go before the dream of AI crafting complete, playable gameplay footage from just some basic prompts and sample video footage becomes reality.
    More consistent, more persistent
    Much like Google's Genie model before it, WHAM starts by training on "ground truth" gameplay video and input data provided by actual players. In this case, that data comes from Bleeding Edge, a 4 vs. 4 online brawler released in 2020 by Microsoft subsidiary Ninja Theory. By collecting actual player footage since launch (as allowed under the game's EULA), Microsoft gathered the equivalent of seven player-years' worth of gameplay video paired with real player inputs.
    Early in that training process, Microsoft Research's Katja Hoffman said the model would get easily confused, generating inconsistent clips that would "deteriorate [into] these blocks of color." After 1 million training updates, though, the WHAM model started showing basic understanding of complex gameplay interactions, such as a power cell item exploding after three hits from the player or the movements of a specific character's flight abilities. The results continued to improve as the researchers threw more computing resources and larger models at the problem, according to the Nature paper.
    To see just how well the WHAM model generated new gameplay sequences, Microsoft tested the model by giving it up to one second's worth of real gameplay footage and asking it to generate what subsequent frames would look like based on new simulated inputs. To test the model's consistency, Microsoft used actual human input strings to generate up to two minutes of new AI-generated footage, which was then compared to actual gameplay results using the Fréchet Video Distance metric.
    Microsoft boasts that WHAM's outputs can stay broadly consistent for up to two minutes without falling apart, with simulated footage lining up well with actual footage even as items and environments come in and out of view. That's an improvement over even the "long horizon memory" of Google's Genie 2 model, which topped out at a minute of consistent footage.
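    To make the generation loop described above a bit more concrete, here is a deliberately simplified, hypothetical sketch of how an autoregressive world model turns a short clip of context frames plus a stream of controller inputs into new footage. None of the names, shapes, or interfaces come from Microsoft's actual WHAM code; the model is a stand-in stub.

    # Illustrative autoregressive world-model rollout (not Microsoft's WHAM code).
    import numpy as np

    class DummyWorldModel:
        """Stand-in for a trained frame-prediction model (hypothetical interface)."""
        def predict_next_frame(self, frames, action):
            # A real model would condition on the recent frames and the action;
            # here we just return a slightly perturbed copy of the last frame.
            noise = np.random.normal(0, 1, frames[-1].shape)
            return np.clip(frames[-1] + noise, 0, 255)

    def rollout(model, context_frames, action_sequence):
        frames = list(context_frames)           # e.g. ~1 second of real gameplay
        for action in action_sequence:          # human or simulated controller inputs
            next_frame = model.predict_next_frame(frames, action)
            frames.append(next_frame)           # feed predictions back in autoregressively
        return frames

    # Toy usage: 10 low-res context frames, 60 generated steps.
    context = [np.zeros((180, 300, 3)) for _ in range(10)]
    actions = [{"stick": (0.0, 1.0), "buttons": []} for _ in range(60)]
    video = rollout(DummyWorldModel(), context, actions)
    print(len(video), "frames total")

    The hard part, of course, is the model itself: because every generated frame becomes context for the next one, small errors compound, which is why consistency over minutes of footage is the benchmark Microsoft emphasizes.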
    Microsoft also tested WHAM's ability to respond to a diverse set of randomized inputs not found in its training data. These tests showed broadly appropriate responses to many different input sequences, based on human annotations of the resulting footage, even as the best models fell a bit short of the "human-to-human baseline."
    The most interesting result of Microsoft's WHAM tests, though, might be in the persistence of in-game objects. Microsoft provided examples of developers inserting images of new in-game objects or characters into pre-existing gameplay footage. The WHAM model could then incorporate that new image into its subsequent generated frames, with appropriate responses to player input or camera movements. With just five edited frames, the new object "persisted" appropriately in subsequent frames anywhere from 85 to 98 percent of the time, according to the Nature paper.
    A long way to go
    Despite all the improvements Microsoft boasts about in its WHAM model, the company says it still sees rough prototyping by game developers as the primary current use case. Developers can play around with a prototype "WHAM Demonstrator" on the Azure AI Foundry to see how the system can generate new interactive gameplay sequences based on just a few frames of video.
    That demonstrator currently generates the resulting video based on pre-recorded inputs, at a rate much slower than necessary for actual live gameplay. In a private demonstration for press, though, Microsoft also showed an early prototype of a real-time WHAM-powered video-generation tool, which instantly generates new frames of gameplay based on immediate inputs from the user. Users can even jump from scene to scene instantly just by feeding a fresh set of sample frames into the system.
    That kind of real-time, "generate as you go" world model is something of a holy grail for this branch of AI research. And while the current version Microsoft showed off "is definitely not the same as playing the game," as Hoffman said during the demonstration, it's also "decidedly not like a traditional video game experience," she said. "It has a new quality. It's really interesting to explore and see what I can do in this setting."
    Don't get your hopes up for a new wave of AI-generated games any time soon, though. Microsoft's prototype WHAM tool is still severely limited to a very muddy 300×180 resolution (comparable to a screen on the original Nintendo DS) at 10 frames per second, well below the playable baseline for modern games.
    And despite all the much-ballyhooed improvements in consistency and persistence, there's still an ethereal, dreamlike quality to many of the objects shown even in the low-res WHAM footage. The player character in particular tends to morph and stretch like a shapeshifter rather than a tight player model with a solid and consistent skeleton.
    Still, Microsoft says it hopes WHAM is a first step toward a future where AI can craft high-end interactive experiences at the drop of a hat. "Hopefully this gives you a sense of just what we might be thinking about as we start to work towards interactive experiences that are generated on the fly by these real-time-capable generative AI models," Hoffman said.
    Kyle Orland, Senior Gaming Editor: Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.
  • AI making up cases can get lawyers fired, scandalized law firm warns
    arstechnica.com
    Nauseatingly frightening: Law firm condemns careless AI use in court.
    Ashley Belanger, Feb 19, 2025
    A Morgan & Morgan injury law firm ad is seen on a bus in New York City. Credit: NurPhoto / Contributor
    Morgan & Morgan, which bills itself as "America's largest injury law firm" that fights "for the people," learned the hard way this month that even one lawyer blindly citing AI-hallucinated case law can risk sullying the reputation of an entire nationwide firm.
    In a letter shared in a court filing, Morgan & Morgan's chief transformation officer, Yath Ithayakumar, warned the firm's more than 1,000 attorneys that citing fake AI-generated cases in court filings could be cause for disciplinary action, including "termination."
    "This is a serious issue," Ithayakumar wrote. "The integrity of your legal work and reputation depend on it."
    Morgan & Morgan's AI troubles were sparked in a lawsuit claiming that Walmart was involved in designing a supposedly defective hoverboard toy that allegedly caused a family's house fire. Despite being an experienced litigator, Rudwin Ayala, the firm's lead attorney on the case, cited eight cases in a court filing that Walmart's lawyers could not find anywhere except on ChatGPT.
    These "cited cases seemingly do not exist anywhere other than in the world of Artificial Intelligence," Walmart's lawyers said, urging the court to consider sanctions.
    So far, the court has not ruled on possible sanctions. But Ayala was immediately dropped from the case and was replaced by his direct supervisor, T. Michael Morgan, Esq. Expressing "great embarrassment" over Ayala's fake citations that wasted the court's time, Morgan struck a deal with Walmart's attorneys to pay all fees and expenses associated with replying to the errant court filing, which Morgan told the court should serve as a "cautionary tale" for both his firm and "all firms."
    Reuters found that lawyers improperly citing AI-hallucinated cases have scrambled litigation in at least seven cases in the past two years. Some lawyers have been sanctioned, including an early case last June fining lawyers $5,000 for citing chatbot "gibberish" in filings. And in at least one case in Texas, Reuters reported, a lawyer was fined $2,000 and required to attend a course on responsible use of generative AI in legal applications. But in another high-profile incident, Michael Cohen, Donald Trump's former lawyer, avoided sanctions after Cohen accidentally gave his own attorney three fake case citations to help his defense in his criminal tax and campaign finance litigation.
    In a court filing, Morgan explained that Ayala was solely responsible for the AI citations in the Walmart case.
    No one else involved "had any knowledge or even notice" that the errant court filing "contained any AI-generated content, let alone hallucinated content," Morgan said, insisting that had he known, he would have required Ayala to independently verify all citations.
    "The risk that a Court could rely upon and incorporate invented cases into our body of common law is a nauseatingly frightening thought," Morgan said, "deeply" apologizing to the court while acknowledging that AI can be "dangerous when used carelessly."
    Further, Morgan said, it's clear that his firm must work harder to train attorneys on AI tools, which the firm has been using since November 2024 and which were intended to support, not replace, lawyers as they researched cases. Despite the firm supposedly warning lawyers that AI can hallucinate or fabricate information, Ayala shockingly claimed that he "mistakenly" believed that the firm's "internal AI support" was "fully capable" of not just researching but also drafting briefs.
    "This deeply regrettable filing serves as a hard lesson for me and our firm as we enter a world in which artificial intelligence becomes more intertwined with everyday practice," Morgan told the court. "While artificial intelligence is a powerful tool, it is a tool which must be used carefully. There are no shortcuts in law."
    Andrew Perlman, dean of Suffolk University's law school, advocates for responsible AI use in court and told Reuters that lawyers citing ChatGPT or other AI tools without verifying outputs is "incompetence, just pure and simple."
    Morgan & Morgan declined Ars' request to comment.
    Law firm makes changes to prevent AI citations
    Morgan & Morgan wants to ensure that no one else at the firm makes the same mistakes that Ayala did. In the letter sent to all attorneys, Ithayakumar reiterated that AI cannot be solely used to dependably research cases or draft briefs, as "AI can generate plausible responses that may be entirely fabricated information."
    "As all lawyers know (or should know), it has been documented that AI sometimes invents case law, complete with fabricated citations, holdings, and even direct quotes," his letter said. "As we previously instructed you, if you use AI to identify cases for citation, every case must be independently verified."
    While Harry Surden, a law professor who studies AI legal issues, told Reuters that "lawyers have always made mistakes," he also suggested that an increasing reliance on AI tools in the legal field requires lawyers to increase AI literacy to fully understand "the strengths and weaknesses of the tools." (A July 2024 Reuters survey found that 63 percent of lawyers have used AI and 12 percent use it regularly, after experts signaled an AI-fueled paradigm shift in the legal field in 2023.)
    At Morgan & Morgan, it has become clear in 2025 that better AI training is needed across its nationwide firm.
    Morgan told the court that the firm's technology team and risk management members have met to "discuss and implement further policies to prevent another occurrence in the future." Additionally, a checkbox acknowledging AI's potential for hallucinations was added, and it must be clicked before any attorney at the firm can access the internal AI platform.
    "Further, safeguards and training are being discussed to protect against any errant uses of artificial intelligence," Morgan told the court.
    Whether these efforts will help Morgan & Morgan avoid sanctions is unclear, but Ithayakumar suggested that on par with sanctions might be the reputational loss to the firm's or any individual lawyer's credibility.
    "Blind reliance on AI is equivalent to citing an unverified case," Ithayakumar told lawyers, saying that it is their "responsibility and ethical obligation" to verify AI outputs. "Failure to comply with AI verification requirements may result in court sanctions, professional discipline, discipline by the firm (up to and including termination), and reputational harm. Every lawyer must stay informed of the specific AI-related rules and orders in the jurisdictions where they practice and strictly adhere to these obligations."
    Ashley Belanger, Senior Policy Reporter: Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.
  • Microsoft wants to use generative AI tool to help make video games
    www.newscientist.com
    The Muse AI was trained on the video game Bleeding Edge. Credit: Microsoft
    An artificial intelligence model from Microsoft can recreate realistic video game footage that the company says could help designers make games, but experts are unconvinced that the tool will be useful for most game developers.
    Neural networks that can produce coherent and accurate footage from video games are not new. A recent Google-created AI generated a fully playable version of the classic computer game Doom without access to the underlying game engine. The original Doom, however, was released in 1993; more modern games are far more complex, with sophisticated physics and computationally intensive graphics, which have proved trickier for AIs to faithfully recreate.
    Now, Katja Hofmann at Microsoft Research and her colleagues have developed an AI model called Muse, which can recreate full sequences of the multiplayer online battle game Bleeding Edge. These sequences appear to obey the game's underlying physics and keep players and in-game objects consistent over time, which implies that the model has grasped a deep understanding of the game, says Hofmann.
    Muse is trained on seven years of human gameplay data, including both controller and video footage, provided by Bleeding Edge's Microsoft-owned developer, Ninja Theory. It works similarly to large language models like ChatGPT; when given an input, in the form of a video game frame and its associated controller actions, it is tasked with predicting the gameplay that might come next. "It's really quite mind-boggling, even to me now, that purely from training models to predict what's going to appear next it learns a sophisticated, deep understanding of this complex 3D environment," says Hofmann.
    To understand how people might use an AI tool like Muse, the team also surveyed game developers to learn what features they would find useful. As a result, the researchers added the capability to iteratively adjust to changes made on the fly, such as a player's character changing or new objects entering a scene. "This could be useful for coming up with new ideas and trying out what-if scenarios for developers," says Hofmann.
    But Muse is still limited to generating sequences within the bounds of the original Bleeding Edge game: it can't come up with new concepts or designs. And it is unclear if this is an inherent limitation of the model, or something that could be overcome with more training data from other games, says Mike Cook at King's College London. "This is a long, long way away from the idea that AI systems can design games on their own."
    While the ability to generate consistent gameplay sequences is impressive, developers might prefer to have greater control, says Cook. "If you build a tool that is actually testing your game, running the game code itself, you don't need to worry about persistency or consistency, because it's running the actual game. So these are solving problems that generative AI has itself introduced."
    It's promising that the model is designed with developers in mind, says Georgios Yannakakis at the Institute of Digital Games at the University of Malta, but it might not be feasible for most developers who don't have so much training data. It comes down to the question of whether it is worth doing, says Yannakakis. Microsoft spent seven years collecting data and training these models to demonstrate that you can actually do it.
    But would an actual game studio afford [to do] this?
    Even Microsoft itself is equivocal over whether AI-designed games could be on the horizon: when asked if developers in its Xbox gaming division might use the tool, the company declined to comment.
    While Hofmann and her team are hopeful that future versions of Muse will be able to generalise beyond their training data, coming up with new scenarios and levels for games on which they are trained, as well as working for different games, this will be a significant challenge, says Cook, because modern games are so complex.
    "One of the ways a game distinguishes itself is by changing systems and introducing new conceptual level ideas. That makes it very hard for machine learning systems to get outside of their training data and innovate and invent beyond what they've seen," he says.
  • The world's glaciers have shrunk more than 5 per cent since 2000
    www.newscientist.com
    The Rhône glacier in the Swiss Alps in 2024. Credit: FABRICE COFFRINI/AFP via Getty Images
    Glaciers worldwide have shrunk by more than 5 per cent on average since 2000, according to the most comprehensive assessment yet. This rapid rate of melting has accelerated by more than a third in the past decade as climate change continues apace.
    "Any degree of warming matters for glaciers," says Noel Gourmelen at the University of Edinburgh, UK. "They are a barometer for climate change."
    The new numbers come from a global consortium of hundreds of researchers called the Glacier Mass Balance Intercomparison Exercise. The group aimed to reduce the uncertainty around how much the planet's 200,000 or so glaciers have melted by using a standard procedure to assess different measures of their change in size. This includes gravity and elevation measurements from 20 satellites as well as ground-based measurements.
    Between 2000 and 2011, glaciers were melting at a rate of about 231 billion tonnes of ice per year on average, the researchers found. This melt rate increased between 2012 and 2023 to 314 billion tonnes per year, an acceleration of more than a third. 2023 saw a record loss of mass of around 548 billion tonnes.
    These numbers are in line with previous estimates. "But this comprehensive look provides a bit more confidence about the change that we see on glaciers," says Gourmelen, who is part of the consortium. "And there's a clear acceleration."
    Altogether, the thawing of more than 7 trillion tonnes of glacial ice since 2000 has raised sea levels by almost 2 centimetres, making this melt the second biggest contributor to sea level rise so far, behind the expansion of water due to warming oceans.
    "This is a consistent story of glacial change," says Tyler Sutterley at the University of Washington in Seattle. "Regions that have had glaciers since time immemorial are losing these icons of ice."
    Glaciers in the Alps have lost more ice than any other region, shrinking by nearly 40 per cent since 2000. In the Middle East, New Zealand and western North America, glaciers have also seen reductions of more than 20 per cent. Depending on future emissions, the world's glaciers are projected to lose between a quarter and half of their ice by the end of the century.
    Journal reference: Nature, DOI: 10.1038/s41586-024-08545-z
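    As a quick sanity check on the figures reported above, the short sketch below reproduces the "more than a third" acceleration and the roughly 2-centimetre sea-level contribution. The ocean surface area used (about 361 million square kilometres) is an assumed standard value, not a figure from the study.

    # Back-of-the-envelope check of the melt figures quoted in the article.
    melt_2000_2011 = 231   # billion tonnes of ice per year (from the article)
    melt_2012_2023 = 314   # billion tonnes of ice per year (from the article)
    acceleration = melt_2012_2023 / melt_2000_2011 - 1
    print(f"Acceleration: {acceleration:.0%}")                 # ~36%, i.e. "more than a third"

    total_melt_kg = 7e12 * 1000                                # ~7 trillion tonnes since 2000
    ocean_area_m2 = 3.61e14                                    # assumed ocean area, ~361 million km^2
    sea_level_rise_m = total_melt_kg / 1000 / ocean_area_m2    # 1000 kg per m^3 of meltwater
    print(f"Sea-level rise: {sea_level_rise_m * 100:.1f} cm")  # ~1.9 cm, "almost 2 centimetres"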
  • Apple's new $599 iPhone with AI is the Hail Mary it needs
    www.businessinsider.com
    Apple launched the new iPhone 16e on Wednesday. It offers the company's new Apple Intelligence technology and costs $599. It's an "on-ramp" for consumers to enter the Apple ecosystem, one analyst says.
    Apple just put its AI within reach of a lot more people. The move could be what it needs to reinvigorate sales of iPhones after several years of struggles. The new iPhone 16e, the cheapest of the iPhone 16 lineup, makes Apple Intelligence available for a fraction of the cost of its older cousins.
    Apple announced the new device on Wednesday after CEO Tim Cook teased a new "member of the family" in the week leading up to the launch. Starting at $599, the iPhone 16e is half the price of the $1,199 iPhone 16 Pro Max.
    With pressure on iPhone sales mounting, this launch represents its strategy to "compete more aggressively" with rival entry-level smartphones, Jacob Bourne, an analyst at Business Insider's sister company EMARKETER, said. "It's about expanding its ecosystem reach at a crucial moment when it's rolling out Apple Intelligence and revamping Siri," Bourne added.
    As Apple enters its AI era, with Apple Intelligence previously only available on its priciest models, the new iPhone 16e gives more consumers a chance to test the new tech. Although the software has yet to move the needle for iPhone sales, lowering the price would encourage people to hop on the bandwagon, Forrester analyst Dipanjan Chatterjee said. "Apple's brand of accessible luxury gets a little more accessible for people who don't want to settle for anything less than the real thing," he told BI.
    Revenue missed analyst estimates in Apple's first quarter for fiscal year 2025. A more affordable iPhone will be especially crucial in key regions, like India, "where iPhones are out of reach for most people" and Android competition is fierce, Chatterjee said.
    Some "leakage" from the sale of pricier models in the iPhone 16 lineup is to be expected, but Chatterjee said he doesn't expect it to "cannibalize the crown jewels," the Pro and Pro Max models.
    "This is about growing its market share, which becomes increasingly vital as Apple shifts toward a services-led growth strategy that depends on exposure," Bourne said. Apple's services business, which includes paid subscriptions, is performing well. It grew revenue 14% year over year to reach a record $26.3 billion in Q1 FY25. Preorders for the iPhone 16e start Friday, and it'll be available for purchase on February 28.
    The iPhone 16e is the "on-ramp" for customers who want the status symbol of an iPhone without spending up to $1,000, Chatterjee said. The iPhone 16e has a 6.1-inch display, the same size as the $799 iPhone 16. For hardware, it has Apple's own in-house cellular modem, the A18 processor, and a USB-C charging port.
    Last week, the iPhone 14 and SE models were the last phones with the Lightning port available on Apple's website, but it looks like the site has been updated to offer only USB-C compatible models. Bloomberg's Mark Gurman reported that the models were discontinued, signaling the end of an era of new phones without Apple Intelligence or USB-C chargers.
  • What's it like to be neighbors with Mark Zuckerberg or the late Steve Jobs? Expect to get shown up at Halloween.
    www.businessinsider.com
    What's it like being neighbors with Silicon Valley's elite? Pro Football Hall of Famer Steve Young has some fond memories from his time living in Palo Alto. He talked about being neighbors with the late Steve Jobs, Mark Zuckerberg, and Larry Ellison on a recent podcast episode.
    Steve Young made a big name for himself in pro football, but on his old block in Silicon Valley, his neighbors were the talk of the town, especially on Halloween. The former San Francisco 49ers quarterback used to count the late Apple cofounder Steve Jobs, Meta CEO Mark Zuckerberg, and Oracle cofounder Larry Ellison as neighbors while previously living in Palo Alto, California.
    "Everywhere you went you were around people that were transforming the world, and so as a player, as an athlete, you always feel like you're a little bit of an impostor in that world because I don't have a product, I don't have this great idea, I just go play a game," Young said on an episode of "In Depth with Graham Bensinger" released earlier this month.
    Surrounded by a who's who of Silicon Valley, Young said he felt a sense of "appreciation and kind of honor for the amazing things that are happening around me and seeing if I could play a small role in catching up and learning about it."
    Young recalled an interaction where he ran into Ellison and Jobs in the neighborhood. Ellison remembered Young from their time playing pickup basketball together, but Young didn't immediately recognize Ellison. And Jobs didn't remember Young, though they'd met five or six times before, Young said.
    "I just thought it was a funny interaction where nobody knew each other, we're all neighbors, and that's the insanity of Silicon Valley in 1995 and 1998 and 2000," he said. "It was a crazy, crazy time. The whole world now spins off of what happens here in many ways in technology."
    Young talked about walking around the neighborhood and seeing Jobs at work in his home office. "He'd just be working, doing his thing, and that's what I mean: that's the neighborhood," he said.
    As for Zuckerberg, he "lives in a normal little house," Young said. Zuckerberg years ago spent over $30 million to buy four homes near his Palo Alto house for privacy. He's also snapped up property at Lake Tahoe and has a massive compound in Hawaii.
    Young remembered Zuckerberg showing him up on Halloween. "He used to give out huge, giant Nestlé Crunch bars," he said. "You're like bro, why are you making me look bad, quit trying to shame me, this is like neighbor shame."
    Throughout the neighborhood, "Halloween night here is happening," Young said. "This whole block shuts down," he said. "It's a block party, and thousands of people come from all over the peninsula, and really all over northern California, to be in these four, five blocks."
    Jobs, former Yahoo CEO Marissa Mayer, and late casino magnate Sheldon Adelson's stepdaughter are among Silicon Valley's wealthy residents who have hosted Halloween parties for the public. After Jobs' passing in 2011, his widow, Laurene Powell Jobs, carried on their tradition of putting on a Halloween show for trick-or-treaters.
    Young no longer lives in the same house but looks back on his residence there as a pivotal time for tech. "In Palo Alto, if you play football, pretty much nobody knows," he said. "A number of times, people have knocked on my door and say, 'Hey, does Mark Zuckerberg live nearby?' And I'm like, 'Yeah, yeah, just keep going that way.'"