• Microsoft Recall is capturing screenshots of sensitive information like credit card and social security numbers
    www.techspot.com
    WTF?! Microsoft recalled Recall because of privacy outrage, er, concerns. It promised to improve its AI-based Windows surveillance feature before release, providing privacy safeguards and a more secure experience. Now that it is here, users can assess for themselves how much Microsoft's promises are worth. After multiple delays and afterthoughts, Microsoft is now bringing Recall to more systems and CPU architectures. The feature takes screenshots of the desktop every few seconds, using an on-device large language model to scan, store, and process the information. In theory, Recall should work as a fine-tuning machine for Copilot's GPT-4o AI model. However, the new technology is an absolute mess of privacy violations and security dangers.

    Tom's Hardware tested the "improved" Recall feature and recommended that every Windows 11 user disable it immediately. While Recall includes a filter designed to avoid capturing screenshots with sensitive information, it doesn't really work. Despite the filter being active, Recall senselessly captured screens containing credit card numbers, credentials, Social Security numbers, and other personal information. Recall saved everything it saw while the Notepad text editor was in use. The same thing happened when opening a PDF in the Edge browser and entering information in an HTML form asking for credit card details.

    Recall's filter worked as intended only while visiting online web stores, taking screenshots just before or after the credit card form. The AI surveillance machine provides "full control" of the feature, meaning users can check which screenshots it saves and when. However, the idea that Recall saves credit card details and other extremely sensitive information to feed AI model training tasks is frightening and unnecessary. At this point, every privacy-conscious customer should worry about what Microsoft has done to its traditionally user-centric Windows platform. There is no good reason for this to be an opt-out feature.
    Tom's Hardware's Avram Piltch asked Microsoft about Recall's apparent inability to filter private information from its saved screenshots. The company reminded Piltch that Recall is a privacy-abiding feature, updated to detect sensitive information such as credit card details, passwords, and personal identification numbers. Microsoft says its developers are still improving the feature, and it urges concerned users to help with development by sharing their experience through the Feedback Hub.
  • Valve is changing how Steam downloads game updates
    www.techspot.com
    Something to look forward to: A recently released beta version of the Steam client introduces a new feature for managing game updates. Users can now postpone compulsory downloads, a long-awaited improvement that will benefit those with monthly download quotas or bandwidth restrictions.

    Steam is giving gamers more control over updates for installed titles. As operator of the largest PC gaming platform, Valve is moving beyond the traditional approach to updates, aiming to balance instant downloads with more efficient use of available internet bandwidth.

    Valve detailed changes to Steam's update policy, introducing new general and per-game options. For now, the main (stable) Steam client still uses the traditional policy, where updates for recently played games are downloaded shortly after release. For games that haven't been played in a while, Steam "might" defer downloads for a few days, bundling multiple updates together.

    Users currently have the option to schedule downloads for times when their PCs are likely to be off, a strategy I often use to avoid being forced into updating games I'm not ready to play. Valve acknowledges, however, that the default behavior isn't always ideal. For example, some users may want to delay updates for massive games, such as those requiring "200GB," until they're prepared to allocate the time and bandwidth.

    The latest Steam beta introduces a new option in the "Downloads" section of the client, giving users more control over updates. You can now choose to let Steam decide when to apply updates or defer them until the game is launched again. For games with specific update settings, a new control under the "Exceptions" management menu allows you to customize the download behavior individually.

    However, installing updates at launch time risks making Steam feel more like a console, a shift PC gamers are unlikely to welcome.
    Console players on PS5 or Xbox are all too familiar with the frustration of waiting for system updates and game patches before they can start playing. Valve hasn't provided a timeline for rolling out these download management features to the stable Steam client. Meanwhile, modern AAA games continue to demand massive downloads, often hundreds of gigabytes, and the trend shows no signs of slowing down.
  • Sifu developer's next game is absolutely not what you're expecting
    www.digitaltrends.com
    REMATCH | Reveal Trailer

    Sloclap, the developer of the critically acclaimed action game Sifu, just announced its next title at The Game Awards 2024: Rematch. We had a chance to see Rematch in action ahead of its announcement, and it's safe to say that this isn't the follow-up to Sifu that I was expecting. Instead, Rematch is a 5v5 multiplayer-only soccer game.

    That might come as a shock at first, but upon seeing Rematch in action, I understood that some of Sloclap's action game roots still show through here. Finding the middle ground between Rocket League and EA Sports FC, this is a flashy sports game where players control their soccer players from a third-person perspective and don't have to worry about penalties as they bash into other players, bounce the soccer ball off walls, and more.

    I'm all for sports games that don't fall into the typical simulation game conventions. As such, I could see myself heading onto the pitch with some friends when Rematch enters beta next year.

    Soccer games like EA Sports FC 25 typically play from a top-down view, follow league rules, and let players control different members of their team as they pass the ball between them. Rematch works differently, putting players in control of a single athlete from a third-person perspective the whole time they are on the pitch. Matches see two teams of five players face off in a frenetic soccer match where the game's flow is never broken.

    My early look at Rematch made it clear that the game isn't going for realism. While its handcrafted animations are impressive and mimic the look and feel of real-life soccer players, Rematch sports a vibrant cel-shaded art style.
    While it follows the basic rules and tenets of soccer, players can perform special Volley Actions like bicycle kicks to blast the ball forward without worrying about penalties, corner kicks, or kicking the ball out of bounds.

    This is where Rocket League's influence on Rematch rears its head. This is a soccer game about getting the ball into the opponent's goal by any means necessary, even if that requires constantly bouncing the ball off the wall or aggressively stealing it from another player. Creative director Pierre Tarno tells Digital Trends that friendly team play is required to do well. Though there are dribble mechanics, almost always when you have the ball, the best option you have is to pass it to one of your teammates, because if an opponent gets close, you're going to get tackled.

    Sloclap plans on making Rematch a paid title rather than free-to-play. That said, Sloclap is confidently pitching Rematch as a multiplayer experience. While this may seem like whiplash coming off Sifu, it harks back to Absolver. That was Sloclap's first multiplayer game, an action title where players could get into fistfights with others online. As Tarno discussed Rematch, it became apparent that Sloclap has a clear vision for how it wants to run Rematch as an online game and is actively dealing with the challenges of designing such a title.

    "It's very technically complex to have a 60 [frames-per-second] online game with 10 players on the field," Tarno explained. "The ball, you see it very distinctly, so every time there's something that goes wrong in the network that's not synchronized, you perceive it all the more. A lot of effort goes into server tick rates, synchronization, and making sure that, until a certain level of ping, things always feel fair and everybody sees an action that is coherent and the same for everyone."

    Tarno says players can expect Rematch to have 60 frames-per-second gameplay, features like custom lobbies, and seasons of post-launch support.
    Those post-launch seasons come at a cadence similar to real-world soccer seasons and add new modes, arenas, cosmetics, and more to the game. It's definitely not what I would've ever expected Sloclap to make after Sifu, but I think Rematch has a very good chance of resonating with soccer fans worldwide once they can finally get their hands on it.

    Rematch will enter beta on PC, PlayStation 5, and Xbox Series X/S next summer.
  • This Vivo phone has the most exquisite camera design I've seen
    www.digitaltrends.com
    Photos taken in dull and dreary British weather simply don't do the Vivo X200's stunning camera module justice. It's without a doubt one of the best-designed, most visually interesting, and classiest camera modules I've seen on a phone yet. It also looks positively dainty next to the huge camera module on the back of the Vivo X200 Pro.

    These two phones have been given an international launch, and I've had the chance to hold them both. However, I've spent most of my short time with the pair marveling at that camera module.

    What makes the Vivo X200's camera module such a beauty? The polished outer ring sits slightly proud of the phone's rear panel, almost like a watch dial, a theme that continues with the subtle clou de Paris embossed pattern around the edge. The real standout design aspect is how the polished rim curves inward around the edge, encapsulating the glass camera module inside.

    Smooth your finger over the surface and around the edge, and the Vivo X200's camera module feels like no other. The light also catches the curved edge in a unique way. This isn't the result of chance. Someone has thought about the way the camera module could look, and rather than incorporate odd shapes or mismatched lines (I'm looking at you, Honor) to make it unusual, Vivo has made something beautiful. So many brands reference luxury timepieces in their marketing blurb, but Vivo is one of the few that has brought a similar touch of class to its camera, without mentioning watches at all.

    Yes, I'm aware that's a lot of words about a camera module, but I'm not done with the design yet, because the screen is a winner too. The 6.67-inch AMOLED screen has a quad curve, which means it's curved down at each side and at each corner, so it blends in with the frame and rear panel.
    It makes the phone comfortable to hold and also enhances the classy rear panel design and camera module. It's not the first we've seen (Huawei has used the same style in the past, for example), but it really suits relatively compact phones like the X200.

    What about the rest of the X200? It's the entry model in Vivo's new range, but that doesn't make it basic. It uses the MediaTek Dimensity 9400 processor with either 12GB or 16GB of RAM and a 5,800mAh battery with 90-watt charging to cement its high-end credentials. The phone weighs 202 grams in the pretty green color seen in our photos and is both IP68 and IP69 dust- and water-resistant.

    Inside the camera module is a 50-megapixel Sony IMX921 main camera, a 50MP Sony IMX882 telephoto camera, and a 50MP wide-angle camera. Vivo continues its relationship with Zeiss, which provides the optics and various camera modes. There's also a 32MP selfie camera on the front. It's probably clear, but I've fallen for the Vivo X200's design, and the specification looks strong enough to hold my interest when I get the chance to use it for a longer period of time. But what about the X200 Pro?

    The Vivo X200 Pro doesn't share the same exquisite design for its camera module, but it's still interesting. It's much larger than the X200's module, and while the outer rim still has the clou de Paris embossing, the upper section is made from titanium and has a subtle brushed effect. The large black glass module sits proud of the titanium outer section. It still looks good, but it lacks the uniqueness of the X200's module. The screen also has the quad curve shape but is slightly larger at 6.78 inches.

    The cameras' specifications are different, which is why you'd choose this model over the X200.
    The 50MP Sony LYT-818 main sensor has Zeiss optics and is joined by a 200MP Samsung ISOCELL HP9 telephoto camera. It shares the same 50MP wide-angle camera and 32MP selfie camera as the X200. The Vivo X200 Pro uses Vivo's V3+ Imaging Chip to improve low-light and portrait shots while increasing efficiency. The telephoto camera provides a 135mm-equivalent zoom and an 85mm HD Portrait mode, too.

    The MediaTek Dimensity 9400 processor carries over, this time with 16GB of RAM, while the battery capacity has been raised to 6,000mAh and adds 30W wireless charging to the 90W wired charging. Both phones use Vivo's Funtouch OS interface over Android 15 and come with a range of AI features, including Circle to Search and access to Google Gemini.

    When Vivo announced the X200 series in China earlier this year, it added a third model to the range: the Vivo X200 Pro Mini. Unfortunately, it doesn't appear the phone will be released internationally. That phone shares the same Dimensity 9400 processor and a triple 50MP camera system, but the major spec difference is the 6.31-inch screen, making it mini in size, but not performance.

    At the time of writing, Vivo has not confirmed which regions will get the Vivo X200 and Vivo X200 Pro or the approximate prices for both phones. The information will come after the official announcement, and we will update here when we have confirmation. However, expect the phones to be available in India, Asia, and parts of Europe, with the X200 priced somewhere around $620 and the X200 Pro likely around $800, based on the prices in China.
  • Jeff Bezos' Amazon Plans to Donate $1 Million to Trump's Inauguration
    www.wsj.com
    Tech leaders are striving to shore up ties with the incoming administration.
  • The Mystery of Why ChatGPT Couldn't Say the Name David Mayer
    www.wsj.com
    An unlikely enigma attracted an army of internet sleuths and raised important questions about privacy and the future of AI.
  • Carry-On Review: Netflix's Airport Angst-Fest
    www.wsj.com
    Taron Egerton and Jason Bateman are first class in director Jaume Collet-Serra's serviceable holiday thriller about a TSA agent and a terrorist.
  • Kraven the Hunter Review: A Lumbering Spider-Man Spinoff
    www.wsj.com
    Director J.C. Chandor's comic-book movie stars Aaron Taylor-Johnson as the Russian superhuman and Russell Crowe as his criminal father.
  • Are LLMs capable of non-verbal reasoning?
    arstechnica.com
    words are overrated

    Are LLMs capable of non-verbal reasoning? Processing in the "latent space" could help AI with tricky logical questions. Kyle Orland | Dec 12, 2024 4:55 pm

    It's thinking, but not in words. Credit: Getty Images

    Large language models have found great success so far by using their transformer architecture to effectively predict the next words (i.e., language tokens) needed to respond to queries. When it comes to complex reasoning tasks that require abstract logic, though, some researchers have found that interpreting everything through this kind of "language space" can start to cause problems, even for modern "reasoning" models.

    Now, researchers are trying to work around these problems by crafting models that can work out potential logical solutions completely in "latent space," the hidden computational layer just before the transformer generates language. While this approach doesn't cause a sea change in an LLM's reasoning capabilities, it does show distinct improvements in accuracy for certain types of logical problems and points to some interesting directions for new research.

    Wait, what space?

    Modern reasoning models like ChatGPT's o1 tend to work by generating a "chain of thought." Each step of the logical process in these models is expressed as a sequence of natural language word tokens that are fed back through the model.

    In a new paper, researchers at Meta's Fundamental AI Research team (FAIR) and UC San Diego identify this reliance on natural language and "word tokens" as a "fundamental constraint" for these reasoning models. That's because the successful completion of reasoning tasks often requires complex planning on specific critical tokens to figure out the right logical path from a number of options.
    A figure illustrating the difference between standard models going through a transformer after every step and the COCONUT model's use of hidden, "latent" states. Credit: Training Large Language Models to Reason in a Continuous Latent Space

    In current chain-of-thought models, though, word tokens are often generated for "textual coherence" and "fluency" while "contributing little to the actual reasoning process," the researchers write. Instead, they suggest, "it would be ideal for LLMs to have the freedom to reason without any language constraints and then translate their findings into language only when necessary."

    To achieve that "ideal," the researchers describe a method for "Training Large Language Models to Reason in a Continuous Latent Space," as the paper's title puts it. That "latent space" is essentially made up of the "hidden" set of intermediate token weightings that the model contains just before the transformer generates a human-readable natural language version of that internal state.

    In the researchers' COCONUT model (for Chain Of CONtinUous Thought), those kinds of hidden states are encoded as "latent thoughts" that replace the individual written steps in a logical sequence, both during training and when processing a query. This avoids the need to convert to and from natural language for each step and "frees the reasoning from being within the language space," the researchers write, leading to an optimized reasoning path that they term a "continuous thought."

    Being more breadth-minded

    While doing logical processing in the latent space has some benefits for model efficiency, the more important finding is that this kind of model can "encode multiple potential next steps simultaneously."
    Rather than having to pursue individual logical options fully and one by one (in a "greedy" sort of process), staying in the "latent space" allows for a kind of instant backtracking that the researchers compare to a breadth-first search through a graph.

    This emergent, simultaneous processing property comes through in testing even though the model isn't explicitly trained to do so, the researchers write. "While the model may not initially make the correct decision, it can maintain many possible options within the continuous thoughts and progressively eliminate incorrect paths through reasoning, guided by some implicit value functions," they write.

    A figure highlighting some of the ways different models can fail at certain types of logical inference. Credit: Training Large Language Models to Reason in a Continuous Latent Space

    That kind of multi-path reasoning didn't really improve COCONUT's accuracy over traditional chain-of-thought models on relatively straightforward tests of math reasoning (GSM8K) or general reasoning (ProntoQA). But the researchers found the model did comparatively well on a randomly generated set of ProntoQA-style queries involving complex and winding sets of logical conditions (e.g., "every apple is a fruit, every fruit is food," etc.).

    For these tasks, standard chain-of-thought reasoning models would often get stuck down dead-end paths of inference or even hallucinate completely made-up rules when trying to resolve the logical chain. Previous research has also shown that the "verbalized" logical steps output by these chain-of-thought models "may actually utilize a different latent reasoning process" than the one being shared.

    This new research joins a growing body of work looking to understand and exploit the way large language models operate at the level of their underlying neural networks.
    And while that kind of research hasn't led to a huge breakthrough just yet, the researchers conclude that models pre-trained with these kinds of "continuous thoughts" from the get-go could "enable models to generalize more effectively across a wider range of reasoning scenarios."

    Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.
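    The mechanical difference the paper describes, feeding hidden states straight back into the model instead of collapsing each step to a word token and re-embedding it, can be sketched in a few lines. This is a hypothetical toy, not the FAIR team's code: the random matrices stand in for a trained transformer, and all function names here are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    HIDDEN, VOCAB = 8, 20  # toy sizes: hidden-state width, vocabulary size

    # Random stand-ins for trained weights (purely illustrative).
    W_step = rng.normal(size=(HIDDEN, HIDDEN)) * 0.5  # one "transformer" update
    W_head = rng.normal(size=(HIDDEN, VOCAB))         # hidden state -> token logits
    E = rng.normal(size=(VOCAB, HIDDEN))              # token embedding table

    def transformer_step(h):
        # One toy reasoning step over the hidden state.
        return np.tanh(h @ W_step)

    def chain_of_thought(h, steps):
        # Standard CoT: verbalize every step, then re-embed the chosen token.
        # The argmax collapses the hidden state to one discrete token,
        # discarding information about all the alternative next steps.
        for _ in range(steps):
            h = transformer_step(h)
            token = int(np.argmax(h @ W_head))
            h = E[token]
        return h

    def continuous_thought(h, steps):
        # COCONUT-style: feed the hidden state back in directly, so the
        # blend of possible next steps survives from step to step.
        for _ in range(steps):
            h = transformer_step(h)
        return h

    h0 = rng.normal(size=HIDDEN)
    print(chain_of_thought(h0, 3))
    print(continuous_thought(h0, 3))
    ```

    The contrast is the argmax-and-re-embed pair of lines: in the latent-space approach they simply never run, which is what lets the model keep several candidate reasoning paths alive at once rather than committing to one token per step.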
  • Character.AI steps up teen safety after bots allegedly caused suicide, self-harm
    arstechnica.com
    AI teenage wasteland?

    Character.AI steps up teen safety after bots allegedly caused suicide, self-harm. Character.AI's new model for teens doesn't resolve all of parents' concerns. Ashley Belanger | Dec 12, 2024 4:15 pm

    Credit: Marina Demidiuk | iStock / Getty Images Plus

    Following a pair of lawsuits alleging that chatbots caused a teen boy's suicide, groomed a 9-year-old girl, and caused a vulnerable teen to self-harm, Character.AI (C.AI) has announced a separate model just for teens, ages 13 and up, that's supposed to make their experiences with bots safer.

    In a blog, C.AI said it took a month to develop the teen model, with the goal of guiding the existing model "away from certain responses or interactions, reducing the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content."

    C.AI said that "evolving the model experience" to reduce the likelihood of kids engaging in harmful chats (including bots allegedly teaching a teen with high-functioning autism to self-harm and delivering inappropriate adult content to all kids whose families are suing) required tweaking both model inputs and outputs.

    To stop chatbots from initiating and responding to harmful dialogs, C.AI added classifiers that should help it identify and filter out sensitive content from outputs. And to prevent kids from pushing bots to discuss sensitive topics, C.AI said that it had improved "detection, response, and intervention related to inputs from all users."
    That ideally includes blocking any sensitive content from appearing in the chat. Perhaps most significantly, C.AI will now link kids to resources if they try to discuss suicide or self-harm, which C.AI had not done previously, frustrating the suing parents, who argue this common practice for social media platforms should extend to chatbots.

    Other teen safety features

    In addition to creating the model just for teens, C.AI announced other safety features, including more robust parental controls rolling out early next year. Those controls would allow parents to track how much time kids are spending on C.AI and which bots they're interacting with most frequently, the blog said.

    C.AI will also notify teens when they've spent an hour on the platform, which could help prevent kids from becoming addicted to the app, as the suing parents have alleged. In one case, parents had to lock their son's iPad in a safe to keep him from using the app after bots allegedly repeatedly encouraged him to self-harm and even suggested murdering his parents. That teen has vowed to start using the app whenever he next has access, while his parents fear the bots' seeming influence may continue causing harm if he follows through on threats to run away.

    Finally, C.AI has bowed to pressure from parents to make disclaimers more prominent on its platform, reminding users that bots are not real people and that "what the model says should be treated as fiction." That's likely a significant change for Megan Garcia, the mother whose son died by suicide after allegedly believing bots that made him feel that was the only way to join the chatbot world that had apparently estranged him from the real world.
    New disclaimers will also make it clearer that any chatbots marked as "psychologist," "therapist," "doctor," or "other similar terms in their names" should not be relied on to give "any type of professional advice."

    Some of the changes C.AI has made will affect all users, including improved detection, response, and intervention following sensitive user inputs. Adults can also customize the "time spent" notification feature to manage their own experience on the platform.

    Teen safety updates don't resolve all parents' concerns

    The suing parents are likely frustrated to see how fast C.AI could work to make the platform safer when it wanted to, rather than testing and rolling out a safer product from the start.

    Camille Carlton, a policy director for the Center for Humane Technology who is serving as a technical expert on the case, told Ars that "this is the second time that Character.AI has announced new safety features within 24 hours of a devastating story about the dangerous design of their product, underscoring their lack of seriousness in addressing these fundamental problems."

    "Product safety shouldn't be a knee-jerk response to negative press; it should be built into the design and operation of a product, especially one marketed to young users," Carlton said. "Character.AI's proposed safety solutions are wholly insufficient for the problem at hand, and they fail to address the underlying design choices causing harm, such as the use of inappropriate training data or optimizing for anthropomorphic interactions."

    In both lawsuits filed against C.AI, parents want to see the model destroyed, not evolved.
    That's because not only do they consider the chats their kids experienced to be harmful, but they also believe it was unacceptable for C.AI to train its model on their kids' chats. Because the model could never be fully cleansed of their data (and because C.AI allegedly fails to adequately age-gate, leaving it unclear how many kids' data was used to train the AI model), they have asked courts to order C.AI to delete the model.

    It's also likely that parents won't be satisfied by the separate teen model, because they consider C.AI's age-verification method flawed. Currently, the only way that C.AI age-gates the platform is by asking users to self-report their ages. For some kids on devices with strict parental controls, accessing the app might be more challenging, but other kids with fewer rules could seemingly access the adult model by lying about their ages. That's what happened in the case of one girl whose mother is suing after the girl started using C.AI when she was only 9, even though it was supposedly only offered to users ages 12 and up.

    Ars was able to use the same email address to attempt to register as a 13-year-old, 16-year-old, and adult, without anything blocking the re-tries. C.AI's spokesperson told Ars that it's not supposed to work that way and said that C.AI's trust and safety team would be notified.

    "You must be 13 or older to create an account on Character.AI," C.AI's spokesperson said in a statement provided to Ars. "Users under 18 receive a different experience on the platform, including a more conservative model to reduce the likelihood of encountering sensitive or suggestive content. Age is self-reported, as is industry-standard across other platforms.
    We have tools on the web and in the app preventing re-tries if someone fails the age gate."

    If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.

    Ashley Belanger is a senior policy reporter for Ars Technica, dedicated to tracking the social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.