  • WWW.MACWORLD.COM
Apple users are ditching the AirTag for this $30 alternative, but why?
    Apple's AirTag does a fine job tracking stuff, but it just can't fit into a wallet. That's where this slim, credit card-sized tracker comes in. It works with Apple's Find My app just like an AirTag, but without the bulky round shape poking out of your pocket.
    It's called the KeySmart SmartCard, and it offers a sleek alternative to Apple's tracker, complete with Qi wireless charging, a built-in lanyard slot, and a higher waterproof rating. Right now, you can grab a 3-pack for $89.99 with free shipping: that's $30 each, about the same price as the AirTag, but with a wallet-friendly design that may suit your needs better.
    This smart card slips right into your wallet and shows its location on Apple's Find My network. You'll get helpful features like left-behind alerts, audible location pings, and real-time tracking across millions of Apple devices. Bonus: while AirTags require additional purchases for keychains or straps, this tracker already has a slot for lanyards, badge reels, or anything else you want to clip it to.
    You can get three of these AirTag alternatives for $89.99 with free shipping. Keep one for yourself and share the others in Easter baskets, or use them in your passport, luggage, pet carriers, or anywhere else you need tracking.
    KeySmart SmartCard Works with Apple Find My (3-Pack). See Deal. StackSocial prices are subject to change.
  • GAMINGBOLT.COM
    Mario Kart World Showcases Nearly 40 Minutes of New Gameplay
    Nintendo formally unveiled Mario Kart World at its Switch 2 Direct earlier in the week, and though the wait for a Mario Kart 8 successor has been a long one, it's not going to drag on much longer. Mario Kart World will release in June as a Switch 2 launch title, and ahead of its release, Nintendo isn't being stingy with the amount of gameplay footage it is showcasing.
    Hot on the heels of the extensive gameplay showcased during Thursday's Treehouse livestream, Nintendo followed up today with more of the same. Close to 40 minutes of new gameplay footage have been revealed for Mario Kart World in total, showing off more of its Grand Prix mode, some of the many tracks and characters that will be featured in the game, how it will incorporate GameChat features, and more. You can check out the gameplay below.
    Mario Kart World will launch exclusively for the Nintendo Switch 2 on June 5. The game will be sold at a price of $79.99.
  • WWW.YOUTUBE.COM
    Audio Visualizer Masterclass
  • WWW.THEVERGE.COM
Trump's TikTok delay is against the law, top Senate Intelligence Democrat says
    President Donald Trump's additional 75-day delay to TikTok's sale-or-ban deadline leaves service providers like Apple, Google, and Oracle on shaky ground, and, according to one influential Democrat, is straight-up against the law.
    After Trump announced the extension on Friday, 12 Republican members of the House Select Committee on China, including Chair John Moolenaar (R-MI), released a joint statement in response. The statement did not address legal concerns with the second extension, but it said that any resolution must ensure that U.S. law is followed and that the Chinese Communist Party does not have access to American user data or the ability to manipulate the content consumed by Americans. The letter says signatories look forward to more details on a proposed deal.
    In a separate statement, three Republican members of the House Energy and Commerce Committee, including Chair Brett Guthrie (R-KY), struck a similar note, saying that "any deal must finally end China's ability to surveil and potentially manipulate the American people through this app."
    Senate Intelligence Committee Vice Chair Mark Warner (D-VA) was more critical in a phone interview with The Verge. "The whole thing is a sham if the algorithm doesn't move from out of Beijing's hands," Warner said. "And close to 80 percent of Republicans knew this was a national security threat. Will they find their voice now?"
    Trump signed an executive order on his first day in office delaying enforcement of the TikTok divestiture law, a move legal experts already found questionable. Then he failed to announce a deal before the new April 5th deadline amid chaos over new global tariffs. Letting the delay expire would have put US companies that serviced TikTok after the deadline at even greater risk of hefty penalties.
    The original Protecting Americans from Foreign Adversary Controlled Applications Act was passed with overwhelming bipartisan support to address what lawmakers insisted was a pressing national security threat, then upheld by the Supreme Court in January. TikTok has long denied that the Chinese government could access US user data or put its thumb on the scales of the recommendation feed through ByteDance, but many lawmakers have consistently doubted that defense. As the Trump administration has opted to effectively ignore the law, however, Congress has been relatively quiet.
    A few Senate Democrats, including Ed Markey (D-MA), recently warned Trump that another extension would only introduce more legal uncertainty, and some expressed doubt that some of the reported deal scenarios could even resolve the app's legal concerns. In a statement after Trump's second extension, Markey said that while he'd like to see the deadline pushed, "Trump's unilateral extension is illegal and forces tech companies to once again decide between risking ruinous legal liability or taking TikTok offline." He called the move unfair to those companies and unfair to TikTok's users and creators. Instead, Trump should go through Congress to pass Markey's bill to extend the deadline, he says. Rep. Ro Khanna (D-CA), a member of the China Committee who has criticized the law and warned it will harm free expression and creators' livelihoods, also wants to see a solution go through Congress, but is seeking a full repeal of the law. Still, he called Trump's delay "a good step."
    The new statements from China Committee and Energy and Commerce Republicans appear to be the first coordinated moves to put a firm line in the sand on the topic. Some Republicans who support the divest-or-ban law have previously urged Trump's compliance in one-off statements or writings. Moolenaar previously warned in an op-ed that an adequate deal must fully break ties with ByteDance, after reports that Trump was considering a deal with Oracle that would potentially leave some ties intact. Sen. Josh Hawley (R-MO) told reporters earlier this week that if a deal doesn't comply with the statute, he would advise the president against it. If Trump can't get a deal to sell the company in a way that fully complies, Hawley thinks he ought to enforce the statute and ban TikTok: "This middle way, I don't think, is viable."
    Warner maintains that lawmakers want a TikTok sale that keeps the app in the US, and he says the Biden administration should have been more aggressive in getting negotiations started. He remains concerned that TikTok's ownership structure could allow a foreign adversary government to influence young Americans. "During the negotiations, we saw the enormous bias in TikTok on things like the Uyghurs, the Hong Kong protests, the conflict in Gaza," says Warner. "That was how we got 80 percent of the vote." Warner says he remains concerned about the security of US user data but sees the potential for TikTok to be used to shape public opinion as the more serious threat. Still, lawmakers seem unlikely to do much beyond (maybe) trying to pass a new law should Trump continue to flout the existing one. "Congress," says Hawley, "we don't have an enforcement arm of our own."
  • WWW.MARKTECHPOST.COM
    This AI Paper Introduces a Short KL+MSE Fine-Tuning Strategy: A Low-Cost Alternative to End-to-End Sparse Autoencoder Training for Interpretability
    Sparse autoencoders are central tools for analyzing how large language models function internally. Translating complex internal states into interpretable components allows researchers to break down neural activations into parts that make sense to humans. These methods support tracing logic paths and identifying how particular tokens or phrases influence model behavior. Sparse autoencoders are especially valuable for interpretability applications, including circuit analysis, where understanding what each neuron contributes is crucial to ensuring trustworthy model behavior.
    A pressing issue with sparse autoencoder training lies in aligning training objectives with how performance is measured during model inference. Traditionally, training uses mean squared error (MSE) on precomputed model activations. However, this doesn't optimize for cross-entropy loss, which is what judges performance when reconstructed activations replace the originals. This mismatch results in reconstructions that perform poorly in real inference settings. More direct methods that train on both MSE and KL divergence solve this issue, but they demand considerable computation, which limits their adoption in practice.
    Several approaches have attempted to improve sparse autoencoder training. Full end-to-end training combining KL divergence and MSE losses offers better reconstruction quality, but it comes with a high computational cost, up to 48x higher, due to multiple forward passes and a lack of activation amortization. An alternative involves using LoRA adapters to fine-tune the base language model around a fixed autoencoder. While efficient, this method modifies the model itself, which isn't ideal for applications that require analyzing the unaltered architecture.
    An independent researcher from DeepMind has introduced a new solution that applies a brief KL+MSE fine-tuning step at the tail end of training, specifically for the final 25 million tokens, just 0.5-10% of the usual training data volume. The models come from the Gemma team and the Pythia project. The approach avoids altering the model architecture and minimizes complexity while achieving performance similar to full end-to-end training. It also allows training-time savings of up to 90% in scenarios with large models or amortized activation collection, without requiring additional infrastructure or algorithmic changes.
    To implement this, training begins with standard MSE on shuffled activations, followed by a short KL+MSE fine-tuning phase. This phase uses a dynamic balancing mechanism to adjust the weight of KL divergence relative to MSE loss. Instead of manually tuning a fixed parameter, the system recalculates the KL scaling factor per training batch. The formula ensures the total combined loss maintains the same scale as the original MSE loss. This dynamic control removes the need for additional hyperparameters and simplifies transfer across model types. Fine-tuning is executed with a linear decay of the learning rate from 5e-5 to 0 over the 25M-token window, aligning the process with practical compute budgets and preserving sparsity settings from earlier training.
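    The article describes this balancing only at a high level, so the snippet below is a minimal PyTorch sketch of what the per-batch rescaling and the fine-tuning schedule might look like. The exact scaling formula, the 0.5 averaging, the eps guard, and the names sae and num_steps are illustrative assumptions, not the paper's published code.

```python
import torch

def balanced_kl_mse_loss(mse_loss: torch.Tensor, kl_loss: torch.Tensor,
                         eps: float = 1e-8) -> torch.Tensor:
    # Recompute the KL coefficient from detached per-batch values so the
    # weighted KL term matches the MSE term's magnitude; no fixed balancing
    # hyperparameter has to be tuned by hand (our assumption about how the
    # paper's dynamic balancing works).
    kl_scale = mse_loss.detach() / (kl_loss.detach() + eps)
    # Average the two equally scaled terms so the combined loss stays on the
    # same scale as plain MSE training.
    return 0.5 * (mse_loss + kl_scale * kl_loss)

# Fine-tuning phase: decay the learning rate linearly from 5e-5 to 0 over
# the ~25M-token window (num_steps depends on batch size and context length).
# optimizer = torch.optim.Adam(sae.parameters(), lr=5e-5)
# scheduler = torch.optim.lr_scheduler.LinearLR(
#     optimizer, start_factor=1.0, end_factor=0.0, total_iters=num_steps)
```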
    Performance evaluations show that this approach reduced the cross-entropy loss gap by 20% to 50%, depending on the sparsity setting. For example, on Pythia-160M with K=80, the KL+MSE fine-tuned model performed slightly better than a full end-to-end model while requiring 50% less wall-clock time. At higher sparsity (K=160), the fine-tuned MSE-only model achieved similar or marginally better outcomes than KL+MSE, possibly due to the simplicity of the objective. Tests with LoRA and linear adapters revealed that their benefits do not stack, as each method corrects a shared error source in MSE-trained autoencoders. Even very low-rank LoRA adapters (rank 2) captured over half the performance gains of full fine-tuning.
    Although cross-entropy results consistently favored the fine-tuned method, interpretability metrics showed mixed trends. On SAEBench, ReLU-based sparse autoencoders saw improvements in sparse probing and RAVEL metrics, while performance on spurious-correlation and targeted-probe tasks dropped. TopK-based models showed smaller, more inconsistent changes. These results suggest that fine-tuning may yield reconstructions better aligned with model predictions but may not always enhance interpretability, depending on the specific evaluation task or architecture type.
    This research underscores a meaningful advancement in sparse autoencoder training: a computationally light, technically simple method that improves reconstruction accuracy without modifying base models. It addresses key alignment issues in training objectives and delivers practical results across models and sparsity levels. While not uniformly superior in all interpretability metrics, it offers a favorable trade-off between performance and simplicity for tasks like circuit-level analysis.
    Check out the Paper. All credit for this research goes to the researchers of this project.
  • WWW.CNET.COM
    Marvel Rivals Season 2: Here Are Emma Frost's Abilities
    Emma Frost is joining the Rivals roster in season 2 as a vanguard with a set of abilities that change depending on her form.
  • WWW.EUROGAMER.NET
    What we've been playing - office nightmares, games with kids, and Tetris building games
    A few of the things that have us hooked this week. Feature by Robert Purchese, Associate Editor, with additional contributions by Tom Orry. Published on April 5, 2025. Image credit: Eurogamer / Galactic Cafe.
    Hello and welcome back to our regular feature where we write a little bit about some of the games we've been playing. This week, Bertie gets his starchiest white shirt on and descends into the corporate purgatory of The Stanley Parable, cleansing himself in the nearest river in between, while Tom O both dips back into Avowed and tries Split Fiction with his son.
    What have you been playing? Catch up with the older editions of this column in our What We've Been Playing archive.
    The Stanley Parable: Ultra Deluxe, PC
    Even the trailers for The Stanley Parable are genius.
    I have a remarkable ability to allow things to completely pass me by - just whoosh! they're gone. Perhaps I'll do a masterclass on it one day. "One simply turns their head in the opposite direction and hey presto, the world passes them by." Oh, now I've given it away.
    It's how I found myself only this week playing seminal 2013 indie banger The Stanley Parable, partially because Lottie loves it and appears as if summoned whenever someone even slightly references it, and also because I was told it was a lot like Severance, the TV show. And it is, by the way - it's exactly like Severance, but I ended up writing a thing about that which I don't want to echo here.
    What I wanted to say here was that The Stanley Parable absolutely holds up. It feels as refreshing and magical and intelligent to me in 2025 as I expect it did to people in 2013 - as I know it did to people in 2013, because they wouldn't stop going on about it. And I don't think that's a given. A game like that has to work harder in 2025, partially because of the legacy that precedes it, partly because it's aging, and partly because there are that many more intelligent indie games around it. That The Stanley Parable should still shine so bright is borderline remarkable.
    -Bertie
    Avowed, Xbox Series X / Split Fiction, PS5 Pro
    Split Fiction is great. The end.
    If there's an annoyance I have with Avowed (other than the ghosting visuals when moving the camera), it's the difficulty I have working out where to go to get to where I want to be. I've started taking on some side quests, having been focusing mainly on the central storyline, but it's thrown some map markers way out beyond where I've been before. I often head over in the direction of a marker, only to find that the door I go through takes me to an area that is locked off from where I need to go, so I end up going down blind alleys and retracing my steps.
    My solution has been to make a quick exit to the open world outside of whichever built-up area I'm in, then go what I presume is the long way round to the marker. I know GPS-style navigation isn't everyone's cup of tea, but I feel Avowed would benefit from having it.
    I've also put a couple of hours into Split Fiction alongside my son. I wasn't really sold on the art direction in the pre-release footage, it coming across as a little bit generic, but in the heat of the action it's perfectly solid and changes things up frequently. We managed to complete the pig section just before we called it a night, and it was a lot of fun, if rather traumatic. I'm not sure what was more disturbing, though: the twerking pig or the meat grinder. But we're looking forward to seeing what happens next.
    -Tom O
    River Towns, PC
    River Towns is much more like Tetris than I realised.
    I've had this one earmarked for a while, but I was completely wrong - I now realise - about what kind of game it would be. What I thought it would be was a cosy building game about making settlements alongside rivers, and it sort of is that, but much more immediately it's a kind of Tetris game. And I like that, because I know that.
    The premise is really simple: lay down pieces of a town, shaped as Tetrominoes are, with the hope of fitting them perfectly into a confined space by the side of a river. Place them perfectly and you'll get a big score. It really is that simple. A town literally pops into being, as in a pop-up picture book, as you lay the pieces down.
    You follow a dried-out river across an overland map, gradually restoring life and activity as you go, and the complexity gently increases. I now have two types of building - two factions, sort of - that I need to place away from each other so as not to cross-contaminate, and I get bonus points at the end for whichever faction is larger. It's like playing against myself, which is odd.
    I've only had a brief blast at it, so how it develops from here I don't know, but the introduction is strong - immediate and tactile and pleasant. I like it.
    -Bertie