• WWW.CREATIVEBLOQ.COM
    How to choose a laptop for video editing: The specs you can't do without
    Choosing a laptop for video editing is a little different from choosing one for general tasks, gaming or productivity.
  • WWW.WIRED.COM
    As Summer Approaches, Federal Cuts Threaten Program to Keep Vulnerable People Cool
    Some $380 million is now in limbo after reductions in the federal workforce affected the staff who run a program helping low-income people pay their energy bills.
  • APPLEINSIDER.COM
    Deals: Amazon drops Apple's 14-inch MacBook Pro with M4 Pro chip to $1,829
    Apple's latest 14-inch MacBook Pro equipped with an M4 Pro chip is on sale for $1,829 this weekend at Amazon, with total discounts on the range delivering up to $493 in savings. Pick up the standard 14-inch MacBook Pro with an M4 Pro chip in Space Black at the discounted price of $1,829, a savings of $170 off MSRP. This markdown on the current model delivers the lowest online price across popular Apple resellers.
  • EN.WIKIPEDIA.ORG
    Wikipedia picture of the day for April 20
    Trou au Natron is a volcanic caldera in the Tibesti Massif in northern Chad. The volcano is extinct, and it is unknown when it last erupted. Trou au Natron is located just south-east of Toussidé, the westernmost volcano of the Tibesti Mountains. The caldera has an irregular diameter of approximately 6 to 8 kilometres (4 to 5 miles) and is up to 1,000 metres (3,300 ft) deep. Because of its irregular shape, it has been theorized that the caldera was formed as a result of multiple massive explosions, each of which deepened the enormous pit. Its exact period of formation is unconfirmed, although a Pleistocene formation has been suggested. Much of the surface of the caldera is lined with a white crust of carbonate salts such as sodium carbonate and natrolite, known as natron, leading to the caldera's name, literally 'hole of natron' in French. This crust is sometimes known as the Tibesti Soda Lake. Both the slopes and the floor of the caldera contain thick layers of fossilized aquatic gastropods and diatoms, indicating that it was once home to a deep lake. This satellite image of Trou au Natron was taken in 2008 from the International Space Station, at an altitude of around 352 kilometres (219 miles). The white crust can be seen at the bottom of the caldera. Photograph credit: NASA.
  • EN.WIKIPEDIA.ORG
    On this day: April 20
    April 20: Easter (Christianity, 2025); first day of Ridván (Baháʼí Faith, 2025); 420 (cannabis culture)
    1535 – Sun dogs were observed over Stockholm, Sweden, inspiring Vädersolstavlan, the oldest coloured depiction of the city.
    1657 – Anglo-Spanish War: The English navy sank much of a Spanish treasure fleet at the Battle of Santa Cruz de Tenerife off the Canary Islands, but was unable to capture the treasure.
    1968 – Pierre Trudeau was sworn in as prime minister of Canada, succeeding Lester B. Pearson.
    2004 – An incomplete tunnel leading to the Nicoll Highway MRT station in Singapore collapsed, resulting in four deaths and the station's relocation.
    2010 – An explosion on Deepwater Horizon, an offshore rig in the Gulf of Mexico, resulted in the largest marine oil spill in history.
    Births and deaths: William Bedloe (b. 1650), David Brainerd (b. 1718), Frances Ames (b. 1920), Kojo Laing (d. 2017)
  • WWW.THEVERGE.COM
    In Haste, you gotta go fast
    Haste: Broken Worlds takes the relaxing loop of sliding down and leaping off hills from the iOS classic Tiny Wings and turns it into a thrilling, high-speed 3D roguelike.

    In Haste, you play as Zoe, a girl who typically delivers letters but has found herself mysteriously transported to the new worlds you run through. When I say run, I mean it: Zoe cannons through the game. Levels are filled with rolling hills, and your goal is to leap off the upslopes and land on the downslopes. The better your landings, the more you'll increase your speed and build a boost meter that can be spent on things like a burst forward or a grappling hook.

    You're incentivized to keep your speed up. The faster you complete a level, the higher the grade you'll get, and higher grades give better bonuses, such as "sparks" you can use to buy items. Throughout the vibrant, procedurally generated levels, you'll also have to avoid obstacles like rocks, giant Sarlacc-like pits, and machines that shoot lasers and bullets at you. If you crash into an object, you'll lose health and slow down. If you're too slow, a crackling, damaging energy will sneak up behind you. If you run out of health, you'll lose a life. Lose all your lives, and your run will end.

    There are some familiar roguelike mechanics in Haste, like a currency you can spend between runs to permanently improve things like your health and boost meters. During runs, you'll also pick each stage on a map. Haste has a great map feature that lets you pick some or all of your route in advance so that you can play a series of levels without interruption. It really helps with the momentum.

    When things are going well, it's easy to reach an amazing flow state where you're leaping from hills at breakneck speed, timing your landings and boosts, dodging obstacles in your path, and racing to the end of every level as fast as you can. But at the end of each zone (technically a "shard"), there's a large boss, like a giant flying snake, and these fights were often total buzzkills. Boss levels force you to run through more open-ended zones instead of just blasting forward, and the game just isn't as fun, and feels tougher, in those open environments. Thankfully, you can change the game's difficulty level.

    The game's storytelling messes up the flow, too. You'll meet a bunch of characters, and while their personalities are fun and their character portraits are well illustrated, they often show up in lengthy, forced conversations between levels. For a game called Haste, there are a lot of long chats!

    Those quibbles only detracted a little from how much I enjoyed Haste. At its best, it keeps you on the edge of your seat as you leap up and down from hills and careen through worlds. But when the game changes pace, it can be frustrating. I just want to go fast. Haste: Broken Worlds is now available on PC.
  • WWW.MARKTECHPOST.COM
    NVIDIA Introduces CLIMB: A Framework for Iterative Data Mixture Optimization in Language Model Pretraining
    Challenges in Constructing Effective Pretraining Data Mixtures

    As large language models (LLMs) scale in size and capability, the choice of pretraining data remains a critical determinant of downstream performance. Most LLMs are trained on large, web-scale datasets such as Common Crawl, which provide broad coverage but lack explicit domain labels. This introduces difficulties in curating mixtures that balance general knowledge with domain-specific expertise. Manual dataset curation, as seen in efforts like The Pile, is labor-intensive and does not scale well. Moreover, the nonlinear relationship between data composition and model performance makes it non-trivial to determine what proportions of domain data are optimal. These constraints motivate the need for automated, scalable, and adaptive data selection methods.

    CLIMB: An Iterative Framework for Data Mixture Discovery

    To address this, NVIDIA researchers propose CLIMB (CLustering-based Iterative Data Mixture Bootstrapping), a framework that automates the discovery and refinement of data mixtures for language model pretraining. CLIMB combines unsupervised clustering with iterative optimization to identify mixtures that are well suited for general or domain-specific objectives.

    The pipeline begins by embedding large-scale text data into a semantic space using pretrained encoders. K-means clustering is then applied to organize the data into coherent groups, which are pruned and merged based on content quality and redundancy. This forms the basis for constructing candidate mixtures. Subsequently, CLIMB uses proxy models to evaluate sampled mixtures and fits a regression-based predictor (e.g., LightGBM) to estimate mixture performance. An iterative bootstrapping procedure progressively refines the sampling space, prioritizing high-performing configurations. This allows CLIMB to converge on an effective data mixture under a fixed compute budget.

    Technical Details and Design Considerations

    The optimization process is framed as a bi-level problem: at the lower level, proxy models are trained on candidate mixtures; at the upper level, a predictor is learned to approximate performance outcomes. This predictor guides further sampling and pruning, enabling efficient exploration of the mixture space. CLIMB supports sparsity in mixture weights, encouraging the discovery of compact, domain-relevant data subsets. The use of clustering over embeddings, rather than token-level features, ensures semantic coherence within clusters. The iterative refinement is structured to balance breadth (search-space coverage) with depth (predictive accuracy), and ablation studies confirm that careful compute allocation across iterations improves convergence and final performance.

    The framework also exhibits robustness across proxy model sizes and cluster granularities. While larger proxy models yield slightly better predictions, even smaller models preserve key structural trends. Similarly, CLIMB is relatively insensitive to the initial cluster count, provided it is within a reasonable range.
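    To make the bi-level loop concrete, here is a minimal, hypothetical sketch in Python. It is not the paper's implementation: proxy_score() is a synthetic stand-in for the expensive step of pretraining a proxy model on a candidate mixture and scoring it downstream, and the Dirichlet sampler, candidate counts, and all hyperparameters are illustrative assumptions; only the cluster count of 20 and the use of a LightGBM regressor come from the article.

    ```python
    # Hypothetical sketch of CLIMB-style predictor-guided mixture search.
    # proxy_score() is a synthetic stand-in for training a proxy LM on a
    # mixture and measuring downstream accuracy; the Dirichlet sampler
    # stands in for the paper's pruned sampling space.
    import numpy as np
    from lightgbm import LGBMRegressor

    rng = np.random.default_rng(0)
    K = 20                                   # semantic clusters (ClimbLab uses 20)
    hidden_best = rng.dirichlet(np.ones(K))  # unknown optimal mixture (synthetic)

    def proxy_score(w: np.ndarray) -> float:
        # Lower level stand-in: "train" a proxy model on mixture w and
        # return a noisy downstream score (here: closeness to hidden_best).
        return -float(np.linalg.norm(w - hidden_best)) + rng.normal(0.0, 0.01)

    weights_seen, scores_seen = [], []
    alpha = np.ones(K)                       # concentration of the mixture sampler

    for iteration in range(3):               # outer bootstrapping iterations
        candidates = rng.dirichlet(alpha, size=200)
        if weights_seen:
            # Upper level: a cheap regression predictor ranks all candidates.
            predictor = LGBMRegressor(n_estimators=100, min_child_samples=2)
            predictor.fit(np.stack(weights_seen), np.array(scores_seen))
            candidates = candidates[np.argsort(-predictor.predict(candidates))]
        # Spend the expensive proxy-training budget only on top candidates.
        for w in candidates[:16]:
            weights_seen.append(w)
            scores_seen.append(proxy_score(w))
        best = weights_seen[int(np.argmax(scores_seen))]
        alpha = 1.0 + 50.0 * best            # refine the sampling space around the best mixture

    print("discovered mixture weights:", np.round(best, 3))
    ```

    The key structural point the sketch illustrates is the division of labor: the regressor is trained only on the handful of mixtures that were actually evaluated, then used to screen hundreds of untried candidates, so the expensive inner loop runs on a shortlist rather than the whole search space.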
    Empirical Evaluation and Observations

    CLIMB was evaluated on several general reasoning tasks, including PIQA, ARC (Easy and Challenge), HellaSwag, and WinoGrande. A 1B-parameter model trained on CLIMB-discovered mixtures achieved an average accuracy of 60.41%, outperforming comparable baselines such as DoReMi and RegMix. When extended to 400B-token pretraining, this 1B model outperformed Llama-3.2-1B by 2.0% on a broad suite of benchmarks. Similarly, in the sub-500M model category, CLIMB-based pretraining led to consistent improvements over models like SmolLM and TinyLlama.

    Domain specialization further highlights CLIMB's utility. In targeted MMLU benchmarks across STEM, humanities, and social sciences, CLIMB-trained models outperformed both random-selection and exhaustive-search baselines, and the iterative process showed consistent gains at each stage, indicating effective guidance from the predictive model.

    To facilitate reproducibility and further research, NVIDIA has released two resources: ClimbLab, a 1.2-trillion-token corpus organized into 20 semantic clusters, and ClimbMix, a 400-billion-token optimized mixture for efficient pretraining. Models trained on ClimbMix outperform those trained on datasets like Nemotron-CC and SmolLM under equivalent token budgets, demonstrating improved scaling characteristics.

    Conclusion

    CLIMB presents a systematic approach to optimizing data mixtures in LLM pretraining. By combining semantic clustering with proxy-based iterative search, it avoids reliance on manual annotations or static heuristics. The method supports both generalist and specialist training goals and adapts to varying compute and data constraints. This framework contributes to ongoing efforts in data-centric AI by offering a scalable and principled alternative to handcrafted data pipelines, and its empirical performance underscores the importance of data mixture optimization in maximizing model utility, particularly under fixed resource budgets.
  • TOWARDSAI.NET
    Have o1 Models Solved Human Reasoning?
    By Nehdiii, originally published on Towards AI. OpenAI made waves in the AI community with the release of their o1 models. As the excitement settles, I feel it's the perfect time to share my thoughts on LLMs' reasoning abilities, especially as someone who has spent a significant portion of my research exploring their capabilities in compositional reasoning tasks. This also serves as an opportunity to address the many "Faith and Fate" questions and concerns I've been receiving over the past year, such as: Do LLMs truly reason? Have we achieved AGI? Can they really not solve simple arithmetic problems? The buzz around the o1 models, code-named "strawberry," has been growing since August, fueled by rumors and media speculation. Last Thursday, Twitter lit up with OpenAI employees celebrating o1's performance boost on several reasoning tasks. The media further fueled the excitement with headlines claiming that "human-like reasoning" is essentially a solved problem in LLMs. Without a doubt, o1 is exceptionally powerful and distinct from any other model. It's an incredible achievement by OpenAI to release these models, and it's astonishing to witness the significant jump in Elo scores on ChatBotArena compared to the incremental improvements from other major players. ChatBotArena continues to be the leading platform for…
  • WWW.IGN.COM
    Paul Rudd Hypes Nintendo Switch 2 With Playful Throwback to Infamous 90s SNES Commercial
    Nintendo has tapped actor Paul Rudd to hype up the Nintendo Switch 2 in a brand-new commercial that pays corny-yet-adorable homage to a beloved '90s commercial he did for the Super Nintendo.

    The original commercial, which aired in 1991, shows Rudd in a long black jacket, beaded necklace, and a really, uh, interesting hairdo, stomping up to a drive-in movie theater, SNES in hand. He hooks it up and begins playing a number of favorites on the big screen: The Legend of Zelda: A Link to the Past, F-Zero, SimCity, and others, while a crowd of interested onlookers forms around him. The commercial ends with the famous SNES slogan: "Now you're playing with power."

    In the new Nintendo Switch 2 commercial, Rudd is 34 years older but somehow looks... kind of the same? He's still got the coat, and the necklace, and the hair. But this time, he stomps into a living room and hooks up a Nintendo Switch 2 to play with comedians Joe Lo Truglio and Jordan Carlos, as well as a kid who calls him "Uncle Paul." They play Mario Kart World using the system's new GameChat feature, and the others tease Rudd about his get-up and wacky '90s ad attitude; the commercial even lampshades a fog machine and a fan blowing to make the atmosphere look as intense as it did in the '90s. It concludes with Rudd declaring that instead of playing with power, "Now we're playing together." The whole thing is cheesy as heck, but it commits to the bit and acknowledges the goofiness of the original commercial to pretty cute effect.

    IGN had the pleasure of sitting down with Rudd to talk about his experience shooting a follow-up Nintendo hardware commercial more than 30 years after his first crack at it. During our chat, we learned that Rudd suspects he was wearing his own beaded necklace in the original commercial, and that he kept playing Mario Kart World on the set between takes. Unfortunately, he says, they didn't let him take a Nintendo Switch 2 home with him.

    Just this week, we got word that Nintendo Switch 2 preorders are back on, for April 24 this time, and the price is still $450, though accessory prices have gone up due to the impact of tariffs in the United States. We've got everything you need to know about where and how to get one of those sweet, sweet preorders in our guide.
  • TECHCRUNCH.COM
    ChatGPT is referring to users by their names unprompted, and some find it 'creepy'
    Some ChatGPT users have noticed a strange phenomenon recently: Occasionally, the chatbot refers to them by name as it reasons through problems. That wasn’t the default behavior previously, and several users claim ChatGPT is mentioning their names despite never having been told what to call them. Reviews are mixed. One user, software developer and AI enthusiast Simon Willison, called the feature “creepy and unnecessary.” Another developer, Nick Dobos, said he “hated it.” A cursory search of X turns up scores of users confused by — and wary of — ChatGPT’s first-name basis behavior. “It’s like a teacher keeps calling my name, LOL,” wrote one user. “Yeah, I don’t like it.” It’s not clear when, exactly, the change happened, or whether it’s related to ChatGPT’s upgraded “memory” feature that lets the chatbot draw on past chats to personalize its responses. Some users on X say ChatGPT began calling them by their names even though they’d disabled memory and related personalization settings. OpenAI hasn’t responded to TechCrunch’s request for comment. In any event, the blowback illustrates the uncanny valley OpenAI might struggle to overcome in its efforts to make ChatGPT more “personal” for the people who use it. Last week, the company’s CEO, Sam Altman, hinted at AI systems that “get to know you over your life” to become “extremely useful and personalized.” But judging by this latest wave of reactions, not everyone’s sold on the idea. An article published by The Valens Clinic, a psychiatry office in Dubai, may shed some light on the visceral reactions to ChatGPT’s name use. Names convey intimacy. But when a person — or chatbot, as the case may be — uses a name a lot, it comes across as inauthentic. “Using an individual’s name when addressing them directly is a powerful relationship-developing strategy,” writes Valens. “It denotes acceptance and admiration. However, undesirable or extravagant use can be looked at as fake and invasive.” In a similar vein, perhaps another reason many people don’t want ChatGPT using their name is that it feels ham-fisted — a clumsy attempt at anthropomorphizing an emotionless bot. In the same way that most folks wouldn’t want their toaster calling them by their name, they don’t want ChatGPT to “pretend” it understands a name’s significance. This reporter certainly found it disquieting when o3 in ChatGPT earlier this week said it was doing research for “Kyle.” (As of Friday, the change seemingly had been reverted; o3 called me “user.”) It had the opposite of the intended effect — poking holes in the illusion that the underlying models are anything more than programmable, synthetic things.