WWW.MARKTECHPOST.COM
Meet CoMERA: An Advanced Tensor Compression Framework Redefining AI Model Training with Speed and Precision

Training large-scale AI models such as transformers and language models has become an indispensable yet highly demanding process in AI. With billions of parameters, these models offer groundbreaking capabilities but come at a steep cost in computational power, memory, and energy consumption. For example, OpenAI's GPT-3 comprises 175 billion parameters and requires weeks of GPU training. Such massive requirements limit these technologies to organizations with substantial computational resources, exacerbating concerns over energy efficiency and environmental impact. Addressing these challenges has become critical to ensuring the broader accessibility and sustainability of AI advancements.

The inefficiencies in training large models stem primarily from their reliance on dense matrices, which demand significant memory and computing power. The limited support for optimized low-precision or low-rank operations in modern GPUs further compounds these requirements. While some methods, such as matrix factorization and heuristic rank reduction, have been proposed to alleviate these issues, their real-world applicability is constrained. For instance, GaLore enables training in single-batch settings but suffers from impractical runtime overhead. Similarly, LTE, which adopts low-rank adapters, struggles with convergence on large-scale tasks. The lack of a method that simultaneously reduces memory usage, computational cost, and training time without compromising performance has created an urgent need for innovative solutions.

Researchers from the University at Albany SUNY, the University of California at Santa Barbara, Amazon Alexa AI, and Meta introduced Computing- and Memory-Efficient training via Rank-Adaptive tensor optimization (CoMERA), a novel framework that combines memory efficiency with computational speed through rank-adaptive tensor compression. Unlike traditional methods that focus solely on compression, CoMERA adopts a multi-objective optimization approach to balance compression ratio and model accuracy. It uses tensorized embeddings and advanced tensor-network contractions to optimize GPU utilization, reducing runtime overhead while maintaining robust performance. The framework also employs CUDA Graph to minimize kernel-launching delays during GPU operations, a significant bottleneck in traditional tensor compression approaches.

CoMERA's foundation is adaptive tensor representations, which allow model layers to adjust their ranks dynamically based on resource constraints. By modifying tensor ranks, the framework achieves compression without compromising the integrity of neural network operations. This dynamic optimization is achieved through a two-stage training process:

- An early stage focused on stable convergence
- A late stage that fine-tunes ranks to meet specific compression targets

In a six-encoder transformer model, CoMERA achieved compression ratios ranging from 43x in its early stage to an impressive 361x in its late-stage optimizations. It also reduced memory consumption by 9x compared to GaLore, with 2-3x faster training per epoch.

When applied to transformer models trained on the MNLI dataset, CoMERA reduced model sizes from 256 MB to as little as 3.2 MB while preserving accuracy. In large-scale recommendation systems like DLRM, CoMERA compressed models by 99x and achieved a 7x reduction in peak memory usage.
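To make the rank-adaptive idea concrete, here is a minimal PyTorch sketch in which a plain two-factor low-rank layer stands in for CoMERA's more general tensor-network format. This is an illustration of the general technique under that simplifying assumption, not the authors' implementation; the class and method names are invented for this example.

```python
import torch
import torch.nn as nn

class RankAdaptiveLinear(nn.Module):
    """Illustrative low-rank layer: W is approximated as U @ V.
    Shrinking `rank` trades accuracy for compute, loosely mirroring
    the late-stage rank fine-tuning described in the article."""

    def __init__(self, in_features, out_features, max_rank=64):
        super().__init__()
        self.rank = max_rank
        self.U = nn.Parameter(torch.randn(out_features, max_rank) * 0.02)
        self.V = nn.Parameter(torch.randn(max_rank, in_features) * 0.02)

    def forward(self, x):
        # Only the first `rank` components participate, so lowering
        # self.rank immediately reduces the work this layer does.
        U = self.U[:, : self.rank]
        V = self.V[: self.rank, :]
        return x @ V.t() @ U.t()

    def shrink_rank(self, new_rank):
        # Late-stage compression: keep only the leading components.
        self.rank = min(self.rank, new_rank)

# Usage: train normally at full rank, then compress to a target.
layer = RankAdaptiveLinear(512, 512, max_rank=64)
x = torch.randn(8, 512)
y_full = layer(x)
layer.shrink_rank(16)  # e.g. to hit a compression budget
y_small = layer(x)
print(y_full.shape, y_small.shape, layer.rank)
```

In a real rank-adaptive system the per-layer ranks would be chosen by the multi-objective optimization the article describes, rather than set by hand as in this toy.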
The framework also excelled in pre-training CodeBERT, a domain-specific large language model, where it achieved a 4.23x overall compression ratio and demonstrated a 2x speedup during certain training phases. These results underscore its ability to handle diverse tasks and architectures, extending its applicability across domains.

The key takeaways from this research are as follows:

- CoMERA achieved compression ratios of up to 361x for specific layers and 99x for full models, drastically reducing storage and memory requirements.
- The framework delivered 2-3x faster training times per epoch for transformers and recommendation systems, saving computational resources and time.
- Using tensorized representations and CUDA Graph, CoMERA reduced peak memory consumption by 7x, enabling training on smaller GPUs.
- CoMERA's approach supports diverse architectures, including transformers and large language models, while maintaining or improving accuracy.
- By lowering the energy and resource demands of training, CoMERA contributes to more sustainable AI practices and makes cutting-edge models accessible to a broader audience.

In conclusion, CoMERA addresses some of the most significant barriers to AI scalability and accessibility by enabling faster, memory-efficient training. Its adaptive optimization capabilities and compatibility with modern hardware make it a compelling choice for organizations seeking to train large models without incurring prohibitive costs. This study's results pave the way for further exploration of tensor-based optimizations in domains like distributed computing and resource-constrained edge devices.

Check out the Paper. All credit for this research goes to the researchers of this project.

Asif Razzaq
-
WWW.MARKTECHPOST.COM
CoordTok: A Scalable Video Tokenizer that Learns a Mapping from Co-ordinate-based Representations to the Corresponding Patches of Input Videos

Breaking down videos into smaller, meaningful parts for vision models remains challenging, particularly for long videos. Vision models rely on these smaller parts, called tokens, to process and understand video data, but creating these tokens efficiently is difficult. While recent tools achieve better video compression than older methods, they struggle to handle large video datasets effectively. A key issue is their inability to fully exploit temporal coherence, the natural pattern whereby video frames are often similar over short periods, which video codecs use for efficient compression. These tools are also computationally expensive to train and are limited to short clips, making them ineffective at capturing patterns in longer videos.

Current video tokenization methods have high computational costs and struggle to handle long video sequences efficiently. Early approaches used image tokenizers to compress videos frame by frame but ignored the natural continuity between frames, reducing their effectiveness. Later methods introduced spatiotemporal layers, reduced redundancy, and used adaptive encoding, but they still required rebuilding entire video frames during training, which limited them to short clips. Video generation models such as autoregressive methods, masked generative transformers, and diffusion models are likewise limited to short sequences.

To solve this, researchers from KAIST and UC Berkeley proposed CoordTok, which learns a mapping from coordinate-based representations to the corresponding patches of input videos. Motivated by recent advances in 3D generative models, CoordTok encodes a video into factorized triplane representations and reconstructs patches corresponding to randomly sampled (x, y, t) coordinates. This approach allows large tokenizer models to be trained directly on long videos without requiring excessive resources. The video is divided into space-time patches and processed with transformer layers, and the decoder maps sampled (x, y, t) coordinates to the corresponding pixels. This reduces both memory and computational costs while preserving video quality.

Building on this, the researchers added a hierarchical architecture that captures both local and global features of a video. This structure lets the model process space-time patches more efficiently with transformer layers that produce the factorized triplane representations, so long-duration videos can be handled without excessive computational resources while maintaining high quality. For example, CoordTok encodes a 128-frame video at 128x128 resolution into 1,280 tokens, whereas baselines required 6,144 or 8,192 tokens to achieve similar reconstruction quality. Reconstruction quality was further improved by fine-tuning with both an L2 loss and an LPIPS loss, enhancing the accuracy of the reconstructed frames.
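The core idea, decoding pixels from queried (x, y, t) coordinates via factorized triplanes, can be sketched in a few lines of PyTorch. This is a deliberately simplified, hypothetical illustration: the plane sizes, the bilinear lookup, and the small RGB head are assumptions, and a real tokenizer would produce the planes with a transformer encoder rather than learn them directly.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TriplaneDecoder(nn.Module):
    """Toy coordinate-to-pixel decoder over factorized (xy, xt, yt) planes."""

    def __init__(self, res=32, frames=16, dim=64):
        super().__init__()
        # Three learned feature planes factorize the (x, y, t) volume.
        self.xy = nn.Parameter(torch.randn(1, dim, res, res) * 0.02)
        self.xt = nn.Parameter(torch.randn(1, dim, frames, res) * 0.02)
        self.yt = nn.Parameter(torch.randn(1, dim, frames, res) * 0.02)
        self.head = nn.Linear(3 * dim, 3)  # concatenated features -> RGB

    def forward(self, coords):
        # coords: (N, 3) with x, y, t each normalized to [-1, 1]
        x, y, t = coords[:, 0], coords[:, 1], coords[:, 2]

        def sample(plane, u, v):
            # Bilinear lookup of N points on one feature plane.
            grid = torch.stack([u, v], dim=-1).view(1, -1, 1, 2)
            feat = F.grid_sample(plane, grid, align_corners=True)
            return feat.view(plane.shape[1], -1).t()  # (N, dim)

        f = torch.cat([sample(self.xy, x, y),
                       sample(self.xt, x, t),
                       sample(self.yt, y, t)], dim=-1)
        return self.head(f)  # (N, 3) RGB values

# Training queries only randomly sampled coordinates, so the full
# video never has to be reconstructed at once.
decoder = TriplaneDecoder()
coords = torch.rand(4096, 3) * 2 - 1
rgb = decoder(coords)
print(rgb.shape)  # torch.Size([4096, 3])
```

The key property the sketch shows is that training cost scales with the number of sampled coordinates, not the length of the video, which is what lets large tokenizers train directly on long clips.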
Together, these strategies reduced memory usage by up to 50% and cut computational costs while maintaining high-quality video reconstruction, with models like CoordTok-L achieving a PSNR of 26.9.

In conclusion, CoordTok proves to be an efficient video tokenizer that uses coordinate-based representations to reduce computational costs and memory requirements while encoding long videos. It enables memory-efficient training for video generation models, making it possible to handle long videos with fewer tokens. However, it does not yet handle highly dynamic videos well, and the authors suggest further improvements, such as using multiple content planes or adaptive methods. This work can serve as a starting point for future research on scalable video tokenizers and generation, which can benefit the understanding and generation of long videos.

Check out the Paper and Project. All credit for this research goes to the researchers of this project.

Divyesh Vitthal Jawkhede
-
WWW.CNET.COM
Best Internet Providers in Sebring, Florida

Looking for the best internet provider in Sebring? CNET's broadband experts will guide you to the right plan for your needs.
-
WWW.TOMSHARDWARE.COM
Undersea power cable connecting Finland and Estonia experiences outage, capacity reduced to 35% as Finnish authorities investigate | Sabotage isn't ruled out yet.
-
WWW.FORBES.COM
NYT Strands Today: Hints, Spangram And Answers For Thursday, December 26th

Today's NYT Strands hints and answers. Credit: New York Times

Looking for Wednesday's Strands hints, spangram and answers? You can find them here:

Christmas is over. The fat goose is likely dead and eaten. The old man has a few more pennies in his hat, though they'll likely be spent before the new year. Mistletoe has served its purpose. All three ships have sailed in. 2025 is about to begin. And we have a Strands grid to solve and words to uncover!

Strands is the newest game in the New York Times stable of puzzle games. It's a fun twist on classic word search games. Every day we're given a new theme and then tasked with uncovering all the words on the grid that fit that theme. One of these words is the spangram, which crosses from one side of the grid to the other and reveals even more about the day's theme.

Spoilers ahead.

Today's Strands Hints

Today's Theme: Relative conjunction

Hint #1: More like relatives.

Hint #2: Just not immediate family.

Read More: Today's Wordle #1286 Hints, Clues And Answer For Thursday, December 26th

To help you uncover all the words, here are the first two letters of every word, including the spangram: NI, GR, NE, UN, AU, CO, IN, RE.

Remember, spoilers ahead!

What Are Today's Strands Answers?

Today's spangram is: REUNION

Here's the full list of words: NIECE, GRANDCHILD, NEPHEW, UNCLE, AUNT, COUSIN, INLAW.

Here's the completed Strands grid. Screenshot: Erik Kain

Today's Strands Breakdown

Conjunction threw me off on this one for a spell. Clever to use the theme to misdirect, since "relative" is too on-the-nose. Once I'd found AUNT and UNCLE, however, it was pretty easy to piece the rest together. I wish the spangram had been FAMILYREUNION instead of just REUNION, which felt a bit like a cop-out. I think the trickiest of all these words was INLAW, which I looked at for the longest time (having already gotten AUNT and COUSIN) and still couldn't unscramble in my head. I had to type out the letters, and as soon as I did that I saw the word.

How did you do on your Strands today? Let me know on Twitter and Facebook. Be sure to check out my blog for my daily Wordle guides as well as all my other writing about TV shows, streaming guides, movie reviews, video game coverage and much more. Thanks for stopping by!
-
WWW.FORBES.COM
Today's Wordle #1286 Hints, Clues And Answer For Thursday, December 26th

How to solve today's Wordle. Credit: SOPA Images/LightRocket via Getty Images

Looking for Tuesday's Wordle hints, clues and answer? You can find them here:

Christmas has come and gone. It's always a little bittersweet. So much anticipation in the lead-up to Christmas, and then it comes and we open our presents and listen to the music, and then it's over. But we still have New Year's Eve to look forward to!

Yesterday, I gave you a Wordle Wednesday riddle, and it was a pretty tough one! Today, I'll give you the answer. Here was the riddle:

Christmas Riddle
Santa stands in his workshop,
With gifts stacked to the sky.
He calls for three clever elves,
To help with his supply.
The first elf takes half the pile,
Then adds three gifts more.
The second elf takes half of what's left,
And adds the same as before.
The third elf does the same again,
Leaving Santa with just one.
How many gifts did Santa have,
Before the elves begun?

The answer is 50. There's a complicated mathematical way to solve this, but I'll just explain the numbers. If the first elf takes half the pile (25) and adds 3, you get 28, leaving 22. The second elf takes half of what's left (11) and adds 3, for 14, leaving 8. 14 + 28 = 42. The third elf does the same, taking half of 8 (4) and adding 3. That's 7. 42 + 7 = 49. One present is left for Santa, which brings us back to 50.

And that's all, folks! Have a great rest of 2024! Now let's solve this Wordle . . . .

How To Solve Today's Wordle

The Hint: Attach to something.

The Clue: This Wordle has a double letter.

Okay, spoilers below!

...

The Answer:

Today's Wordle. Screenshot: Erik Kain

Wordle Analysis

Every day I check Wordle Bot to help analyze my guessing game. You can check your Wordles with Wordle Bot right here. I was lucky with my first guess today. I've used FLARE a number of times in the past, but it was better than ever today, leaving me with just 5 possible solutions. MAFIA was the best second guess I could have come up with (I never would have guessed the answer on #2) and left me with just 1 possible solution: AFFIX for the win. That's a tough Wordle!

Competitive Wordle Score

I get 1 point for guessing in three and another point for beating the Bot, who took four tries today. 2 points for me!

How To Play Competitive Wordle

Guessing in 1 is worth 3 points; guessing in 2 is worth 2 points; guessing in 3 is worth 1 point; guessing in 4 is worth 0 points; guessing in 5 is -1 points; guessing in 6 is -2 points; and missing the Wordle is -3 points.

If you beat your opponent you get 1 point. If you tie, you get 0 points. And if you lose to your opponent, you get -1 point. Add it up to get your score. Keep a daily running score or just play for a new score each day.

Fridays are 2XP, meaning you double your points, positive or negative.

You can keep a running tally or just play day-by-day. Enjoy!

Today's Wordle Etymology

The word "affix" comes from the Latin affixus, the past participle of affigere, meaning "to fasten to". This is derived from ad- (meaning "to" or "toward") and figere (meaning "to fasten" or "fix"). The term entered Middle English in the late 15th century, retaining the sense of attaching or adding something.

Let me know how you fared with your Wordle today on Twitter, Instagram or Facebook. Also be sure to subscribe to my YouTube channel and follow me here on this blog, where I write about games, TV shows and movies when I'm not writing puzzle guides.
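As a quick sanity check of the riddle arithmetic above, here is a tiny Python simulation; it is just a sketch, assuming each elf takes half the current pile plus three gifts, as the riddle states.

```python
def elves_take(gifts, rounds=3, extra=3):
    """Simulate the riddle: each elf takes half the pile plus `extra`."""
    for _ in range(rounds):
        taken = gifts // 2 + extra  # half the pile, plus three more
        gifts -= taken
    return gifts

# Santa starts with 50 gifts; exactly one should remain.
print(elves_take(50))  # -> 1
```

Running it confirms the walkthrough: the elves take 28, 14, and 7 gifts in turn, leaving Santa with exactly one.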
Sign up for my newsletter for more reviews and commentary on entertainment and culture.
-
WWW.TECHSPOT.COM
The British Army is trialing radio waves to zap drones out of the sky – at 13 cents per shot

Forward-looking: The UK Ministry of Defence has revealed it is testing a futuristic weapon capable of taking down drones using nothing but radio waves. Remarkably, each "shot" costs less than a "pack of mince pies." The Radio Frequency Directed Energy Weapon (RFDEW) has been in development for some time, but British soldiers recently had the chance to put it through its paces. The Army's Royal Artillery Trials and Development Unit, in collaboration with 7 Air Defence Group, successfully conducted a live firing trial in West Wales. This marked the first use of the system against Uncrewed Aerial Systems (UAS) by the British Armed Forces. Unlike laser-based energy weapons that use concentrated light beams, the RFDEW disables drones and missiles by bombarding them with high-powered radio frequencies, effectively frying their internal electronics.

Laser-based weapon systems have proven their efficacy against individual drones and other aircraft, but they face challenges when dealing with swarms of drones. This is where the RFDEW has shown superior potential.

During the trials, the Army's air defense teams successfully detected, tracked, and engaged multiple drone targets at distances of up to a kilometer. Impressively, each engagement cost only about 10p (13 cents) per shot.

The RFDEW trials mark a big milestone not just for the UK's directed energy initiatives, but for rapidly advancing military tech in general. It seems to check all the boxes: the press release by the UK government highlights that it's highly automated so it can be operated by a single person, it's precise, it's relatively low-cost, and it packs the punch to neutralize threats on land, in the air, or even over water.

The last bit is important, suggesting that it can be used against threats beyond aerial drones and missiles. The system is also flexible in terms of deployment and can be fitted onto any military vehicle.

The technology was developed by British defense firm Thales, in partnership with QinetiQ, Teledyne e2v, and others. Its development supported over 135 skilled engineering jobs across the UK.

Government bigwigs are understandably hyped about the successful trials. Defence Minister Maria Eagle called it "another step forward for a potentially game-changing sovereign weapon" that will help the UK maintain a "crucial advantage against the emerging threats we face."

However, deploying the RFDEW operationally is still likely some way off. There's likely still plenty more testing and fine-tuning needed before radio wave assaults become standard British military doctrine.
-
WWW.DIGITALTRENDS.COM
Hyundai to offer free NACS adapters to its EV customers

Hyundai appears to be in a Christmas kind of mood. The South Korean automaker announced that it will start offering free North American Charging Standard (NACS) adapters in the first quarter of 2025.

The offer will apply to current and new Hyundai electric vehicle (EV) owners who have purchased or leased their vehicle on or before January 31, 2025.

Hyundai says its authorized adapter will give Hyundai EVs equipped with Combined Charging System (CCS) ports access to more than 20,000 Tesla Superchargers in the U.S.

"To accelerate EV adoption, we started by listening to our current owners," says Olabisi Boyle, senior vice president of product at Hyundai Motor North America, in a statement. "These adapters will make DC fast-charging more convenient for current owners."

The NACS adapters will be available for all of Hyundai's EVs on the U.S. market. These include the model year 2024 and earlier Kona Electric, Ioniq hatchback, and Ioniq 5 and Ioniq 6, as well as the 2025 Ioniq 6, Ioniq 5 N, and Kona Electric. The automaker's Genesis luxury brand will also be participating in the program.

This differs from its Kia unit, as NACS-to-CCS adapters are only offered for Kia's EV6 and EV9 models delivered after September 4, 2024. Earlier models and Niro EVs are not getting the adapters.

The new Hyundai 2025 Ioniq 5 will be the first non-Tesla vehicle to feature a native NACS port. Other new models, like the Ioniq 9, are also getting the native ports.

Meanwhile, both the Ioniq 5 and Ioniq 9 will start off charging slower with NACS on the Tesla Supercharger network than with their CCS adapter. Hyundai has told Green Car Reports that this is not about the vehicles but about the Supercharger network, which is due to be upgraded sometime in 2025.
-
WWW.BUSINESSINSIDER.COM
Trump urges Wayne Gretzky to run for Canadian prime minister as Justin Trudeau could be on the brink of losing power

- Donald Trump urged Wayne Gretzky to run for prime minister of Canada.
- One of Prime Minister Justin Trudeau's coalition partners may force him out of the position.
- Gretzky visited Mar-a-Lago and wore a MAGA hat after Trump's November electoral victory.

In a Christmas Day message, past and future president Donald Trump said he urged Wayne Gretzky to run for prime minister of Canada. Trump wrote on Truth Social that he talked with the legendary hockey player and Canadian icon, telling Gretzky he could easily win a national election. He also said Gretzky could become "Governor of Canada," an apparent reference to his joke that the northern neighbor could become the 51st state in the United States of America.

"I just left Wayne Gretzky, 'The Great One' as he is known in Ice Hockey circles," Trump wrote in a Wednesday afternoon Truth Social post. "I said, 'Wayne, why don't you run for Prime Minister of Canada, soon to be known as the Governor of Canada - You would win easily, you wouldn't even have to campaign.'"

Gretzky wasn't interested in running, Trump said. "He had no interest, but I think the people of Canada should start a DRAFT WAYNE GRETZKY Movement," Trump wrote. "It would be so much fun to watch!"

A representative for Gretzky didn't immediately respond to a request for comment from Business Insider.

In his next presidential term, Trump has said that he would impose tariffs on imported goods from Canada that would make American importers pay 25% more.

Trump's account posted on Truth Social nearly 40 times on Wednesday, mostly articles from conservative media outlets supporting his policies. He also named Kevin Marino Cabrera, a Republican official in Florida who worked for Trump's 2020 campaign, as his choice for ambassador to Panama. Over the past week, Trump has threatened to retake control of the Panama Canal.

The president-elect's support for Gretzky, a dual US-Canadian citizen, comes as Canadian Prime Minister Justin Trudeau could be on the brink of losing power. Trudeau's Liberal Party remains in power through a coalition with the New Democratic Party in the country's parliament. Jagmeet Singh, the leader of the New Democratic Party, said he would call for a "no confidence" vote in January, costing the Liberals their majority and triggering a new election. Canada is also scheduled to have a federal election in October 2025.

Gretzky and his family visited Mar-a-Lago shortly after Trump's November electoral victory. In one photo posted to Instagram by a Trump Organization executive, Gretzky is wearing a white-and-gold "Make America Great Again" cap. In the past, Gretzky has occasionally supported members of Canada's Conservative Party, which polls show is leading Trudeau's Liberal Party.