WWW.MARKTECHPOST.COM
Meta AI Researchers Introduce Mixture-of-Transformers (MoT): A Sparse Multi-Modal Transformer Architecture that Significantly Reduces Pretraining Computational Costs

Advancements in AI have paved the way for multi-modal foundation models that simultaneously process text, images, and speech under a unified framework. These models can potentially transform various applications, from content creation to seamless translation across media types, as they enable the generation and interpretation of complex data. However, achieving this requires immense computational resources, which creates a barrier to scaling and operational efficiency. Training these multi-modal systems is complex, as each modality, whether text, image, or audio, introduces unique challenges, requiring customized handling while maintaining cohesion within the model's framework. Balancing this diversity of data types has proven difficult in terms of both processing power and training efficiency.

A primary issue in multi-modal AI research is that traditional language models are optimized for text, and extending them to incorporate images and audio requires substantial computational power. Large language models (LLMs) designed specifically for text-based tasks do not naturally integrate other modalities due to the inherent differences in how each modality needs to be processed. For instance, a text model pretrained on trillions of tokens can be extended to image and speech data only at the cost of conflicts in the training dynamics. Consequently, the computational load escalates, with these models requiring up to five times the data and processing power of text-only models. Researchers therefore aim to find architectures that can accommodate these requirements without a proportional increase in resources.

Various strategies currently address this need for computational efficiency in multi-modal models.
One prominent approach is sparse architectures such as Mixture-of-Experts (MoE), which activate only specific parts of the model as needed. MoE operates by routing data to specialized experts, reducing the model's workload at any given moment. However, MoE has limitations, including instability caused by unbalanced expert utilization and difficulty managing training dynamics at scale. Furthermore, MoE's routing mechanism tends to focus on specific aspects of the data, often leading to an imbalance in training across modalities and requiring additional techniques to stabilize the process and maintain efficiency.

Researchers from FAIR at Meta and Stanford University introduced a new architecture called Mixture-of-Transformers (MoT). MoT, built as a sparse, multi-modal transformer, reduces computational demands by incorporating modality-specific parameters. Unlike traditional dense models that rely on uniform processing, MoT uses distinct components for each modality (text, image, and speech), allowing modality-specific optimization without requiring additional model components. For example, MoT assigns unique feed-forward networks, attention matrices, and normalization layers to each modality while maintaining a unified attention mechanism across the entire input sequence, enhancing processing efficiency and output accuracy.

The Mixture-of-Transformers framework leverages this sparse design by decoupling the model parameters according to modality, optimizing both the training and inference phases. For instance, during a multi-modal task, MoT separates text, image, and speech parameters, applying customized processing layers to each. This removes the need for dense model layers that accommodate all modalities simultaneously. As a result, MoT achieves a balance of efficiency and effectiveness that traditional dense models lack.
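The routing idea described above, modality-specific non-attention parameters combined with attention over the full interleaved sequence, can be sketched in a few lines. This is a conceptual toy, not the authors' code: the "FFN" is a stand-in scaling function and the "attention" is a stub that mixes each token with the sequence mean, chosen only to show where parameters are shared versus separated.

```python
# Toy sketch of Mixture-of-Transformers routing (hypothetical, simplified).
# Non-attention parameters (the per-modality "FFNs" here) are selected by
# each token's modality tag; the "attention" step sees the whole sequence.

def make_ffn(scale):
    # Stand-in for a modality-specific feed-forward network.
    return lambda vec: [v * scale for v in vec]

MODALITY_FFN = {          # one set of "weights" per modality
    "text": make_ffn(1.0),
    "image": make_ffn(2.0),
    "speech": make_ffn(3.0),
}

def global_attention(hidden):
    # Stub for shared attention: mix every token with the sequence mean.
    mean = [sum(col) / len(hidden) for col in zip(*hidden)]
    return [[(h + m) / 2 for h, m in zip(tok, mean)] for tok in hidden]

def mot_block(tokens):
    # tokens: list of (modality, vector) pairs for one interleaved sequence.
    hidden = global_attention([vec for _, vec in tokens])  # shared step
    return [
        (mod, MODALITY_FFN[mod](vec))                      # per-modality step
        for (mod, _), vec in zip(tokens, hidden)
    ]

out = mot_block([("text", [1.0, 1.0]), ("image", [3.0, 1.0])])
print(out)
```

The sparsity claim follows from the structure: only one modality's FFN parameters are touched per token, so a forward pass activates roughly a dense model's worth of compute even though the model holds one parameter set per modality.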
In tests involving text and image generation within the Chameleon 7B setting, MoT delivered results comparable to dense baselines using only 55.8% of the FLOPs, and only 37.2% when integrating a third modality, speech. This efficiency gain translates to significant reductions in resource usage, which, in large-scale AI models, can lead to major cost savings.

Mixture-of-Transformers showed notable improvements across multiple evaluation criteria. Compared to dense transformer models, the architecture reduced pretraining times for text and image tasks by over 40%. In the Chameleon setting, where the model processes text and images using autoregressive objectives, MoT reached the dense model's final validation loss using just 55.8% of the computational power. Furthermore, MoT accelerated training by matching the dense model's image quality in 47.2% of the wall-clock time and its text quality in 75.6% of the typical time. These efficiency gains were further confirmed in the Transfusion setting.
MoT matched dense baseline image performance while using only one-third of the FLOPs, proving its adaptability and resource efficiency in handling complex multi-modal data.

The research offers several key takeaways, highlighting the potential of Mixture-of-Transformers to redefine multi-modal AI processing:

- Efficient multi-modal processing: MoT matches dense model performance across text, image, and speech, achieving results with 37.2% to 55.8% of the computational resources.
- Training acceleration: In the Chameleon setting, MoT reduced training time for image tasks by 52.8% and for text tasks by 24.4% while maintaining accuracy.
- Adaptive scalability: MoT demonstrated high adaptability by effectively handling discrete and continuous tokens for multiple modalities without additional processing layers.
- Resource reduction in real-world use: Performance evaluations on NVIDIA A100 GPUs showed MoT significantly reduced wall-clock training times, making it a viable option for real-time applications.

In conclusion, Mixture-of-Transformers presents an innovative approach to multi-modal modeling by offering an efficient, scalable solution for integrating diverse data types within a single framework. Through a sparse architecture that leverages modality-specific processing, MoT significantly reduces computational load while delivering robust performance across various tasks. This breakthrough could make resource-efficient models for advanced multi-modal applications far more accessible.

Check out the paper. All credit for this research goes to the researchers of this project.

Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good.
His most recent endeavor is the launch of an Artificial Intelligence Media Platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among audiences.
-
TOWARDSAI.NET
Elon Musk's Own AI Flags Him as a Leading Misinformation Source on X
November 13, 2024. Author(s): Get The Gist. Originally published on Towards AI.

Welcome to Get The Gist, where every weekday we share an easy-to-read summary of the latest and greatest developments in AI news, innovations, and trends, all delivered in under 5 minutes! In today's edition:

- Nvidia is Building Japan's Most Advanced AI Supercomputer
- Google Nest Cameras Get Smarter with New Gemini AI Features
- Grok Flags Musk as a Leading Misinformation Source on X
- Amazon to Launch Its New AI Chip
- And more AI news.

Image by: Nvidia. The Gist: SoftBank, in partnership with NVIDIA, is building Japan's most powerful AI supercomputer, aiming to lead in AI innovation, telecom, and industrial growth. This groundbreaking infrastructure promises new revenue streams and transformative applications across industries.

Key details:

- SoftBank's AI supercomputer, based on NVIDIA's Blackwell platform, will be the most powerful in Japan, supporting AI development for research, universities, and businesses.
- Using the NVIDIA AI Aerial platform, SoftBank has piloted the first AI-integrated 5G network, unlocking multi-billion-dollar revenue opportunities for telecom.
- SoftBank's planned AI marketplace, powered by NVIDIA AI Enterprise, will provide secure, local AI services to industries, enabling growth in fields like healthcare, robotics, and transportation.

Image by: Neowin. The Gist: Starting next week, Google Nest cameras will roll out advanced AI features. Read the full story on Medium.
-
WWW.DENOFGEEK.COM
Deadpool & Wolverine Caps the Best Year Gambit Ever Had

This post contains spoilers for Deadpool & Wolverine.

He might arrive spinning his cards, accompanied by triumphant music and a slow-mo stride, but make no mistake: Gambit is introduced as a joke in Deadpool & Wolverine. Gambit's a joke on a plot level, as he alone represents one-quarter of the resistance army that the titular heroes hoped would be much larger; he's also a deep-cut in-joke, even by Deadpool standards, because he's played by Channing Tatum, who tried for years to get a Gambit solo film out of development hell; and he's a comic book nerd joke, decked out with all the absurdities that made him seem cool in the '90s and dated immediately afterward, with his duster and head sock and his superfluous French catchphrases.

But by the end of the movie, Gambit is a full-on superhero in his own right. And Deadpool & Wolverine is just one of the stories that reestablished the Ragin' Cajun's status as one of Marvel's most reliable characters. It's a Gambit renaissance occurring across comics, cartoons, and cinema.

Remember It Again

"Mr. Stubble King 1994" read a caption under Gambit's appearance in "The No-Sin Situation," a hilarious back-up story written and drawn by Chip Zdarsky. A spoof of the otherwise dire Original Sin crossover, "The No-Sin Situation" was published in 2014 (so well after Gambit's heyday) and largely consisted of single-panel talking-head shots of Marvel heroes making embarrassing confessions. "Sometimes I kill people who I suspect are vampires but really I know they aren't vampires," says Blade, in a typical example. "I just really like killing."

The comic begins with Gambit's admission, and it's among the most ridiculous. "I'm not actually French," admits a character the comic already told readers is 20 years dated. "It's why I always just pepper easy French words like 'oui' and 'chère' with English.
I just wanted to sound cool."

Published decades after the height of Gambit mania, "The No-Sin Situation" highlighted what a punching bag Remy LeBeau had become. One of the last great characters that legendary X-Men writer Chris Claremont introduced during his 14-year run, Gambit had everything that people loved about Wolverine, including a dubious moral compass and a shadowy backstory, but in a '90s-centric package. By the time Original Sin hit comic shops, Gambit had been muscled out of the X-Men and into a lesser incarnation of X-Factor, where even the great Peter David failed to find something compelling to do with him.

Contrast that with the current Uncanny X-Men ongoing by Gail Simone and David Marquez. Speaking with Den of Geek before the first issue's release, Simone praised the romance between Gambit and Rogue as the driving force of her story. "I love that Rogue and Gambit are central to it because their romance is super hot," Simone said.

While the writer doesn't shy away from lambasting other popular characters (see: her Cyclops comments on social media), she's 100 percent serious about Gambit in Uncanny. The passionate relationship between him and Rogue drives Simone's story, which finds a group of mutants hiding out in Gambit's old bayou stomping grounds.

Yet the most impressive example of Gambit's unlikely 2024 renaissance might be the way he's used in the surprisingly excellent cartoon series X-Men '97. For the first couple of episodes, Gambit appears exactly as one would expect in a revival of a Saturday morning cartoon show from the 1990s: he's charming, but in a goofy-dad sort of way. As the series progresses, however, to embrace richer themes about fascism and oppression, Gambit goes from hot goofball to genuine hero. In his greatest moment, Gambit stands down the massive Sentinel attacking the mutant-ruled island of Genosha.
Gambit's swagger pauses when the Sentinel pierces him with a metallic tentacle, but only for a moment. The camera cuts to a close-up of his blood-stained mouth, which slides into a characteristic smirk. "The name's Gambit," he tells the genocidal robot. "Remember it." And with that delivery, Gambit charges the entire machine, sacrificing himself with his last breath while blowing up the massive threat to all mutantkind. At that moment, Gambit becomes an awesome hero without sacrificing any of the elements that made him a punchline for so many years.

Timeless Cool

Initially, Gambit's Deadpool & Wolverine appearance seems ready to set back the Cool-O-Meter. The movie gets a lot of mileage out of Tatum's garbled Cajun accent and his stylish, if impractical, tendency to throw charged playing cards at enemies. "Your power's close-up magic. That's good," Deadpool observes, with typical snark. Even in that scene, however, Tatum begins to bring a bit of charm to the joke.

As the heroes rouse themselves out of a defeated stupor by admitting their place in the world, both fictional and real, Jennifer Garner's Elektra, a character we haven't seen in 19 years, laments, "Our worlds forgot about us." It is Tatum's Gambit who earns the laugh by adding, "Or never learned about us." Yet as the heroes continue to commiserate, there is no irony to Gambit's observation about the lives they saved or wanted to save. There is a genuine sadness to Gambit's line deliveries that cuts through all of the self-aware winking, a sadness that can't even be undercut by the extended description of conception that Gambit is about to offer.

That richness marks Tatum's take on Gambit throughout the movie. As demonstrated in Logan Lucky and the Magic Mike films, Tatum excels at grafting affable charm onto a movie-star body. He doesn't have the aggressive charisma of, say, a Tom Cruise or even a Dwayne Johnson. Rather, his charisma derives from a laid-back personality that draws people toward him.
And crucially, as demonstrated by his surprise cameos in This Is the End, The Lego Movie, and The Hateful Eight, he's happy to be the butt of the joke.

Tatum's effortless charm allows Gambit to be both laughable and cool, especially as the movie progresses. After spending several minutes laughing at Gambit, we viewers feel compelled to cut him some slack when he gets to charge a whole deck in slow motion. It's unnecessarily flashy, especially in the middle of a battle that director Shawn Levy mostly shoots with chaotic handheld cameras and lots of cuts. Even in an action sequence that is tediously directed, we've got to admit this moment looks cool.

So cool, in fact, that star Ryan Reynolds released a deleted scene to grateful and excited fans. The scene finds Gambit's body in the wreckage of the battle against Cassandra Nova's hordes. His eyes begin to glow, suggesting that he has not died and teasing more Gambit adventures in one universe or another.

From the Big Easy to the Big Time

Will a Gambit solo movie finally happen? Yes, Deadpool & Wolverine made a ton of money and Tatum's Gambit was a highlight, but it's hard to imagine the beleaguered MCU putting too much money into a revival of a movie series that never actually happened.

Even if Tatum doesn't get to strap on the head sock again, he succeeded in making Gambit a viable character in a year that has turned out to be about reclaiming the Ragin' Cajun. Between Tatum's take in Deadpool & Wolverine, Simone and Marquez's story in Uncanny X-Men, and the thrilling first season of X-Men '97, Gambit is ready to hit the big time once again, even if he has to endure a few giggles along the way.

Deadpool & Wolverine is now streaming on Disney+.
-
9TO5MAC.COM
Oura CEO baits Apple with smart ring shade: it's hard to do this product category right

Oura's smart ring has led many to wonder if Apple would ever create its own competing ring product. Oura's CEO apparently doesn't think so, and his reasons include some clear bait for the tech giant.

Apple's wearables line may not need a smart ring

Tom Hale, Oura's CEO, recently gave an interview where he was asked about any concerns that Apple might introduce a smart ring in the future (via MacRumors). Arjun Kharpal has the story for CNBC:

"I think they [Apple] are unconvinced about the value of having a ring and a watch together and they're not interested in undercutting the Apple Watch as a business," Hale told CNBC in an interview on Tuesday at the Web Summit in Lisbon, Portugal. "I think they're probably keeping a close eye on Samsung and a close eye on us, but it's hard to do this product category right."

Hale's point about not wanting to undercut the Apple Watch business is a good one. Almost 10 years into its life, Apple has continued finding tremendous success with the Apple Watch. The company also has another extremely successful wearable product: AirPods. And with iOS 18.1, Apple shipped some major new health features on AirPods Pro 2. More health features are reportedly in the works for future AirPods models.

But the line about it being "hard to do this product category right"? Yeah, that's all bait. I'm sure the Oura Ring was a complicated product to create, but Apple absolutely has all the know-how it needs to create a competitor if it wants to. At this point, though, it doesn't seem to have much reason to.

What do you think? Should Apple create a smart ring? Let us know in the comments.

FTC: We use income-earning auto affiliate links.
You're reading 9to5Mac, experts who break news about Apple and its surrounding ecosystem, day after day. Be sure to check out our homepage for all the latest news, and follow 9to5Mac on Twitter, Facebook, and LinkedIn to stay in the loop.
-
9TO5MAC.COM
iOS 18.2 has the best Apple Intelligence features, here's what's coming

iOS 18.2 is just a few weeks away. It's a huge software update that includes some of the best, most powerful Apple Intelligence features yet. Here's the full list of AI features coming soon.

Second wave of Apple Intelligence features in iOS 18.2

Apple Intelligence's debut in iOS 18.1 offered lots of new capabilities, but what's coming next is even better. Some of the most highly anticipated AI features will arrive on compatible devices with iOS 18.2, iPadOS 18.2, and macOS Sequoia 15.2. Here's everything that's coming:

- Genmoji: Make your own custom emoji for use in any app.
- ChatGPT in Siri: Siri can tap into ChatGPT's knowledge, and you can even query ChatGPT directly.
- Image Playground: Create original AI images in animation, illustration, or sketch styles.
- Visual intelligence: Use iPhone 16's Camera Control to get relevant info from your physical environment.
- Image Wand: Turn your sketches or notes into beautiful illustrations in the Notes app.
- Compose with ChatGPT: OpenAI's assistant can draft original text from scratch inside any app.
- Custom rewrites: Apple's writing tools let you "Describe your change" for custom AI rewrites.
- Language expansion: Localized English support in Australia, Canada, New Zealand, South Africa, and the UK.

Special waitlist for image generation access

When Apple Intelligence first launched in iOS 18.1, it arrived with a waitlist for users. The company is doing something similar with its second set of AI features in iOS 18.2, but only for image generation. After installing iOS 18.2, iPadOS 18.2, or macOS Sequoia 15.2, you'll need to request explicit access to use image features like Genmoji, Image Playground, and Image Wand. This special waitlist is due to Apple's Responsible AI principle to "Design with care."
As the company describes it: "We take precautions at every stage of our process, including design, model training, feature development, and quality evaluation to identify how our AI tools may be misused or lead to potential harm. We will continuously and proactively improve our AI tools with the help of user feedback."

Essentially, Apple doesn't want its AI tools to be used for harm, whether intentionally or by accident. So it's rolling out image generation access slowly, over a period of time.

iOS 18.2 release wrap-up

iOS 18.2 is currently in public beta, and its full launch will happen in early-to-mid December. After it debuts, nearly all of the currently announced Apple Intelligence features will have arrived. Which iOS 18.2 AI features are you most excited for? Let us know in the comments.
-
FUTURISM.COM
Another "Fitness" Influencer Just Dropped Dead, Days After 30th Birthday

Image by @iamjaxontippet via Instagram

Fitness influencer Jaxon Tippet was on vacation this month when he died unexpectedly from an apparent heart attack only a few days after his 30th birthday. Tippet previously battled with steroid use. In 2017, he was fined $4,000 by the Australian Southport Magistrates Court after being caught with 250 steroid tablets, vials of testosterone, and a needle and syringe hidden in his underwear.

On a 2022 episode of the Good Humans podcast, Tippet recalled having no symptoms in his first year of taking steroids. But as time moved on, he experienced a gradual and insidious shift. "I could feel my health deteriorating," he said. "I was very tired. I was yellow in the face, my urine was almost orange... I couldn't get an erection."

Since getting clean, Tippet had dedicated his Instagram and TikTok accounts, which combined have nearly 250,000 followers, to encouraging aspiring bodybuilders to take care of their mental health and be positive. But his death serves as a grim reminder that the unrealistic physiques of fitness influencers are often achieved using perilous techniques, like extreme diets and the use of steroids and growth hormones, that are usually far from "healthy." Worse yet, these personalities are explosively popular among young people, raising the risk that fans will imitate their idols and engage in similarly dangerous behavior. Case in point: a striking number of fitness influencers die suddenly and far too young.
Just in the past year, for example, bodybuilder Illia Yefimchyk died at 36 after years of steroid use and spending his days consuming 16,500 calories in a bid to be the "most monstrous bodybuilder," along with his fellow bodybuilders Antonio Souza and Neil Currey. On the opposite end of the spectrum, in 2023, raw food influencer Zhanna Samsonova died at 39 after primarily eating jackfruit and durian for years.

In other words, it's important to remember that lustrous influencer content, no matter if it's the most impressive biceps or the smoothest skin, is never a true indicator of health.

More on influencers: MrBeast Warns Youth Not to Be Like Him
-
FUTURISM.COM
AI Expert Warns Crash Is Imminent As AI Improvements Hit Brick Wall

"The economics are likely to be grim."

Crash and Burn

The scales are falling from the eyes of the tech industry right now, as generative AI models are reportedly hitting a technological brick wall. As some experts have long predicted, improvements that once came easily by simply scaling up large language models, in other words by adding more parameters, training data, and processing power, are now slowing down, if they're yielding any significant gains at all. Gary Marcus, a cognitive scientist and AI skeptic, is warning that once everyone wises up to these shortcomings, the entire industry could crash.

"The economics are likely to be grim," Marcus wrote on his Substack. "Sky high valuation of companies like OpenAI and Microsoft are largely based on the notion that LLMs will, with continued scaling, become artificial general intelligence." "As I have always warned," he added, "that's just a fantasy."

Diminishing Returns

The canary in the coal mine came when The Information reported this week that, behind the scenes, OpenAI researchers discovered that its upcoming flagship model, code-named Orion, demonstrated noticeably less improvement over its predecessor GPT-4 than GPT-4 did over GPT-3. In areas like coding, a major appeal of these LLMs, there may be no improvements at all. This is echoed elsewhere in the industry. Ilya Sutskever, founder of the startup Safe Superintelligence and co-founder and former chief science officer of OpenAI, told Reuters that improvements from scaling up AI models have plateaued. In short, the dogma that "bigger is better" when it comes to AI models, which has underwritten the industry's ludicrous growth, may no longer hold true. This is not the death knell of AI.
"But," Marcus wrote, "the economics will likely never make sense: additional training is expensive; the more scaling, the more costly."

Outside the Box

Per Reuters, training runs for large models can cost tens of millions of dollars, require hundreds of AI chips, and take months to complete. Tech companies have also run out of freely available data to train their models, having practically scraped the entire surface web. "LLMs such as they are, will become a commodity; price wars will keep revenue low. Given the cost of chips, profits will be elusive," Marcus predicts. "When everyone realizes this, the financial bubble may burst quickly."

There may be a way out of this economic rut. As the reports from The Information and Reuters note, OpenAI researchers are developing ways to surmount the scaling problem, such as training models to "think" or "reason" in a way loosely analogous to humans, capabilities previewed in its o1 model. One way they are doing this is through a technique called "test-time compute," in which an AI model explores multiple possibilities for a complex problem and then chooses the most promising one, instead of jumping to a conclusion.

Whether such work will blaze a trail to significant AI improvements will have to be borne out in the long run. As it stands, the AI industry continues to have a profitability problem, and as markets are rarely patient, there could be another AI winter to come if these improvements aren't made fast.
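The "explore multiple possibilities, then pick the most promising" pattern described above is essentially best-of-N sampling with a scorer. The sketch below is illustrative only: the generator and scoring function are toy stand-ins, not any real model's API, and the arithmetic problem is a hypothetical example.

```python
# Toy illustration of the test-time-compute idea: spend extra inference
# compute generating N candidate answers, then keep the best-scoring one.

def best_of_n(generate, score, prompt, n=4):
    # Generate n candidates (more compute at test time), score each,
    # and return the candidate the scorer rates most promising.
    candidates = [generate(prompt, seed=i) for i in range(n)]
    return max(candidates, key=score)

# Toy stand-ins: "solve" 12 * 7 with noisy candidate answers.
def generate(prompt, seed):
    return 84 + (seed - 2)            # candidates: 82, 83, 84, 85

def score(answer):
    return -abs(answer - 12 * 7)      # closer to the true product is better

print(best_of_n(generate, score, "12 * 7"))  # -> 84
```

The design trade-off is exactly the one the article describes: quality improves without a bigger model, but each query now costs N times the inference compute, which is why this shifts rather than eliminates the economics problem.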
-
THEHACKERNEWS.COM
OvrC Platform Vulnerabilities Expose IoT Devices to Remote Attacks and Code Execution
Nov 13, 2024 | Ravie Lakshmanan | Cloud Security / Vulnerability

A security analysis of the OvrC cloud platform has uncovered 10 vulnerabilities that could be chained to allow potential attackers to execute code remotely on connected devices. "Attackers successfully exploiting these vulnerabilities can access, control, and disrupt devices supported by OvrC; some of those include smart electrical power supplies, cameras, routers, home automation systems, and more," Claroty researcher Uri Katz said in a technical report.

Snap One's OvrC, pronounced "oversee," is advertised as a "revolutionary support platform" that enables homeowners and businesses to remotely manage, configure, and troubleshoot IoT devices on the network. According to its website, OvrC solutions are deployed at over 500,000 end-user locations.

According to a coordinated advisory issued by the U.S. Cybersecurity and Infrastructure Security Agency (CISA), successful exploitation of the identified vulnerabilities could allow an attacker to "impersonate and claim devices, execute arbitrary code, and disclose information about the affected device." The flaws impact OvrC Pro and OvrC Connect; the company released fixes for eight of them in May 2023 and for the remaining two on November 12, 2024.

"Many of these issues we found arise from neglecting the device-to-cloud interface," Katz said. "In many of these cases, the core issue is the ability to cross-claim IoT devices because of weak identifiers or similar bugs. These issues range from weak access controls, authentication bypasses, failed input validation, hardcoded credentials, and remote code execution flaws." As a result, a remote attacker could abuse these vulnerabilities to bypass firewalls and gain unauthorized access to the cloud-based management interface.
Even worse, that access could subsequently be weaponized to enumerate and profile devices, hijack devices, elevate privileges, and even run arbitrary code. The most severe of the flaws are listed below:

- CVE-2023-28649 (CVSS v4 score: 9.2), which allows an attacker to impersonate a hub and hijack a device
- CVE-2023-31241 (CVSS v4 score: 9.2), which allows an attacker to claim arbitrary unclaimed devices by bypassing the serial-number requirement
- CVE-2023-28386 (CVSS v4 score: 9.2), which allows an attacker to upload arbitrary firmware updates, resulting in code execution
- CVE-2024-50381 (CVSS v4 score: 9.1), which allows an attacker to impersonate a hub, arbitrarily unclaim devices, and subsequently exploit other flaws to claim them

"With more devices coming online every day and cloud management becoming the dominant means of configuring and accessing services, more than ever, the impetus is on manufacturers and cloud service providers to secure these devices and connections," Katz said. "The negative outcomes can impact connected power supplies, business routers, home automation systems and more connected to the OvrC cloud."

The disclosure comes as Nozomi Networks detailed three security flaws impacting EmbedThis GoAhead, a compact web server used in embedded and IoT devices, that could lead to a denial-of-service (DoS) condition under specific circumstances. The vulnerabilities (CVE-2024-3184, CVE-2024-3186, and CVE-2024-3187) have been patched in GoAhead version 6.0.1.

In recent months, multiple security shortcomings have also been uncovered in Johnson Controls' exacqVision Web Service that could be combined to take control of video streams from surveillance cameras connected to the application and steal credentials.
-
THEHACKERNEWS.COMMicrosoft Fixes 90 New Flaws, Including Actively Exploited NTLM and Task Scheduler BugsNov 13, 2024Ravie LakshmananVulnerability / Patch TuesdayMicrosoft on Tuesday revealed that two security flaws impacting Windows NT LAN Manager (NTLM) and Task Scheduler have come under active exploitation in the wild.The security vulnerabilities are among the 90 security bugs the tech giant addressed as part of its Patch Tuesday update for November 2024. Of the 90 flaws, four are rated Critical, 85 are rated Important, and one is rated Moderate in severity. Fifty-two of the patched vulnerabilities are remote code execution flaws.The fixes are in addition to 31 vulnerabilities Microsoft resolved in its Chromium-based Edge browser since the release of the October 2024 Patch Tuesday update. The two vulnerabilities that have been listed as actively exploited are below -CVE-2024-43451 (CVSS score: 6.5) - Windows NTLM Hash Disclosure Spoofing VulnerabilityCVE-2024-49039 (CVSS score: 8.8) - Windows Task Scheduler Elevation of Privilege Vulnerability"This vulnerability discloses a user's NTLMv2 hash to the attacker who could use this to authenticate as the user," Microsoft said in an advisory for CVE-2024-43451, crediting ClearSky researcher Israel Yeshurun with discovering and reporting the flaw.It's worth noting that CVE-2024-43451 is the third flaw after CVE-2024-21410 (patched in February) and CVE-2024-38021 (patched in July) that can be used to reveal a user's NTLMv2 hash and has been exploited in the wild this year alone."Attackers continue to be adamant about discovering and exploiting zero-day vulnerabilities that can disclose NTLMv2 hashes, as they can be used to authenticate to systems and potentially move laterally within a network to access other systems," Satnam Narang, senior staff research engineer at Tenable, said in a statement.CVE-2024-49039, on the other hand, could allow an attacker to execute RPC functions that are otherwise restricted to privileged 
accounts. However, Microsoft notes that successful exploitation requires an authenticated attacker to run a specially crafted application on the target system to first elevate their privileges to a Medium Integrity Level.

Vlad Stolyarov and Bahare Sabouri of Google's Threat Analysis Group (TAG) and an anonymous researcher have been acknowledged for reporting the vulnerability. This raises the possibility that the zero-day exploitation of the flaw is associated with a nation-state-aligned group or an advanced persistent threat (APT) actor.

There are currently no insights into how the shortcomings are exploited in the wild or how widespread these attacks are, but the development has prompted the U.S. Cybersecurity and Infrastructure Security Agency (CISA) to add them to the Known Exploited Vulnerabilities (KEV) catalog.

One of the publicly disclosed, but not yet exploited, zero-day flaws is CVE-2024-49019 (CVSS score: 7.8), a privilege escalation vulnerability in Active Directory Certificate Services that could be leveraged to obtain domain admin privileges.
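CISA's KEV catalog mentioned above is distributed as a machine-readable JSON feed, which defenders commonly filter for CVEs relevant to their patch cycle. The sketch below shows that kind of filtering against a hand-written stand-in document; the field names follow the published KEV schema, but the entries themselves are illustrative, not the real catalog records.

```python
import json

# Illustrative stand-in shaped like CISA's KEV catalog JSON feed.
# Field names mirror the published schema; entries are hand-written
# examples, not actual catalog records.
kev_feed = json.loads("""
{
  "vulnerabilities": [
    {"cveID": "CVE-2024-43451", "vendorProject": "Microsoft",
     "product": "Windows", "shortDescription": "NTLMv2 hash disclosure spoofing"},
    {"cveID": "CVE-2024-49039", "vendorProject": "Microsoft",
     "product": "Windows", "shortDescription": "Task Scheduler privilege escalation"},
    {"cveID": "CVE-2023-0000", "vendorProject": "ExampleCorp",
     "product": "ExampleApp", "shortDescription": "Unrelated illustrative entry"}
  ]
}
""")

# The two CVEs called out as actively exploited in this release
patch_tuesday_exploited = {"CVE-2024-43451", "CVE-2024-49039"}

# Keep only catalog entries matching this month's exploited CVEs
matches = [v for v in kev_feed["vulnerabilities"]
           if v["cveID"] in patch_tuesday_exploited]

for v in matches:
    print(f"{v['cveID']}: {v['shortDescription']}")
```

In practice the same filter would run over the full feed fetched from CISA's site rather than an embedded sample.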
Details of the vulnerability, dubbed EKUwu, were documented by TrustedSec last month.

Another vulnerability of note is CVE-2024-43498 (CVSS score: 9.8), a critical remote code execution bug in .NET and Visual Studio that a remote unauthenticated attacker could exploit by sending specially crafted requests to a vulnerable .NET web app or by loading a specially crafted file into a vulnerable desktop app.

The update also fixes a critical cryptographic protocol flaw impacting Windows Kerberos (CVE-2024-43639, CVSS score: 9.8) that could be abused by an unauthenticated attacker to perform remote code execution.

The highest-rated vulnerability in this month's release is a remote code execution flaw in Azure CycleCloud (CVE-2024-43602, CVSS score: 9.9), which allows an attacker with basic user permissions to gain root-level privileges.

"Ease of exploitation was as simple as sending a request to a vulnerable AzureCloud CycleCloud cluster that would modify its configuration," Narang said. "As organizations continue to shift into utilizing cloud resources, the attack surface widens as a result."

Lastly, a non-Microsoft-issued CVE addressed by Redmond is a remote code execution flaw in OpenSSL (CVE-2024-5535, CVSS score: 9.1). It was originally patched by OpenSSL maintainers back in June 2024.

"Exploitation of this vulnerability requires that an attacker send a malicious link to the victim via email, or that they convince the user to click the link, typically by way of an enticement in an email or Instant Messenger message," Microsoft said.

"In the worst-case email attack scenario, an attacker could send a specially crafted email to the user without a requirement that the victim open, read, or click on the link.
This could result in the attacker executing remote code on the victim's machine."

Coinciding with the November security update, Microsoft also announced its adoption of the Common Security Advisory Framework (CSAF), an OASIS standard for disclosing vulnerabilities in machine-readable form, for all CVEs in order to accelerate response and remediation efforts.

"CSAF files are meant to be consumed by computers more so than by humans, so we are adding CSAF files as an addition to our existing CVE data channels rather than a replacement," the company said. "This is the beginning of a journey to continue to increase transparency around our supply chain and the vulnerabilities that we address and resolve in our entire supply chain, including Open Source Software embedded in our products."

Software Patches from Other Vendors

Other than Microsoft, security updates have also been released by other vendors over the past few weeks to rectify several vulnerabilities, including
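Because CSAF advisories are JSON documents meant for machine consumption, pulling the affected CVE identifiers out of one is a few lines of code. The fragment below is a hand-made stub, not a real Microsoft advisory: actual CSAF 2.0 files carry far more metadata (tracking history, product trees, scoring), but the top-level `document` and `vulnerabilities` sections sketched here follow the standard's general shape.

```python
import json

# Hand-made fragment in the general shape of a CSAF 2.0 advisory.
# Real advisories are much larger; this stub keeps only the parts
# needed to illustrate machine-readable consumption.
advisory = json.loads("""
{
  "document": {
    "title": "Example Patch Tuesday advisory",
    "tracking": {"id": "EXAMPLE-2024-11"}
  },
  "vulnerabilities": [
    {"cve": "CVE-2024-43498"},
    {"cve": "CVE-2024-43639"}
  ]
}
""")

# List every CVE the advisory addresses
cves = [v["cve"] for v in advisory.get("vulnerabilities", [])]
print(advisory["document"]["title"], "->", ", ".join(cves))
```

This is the kind of automated parsing Microsoft's move to CSAF is intended to enable, feeding vulnerability data directly into response and remediation tooling instead of requiring humans to read HTML advisories.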