• WWW.BUSINESSINSIDER.COM
    Netflix prepared well for its high-stakes NFL streaming debut on Christmas, and it paid off
    Netflix streamed NFL games for the first time on Christmas Day. Technical problems marred a high-profile boxing match last month, but Netflix learned its lessons. Many social media users praised the company for a smooth broadcast after it beefed up capacity.

    After fumbling last month's high-profile boxing match between Mike Tyson and Jake Paul, which was marred by technical problems, Netflix drew praise from many social media users for a smooth broadcast of its first-ever NFL games on Christmas Day. Netflix, with more than 280 million subscribers worldwide, is the home of hit shows like "Squid Game" and "Stranger Things," which have different technical requirements than massive live events. Christmas marked the first time it streamed America's most popular sport, with the Kansas City Chiefs beating the Pittsburgh Steelers. The Baltimore Ravens and Houston Texans followed, featuring a halftime performance by Beyoncé. More than 60 million users tuned into last month's boxing match, exceeding Netflix's and internet service providers' capacity. Netflix's stream of that event was beset by buffering, poor image quality, and audio problems after executives greatly underestimated the size of the audience and failed to beef up capacity, The Wall Street Journal reported. "We were stressing our own technology, we were pushing every ISP in the world right to the limits of their own capacity, we were stressing the limits of the internet itself," Netflix co-CEO Ted Sarandos explained at a conference this month. It was an embarrassing misstep for Netflix, which is set to broadcast Christmas NFL games through 2026 and recently signed a contract to stream the FIFA Women's World Cup in 2027 and 2031. For the Christmas NFL event, executives worked ahead of time with internet service providers like Charter's Spectrum, Comcast's Xfinity, and Verizon's FiOS to increase capacity, the Journal reported. The investment seems to have paid off. However, not everyone was pleased: some social media users complained about glitches, and others disliked being forced to subscribe to yet another streaming service to watch football.
  • WWW.BUSINESSINSIDER.COM
    Every song on Beyoncé's setlist for her Christmas halftime show
    Beyoncé performed live during halftime of the Baltimore Ravens vs. Houston Texans game on Wednesday. The set list included live debuts of "Cowboy Carter" tracks, including "Texas Hold 'Em" and "Ya Ya." She also performed duets with singers Post Malone and Shaboozey.

    Beyoncé took the stage at the Ravens vs. Texans game on Christmas Day, delivering a dynamic NFL halftime show that doubled as another test for Netflix's live event strategy. The 12-minute performance at NRG Stadium in Houston, Beyoncé's hometown, featured live debuts of several tracks from her latest album, "Cowboy Carter," plus multiple duets with special guests, including Beyoncé's 12-year-old daughter Blue Ivy Carter, who joined the performance to line dance during the final song. Here's every song on Beyoncé's Christmas Day setlist, listed chronologically.

    '16 Carriages'
    "16 Carriages" was released alongside "Texas Hold 'Em" as the single's B-side. It has been nominated for Best Country Solo Performance at the 2025 Grammys.

    'Blackbiird'
    Beyoncé performed "Blackbiird" with Tanner Adell, Brittney Spencer, Tiera Kennedy, and Reyna Roberts. The song is a cover of the 1968 classic by The Beatles, which was inspired by the Civil Rights Movement.

    'Ya Ya'
    The 20th track on "Cowboy Carter" is a country-rock banger that interpolates two hits from 1966: Nancy Sinatra's "These Boots Are Made for Walkin'" and The Beach Boys' "Good Vibrations." "Ya Ya" was hailed by critics as a standout upon the album's release and will compete for Best Americana Performance at the Grammys in February. The song was previously used in a promotional video for the 2024 Paris Olympics on NBC, which featured clips of Beyoncé introducing Team USA athletes like Noah Lyles, Sha'Carri Richardson, Caeleb Dressel, Katie Ledecky, and Simone Biles.

    'My House'
    "My House" was released at the end of 2023 as a single ahead of Beyoncé's 2023 film, "Renaissance: A Film by Beyoncé."

    'Spaghettii,' 'Riiverdance,' and 'Sweet Honey Buckiin''
    Beyoncé brought out Shaboozey to perform a medley of their collaborations on "Cowboy Carter," including "Spaghettii," a Grammy nominee for Best Melodic Rap Performance. Following his featured role on "Cowboy Carter," Shaboozey had a breakout year with his own hit, "A Bar Song (Tipsy)." The country-pop anthem topped the Hot 100 for 19 weeks, tying Lil Nas X's "Old Town Road" for the longest streak in history.

    'Levii's Jeans'
    Beyoncé welcomed Post Malone to the stage for a duet of "Levii's Jeans," the 17th track on "Cowboy Carter" and a Grammy nominee for Best Pop Duo/Group Performance. Like Beyoncé and Shaboozey, Malone had a big year. He released his own country album, "F-1 Trillion," in August. The tracklist included collaborations with Nashville legends like Tim McGraw, Dolly Parton, and Chris Stapleton. The album's biggest hit, however, was a duet with Morgan Wallen titled "I Had Some Help," which debuted at No. 1 on the Hot 100 and remained atop the chart for six weeks. Malone also topped the Hot 100 in April by teaming up with Taylor Swift for "Fortnight," the lead single from her record-breaking album "The Tortured Poets Department."

    'Jolene'
    Beyoncé's version of "Jolene" put a twist on country singer Dolly Parton's popular song about infidelity. Parton recorded the introduction to Beyoncé's rendition, telling E! News in a May interview that she was "very proud" of the song's success. "As a songwriter, you love the fact that people do your songs no matter how they do them," Parton said.

    'Texas Hold 'Em'
    "Texas Hold 'Em" was surprise-released as the lead single for "Cowboy Carter" during the 2024 Super Bowl. The song shot to No. 1 on Billboard's Hot Country Songs, making Beyoncé the first Black woman in history to top the chart. "Texas Hold 'Em" also held No. 1 on the Billboard Hot 100 for two weeks and earned three more nods (song of the year, record of the year, and best country song) for the most-nominated and most-awarded performer in Grammys history.
  • GAMERANT.COM
    BLEACH TYBW Theory: The Quincy Robot BG9 is the Arrancar Shawlong Koufang
    The BLEACH universe is expansive and complex, with various configurations of its main states of being becoming possible over the course of the series, in part due to Sōsuke Aizen's (and, to some extent, Kisuke Urahara's) meddling with the boundaries between Shinigami and Hollow. Using the Hōgyoku he extracted from Rukia's soul at the conclusion of the Soul Society arc, Aizen went on to create more Arrancar: Hollows that have gained Shinigami powers by tearing off parts of their masks.
  • GAMERANT.COM
    Halo: Master Chief Collection - In Which Order To Play All Games
    Ask any Xbox owner what their favorite game is, and there's a good chance they'll say Halo. This isn't surprising once you've experienced the magic of the classic Halo titles.
  • GAMERANT.COM
    Adapting This Stephen King Novel Could Hold Answers to Mike Flanagan's The Dark Tower
    In the grand pantheon of Stephen King's works, many have ties to his magnum opus, The Dark Tower series. These novels have one or more connections to the story of Roland the Gunslinger and his quest for The Dark Tower, the grand structure that binds all reality together. The most prominent connections fans are aware of include Stephen King's The Stand, The Shining, 'Salem's Lot, and more, but one book in particular not only has ties to the main antagonist of the Dark Tower series but also features a figure who could hold the key to Roland's mission altogether: Insomnia.
  • WWW.YANKODESIGN.COM
    The Zen Home Offers The Perks & Minimalism Of A Tiny Home In A Truly Tiny Package
    Called the Zen, this tiny home is designed by the Australian tiny house building company Havenn Tiny Houses. It is a highly versatile dwelling that is ideal for glamping enthusiasts. It isn't intended for full-time living; instead, it serves as a multi-purpose single-level dwelling for short-term stays. It isn't exactly a home but an extra space for one, much like an ADU, functioning as an excellent additional space for a separate amenity. Havenn calls this unique and compact abode an "Essential Luxury," as it offers the perks of a full-size home and the minimalism of a tiny home in a truly tiny package.

    Designer: Havenn Tiny Houses

    Zen is half the size of a typical home on wheels, with a length of 14.2 feet, a width of 7.5 feet, and a height of 9.8 feet. It features a simple single-level layout that is ideal for multi-purpose flexibility. It can be used as a lovely self-contained bedroom surrounded by nature, making for the perfect glamping escapade, or as a home office, since it has more than enough room for two office setups. High ceilings, large windows, and a cute French door help create a space that feels spacious and free-flowing.

    It would also work well as a guest bedroom, with sufficient space for a queen-sized bed and essential storage. The home is constructed from natural, sustainably sourced, and recycled materials, and is equipped with energy-efficient insulation and windows, an eco-friendly toilet, and solar panels. Water conservation is achieved via greywater systems or rainwater collection, paired with efficient fixtures. The home is designed to minimize environmental impact while offering a comfortable living space.

    The house is built using termite- and rust-proof frames on a galvanized steel base. Insulation with an R-value of 2.0 provides comfort throughout the year, while premium hybrid flooring adds some luxury. A large skylight above the main section of the home lets in plenty of natural light. The home also includes a fully equipped kitchen and laundry appliances, and the well-designed bathroom offers an eco-friendly toilet option for off-the-grid situations. Compact and minimal yet well-equipped, the Zen makes for an excellent tiny home. Priced at $18,750, it isn't very expensive either, and it lets you connect with the outdoors while reducing your impact on the environment.
  • WWW.MARKTECHPOST.COM
    Meet CoMERA: An Advanced Tensor Compression Framework Redefining AI Model Training with Speed and Precision
    Training large-scale AI models such as transformers and language models has become an indispensable yet highly demanding process in AI. With billions of parameters, these models offer groundbreaking capabilities but come at a steep cost in computational power, memory, and energy consumption. For example, OpenAI's GPT-3 comprises 175 billion parameters and requires weeks of GPU training. Such massive requirements limit these technologies to organizations with substantial computational resources, exacerbating concerns over energy efficiency and environmental impact. Addressing these challenges has become critical to ensuring the broader accessibility and sustainability of AI advancements.

    The inefficiencies in training large models stem primarily from their reliance on dense matrices, which demand significant memory and computing power. The limited support for optimized low-precision or low-rank operations in modern GPUs further compounds these requirements. While some methods, such as matrix factorization and heuristic rank reduction, have been proposed to alleviate these issues, their real-world applicability is constrained. For instance, GaLore enables training in single-batch settings but suffers from impractical runtime overhead. Similarly, LTE, which adopts low-rank adapters, struggles with convergence on large-scale tasks. The lack of a method that simultaneously reduces memory usage, computational cost, and training time without compromising performance has created an urgent need for innovative solutions.

    Researchers from the University at Albany SUNY, the University of California at Santa Barbara, Amazon Alexa AI, and Meta introduced CoMERA (Computing- and Memory-Efficient training method via Rank-Adaptive tensor optimization), a novel framework that combines memory efficiency with computational speed through rank-adaptive tensor compression. Unlike traditional methods that focus solely on compression, CoMERA adopts a multi-objective optimization approach to balance compression ratio and model accuracy. It utilizes tensorized embeddings and advanced tensor-network contractions to optimize GPU utilization, reducing runtime overhead while maintaining robust performance. The framework also uses CUDA Graphs to minimize kernel-launch delays during GPU operations, a significant bottleneck in traditional tensor compression approaches.

    CoMERA's foundation is adaptive tensor representations, which allow model layers to adjust their ranks dynamically based on resource constraints. By modifying tensor ranks, the framework achieves compression without compromising the integrity of neural network operations. This dynamic optimization works through a two-stage training process: an early stage focused on stable convergence, and a late stage that fine-tunes ranks to meet specific compression targets, as illustrated by the sketch below.
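    To make the rank-adaptive idea concrete, here is a minimal, hypothetical PyTorch sketch (a simplification, not the authors' implementation: CoMERA works with tensor-train/tensor-network factorizations and multi-objective rank optimization, whereas this toy stores a single low-rank matrix). The layer trains at full rank during the early stage, then truncates its rank between stages to approach a compression target.

    ```python
    import torch
    import torch.nn as nn

    class RankAdaptiveLinear(nn.Module):
        """Toy linear layer stored as a low-rank factorization W ~ U @ V^T."""

        def __init__(self, in_features: int, out_features: int, max_rank: int):
            super().__init__()
            self.U = nn.Parameter(torch.randn(out_features, max_rank) * 0.02)
            self.V = nn.Parameter(torch.randn(in_features, max_rank) * 0.02)
            self.rank = max_rank  # active rank; shrinks in the late stage

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Never materialize the dense weight: contract through the rank dim.
            r = self.rank
            return (x @ self.V[:, :r]) @ self.U[:, :r].T

        @torch.no_grad()
        def truncate(self, energy: float = 0.99) -> None:
            # Late-stage step: keep the smallest rank that preserves `energy`
            # of the singular-value energy of the implied weight.
            W = self.U[:, :self.rank] @ self.V[:, :self.rank].T
            U, S, Vh = torch.linalg.svd(W, full_matrices=False)
            cum = torch.cumsum(S**2, dim=0) / torch.sum(S**2)
            new_rank = int(torch.searchsorted(cum, torch.tensor(energy))) + 1
            new_rank = min(new_rank, self.rank)
            self.U[:, :new_rank] = U[:, :new_rank] * S[:new_rank]
            self.V[:, :new_rank] = Vh[:new_rank].T
            self.rank = new_rank

    # Early stage: train at max_rank for stable convergence.
    layer = RankAdaptiveLinear(1024, 1024, max_rank=64)
    out = layer(torch.randn(8, 1024))
    # Late stage: truncate to meet a compression target, then fine-tune.
    layer.truncate(energy=0.99)
    ```

    In CoMERA itself, ranks are chosen jointly with accuracy as a multi-objective problem across tensorized layers; the SVD-based truncation above is only a stand-in for that late-stage rank selection.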
    In a six-encoder transformer model, CoMERA achieved compression ratios ranging from 43x in its early stage to an impressive 361x in its late-stage optimizations. It also reduced memory consumption by 9x compared to GaLore, with 2-3x faster training per epoch. When applied to transformer models trained on the MNLI dataset, CoMERA reduced model sizes from 256 MB to as little as 3.2 MB while preserving accuracy. In large-scale recommendation systems like DLRM, it compressed models by 99x and achieved a 7x reduction in peak memory usage. The framework also excelled in pre-training CodeBERT, a domain-specific large language model, where it achieved a 4.23x overall compression ratio and demonstrated a 2x speedup during certain training phases. These results underscore its ability to handle diverse tasks and architectures, extending its applicability across domains.

    The key takeaways from this research are as follows:

    - CoMERA achieved compression ratios of up to 361x for specific layers and 99x for full models, drastically reducing storage and memory requirements.
    - The framework delivered 2-3x faster training times per epoch for transformers and recommendation systems, saving computational resources and time.
    - Using tensorized representations and CUDA Graphs, CoMERA reduced peak memory consumption by 7x, enabling training on smaller GPUs.
    - CoMERA's approach supports diverse architectures, including transformers and large language models, while maintaining or improving accuracy.
    - By lowering the energy and resource demands of training, CoMERA contributes to more sustainable AI practices and makes cutting-edge models accessible to a broader audience.

    In conclusion, CoMERA addresses some of the most significant barriers to AI scalability and accessibility by enabling faster, memory-efficient training. Its adaptive optimization capabilities and compatibility with modern hardware make it a compelling choice for organizations seeking to train large models without incurring prohibitive costs. This study's results pave the way for further exploration of tensor-based optimizations in domains like distributed computing and resource-constrained edge devices.

    Check out the Paper. All credit for this research goes to the researchers of this project.
  • WWW.MARKTECHPOST.COM
    CoordTok: A Scalable Video Tokenizer that Learns a Mapping from Coordinate-based Representations to the Corresponding Patches of Input Videos
    Breaking down videos into smaller, meaningful parts for vision models remains challenging, particularly for long videos. Vision models rely on these smaller parts, called tokens, to process and understand video data, but creating these tokens efficiently is difficult. While recent tools achieve better video compression than older methods, they struggle to handle large video datasets effectively. A key issue is their inability to fully exploit temporal coherence, the natural pattern whereby video frames are often similar over short periods, which video codecs rely on for efficient compression. These tools are also computationally expensive to train and limited to short clips, making them ineffective at capturing patterns in longer videos.

    Current video tokenization methods have high computational costs and struggle to handle long video sequences efficiently. Early approaches used image tokenizers to compress videos frame by frame but ignored the natural continuity between frames, reducing their effectiveness. Later methods introduced spatiotemporal layers, reduced redundancy, and used adaptive encoding, but they still required rebuilding entire video frames during training, which limited them to short clips. Video generation models such as autoregressive methods, masked generative transformers, and diffusion models are likewise limited to short sequences.

    To solve this, researchers from KAIST and UC Berkeley proposed CoordTok, which learns a mapping from coordinate-based representations to the corresponding patches of input videos. Motivated by recent advances in 3D generative models, CoordTok encodes a video into factorized triplane representations and reconstructs the patches corresponding to randomly sampled (x, y, t) coordinates. This allows large tokenizer models to be trained directly on long videos without requiring excessive resources: the video is divided into space-time patches and processed with transformer layers, and the decoder maps sampled (x, y, t) coordinates to the corresponding pixels, reducing both memory and computational costs while preserving video quality. The researchers further introduced a hierarchical architecture that captures both local and global features of the video, letting the model process space-time patches more efficiently and making long-duration video processing possible without excessive computational resources. A minimal sketch of the coordinate-based decoding idea follows below.
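    The sketch below is a hypothetical PyTorch simplification of triplane sampling, not the authors' code: the real CoordTok uses transformer encoders and decoders and reconstructs patches rather than single pixels, and its planes come from an encoder rather than being free parameters. The key point it illustrates is that training supervises only randomly sampled (x, y, t) coordinates, so the cost per step does not grow with video length.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TriplaneVideoDecoder(nn.Module):
        """Decode RGB at arbitrary (x, y, t) coords from factorized triplanes."""

        def __init__(self, H: int = 32, W: int = 32, T: int = 16, C: int = 64):
            super().__init__()
            # Triplane latents; in CoordTok these come from a transformer encoder.
            self.plane_xy = nn.Parameter(torch.randn(1, C, H, W) * 0.01)
            self.plane_xt = nn.Parameter(torch.randn(1, C, T, W) * 0.01)
            self.plane_yt = nn.Parameter(torch.randn(1, C, T, H) * 0.01)
            self.mlp = nn.Sequential(
                nn.Linear(3 * C, 128), nn.ReLU(), nn.Linear(128, 3)
            )

        @staticmethod
        def _sample(plane: torch.Tensor, u: torch.Tensor, v: torch.Tensor):
            # Bilinear lookup at normalized coords in [-1, 1]; u indexes the
            # plane's width axis and v its height axis.
            grid = torch.stack([u, v], dim=-1).view(1, -1, 1, 2)
            feats = F.grid_sample(plane, grid, align_corners=True)
            return feats.squeeze(-1).squeeze(0).T  # -> (N, C)

        def forward(self, x, y, t):
            # x, y, t: 1-D tensors of N normalized coordinates in [-1, 1].
            f = torch.cat(
                [
                    self._sample(self.plane_xy, x, y),
                    self._sample(self.plane_xt, x, t),
                    self._sample(self.plane_yt, y, t),
                ],
                dim=-1,
            )
            return self.mlp(f)  # (N, 3) RGB predictions

    # Training supervises a random subset of coordinates, keeping memory flat
    # regardless of how long the video is.
    dec = TriplaneVideoDecoder()
    x, y, t = (torch.rand(4096) * 2 - 1 for _ in range(3))
    rgb = dec(x, y, t)  # compare with ground-truth pixels at (x, y, t)
    ```

    Factorizing the volume into three planes is what keeps the latent small: storage grows with the plane areas rather than with the full H x W x T grid, while any space-time location can still be decoded on demand.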
    As a result, CoordTok handles longer videos without demanding excessive computational resources. For example, it encoded a 128-frame video at 128×128 resolution into 1280 tokens, while baselines required 6144 or 8192 tokens to achieve similar reconstruction quality. Reconstruction quality was further improved by fine-tuning with both an L2 loss and an LPIPS loss, enhancing the accuracy of the reconstructed frames. This combination of strategies reduced memory usage by up to 50% and cut computational costs while maintaining high-quality video reconstruction, with models like CoordTok-L achieving a PSNR of 26.9.

    In conclusion, CoordTok proves to be an efficient video tokenizer that uses coordinate-based representations to reduce computational costs and memory requirements while encoding long videos. It enables memory-efficient training for video generation models, making it possible to handle long videos with fewer tokens. However, it is not yet robust for highly dynamic videos, and further improvements remain possible, such as using multiple content planes or adaptive methods. This work can serve as a starting point for future research on scalable video tokenizers and generation, which can be beneficial for understanding and generating long videos.

    Check out the Paper and Project. All credit for this research goes to the researchers of this project.