• What's New on Max in June 2025

    Max still goes by Max for now, but the return to the original HBO Max branding is expected sometime this summer. In the meantime, the third season of HBO Original series The Gilded Age is set to drop in weekly installments starting June 22. The period drama, set in 1880s New York, stars Carrie Coon, Christine Baranski, and Cynthia Nixon, among others.

    On the movie slate, there's A24's Parthenope (June 6), a Paolo Sorrentino coming-of-age film set in Naples and starring Celeste Dalla Porta and Gary Oldman, and Cleaner (June 13), about radical activists in present-day London who take hostages at an energy company's annual gala in an attempt to expose corruption. Max is also getting The Day the Earth Blew Up: A Looney Tunes Movie (June 27) and the much-hyped A Minecraft Movie, a fantasy comedy film based on the video game and starring Jason Momoa, Jack Black, Danielle Brooks, Emma Myers, and Jennifer Coolidge. The movie debuted in theaters this spring and is listed as "coming soon" to Max.

    HBO Original documentaries coming in June include The Mortician, which will debut June 1—the three-episode run will dive deep into the family behind California's Lamb Funeral Home and its morally questionable practices. Zackary Drucker's Enigma (June 24) explores transgender legacy and identity through the stories of April Ashley and Amanda Lear, among others, while My Mom Jayne (June 27) follows actress Mariska Hargitay in her journey to learn more about her mom, who died when Hargitay was three.

    Max subscribers will also get a variety of live sports, including NHL playoff games as well as a handful of MLB and U.S. soccer matchups. Here's everything else coming to Max in June.

    What's coming to Max in June 2025

    Available June 1
    A Hologram for the King (2016)
    A Nightmare on Elm Street (2010)
    A Perfect Getaway (2009)
    Backtrack (2016)
    Batman and Superman: Battle of the Super Sons (2022)
    Black Patch (1957)
    Blues in the Night (1941)
    Casino (1995)
    Fight Club (1999)
    Gentleman Jim (1942)
    Hellboy (2004)
    I Am Not Your Negro (2017)
    Igor (2008)
    Illegal (1955)
    In the Good Old Summertime (1949)
    Invasion of the Body Snatchers (1978)
    Kid Glove Killer (1942)
    Meet Me in St. Louis (1944)
    My Scientology Movie (2017)
    Numbered Men (1930)
    One Foot in Heaven (1941)
    Parasite (2019)
    Presenting Lily Mars (1943)
    Pride & Prejudice (2005)
    Public Enemies (2009)
    Reign of the Supermen (2019)
    Serenade (1956)
    Silver River (1948)
    Spaceballs (1987)
    Split (2017)
    Strike Up the Band (1940)
    Summer Stock (1950)
    Superman: Man of Tomorrow (2020)
    Superman: Red Son (2020)
    Superman: Unbound (2013)
    Superman/Batman: Public Enemies (2009)
    Thank Your Lucky Stars (1943)
    The Death of Superman (2018)
    The Fighting 69th (1940)
    The Harvey Girls (1946)
    The Hunger Games (2012)
    The Hunger Games: Catching Fire (2013)
    The Hunger Games: Mockingjay Part 1 (2014)
    The Hunger Games: Mockingjay Part 2 (2015)
    The Man Who Invented Christmas (2017)
    The Match King (1932)
    The Mayor of Hell (1933)
    The Mortician (HBO Original)
    The Nitwits (1935)
    The Prince and the Pauper (1937)
    The Sea Chase (1955)
    The Sea Hawk (1940)
    The Sunlit Night (2019)
    The Verdict (1946)
    They Made Me a Criminal (1939)
    This Side of the Law (1950)
    Three Faces East (1930)
    Three Strangers (1946)
    Total Drama Island, Season 2 (Cartoon Network)
    Wagons West (1952)
    Words and Music (1948)
    You'll Find Out (1940)
    Ziegfeld Follies (1946)

    Available June 2
    BBQ Brawl, Season 6 (Food Network)

    Available June 3
    Bullet Train (2022)
    Ugliest House in America, Season 6 (HGTV)

    Available June 4
    1000-lb Roomies, Season 1 (TLC)
    Fatal Destination, Season 1 (ID)

    Available June 5
    Bea's Block, Season 1C (Max Original)
    Chespirito: Not Really on Purpose, Season 1 (Max Original)

    Available June 6
    House Hunters International: Volume 9, Season 201 (HGTV)
    Parthenope (A24)

    Available June 10
    Virgins, Season 1 (TLC)

    Available June 11
    Guy's Grocery Games, Season 38 (Food Network)

    Available June 12
    Bitchin' Rides, Season 11
    Mini Beat Power Rockers: A Superheroic Night (Discovery International)

    Available June 13
    Cleaner (2025)
    House Hunters: Volume 10, Season 240 (HGTV)
    Maine Cabin Masters, Season 10 (Magnolia Network)
    Super Sara (Max Original)
    Toad & Friends, Season 1B

    Available June 16
    Hero Ball, Season 3B

    Available June 17
    Dr. Sanjay Gupta Reports: Animal Pharm (CNN Originals, 2025)
    Super Mega Cakes, Season 1 (Food Network)

    Available June 19
    Expedition Unknown, Season 15 (Discovery)
    Mystery At Blind Frog Ranch, Season 5 (Discovery)

    Available June 20
    House Hunters: Volume 10, Season 241 (HGTV)
    Lu & The Bally Bunch, Season 1C (Cartoon Network)
    Now or Never: FC Montfermeil (Max Original)
    Teen Titans Go!, Season 9B (Cartoon Network)

    Available June 21
    The Kitchen, Season 38 (Food Network)
    The Never Ever Mets, Season 2 (OWN)

    Available June 22
    The Gilded Age, Season 3 (HBO Original)

    Available June 23
    Match Me Abroad, Season 2 (TLC)

    Available June 24
    Enigma (HBO Original)
    Mean Girl Murders, Season 3 (ID)
    The Invitation (2022)

    Available June 25
    Rehab Addict, Season 10 (HGTV)

    Available June 27
    House Hunters: Volume 10, Season 242 (HGTV)
    My Mom Jayne (HBO Original)
    Pati, Seasons 1 & 2 (Max Original)
    The Day the Earth Blew Up: A Looney Tunes Movie (2025)

    Available June 29
    #Somebody's Son, Season 1 (OWN)
    Family or Fiancé, Season 4 (OWN)

    Available June 30
    90 Day Fiancé: Pillow Talk, Season 11 (TLC)
    Truck U, Season 21
  • Everything New on Max in June 2025

    This month on Max you'll get the return of The Gilded Age, the historical drama about life in 1880s New York City, now entering its third season. (On Max? HBO? On HBO Max? I'm lost and confused and scared and all I'm trying to do is give you a list of movies and shows that will be available over the next few months on streaming.) You'll also get the streaming debuts of recent films Parthenope, The Day the Earth Blew Up: A Looney Tunes Movie, and A Minecraft Movie, the 2025 blockbuster that sent TikTok teens everywhere into a frenzy. Plus a new weekly documentary series called The Mortician explores the dark secret behind a California funeral home.

    Here's the full list of what's coming to (HBO) Max in June 2025...

    June 1
    A Hologram for the King (2016)
    A Nightmare on Elm Street (2010)
    A Perfect Getaway (2009)
    Backtrack (2016)
    Batman and Superman: Battle of the Super Sons (2022)
    Black Patch (1957)
    Blues in the Night (1941)
    Casino (1995)
    Fight Club (1999)
    Gentleman Jim (1942)
    Hellboy (2004)
    I Am Not Your Negro (2017)
    Igor (2008)
    Illegal (1955)
    In the Good Old Summertime (1949)
    Invasion of the Body Snatchers (1978)
    Kid Glove Killer (1942)
    Meet Me in St. Louis (1944)
    My Scientology Movie (2017)
    Numbered Men (1930)
    One Foot in Heaven (1941)
    Parasite (2019)
    Presenting Lily Mars (1943)
    Pride & Prejudice (2005)
    Public Enemies (2009)
    Reign of the Supermen (2019)
    Serenade (1956)
    Silver River (1948)
    Spaceballs (1987)
    Split (2017)
    Strike Up the Band (1940)
    Summer Stock (1950)
    Superman: Man of Tomorrow (2020)
    Superman: Red Son (2020)
    Superman: Unbound (2013)
    Superman/Batman: Public Enemies (2009)
    Thank Your Lucky Stars (1943)
    The Death of Superman (2018)
    The Fighting 69th (1940)
    The Harvey Girls (1946)
    The Hunger Games (2012)
    The Hunger Games: Catching Fire (2013)
    The Hunger Games: Mockingjay Part 1 (2014)
    The Hunger Games: Mockingjay Part 2 (2015)
    The Man Who Invented Christmas (2017)
    The Match King (1932)
    The Mayor of Hell (1933)
    The Mortician (HBO Original)
    The Nitwits (1935)
    The Prince and the Pauper (1937)
    The Sea Chase (1955)
    The Sea Hawk (1940)
    The Sunlit Night (2019)
    The Verdict (1946)
    They Made Me a Criminal (1939)
    This Side of the Law (1950)
    Three Faces East (1930)
    Three Strangers (1946)
    Total Drama Island, Season 2 (Cartoon Network)
    Wagons West (1952)
    Words and Music (1948)
    You'll Find Out (1940)
    Ziegfeld Follies (1946)

    June 2
    BBQ Brawl, Season 6 (Food Network)

    June 3
    Bullet Train (2022)
    Ugliest House in America, Season 6 (HGTV)

    June 4
    1000-lb Roomies, Season 1 (TLC)
    Fatal Destination, Season 1 (ID)

    June 5
    Bea's Block, Season 1C (Max Original)
    Chespirito: Not Really on Purpose, Season 1 (Max Original)

    June 6
    House Hunters International: Volume 9, Season 201 (HGTV)
    Parthenope (A24)

    June 10
    Virgins, Season 1 (TLC)

    June 11
    Guy's Grocery Games, Season 38 (Food Network)

    June 12
    Bitchin' Rides, Season 11
    Mini Beat Power Rockers: A Superheroic Night (Discovery International)

    June 13
    Cleaner (2025)
    House Hunters: Volume 10, Season 240 (HGTV)
    Maine Cabin Masters, Season 10 (Magnolia Network)
    Super Sara (Max Original)
    Toad & Friends, Season 1B

    June 16
    Hero Ball, Season 3B

    June 17
    Dr. Sanjay Gupta Reports: Animal Pharm (CNN Originals, 2025)
    Super Mega Cakes, Season 1 (Food Network)

    June 19
    Expedition Unknown, Season 15 (Discovery)
    Mystery At Blind Frog Ranch, Season 5 (Discovery)

    June 20
    House Hunters: Volume 10, Season 241 (HGTV)
    Lu & The Bally Bunch, Season 1C (Cartoon Network)
    Now or Never: FC Montfermeil (Max Original)
    Teen Titans Go!, Season 9B (Cartoon Network)

    June 21
    The Kitchen, Season 38 (Food Network)
    The Never Ever Mets, Season 2 (OWN)

    June 22
    The Gilded Age, Season 3 (HBO Original)

    June 23
    Match Me Abroad, Season 2 (TLC)

    June 24
    Enigma (HBO Original)
    Mean Girl Murders, Season 3 (ID)
    The Invitation (2022)

    June 25
    Rehab Addict, Season 10 (HGTV)

    June 27
    House Hunters: Volume 10, Season 242 (HGTV)
    My Mom Jayne (HBO Original)
    Pati, Seasons 1 & 2 (Max Original)
    The Day the Earth Blew Up: A Looney Tunes Movie (2025)

    June 29
    #Somebody's Son, Season 1 (OWN)
    Family or Fiancé, Season 4 (OWN)

    June 30
    90 Day Fiancé: Pillow Talk, Season 11 (TLC)
    Truck U, Season 21
  • Boost 2-Bit LLM Accuracy with EoRA

    Quantization is one of the key techniques for reducing the memory footprint of large language models (LLMs). It works by converting the data type of model parameters from higher-precision formats such as 32-bit floating point (FP32) or 16-bit floating point (FP16/BF16) to lower-precision integer formats, typically INT8 or INT4. For example, quantizing a model to 4-bit means each parameter uses only 0.5 bytes, compared to 4 bytes in FP32.

    Post-training quantization methods like GPTQ and AWQ can dramatically reduce the size of large models. A model like Llama 3 with 70 billion parameters can occupy around 140 GB in FP16, but this can be reduced to approximately 40 GB using 4-bit quantization, while still maintaining strong performance on downstream tasks.
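
    To make those numbers concrete, here is a rough back-of-the-envelope sketch in Python. It estimates the weight-only memory footprint at different precisions; real checkpoints are slightly larger because quantized formats also store metadata such as group-wise scales and zero points.

    # Weight-only memory estimate; illustrative, ignores quantization metadata.
    def weight_memory_gb(num_params: float, bits_per_param: float) -> float:
        return num_params * bits_per_param / 8 / 1e9

    params = 70e9  # e.g., a 70B-parameter model such as Llama 3 70B
    for label, bits in [("FP32", 32), ("FP16", 16), ("INT4", 4), ("INT2", 2)]:
        print(f"{label}: ~{weight_memory_gb(params, bits):.0f} GB")
    # FP16 -> ~140 GB, INT4 -> ~35 GB before metadata (hence the ~40 GB quoted above)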

    However, despite this substantial reduction, such models still exceed the memory capacity of most consumer-grade GPUs, which typically offer 24 GB to 32 GB of VRAM. To make these models truly accessible, quantization to even lower bitwidths, such as 2-bit, is required. While recent advances in low-bit quantization are promising, achieving stable and accurate 2-bit quantization remains a significant challenge.

    In this article, we review a technique called EoRA that helps compensate for quantization-induced errors. EoRA is a training-free method, meaning it can be applied quickly and efficiently to any model, even the largest ones. We’ll check how EoRA works and demonstrate how it can significantly improve the performance of 2-bit quantized models, bringing them close to the accuracy of their full-precision counterparts while being up to 5.5x smaller.

    We’ll analyze experimental results obtained using large models such as Qwen3-32B and Qwen2.5-72B, both quantized to 2-bit using state-of-the-art quantization techniques, to illustrate the effectiveness of EoRA.

    Diving into the Eigenspace in Search of an Adapter

    Post-training quantization or, more generally, compression aims to reduce model size or inference cost by minimizing the output difference between the original weights Wl and the compressed weights Ŵl, using only a small calibration dataset.

    Most quantization methods are framed layer-wise, but the choice of compression formats is rigid and limits flexibility across diverse deployment needs.

    To bypass format constraints and improve accuracy, previous work, such as QLoRA [1] and HQQ+ [2], directly fine-tuned a LoRA adapter on top of the frozen quantized models.

    It is also possible to reframe compression as a compensation problem: given a compressed model, introduce low-rank residual paths that specifically correct compression errors.

    A straightforward method uses SVD to decompose the compression error:

    \[\Delta W_l = W_l - \hat{W}_l\]

    into

    \[U_l \Sigma_l V_l^T\]

    forming low-rank approximations via two matrices:

    \[B_l = U_l \Sigma_l\]

    \[A_l = V_l^T\]

    where A_l and B_l are the standard tensors of a LoRA adapter.
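    As a quick illustration of this baseline, the sketch below builds rank-r LoRA-style factors from a plain truncated SVD of the error. It is a hypothetical NumPy helper written for this article, not code from the paper; W and W_hat stand for one layer's original and compressed weight matrices.

    import numpy as np

    def svd_low_rank_factors(W, W_hat, rank):
        # Plain SVD baseline: approximate the compression error with rank-r factors.
        U, S, Vt = np.linalg.svd(W - W_hat, full_matrices=False)
        B = U[:, :rank] * S[:rank]   # B_l = U_l Sigma_l, truncated to the top singular values
        A = Vt[:rank, :]             # A_l = V_l^T, truncated
        return B, A                  # W_hat + B @ A approximates W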

    However, plain SVD has two limitations: it does not minimize the original layerwise compression loss directly, and it allocates capacity uniformly across all error components, ignoring the varying importance of different parts of the model.

    To address this, NVIDIA proposes EoRA [3].

    EoRA: Training-free Compensation for Compressed LLM with Eigenspace Low-Rank Approximation

    EoRA first projects the compression error into the eigenspace defined by the input activation covariance:

    \[\tilde{X} \tilde{X}^T\]

    where X̃ is the average activation over the calibration set. Then, by performing eigendecomposition, we get:

    \[\tilde{X} \tilde{X}^T = Q \Lambda Q^T\]

    The compression error ΔW is projected as:

    \[\Delta W' = \Delta W Q'\]

    where Q′ = QΛ. Then SVD is applied on ΔW′ to produce a low-rank approximation, and the result is projected back to the original space, adjusting the low-rank factors accordingly.

    This eigenspace projection changes the optimization objective: it weights the importance of different error components according to their contribution to the layerwise output (via the eigenvalues), making the approximation more efficient. It can be computed quickly without any training, requires only calibration activations, and does not introduce extra inference latency. Moreover, the derivation shows that this approach leads to a direct minimization of the layerwise compression loss, not just the raw weight error.

    Analytically, truncating a singular value in the projected space corresponds to minimizing the true compression error under reasonable assumptions about the calibration activations.
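
    Putting these steps together, here is a minimal NumPy sketch of the eigenspace-projected variant described above. It assumes we have one layer's original weight W, its compressed weight W_hat, and a matrix X_tilde of average calibration activations; the names and shapes are illustrative, and this is not the GPTQModel or NVIDIA implementation.

    import numpy as np

    def eora_factors(W, W_hat, X_tilde, rank):
        # W, W_hat: (d_out, d_in); X_tilde: (d_in, n) average calibration activations.
        delta_W = W - W_hat                              # compression error
        cov = X_tilde @ X_tilde.T                        # activation covariance X~ X~^T
        eigvals, Q = np.linalg.eigh(cov)                 # cov = Q diag(eigvals) Q^T
        Q_prime = Q * eigvals                            # Q' = Q Lambda
        delta_W_proj = delta_W @ Q_prime                 # project the error into the eigenspace
        U, S, Vt = np.linalg.svd(delta_W_proj, full_matrices=False)
        B = U[:, :rank] * S[:rank]                       # low-rank factor in the projected space
        A = Vt[:rank, :] @ np.linalg.pinv(Q_prime)       # project back to the original space
        return B, A                                      # W_hat + B @ A compensates the error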

    In their paper, NVIDIA presents a wide range of strong results showing that EoRA can significantly boost the accuracy of quantized models. However, their experiments focus mostly on older quantization methods like GPTQ and are limited to mid-sized LLMs, up to 13B parameters, at 3-bit and 4-bit precisions.

    This leaves an open question: can EoRA still be effective for much larger models, using more modern quantization techniques, and even pushing down to 2-bit precision?

    Let’s find out.

    Calibrating an EoRA Adapter

    Suppose we have quantized models that show significantly degraded performance compared to their full-precision counterparts on certain tasks. Our goal is to reduce this performance gap using EoRA.

    For the experiments, I used Qwen2.5-72B Instruct and Qwen3-32B, both quantized to 2-bit using AutoRound, a state-of-the-art quantization algorithm developed by Intel. AutoRound leverages SignSGD optimization to fine-tune quantization, and is particularly effective for low-bit settings.

    All the models I made are available here:

    Quantized Qwen3

    Quantized Qwen2.5

    The 2-bit models were quantized with a group size of 32, except for one, which used a group size of 128. A larger group size reduces model size by storing less quantization metadata, but it introduces greater quantization error.
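    A rough way to see the effect of the group size is to count the metadata per group. Assuming, purely for illustration, one 16-bit scale and one 16-bit zero point per group (the exact layout depends on the quantization format), the effective bits per weight work out as follows:

    def effective_bits_per_weight(weight_bits, group_size, scale_bits=16, zero_bits=16):
        # Each group of `group_size` weights stores one scale and one zero point.
        return weight_bits + (scale_bits + zero_bits) / group_size

    for g in (32, 128):
        print(f"group size {g}: ~{effective_bits_per_weight(2, g):.2f} bits/weight")
    # group size 32  -> ~3.00 bits/weight
    # group size 128 -> ~2.25 bits/weight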

    I evaluated the models on IFEval, a benchmark that measures instruction-following capabilities. Results showed a noticeable drop in performance for the quantized versions.

    [Figure: IFEval scores of the original models and their 2-bit quantized versions. Image by the author]

    To compensate for this degradation, I applied an EoRA adapter using the implementation provided in the GPTQModel library. The integration is straightforward. If you’re curious about how it’s implemented in PyTorch, the codebase is compact, clean, and easy to follow:

    GPTQModel’s EoRA implementation: eora.py

    EoRA requires a calibration dataset. Ideally, this dataset should reflect the model’s intended use case. However, since we don’t have a specific target task in this context and aim to preserve the model’s general capabilities, I used 1,024 randomly sampled examples from the C4 dataset.

    Another key parameter is the LoRA rank, which greatly influences the effectiveness of the EoRA adapter. Its optimal value depends on the model architecture, the target task, and the calibration data. A higher rank may yield better performance but risks overfitting to the calibration set. It also increases the size of the adapter, which is counterproductive when the overall goal of quantization is to reduce memory usage. Conversely, a lower rank keeps the adapter lightweight but might not capture enough information to effectively compensate for quantization errors.

    In my experiments, I tested LoRA ranks of 32, 64, and 256.

    Below is the code used to create the EoRA adapter with GPTQModel:

    from gptqmodel import GPTQModel
    from gptqmodel.adapter.adapter import Lora
    from datasets import load_dataset

    # Calibration data: 1,024 text samples from C4
    calibration_dataset = load_dataset(
        "allenai/c4",
        data_files="en/c4-train.00001-of-01024.json.gz",
        split="train",
        download_mode="force_redownload"
    ).select(range(1024))["text"]

    eora_adapter_path = "Qwen3-32B-autoround-2bit-gptq-r256"
    model_path = "kaitchup/Qwen3-32B-autoround-2bit-gptq"

    # EoRA adapter configuration: output path and rank
    eora = Lora(
        path=eora_adapter_path,
        rank=256,
    )

    # Generate the adapter from the full-precision and quantized checkpoints
    GPTQModel.adapter.generate(
        adapter=eora,
        model_id_or_path="Qwen/Qwen3-32B",
        quantized_model_id_or_path=model_path,
        calibration_dataset=calibration_dataset,
        calibration_dataset_concat_size=0,
        auto_gc=False)

    Using an NVIDIA A100 GPU on RunPod, it took approximately 4 hours to generate the EoRA adapter for the model Qwen3-32B-autoround-2bit-gptq.

    All EoRA adapters created for these models are publicly available:

    EoRA Adapters for Qwen2.5 and Qwen3

    Evaluating EoRA Adapters for 2-bit LLMs

    Let’s evaluate the effect of the EoRA adapters. Do they improve the accuracy of the 2-bit models?

    [Figure: IFEval scores of the 2-bit models with and without EoRA adapters at different LoRA ranks. Image by the author]

    It works!

    The improvements are particularly notable for Qwen3-14B and Qwen3-32B. For instance, applying EoRA to Qwen3-32B, quantized to 2-bit with a group size of 128, resulted in an accuracy gain of nearly 7.5 points. Increasing the LoRA rank, from 32 to 64, also led to improvements, highlighting the impact of rank on performance.

    EoRA is also effective on larger models like Qwen2.5-72B, though the gains are more modest. Lower-rank adapters showed little to no benefit on this model; it wasn’t until I increased the rank to 256 that significant improvements began to appear.

    Memory Consumption of EoRA

    Using the EoRA adapter during inference results in the following increase in memory consumption:

    [Figure: Memory overhead added by the EoRA adapters at different LoRA ranks. Image by the author]

    The overhead is generally negligible. For instance, for 2-bit Qwen3-14B, the adapters add only 257 MB and 514 MB to the total model size, at ranks 32 and 64, respectively. With larger ranks, using an EoRA adapter becomes questionable, as the total memory consumption may surpass that of the same model quantized at a higher precision. For instance, 2-bit Qwen2.5 72B with an EoRA adapter of rank 256 is larger than 3-bit Qwen2.5 72B.

    Note: This estimate includes only the memory consumed by the adapter's parameters. For completeness, we could also account for the memory used by adapter activations during inference. However, these are extremely small relative to other tensors (such as the model's attention and MLP layers) and can safely be considered negligible.
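    For reference, the adapter overhead is easy to estimate by hand: a rank-r LoRA pair on a linear layer of shape (d_out, d_in) adds r × (d_in + d_out) parameters. The sketch below uses hypothetical layer shapes and assumes the adapter weights are stored in FP16 (2 bytes per parameter).

    def adapter_size_mb(layer_shapes, rank, bytes_per_param=2):
        # layer_shapes: (d_out, d_in) for every linear layer that receives an adapter.
        params = sum(rank * (d_in + d_out) for d_out, d_in in layer_shapes)
        return params * bytes_per_param / 1e6

    # Hypothetical example: 40 blocks, four 5120x5120 projections adapted per block.
    shapes = [(5120, 5120)] * 4 * 40
    for r in (32, 64, 256):
        print(f"rank {r}: ~{adapter_size_mb(shapes, r):.0f} MB")  # grows linearly with the rank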

    Conclusion

    EoRA works. We’ve confirmed that it’s a simple yet effective method for compensating quantization errors, even at 2-bit precision. It’s intuitive, training-free, and delivers meaningful performance gains. That said, there are a few trade-offs to consider:

    Rank search: Finding the optimal LoRA rank requires experimentation. It’s difficult to predict in advance whether a rank of 32 will be sufficient or whether a higher rank, like 256, will cause overfitting. The optimal value depends on the model, calibration data, and target task.

    Increased memory consumption: The goal of quantization is to reduce memory usage, often for highly constrained environments. While EoRA adapters are relatively lightweight at lower ranks, they do slightly increase memory consumption, particularly at higher ranks, reducing the overall efficiency of 2-bit quantization.

    Looking ahead, NVIDIA's paper also demonstrates that EoRA adapters make excellent starting points for QLoRA fine-tuning. In other words, if you plan to fine-tune a 2-bit model using QLoRA, initializing from an EoRA-adapted model can lead to better results with less training effort. I wrote about fine-tuning adapters for GPTQ models last year, in my newsletter:

    QLoRA with AutoRound: Cheaper and Better LLM Fine-tuning on Your GPU

    The main difference is that instead of initializing the adapter from scratch, we would load the EoRA adapter. This adapter will be fine-tuned.
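    As a rough sketch of that workflow, and assuming the EoRA adapter is exported in a PEFT-compatible LoRA format and that the 2-bit GPTQ checkpoint loads through transformers on your setup, resuming from the adapter rather than from a random initialization could look like this:

    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base = "kaitchup/Qwen3-32B-autoround-2bit-gptq"   # quantized base model from this article
    adapter = "Qwen3-32B-autoround-2bit-gptq-r256"    # EoRA adapter generated above

    model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")
    tokenizer = AutoTokenizer.from_pretrained(base)

    # Load the EoRA adapter as the starting point and mark it trainable for QLoRA-style fine-tuning.
    model = PeftModel.from_pretrained(model, adapter, is_trainable=True)
    # ...then continue with your usual supervised fine-tuning loop.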

    References

    [1] Dettmers et al., QLoRA: Efficient Finetuning of Quantized LLMs (2023), arXiv

    [2] Badri and Shaji, Towards 1-bit Machine Learning Models (2024), Mobius Labs' Blog

    [3] Liu et al., EoRA: Training-free Compensation for Compressed LLM with Eigenspace Low-Rank Approximation (2024), arXiv
    The post Boost 2-Bit LLM Accuracy with EoRA appeared first on Towards Data Science.
    #boost #2bit #llm #accuracy #with
    Boost 2-Bit LLM Accuracy with EoRA
    Quantization is one of the key techniques for reducing the memory footprint of large language models. It works by converting the data type of model parameters from higher-precision formats such as 32-bit floating pointor 16-bit floating pointto lower-precision integer formats, typically INT8 or INT4. For example, quantizing a model to 4-bit means each parameter uses only 0.5 bytes, compared to 4 bytes in FP32. Post-training quantization methods like GPTQ and AWQ can dramatically reduce the size of large models. A model like Llama 3 with 70 billion parameters can occupy around 140 GB in FP16, but this can be reduced to approximately 40 GB using 4-bit quantization, while still maintaining strong performance on downstream tasks. However, despite this substantial reduction, such models still exceed the memory capacity of most consumer-grade GPUs, which typically offer 24 GB to 32 GB of VRAM. To make these models truly accessible, quantization to even lower bitwidths, such as 2-bit, is required. While recent advances in low-bit quantization are promising, achieving stable and accurate 2-bit quantization remains a significant challenge. In this article, we review a technique called EoRA that helps compensate for quantization-induced errors. EoRA is a training-free method, meaning it can be applied quickly and efficiently to any model, even the largest ones. We’ll check how EoRA works and demonstrate how it can significantly improve the performance of 2-bit quantized models, bringing them close to the accuracy of their full-precision counterparts while being up to 5.5x smaller. We’ll analyze experimental results obtained using large models such as Qwen3-32B and Qwen2.5-72B, both quantized to 2-bit using state-of-the-art quantization techniques, to illustrate the effectiveness of EoRA. Diving into the Eigenspace in Search of an Adapter Post-training quantization or, more generally, compression aims to reduce model size or inference cost by minimizing the output difference between the original weights Wl​ and compressed weights Ŵl  using only a small calibration dataset. Most quantization methods are framed layer-wise, but the choice of compression formats is rigid and limits flexibility across diverse deployment needs. To bypass format constraints and improve accuracy, previous work, such as QLoRAand HQQ+, directly fine-tuned a Lora adapter on top of the frozen quantized models. It is also possible to reframe compression as a compensation problem: given a compressed model, introduce low-rank residual paths that specifically correct compression errors. A straightforward method uses SVD to decompose the compression error: \into \forming low-rank approximations via two matrices: \\where Al and Bl are the standard tensors of a LoRA adapter. However, plain SVD has two limitations: it does not minimize the original layerwise compression loss directly, and it allocates capacity uniformly across all error components, ignoring the varying importance of different parts of the model. To address this, NVIDIA proposes EoRA. EoRA: Training-free Compensation for Compressed LLM with Eigenspace Low-Rank Approximation EoRA first projects the compression error into the eigenspace defined by the input activation covariance: \where X̃ is the average activation over the calibration set. Then, by performing eigendecomposition, we get: \The compression error ΔW is projected as: \where Q′=QΛ. 
Then SVD is applied on ΔW′ to produce a low-rank approximation, and the result is projected back to the original space, adjusting the low-rank factors accordingly. This eigenspace projection changes the optimization objective: it weights the importance of different error components according to their contribution to the layerwise output, making the approximation more efficient. It can be computed quickly without any training, requires only calibration activations, and does not introduce extra inference latency. Moreover, the derivation shows that this approach leads to a direct minimization of the layerwise compression loss, not just the raw weight error. Analytically, truncating a singular value in the projected space corresponds to minimizing the true compression error under reasonable assumptions about the calibration activations. In their paper, NVIDIA presents a wide range of strong results showing that EoRA can significantly boost the accuracy of quantized models. However, their experiments focus mostly on older Quantization methods like GPTQ and are limited to mid-sized LLMs, up to 13B parameters, at 3-bit and 4-bit precisions. This leaves an open question: can EoRA still be effective for much larger models, using more modern quantization techniques, and even pushing down to 2-bit precision? Let’s find out. Calibrating an EoRA Adapter Suppose we have quantized models that show significantly degraded performance compared to their full-precision counterparts on certain tasks. Our goal is to reduce this performance gap using EoRA. For the experiments, I used Qwen2.5-72B Instruct and Qwen3-32B, both quantized to 2-bit using AutoRound, a state-of-the-art quantization algorithm developed by Intel. AutoRound leverages SignSGD optimization to fine-tune quantization, and is particularly effective for low-bit settings. All the models I made are available here: Quantized Qwen3 Quantized Qwen2.5 The 2-bit models were quantized with a group size of 32, except for which used a group size of 128. A larger group size reduces model size by storing less quantization metadata, but it introduces greater quantization error. I evaluated the models on IFEval, a benchmark that measures instruction-following capabilities. Results showed a noticeable drop in performance for the quantized versions. Image by the author To compensate for this degradation, I applied an EoRA adapter using the implementation provided in the GPTQModel library. The integration is straightforward. If you’re curious about how it’s implemented in PyTorch, the codebase is compact, clean, and easy to follow: GPTQModel’s EoRA implementation: eora.py EoRA requires a calibration dataset. Ideally, this dataset should reflect the model’s intended use case. However, since we don’t have a specific target task in this context and aim to preserve the model’s general capabilities, I used 1,024 randomly sampled examples from the C4 dataset. Another key parameter is the LoRA rank, which greatly influences the effectiveness of the EoRA adapter. Its optimal value depends on the model architecture, the target task, and the calibration data. A higher rank may yield better performance but risks overfitting to the calibration set. It also increases the size of the adapter, counterproductive when the overall goal of quantization is to reduce memory usage. Conversely, a lower rank keeps the adapter lightweight but might not capture enough information to effectively compensate for quantization errors. 
In my experiments, I tested LoRA ranks of 32, 64, and 256. Below is the code used to create the EoRA adapter with GPTQModel: from gptqmodel import GPTQModel from gptqmodel.adapter.adapter import Lora from datasets import load_dataset calibration_dataset = load_dataset.select)eora_adapter_path = "Qwen3-32B-autoround-2bit-gptq-r256" model_path = "kaitchup/Qwen3-32B-autoround-2bit-gptq" eora = LoraGPTQModel.adapter.generateUsing an NVIDIA A100 GPU on RunPod, it took approximately 4 hours to generate the EoRA adapter for the model Qwen3-32B-autoround-2bit-gptq. All EoRA adapters created for these models are publicly available: EoRA Adapters for Qwen2.5 and Qwen3 Evaluating EoRA Adapters for 2-bit LLMs Let’s evaluate the effect of the EoRA adapters. Do they improve the accuracy of the 2-bit models? Image by the author It works! The improvements are particularly notable for Qwen3-14B and Qwen3-32B. For instance, applying EoRA to Qwen3-32B, quantized to 2-bit with a group size of 128, resulted in an accuracy gain of nearly 7.5 points. Increasing the LoRA rank, from 32 to 64, also led to improvements, highlighting the impact of rank on performance. EoRA is also effective on larger models like Qwen2.5-72B, though the gains are more modest. Lower-rank adapters showed little to no benefit on this model; it wasn’t until I increased the rank to 256 that significant improvements began to appear. Memory Consumption of EoRA Using the EoRA adapter during inference results in the following increase in memory consumption: Image by the author The overhead is generally negligible. For instance for 2-bit Qwen3-14B, the adapters only add 257 MB and 514 MB to the total model size, with ranks of 32 and 64. With larger ranks, using an EoRA adapter becomes questionable as the total memory consumption may surpass the memory consumption of the same model quantized at a higher precision. For instance, 2-bit Qwen2.5 72B with an EoRA adapter of rank 256 is larger than 3-bit Qwen2.5 72B. Note: This estimate includes only the memory consumed by the adapter’s parameters. For completeness, we could also account for the memory used by adapter activations during inference. However, these are extremely small relative to other tensorsand can safely be considered negligible. Conclusion EoRA works. We’ve confirmed that it’s a simple yet effective method for compensating quantization errors, even at 2-bit precision. It’s intuitive, training-free, and delivers meaningful performance gains. That said, there are a few trade-offs to consider: Rank search: Finding the optimal LoRA rank requires experimentation. It’s difficult to predict in advance whether a rank of 32 will be sufficient or whether a higher rank, like 256, will cause overfitting. The optimal value depends on the model, calibration data, and target task. Increased memory consumption: The goal of quantization is to reduce memory usage, often for highly constrained environments. While EoRA adapters are relatively lightweight at lower ranks, they do slightly increase memory consumption, particularly at higher ranks, reducing the overall efficiency of 2-bit quantization. Looking ahead, NVIDIA’s paper also demonstrates that EoRA adapters make excellent starting points for QLoRA fine-tuning. In other words, if you plan to fine-tune a 2-bit model using QLoRA, initializing from an EoRA-adapted model can lead to better results with less training effort. 
I’ve written about fine-tuning adapters for GPTQ model last year, in my newsletter: QLoRA with AutoRound: Cheaper and Better LLM Fine-tuning on Your GPU The main difference is that instead of initializing the adapter from scratch, we would load the EoRA adapter. This adapter will be fine-tuned. ReferencesDettmers et al, QLoRA: Efficient Finetuning of Quantized LLMs, arXivBadri and Shaji, Towards 1-bit Machine Learning Models, Mobius Labs’ BlogLiu et al., EoRA: Training-free Compensation for Compressed LLM with Eigenspace Low-Rank Approximation, arXiv The post Boost 2-Bit LLM Accuracy with EoRA appeared first on Towards Data Science. #boost #2bit #llm #accuracy #with
    Boost 2-Bit LLM Accuracy with EoRA
    towardsdatascience.com
    Quantization is one of the key techniques for reducing the memory footprint of large language models (LLMs). It works by converting the data type of model parameters from higher-precision formats such as 32-bit floating point (FP32) or 16-bit floating point (FP16/BF16) to lower-precision integer formats, typically INT8 or INT4. For example, quantizing a model to 4-bit means each parameter uses only 0.5 bytes, compared to 4 bytes in FP32. Post-training quantization methods like GPTQ and AWQ can dramatically reduce the size of large models. A model like Llama 3 with 70 billion parameters can occupy around 140 GB in FP16, but this can be reduced to approximately 40 GB using 4-bit quantization, while still maintaining strong performance on downstream tasks. However, despite this substantial reduction, such models still exceed the memory capacity of most consumer-grade GPUs, which typically offer 24 GB to 32 GB of VRAM. To make these models truly accessible, quantization to even lower bitwidths, such as 2-bit, is required. While recent advances in low-bit quantization are promising, achieving stable and accurate 2-bit quantization remains a significant challenge. In this article, we review a technique called EoRA that helps compensate for quantization-induced errors. EoRA is a training-free method, meaning it can be applied quickly and efficiently to any model, even the largest ones. We’ll check how EoRA works and demonstrate how it can significantly improve the performance of 2-bit quantized models, bringing them close to the accuracy of their full-precision counterparts while being up to 5.5x smaller. We’ll analyze experimental results obtained using large models such as Qwen3-32B and Qwen2.5-72B, both quantized to 2-bit using state-of-the-art quantization techniques, to illustrate the effectiveness of EoRA. Diving into the Eigenspace in Search of an Adapter Post-training quantization or, more generally, compression aims to reduce model size or inference cost by minimizing the output difference between the original weights Wl​ and compressed weights Ŵl  using only a small calibration dataset. Most quantization methods are framed layer-wise, but the choice of compression formats is rigid and limits flexibility across diverse deployment needs. To bypass format constraints and improve accuracy, previous work, such as QLoRA [1] and HQQ+ [2], directly fine-tuned a Lora adapter on top of the frozen quantized models. It is also possible to reframe compression as a compensation problem: given a compressed model, introduce low-rank residual paths that specifically correct compression errors. A straightforward method uses SVD to decompose the compression error: \[\Delta W_l = W_l – \hat{W}_l\] into \[U_l \Sigma_l V_l^T\] forming low-rank approximations via two matrices: \[B_l = U_l \Sigma_l \] \[A_l = V_l^T\] where Al and Bl are the standard tensors of a LoRA adapter. However, plain SVD has two limitations: it does not minimize the original layerwise compression loss directly, and it allocates capacity uniformly across all error components, ignoring the varying importance of different parts of the model. To address this, NVIDIA proposes EoRA [3]. EoRA: Training-free Compensation for Compressed LLM with Eigenspace Low-Rank Approximation EoRA first projects the compression error into the eigenspace defined by the input activation covariance: \[\tilde{X} \tilde{X}^T\] where X̃ is the average activation over the calibration set. 
Then, by performing eigendecomposition, we get: \[\tilde{X} \tilde{X}^T = Q \Lambda Q^T\] The compression error ΔW is projected as: \[\Delta W’ = \Delta W Q’\] where Q′=QΛ. Then SVD is applied on ΔW′ to produce a low-rank approximation, and the result is projected back to the original space, adjusting the low-rank factors accordingly. This eigenspace projection changes the optimization objective: it weights the importance of different error components according to their contribution to the layerwise output (via eigenvalues), making the approximation more efficient. It can be computed quickly without any training, requires only calibration activations, and does not introduce extra inference latency. Moreover, the derivation shows that this approach leads to a direct minimization of the layerwise compression loss, not just the raw weight error. Analytically, truncating a singular value in the projected space corresponds to minimizing the true compression error under reasonable assumptions about the calibration activations. In their paper, NVIDIA presents a wide range of strong results showing that EoRA can significantly boost the accuracy of quantized models. However, their experiments focus mostly on older Quantization methods like GPTQ and are limited to mid-sized LLMs, up to 13B parameters, at 3-bit and 4-bit precisions. This leaves an open question: can EoRA still be effective for much larger models, using more modern quantization techniques, and even pushing down to 2-bit precision? Let’s find out. Calibrating an EoRA Adapter Suppose we have quantized models that show significantly degraded performance compared to their full-precision counterparts on certain tasks. Our goal is to reduce this performance gap using EoRA. For the experiments, I used Qwen2.5-72B Instruct and Qwen3-32B, both quantized to 2-bit using AutoRound (Apache 2.0 license), a state-of-the-art quantization algorithm developed by Intel. AutoRound leverages SignSGD optimization to fine-tune quantization, and is particularly effective for low-bit settings. All the models I made are available here (Apache 2.0 license): Quantized Qwen3 Quantized Qwen2.5 The 2-bit models were quantized with a group size of 32, except for which used a group size of 128. A larger group size reduces model size by storing less quantization metadata, but it introduces greater quantization error. I evaluated the models on IFEval, a benchmark that measures instruction-following capabilities. Results showed a noticeable drop in performance for the quantized versions. Image by the author To compensate for this degradation, I applied an EoRA adapter using the implementation provided in the GPTQModel library (licensed under Apache 2.0). The integration is straightforward. If you’re curious about how it’s implemented in PyTorch, the codebase is compact, clean, and easy to follow: GPTQModel’s EoRA implementation: eora.py EoRA requires a calibration dataset. Ideally, this dataset should reflect the model’s intended use case. However, since we don’t have a specific target task in this context and aim to preserve the model’s general capabilities, I used 1,024 randomly sampled examples from the C4 dataset (licensed under ODC-BY). Another key parameter is the LoRA rank, which greatly influences the effectiveness of the EoRA adapter. Its optimal value depends on the model architecture, the target task, and the calibration data. A higher rank may yield better performance but risks overfitting to the calibration set. 
It also increases the size of the adapter, counterproductive when the overall goal of quantization is to reduce memory usage. Conversely, a lower rank keeps the adapter lightweight but might not capture enough information to effectively compensate for quantization errors. In my experiments, I tested LoRA ranks of 32, 64, and 256. Below is the code used to create the EoRA adapter with GPTQModel: from gptqmodel import GPTQModel from gptqmodel.adapter.adapter import Lora from datasets import load_dataset calibration_dataset = load_dataset( "allenai/c4", data_files="en/c4-train.00001-of-01024.json.gz", split="train", download_mode="force_redownload" ).select(range(1024))["text"] eora_adapter_path = "Qwen3-32B-autoround-2bit-gptq-r256" model_path = "kaitchup/Qwen3-32B-autoround-2bit-gptq" eora = Lora( path=eora_adapter_path, rank=256, ) GPTQModel.adapter.generate( adapter=eora, model_id_or_path="Qwen/Qwen3-32B", quantized_model_id_or_path=model_path, calibration_dataset=calibration_dataset, calibration_dataset_concat_size=0, auto_gc=False) Using an NVIDIA A100 GPU on RunPod (referral link), it took approximately 4 hours to generate the EoRA adapter for the model Qwen3-32B-autoround-2bit-gptq. All EoRA adapters created for these models are publicly available (Apache 2.0 license): EoRA Adapters for Qwen2.5 and Qwen3 Evaluating EoRA Adapters for 2-bit LLMs Let’s evaluate the effect of the EoRA adapters. Do they improve the accuracy of the 2-bit models? Image by the author It works! The improvements are particularly notable for Qwen3-14B and Qwen3-32B. For instance, applying EoRA to Qwen3-32B, quantized to 2-bit with a group size of 128, resulted in an accuracy gain of nearly 7.5 points. Increasing the LoRA rank, from 32 to 64, also led to improvements, highlighting the impact of rank on performance. EoRA is also effective on larger models like Qwen2.5-72B, though the gains are more modest. Lower-rank adapters showed little to no benefit on this model; it wasn’t until I increased the rank to 256 that significant improvements began to appear. Memory Consumption of EoRA Using the EoRA adapter during inference results in the following increase in memory consumption: Image by the author The overhead is generally negligible. For instance for 2-bit Qwen3-14B, the adapters only add 257 MB and 514 MB to the total model size, with ranks of 32 and 64. With larger ranks, using an EoRA adapter becomes questionable as the total memory consumption may surpass the memory consumption of the same model quantized at a higher precision. For instance, 2-bit Qwen2.5 72B with an EoRA adapter of rank 256 is larger than 3-bit Qwen2.5 72B. Note: This estimate includes only the memory consumed by the adapter’s parameters. For completeness, we could also account for the memory used by adapter activations during inference. However, these are extremely small relative to other tensors (such as the model’s attention and MLP layers) and can safely be considered negligible. Conclusion EoRA works. We’ve confirmed that it’s a simple yet effective method for compensating quantization errors, even at 2-bit precision. It’s intuitive, training-free, and delivers meaningful performance gains. That said, there are a few trade-offs to consider: Rank search: Finding the optimal LoRA rank requires experimentation. It’s difficult to predict in advance whether a rank of 32 will be sufficient or whether a higher rank, like 256, will cause overfitting. The optimal value depends on the model, calibration data, and target task. 
Conclusion

EoRA works. We've confirmed that it's a simple yet effective method for compensating quantization errors, even at 2-bit precision. It's intuitive, training-free, and delivers meaningful performance gains. That said, there are a few trade-offs to consider:

Rank search: Finding the optimal LoRA rank requires experimentation. It's difficult to predict in advance whether a rank of 32 will be sufficient or whether a higher rank, like 256, will cause overfitting. The optimal value depends on the model, calibration data, and target task.

Increased memory consumption: The goal of quantization is to reduce memory usage, often for highly constrained environments. While EoRA adapters are relatively lightweight at lower ranks, they do slightly increase memory consumption, particularly at higher ranks, reducing the overall efficiency of 2-bit quantization.

Looking ahead, NVIDIA's paper also demonstrates that EoRA adapters make excellent starting points for QLoRA fine-tuning. In other words, if you plan to fine-tune a 2-bit model using QLoRA, initializing from an EoRA-adapted model can lead to better results with less training effort. I wrote about fine-tuning adapters for GPTQ models last year in my newsletter:

QLoRA with AutoRound: Cheaper and Better LLM Fine-tuning on Your GPU

The main difference is that instead of initializing the adapter from scratch, we would load the EoRA adapter and then fine-tune it.

References

[1] Dettmers et al., QLoRA: Efficient Finetuning of Quantized LLMs (2023), arXiv

[2] Badri and Shaji, Towards 1-bit Machine Learning Models (2024), Mobius Labs' Blog

[3] Liu et al., EoRA: Training-free Compensation for Compressed LLM with Eigenspace Low-Rank Approximation (2024), arXiv

The post Boost 2-Bit LLM Accuracy with EoRA appeared first on Towards Data Science.
  • #333;">BOYAMIC 2 Rebuilds Mobile Audio with AI and Onboard Capture

    Wireless mics fail when they rely too much on perfect conditions.
    BOYAMIC 2 fixes that by making every part of the system self-contained.
    Each transmitter records on its own.
    Each receiver controls levels, backups, and signal without needing an app.
    Noise is filtered in real time.
    Recording keeps going even if the connection drops.
    Designer: BOYAMIC
    There’s no need for a separate recorder or post-edit rescue.
    The unit handles gain shifts, background interference, and voice clarity without user intervention.
    Everything shows on screen.
    Adjustments happen through physical controls.
    Files are saved directly to internal memory.
    This system is built to capture clean audio without depending on external gear.
    It records immediately, adapts instantly, and stores everything without breaking the workflow.
    Industrial Design and Physical Form
    Each transmitter is small but solid.
    It’s 40 millimeters tall with a ridged surface that helps with grip and alignment.
    The finish reduces glare and makes handling easier.
    You can clip it or use the built-in magnet.
    Placement is quick, and it stays put.
    The record button is recessed, so you won’t hit it by mistake.
    An LED shows when it’s active.
    The mic capsule stays exposed but protected, avoiding interference from hands or clothing.
    Nothing sticks out or gets in the way.
     
    The receiver is built around a screen and a knob.
    The 1.1-inch display shows battery, signal, gain, and status.
    The knob adjusts volume and selects settings.
    It works fast, without touchscreen lag.
    You can see and feel every change.
    Connections are spaced cleanly.
    One side has a USB-C port.
    The other has a 3.5 mm jack.
    A plug-in port supports USB-C or Lightning.
    The mount is fixed and locks into rigs without shifting.
    The charging case holds two transmitters and one receiver.
    Each has its own slot with magnetic contacts.
    Drop them in, close the lid, and they stay in place.
    LEDs on the case show power levels.
    There are no loose parts, exposed pins, or extra steps.
    Every shape and control supports fast setup and clear operation.
    You can press, turn, mount, and move without second-guessing.
    The design doesn’t try to be invisible; it stays readable, durable, and direct.
    Signal Processing and Audio Control
    BOYAMIC 2 uses onboard AI to separate voice from background noise.
    The system was trained on over 700,000 real-world sound samples.
    It filters traffic, crowds, wind, and mechanical hum in real time.
    Depending on the environment, you can toggle between strong and weak noise reduction.
    Both modes work directly from the transmitter or through the receiver.
    The mic uses a 6mm condenser capsule with a 48 kHz sample rate and 24-bit depth.
    The signal-to-noise ratio reaches 90 dB.
    Two low-cut filter options, at 75 Hz and 150 Hz, handle low-end rumble.
    These are effective against HVAC, engine hum, or low vibration.
    Gain is managed with automatic control.
    The system boosts quiet voices and pulls back when sound gets too loud.
    Built-in limiters stop clipping during spikes.
    A safety track records a second copy at -12 dB for backup.
    This makes it harder to lose a usable take even when volume jumps suddenly.
    Each setting is adjustable on screen.
    You don’t need a mobile app to access basic controls.
    Everything runs live and updates immediately.
    There are no delays or sync problems during capture.
    Recording and Storage
    Each transmitter records internally without needing the receiver.
    Files are saved in 32-bit float or 24-bit WAV formats.
    Internal storage is 8 GB.
    That gives you about ten hours of float audio or fifteen hours of 24-bit.
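    For a rough sense of where those figures come from, a single audio stream at the stated 48 kHz sample rate uses 4 bytes per sample in 32-bit float and 3 bytes per sample in 24-bit; the quick sketch below assumes one mono channel, which is my assumption rather than a stated specification.

        # Rough sanity check of the quoted recording times on 8 GB of storage,
        # assuming one mono channel at 48 kHz.
        STORAGE_BYTES = 8e9          # 8 GB internal storage
        SAMPLE_RATE = 48_000         # samples per second

        for label, bytes_per_sample in [("32-bit float", 4), ("24-bit", 3)]:
            seconds = STORAGE_BYTES / (SAMPLE_RATE * bytes_per_sample)
            print(label, round(seconds / 3600, 1), "hours")
        # -> roughly 11.6 and 15.4 hours, in the same ballpark as the quoted
        #    ten and fifteen hours (file headers and reserved space likely
        #    account for the remaining difference)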
    When full, the system loops and overwrites older files.
    Recording continues even if the connection drops.
    Every session is split into timestamped chunks for fast transfer.
    You can plug the transmitter into any USB-C port and drag the files directly.
    No software is needed.
    This setup protects against signal loss, battery drops, or app crashes.
    The mic stays live, and the recording stays intact.
    Each transmitter runs for up to nine hours with noise cancellation and onboard recording turned off.
    With both features on, the runtime is closer to six hours.
    The receiver runs for about fifteen hours.
    The charging case holds enough power to recharge all three units twice.
    The system uses 2.4 GHz digital transmission.
    Its range can reach up to 300 meters in open areas.
    With walls or obstacles, it drops to around 60 meters.
    Latency stays at 25 milliseconds, even at long distances.
    You get reliable sync and stable audio across open ground or indoor spaces.
    Charging is handled through the included case or by direct USB-C.
    Each device takes under two hours to recharge fully.
    Compatibility and Multi-Device Support
    The system supports cameras, smartphones, and computers.
    USB-C and Lightning adapters are included.
    A 3.5 mm TRS cable connects the receiver to most cameras or mixers.
    While recording, you can charge your phone through the receiver, which is useful for long mobile shoots.
    One transmitter can send audio to up to four receivers at once, which helps with multi-angle setups or backup channels.
    The receiver also supports stereo, mono, and safety track modes.
    Based on your workflow, you choose how audio is split or merged.
    Settings can be changed from the receiver screen or through the BOYA app.
    The app adds firmware updates, custom EQ profiles, and gain presets for different camera brands.
    But the core controls don't depend on it. The post BOYAMIC 2 Rebuilds Mobile Audio with AI and Onboard Capture first appeared on Yanko Design.
    #0066cc;">#boyamic #rebuilds #mobile #audio #with #and #onboard #capture #wireless #mics #fail #when #they #rely #too #much #perfect #conditionsboyamic #fixes #that #making #every #part #the #system #selfcontainedeach #transmitter #records #its #owneach #receiver #controls #levels #backups #signal #without #needing #appnoise #filtered #real #timerecording #keeps #going #even #connection #dropsdesigner #boyamictheres #need #for #separate #recorder #postedit #rescuethe #unit #handles #gain #shifts #background #interference #voice #clarity #user #interventioneverything #shows #screenadjustments #happen #through #physical #controlsfiles #are #saved #directly #internal #memorythis #built #clean #depending #external #gearit #immediately #adapts #instantly #stores #everything #breaking #workflowindustrial #design #formeach #small #but #solidits #millimeters #tall #ridged #surface #helps #grip #alignmentthe #finish #reduces #glare #makes #handling #easieryou #can #clip #use #builtin #magnetplacement #quick #stays #putthe #record #button #recessed #you #wont #hit #mistakean #led #activethe #mic #capsule #exposed #protected #avoiding #from #hands #clothingnothing #sticks #out #gets #waythe #around #screen #knobthe #11inch #display #battery #statusthe #knob #adjusts #volume #selects #settingsit #works #fast #touchscreen #lagyou #see #feel #changeconnections #spaced #cleanlyone #side #has #usbc #portthe #other #jacka #plugin #port #supports #lightningthe #mount #fixed #locks #into #rigs #shiftingthe #charging #case #holds #two #transmitters #one #receivereach #own #slot #magnetic #contactsdrop #them #close #lid #stay #placeleds #show #power #levelsthere #loose #parts #pins #extra #stepsevery #shape #control #setup #clear #operationyou #press #turn #move #secondguessingthe #doesnt #try #invisible #readable #durable #directsignal #processing #controlboyamic #uses #noisethe #was #trained #over #realworld #sound #samplesit #filters #traffic #crowds #wind #mechanical #hum #timedepending #environment #toggle #between #strong #weak #noise #reductionboth #modes #work #receiverthe #6mm #condenser #khz #sample #rate #24bit #depththe #signaltonoise #ratio #reaches #dbtwo #lowcut #filter #options #handle #lowend #rumblethese #effective #against #hvac #engine #low #vibrationgain #managed #automatic #controlthe #boosts #quiet #voices #pulls #back #loudbuiltin #limiters #stop #clipping #during #spikesa #safety #track #second #copy #backupthis #harder #lose #usable #take #jumps #suddenlyeach #setting #adjustable #screenyou #dont #app #access #basic #controlseverything #runs #live #updates #immediatelythere #delays #sync #problems #capturerecording #storageeach #internally #receiverfiles #32bit #float #wav #formatsinternal #storage #gbthat #gives #about #ten #hours #fifteen #24bitwhen #full #loops #overwrites #older #filesrecording #continues #dropsevery #session #split #timestamped #chunks #transferyou #plug #any #drag #files #directlyno #software #neededthis #protects #loss #drops #crashesthe #recording #intacteach #nine #cancellation #recordingwith #both #features #runtime #closer #six #hoursthe #enough #recharge #all #three #units #twicethe #ghz #digital #transmissionits #range #reach #meters #open #areaswith #walls #obstacles #meterslatency #milliseconds #long #distancesyou #get #reliable #stable #across #ground #indoor #spacescharging #handled #included #direct #usbceach #device #takes #under #fullycompatibility #multidevice #supportthe #cameras #smartphones #computersusbc #lightning #adapters #includeda 
#trs #cable #connects #most #mixerswhile #charge #your #phone #which #useful #shootsone #send #four #receivers #once #multiangle #setups #backup #channelsthe #also #stereo #mono #modesbased #workflow #choose #how #mergedsettings #changed #boya #appthe #adds #firmware #custom #profiles #presets #different #camera #brandsbut #core #depend #itthe #post #first #appeared #yanko
    BOYAMIC 2 Rebuilds Mobile Audio with AI and Onboard Capture
    Wireless mics fail when they rely too much on perfect conditions. BOYAMIC 2 fixes that by making every part of the system self-contained. Each transmitter records on its own. Each receiver controls levels, backups, and signal without needing an app. Noise is filtered in real time. Recording keeps going even if the connection drops. Designer: BOYAMIC There’s no need for a separate recorder or post-edit rescue. The unit handles gain shifts, background interference, and voice clarity without user intervention. Everything shows on screen. Adjustments happen through physical controls. Files are saved directly to internal memory. This system is built to capture clean audio without depending on external gear. It records immediately, adapts instantly, and stores everything without breaking the workflow. Industrial Design and Physical Form Each transmitter is small but solid. It’s 40 millimeters tall with a ridged surface that helps with grip and alignment. The finish reduces glare and makes handling easier. You can clip it or use the built-in magnet. Placement is quick, and it stays put. The record button is recessed, so you won’t hit it by mistake. An LED shows when it’s active. The mic capsule stays exposed but protected, avoiding interference from hands or clothing. Nothing sticks out or gets in the way.   The receiver is built around a screen and a knob. The 1.1-inch display shows battery, signal, gain, and status. The knob adjusts volume and selects settings. It works fast, without touchscreen lag. You can see and feel every change. Connections are spaced cleanly. One side has a USB-C port. The other has a 3.5 mm jack. A plug-in port supports USB-C or Lightning. The mount is fixed and locks into rigs without shifting. The charging case holds two transmitters and one receiver. Each has its own slot with magnetic contacts. Drop them in, close the lid, and they stay in place. LEDs on the case show power levels. There are no loose parts, exposed pins, or extra steps. Every shape and control supports fast setup and clear operation. You can press, turn, mount, and move without second-guessing. The design doesn’t try to be invisible; it stays readable, durable, and direct. Signal Processing and Audio Control BOYAMIC 2 uses onboard AI to separate voice from background noise. The system was trained on over 700,000 real-world sound samples. It filters traffic, crowds, wind, and mechanical hum in real time. Depending on the environment, you can toggle between strong and weak noise reduction. Both modes work directly from the transmitter or through the receiver. The mic uses a 6mm condenser capsule with a 48 kHz sample rate and 24-bit depth. The signal-to-noise ratio reaches 90 dB. Two low-cut filter options, at 75 Hz and 150 Hz, handle low-end rumble. These are effective against HVAC, engine hum, or low vibration. Gain is managed with automatic control. The system boosts quiet voices and pulls back when sound gets too loud. Built-in limiters stop clipping during spikes. A safety track records a second copy at -12 dB for backup. This makes it harder to lose a usable take even when volume jumps suddenly. Each setting is adjustable on screen. You don’t need a mobile app to access basic controls. Everything runs live and updates immediately. There are no delays or sync problems during capture. Recording and Storage Each transmitter records internally without needing the receiver. Files are saved in 32-bit float or 24-bit WAV formats. Internal storage is 8 GB. 
That gives you about ten hours of float audio or fifteen hours of 24-bit. When full, the system loops and overwrites older files. Recording continues even if the connection drops. Every session is split into timestamped chunks for fast transfer. You can plug the transmitter into any USB-C port and drag the files directly. No software is needed. This setup protects against signal loss, battery drops, or app crashes. The mic stays live, and the recording stays intact. Each transmitter runs for up to nine hours without noise cancellation or recording. With both features on, the runtime is closer to six hours. The receiver runs for about fifteen hours. The charging case holds enough power to recharge all three units twice. The system uses 2.4 GHz digital transmission. Its range can reach up to 300 meters in open areas. With walls or obstacles, it drops to around 60 meters. Latency stays at 25 milliseconds, even at long distances. You get reliable sync and stable audio across open ground or indoor spaces. Charging is handled through the included case or by direct USB-C. Each device takes under two hours to recharge fully. Compatibility and Multi-Device Support The system supports cameras, smartphones, and computers. USB-C and Lightning adapters are included. A 3.5 mm TRS cable connects the receiver to most cameras or mixers. While recording, you can charge your phone through the receiver, which is useful for long mobile shoots. One transmitter can send audio to up to four receivers at once, which helps with multi-angle setups or backup channels. The receiver also supports stereo, mono, and safety track modes. Based on your workflow, you choose how audio is split or merged. Settings can be changed from the receiver screen or through the BOYA app. The app adds firmware updates, custom EQ profiles, and gain presets for different camera brands. But the core controls don’t depend on it.The post BOYAMIC 2 Rebuilds Mobile Audio with AI and Onboard Capture first appeared on Yanko Design.
    Source: www.yankodesign.com