• Humpback Whales Are Approaching People to Blow Rings. What Are They Trying to Say?

    A bubble ring created by a humpback whale named Thorn. Image © Dan Knaub, The Video Company
    June 13, 2025
    Nature / Social Issues
    Grace Ebert

    After the “orca uprising” captivated anti-capitalists around the world in 2023, scientists are intrigued by another form of marine mammal communication.
    A study released this month by the SETI Institute and the University of California at Davis dives into a newly documented phenomenon of humpback whales blowing bubble rings while interacting with humans. In contrast to the orcas’ aggressive behavior, researchers say the humpbacks appear to be friendly, relaxed, and even curious.
    Bubbles aren’t new to these aquatic giants, which typically release various shapes when corralling prey and courting mates. This study documents 12 distinct incidents involving 11 whales producing 39 rings; in most cases, the whales approached boats near Hawaii, the Dominican Republic, Mo’orea, and the U.S. Atlantic coast on their own.
    The impact of this research reaches far beyond the oceans, though. Deciphering these non-verbal messages could aid in potential extraterrestrial communication, as they can help to “develop filters that aid in parsing cosmic signals for signs of extraterrestrial life,” a statement says.
    “Because of current limitations on technology, an important assumption of the search for extraterrestrial intelligence is that extraterrestrial intelligence and life will be interested in making contact and so target human receivers,” said Dr. Laurance Doyle, a SETI Institute scientist who co-wrote the paper. “This important assumption is certainly supported by the independent evolution of curious behavior in humpback whales.” (via PetaPixel)
    A composite image of at least one bubble ring from each interaction
    WWW.THISISCOLOSSAL.COM
  • The Clock Is Ticking on Elon Musk's Hail Mary to Save Tesla

    It's December of 2015, and the Green Bay Packers are up against the wall. They've lost their last three games, and their early-season momentum is feared dead in the water. The Detroit Lions, a longtime rival, only need to stop one last play on the 39-yard line to keep their two-point lead and take home the win.
    The snap comes, and Packers quarterback Aaron Rodgers scrambles down the field while his faithful receivers sprint for the end zone. From 61 yards out, the quarterback makes his final throw, a pass that meets a leaping Richard Rodgers to give Green Bay the touchdown, winning the game and ultimately saving the season.
    It's safe to say Tesla is in a similar spot: the losses are mounting, the future looks dim, and the team is down to its last pass. Sadly, Elon Musk is no Rodgers.
    Ten years after the "Miracle in Motown," the electric vehicle company's stock has plummeted by 25 percent in just six months, thanks to horrid global sales, a portfolio many investors see as crusty and dated, and perhaps above all, the alienating behavior of its own chief executive.
    Mere months into Musk's disastrous stint as federal spending czar, the prediction that "Tesla will soon collapse" is no longer a fringe opinion held by forum dwellers, but a serious charge levied by political commentators, stock gurus, and former Tesla executives alike.
    Fortunately for any foolhardy shareholders keeping the faith, Elon Musk has promised to roll out Tesla's autonomous robotaxi service in Austin, a product some analysts predict could soon make up 90 percent of Tesla's profits.
    Unfortunately for those investors, Musk has given Tesla a self-imposed deadline of June 12th to make it all happen — meaning we're two weeks away from seeing whether or not the rubber hits the road.
    So where does the company stand on its self-driving cabs? Well, the self-driving vehicles about to land on Austin streets are blowing past school buses into child crash dummies, if that's any indication.
    According to a FuelArc analysis of a school bus test, Tesla's latest iteration of "full self-driving" software failed to detect a parked school bus's flashing red stop signs (and in turn failed to stop for the bus), detected child-sized pedestrians but failed to react, and made no attempt to brake or evade the adolescent crash dummies as the car drew closer.
    FuelArc notes that school bus recognition only reached self-driving Teslas in December of 2024. Keep in mind, these vehicles have been on public roads, albeit with drivers behind the wheel, since October of 2015 — just months before Rodgers' now-famous Hail Mary.
    It's obvious that the robotaxi is nowhere near ready, which is probably why Tesla is scrambling to hire remote operators to drive its vehicles ahead of the looming June deadline.
    This ought to be the "Miracle in Motown" moment for Tesla, but the quarterback doesn't even have the ball, and the receivers are nowhere to be found.
    More on Tesla: Self-Driving Tesla Suddenly Swerves Off the Road and Crashes
    FUTURISM.COM
  • EA SPORTS™ College Football 26 Launches Worldwide on July 10 Celebrating Emerging Stars, Real-world Coaches and the Spirit of College Football

    May 27, 2025

    Ryan Williams and Jeremiah Smith Star on the College Football 26 Cover and the Deluxe Edition Honors Icons of College Football

    Full College Football 26 Reveal Coming Thursday & Fans Can Pre-Order the MVP Bundle Now To Get The Deluxe Editions of College Football 26 and Madden NFL 26
    REDWOOD CITY, Calif.--(BUSINESS WIRE)--
    Electronic Arts Inc. (NASDAQ: EA) and EA SPORTS™ today unveiled the dynamic covers of EA SPORTS™ College Football 26, ahead of the game’s full reveal this Thursday, May 29. Alabama wide receiver Ryan Williams and Ohio State wide receiver Jeremiah Smith shine on the Standard Edition cover, while the Deluxe Edition highlights college football legends alongside prominent coaches, beloved mascots, and other standout players. Fans can dive into authentic gameplay across 136 FBS schools and experience the unrivaled passion of college football when EA SPORTS College Football 26 launches worldwide on July 10 on PlayStation®5 and Xbox Series X|S.
    Standout sophomores Ryan Williams and Jeremiah Smith star on the EA SPORTS College Football 26 covers.
    “Last year, when we brought back the pride, pageantry, atmospheres and traditions of College Football, the response from fans was overwhelming,” said Evan Dexter, VP, Franchise Strategy & Marketing, EA SPORTS College Football. “With College Football 26, we’re celebrating our sophomore season with two generational sophomore wide receivers on the cover and we can’t wait for the world to experience even more heart and authenticity across athletes, stadiums, coaches and fans. Tune in this Thursday to see what makes it so special.”
    Williams and Smith land on the EA SPORTS College Football 26 covers after stellar starts to their careers last season. Williams, a dynamic playmaker, set freshman records at Alabama, dazzling fans with his speed and highlight-reel catches. Smith, a cornerstone of Ohio State’s offense, emerged as one of the nation’s top receivers, showcasing elite route-running and clutch performances en route to the Buckeyes capturing the National Championship.
    Accomplished coaches like Ohio State’s Ryan Day, Notre Dame’s Marcus Freeman, and Georgia’s Kirby Smart are featured on the Deluxe Edition cover, alongside iconic mascots and players such as Clemson QB Cade Klubnik, Notre Dame RB Jeremiyah Love, and Penn State RB Nick Singleton, embodying the culture of the sport. Past EA SPORTS cover stars Reggie Bush, Tim Tebow, and Denard Robinson also appear, paying tribute to college football’s rich history.
    “As a lifelong fan of EA SPORTS games, being on the cover of College Football 26 is a dream come true,” said Williams. “It was incredible to see myself in College Football 25 last year, and now to represent Alabama and share this moment with fans who’ve played EA SPORTS games for years is unreal.”
    “Being on the cover of EA SPORTS College Football 26 is a tremendous privilege, and I’m proud to represent Ohio State alongside Coach Day while carrying the Buckeye legacy forward, celebrating the passion of our fans and the tradition of this incredible program,” said Smith.
    Football fans can pre-order the EA SPORTS™ MVP Bundle now, which includes the Deluxe Editions of EA SPORTS College Football 26 and Madden NFL 26, granting 3-day early access to both games plus special bonuses.* The Standard and Deluxe Editions of College Football 26 are also available for pre-order today.
    More College Football 26 details will be shared this Thursday and throughout the summer leading up to launch. Fans can stay updated by visiting the official website or following along on social media (Instagram, X, Facebook, and TikTok) for all the latest announcements.
    *Conditions & restrictions apply. See https://www.ea.com/games/madden-nfl/madden-nfl-26/legal-disclaimers for details.
    For College Football 26 assets, visit: EAPressPortal.com.
    EA SPORTS™ College Football 26 is developed in Orlando, Florida and Madrid, Spain by EA Tiburon and will be available worldwide July 10 for PlayStation®5 and Xbox Series X|S.
    About Electronic Arts
    Electronic Arts (NASDAQ: EA) is a global leader in digital interactive entertainment.
    The Company develops and delivers games, content and online services for Internet-connected consoles, mobile devices and personal computers. In fiscal year 2025, EA posted GAAP net revenue of approximately $7.5 billion. Headquartered in Redwood City, California, EA is recognized for a portfolio of critically acclaimed, high-quality brands such as EA SPORTS FC™, Battlefield™, Apex Legends™, The Sims™, EA SPORTS™ Madden NFL, EA SPORTS™ College Football, Need for Speed™, Dragon Age™, Titanfall™, Plants vs. Zombies™ and EA SPORTS F1®. More information about EA is available at www.ea.com/news.
    EA, EA SPORTS, EA SPORTS FC, Battlefield, Need for Speed, Apex Legends, The Sims, Dragon Age, Titanfall, and Plants vs. Zombies are trademarks of Electronic Arts Inc. John Madden, NFL, and F1 are the property of their respective owners and used with permission.

    Erin Exum
    Director, Integrated Comms
    Source: Electronic Arts Inc.

    NEWS.EA.COM
  • From Smart to Intelligent: Evolution in Architecture and Cities

    Algae Curtain / EcoLogicStudio. Image © ecoLogicStudio
    "The limits of our design language are the limits of our design thinking." Patrik Schumacher's statement subtly hints at a shift occurring in the built environment, moving beyond technological integration to embrace intelligence in the spaces and cities we occupy. The future proposes a possibility of buildings serving functions beyond housing human activity, actively participating in shaping urban life.
    The architecture profession has long been enamored with "smart" buildings: structures that collect and process data through sensor networks and automated systems. Smart cities were heralded as a way to improve quality of life, as well as the sustainability and efficiency of city operations, through technology. While smart buildings and cities remain far from fully realized, these advancements mark only the beginning of a much more impactful application of technology in the built environment. Being smart is about collecting data. Being intelligent is about interpreting that data and acting autonomously upon it.
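    The smart-versus-intelligent distinction can be sketched in a few lines of code. This is a toy illustration, not drawn from any real building platform; the class names and the CO2 threshold are invented for the example. A "smart" node merely records sensor readings, while an "intelligent" node interprets them and acts without human intervention.

    ```python
    class SmartNode:
        """'Smart' baseline: collects sensor data but never acts on it."""

        def __init__(self):
            self.readings = []

        def sense(self, co2_ppm):
            self.readings.append(co2_ppm)  # store the reading, nothing more


    class IntelligentNode(SmartNode):
        """'Intelligent' node: interprets the data and acts autonomously."""

        def __init__(self, threshold=1000):
            super().__init__()
            self.threshold = threshold  # illustrative CO2 ceiling in ppm
            self.vent_open = False

        def sense(self, co2_ppm):
            super().sense(co2_ppm)
            # Interpret the recent trend, then act: open ventilation
            # automatically when average air quality degrades.
            recent = self.readings[-3:]
            self.vent_open = sum(recent) / len(recent) > self.threshold


    node = IntelligentNode()
    for ppm in (650, 900, 1400, 1600):
        node.sense(ppm)
    print(node.vent_open)  # True: the node opened the vent on its own
    ```

    The same feedback shape (sense, interpret, actuate) scales from a single vent to the kinetic facades and networked microgrids described below, with the interpretation step growing from a threshold to a learned model.
    
    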
    The next generation of intelligent buildings will focus on both externalities and the integration of advanced interior systems to improve energy efficiency, sustainability, and security. One exterior innovation is walls with rotatable units that automatically respond to real-time environmental data, optimizing ventilation and insulation without human intervention. Kinetic architectural elements, integrated with artificial intelligence, create responsive exteriors that breathe and adapt. Networked photovoltaic glass systems may share surplus energy across buildings, establishing efficient microgrids that transform individual structures into nodes within larger urban systems.
    Interior spaces are experiencing a similar evolution through platforms like Honeywell's Advance Control for Buildings, which integrates cybersecurity, accelerated network speeds, and autonomous decision-making capabilities. Such systems simultaneously optimize HVAC, lighting, and security subsystems through real-time adjustments that respond to environmental shifts and occupant behavior patterns. Advanced security incorporates deep learning-powered facial recognition, while sophisticated voice controls distinguish between human commands and background noise with high accuracy.
    Kas Oosterhuis envisions architecture where building components become senders and receivers of real-time information, creating communicative networks: "People communicate. Buildings communicate. People communicate with people. People communicate with buildings. Buildings communicate with buildings." This swarm architecture represents an open-source, real-time system where all elements participate in continuous information exchange.
    While these projects are impressive, they also bring critical issues about autonomy and control to light.
How much decision-making authority should we delegate to our buildings? Should structures make choices for us, or simply offer informed suggestions based on learned patterns?

Beyond individual buildings, intelligent systems can remodel urban management through AI and machine learning. Solutions that monitor and predict pedestrian traffic patterns in public spaces are being explored. Carlo Ratti's collaboration with Google's Sidewalk Labs, for instance, hints at a streetscape that seamlessly adapts to people's needs, with a prototype of a modular, reconfigurable paving system in Toronto. The Dynamic Street features hexagonal modular pavers that can be picked up and replaced within hours or even minutes, swiftly changing the function of the road without disrupting the street. Sidewalk Labs also developed technologies like Delve, a machine-learning tool for designing cities, and pursued sustainability through initiatives like Mesa, a building-automation system.

Cities are becoming their own sensors at the elemental level, their physical fabric automated to monitor performance and use continuously. Digital skins overlay these material systems, enabling populations to navigate urban complexity in real time: locating services, finding acquaintances, and identifying transportation options.

The implications extend beyond immediate utility. Remote sensing offers insights into urban growth patterns, long-term usage trends, and global-scale problems that individual real-time operations cannot detect. This creates enormous opportunities for urban design that acknowledges the city as a self-organizing system, moving beyond traditional top-down planning toward bottom-up growth enabled by embedded information systems.

While artificial intelligence dominates discussions of intelligent architecture, parallel developments are emerging through non-human biological intelligence.
Researchers are discovering the profound capabilities of living organisms (bacteria, fungi, algae) that have evolved sophisticated strategies over millions of years. Micro-organisms possess forms of intelligence that often elude human comprehension, yet their exceptional properties offer transformative potential for urban design.

EcoLogicStudio's H.O.R.T.U.S. series exemplifies this biological turn in intelligent architecture. The acronym, Hydro Organism Responsive To Urban Stimuli, describes photosynthetic sculptures and urban structures that create artificial habitats for cyanobacteria integrated within the built environment. These living systems function not merely as decorative elements but as active metabolic participants, absorbing emissions from building systems while producing biomass and oxygen through photosynthesis. The PhotoSynthetica Tower project, unveiled at Tokyo's Mori Art Museum, materializes this vision as a complex synthetic organism in which bacteria, autonomous farming machines, and various forms of animal intelligence become bio-citizens alongside humans.

The future of intelligent architecture lies not in replacing human decision-making but in creating sophisticated feedback loops between human and non-human intelligence. This synthesis recognizes that our knowledge remains incomplete in any age, particularly as new developments push us away from lifestyles constrained to a single place and toward embracing multiple locations and experiences.

The built environment's role in emerging technologies extends far beyond operational efficiency or cost savings. Intelligent buildings can serve as active participants in sustainability targets, wellness strategies, and broader urban resilience planning. The possibility of intelligent architecture challenges the industry to expand its design language. The question facing the profession is not whether intelligence will permeate the built environment.
Rather, architects must gauge how well-positioned we are to design for this intelligence, manage its implications, and partner with our buildings as collaborators in shaping the human experience.

This article is part of the ArchDaily Topics: What Is Future Intelligence?, proudly presented by Gendo, an AI co-pilot for architects. Our mission at Gendo is to help architects produce concept images 100X faster by focusing on the core of the design process. We have built a cutting-edge AI tool in collaboration with architects from some of the most renowned firms, such as Zaha Hadid, KPF, and David Chipperfield.

Every month, we explore a topic in depth through articles, interviews, news, and architecture projects. We invite you to learn more about our ArchDaily Topics. And, as always, at ArchDaily we welcome the contributions of our readers; if you want to submit an article or project, contact us.
    From Smart to Intelligent: Evolution in Architecture and Cities
    WWW.ARCHDAILY.COM
  • Marshall’s first soundbar will change how we think about home theater

    With its gold accents, prominent control knobs, and guitar amp styling, Marshall’s hefty Heston 120 looks like no other soundbar on the planet. But what fascinates me about the company’s first TV speaker isn’t the styling (it looks exactly like I’d expect from a Marshall product); it’s how it’s been engineered to work with the company’s equally iconic portable Bluetooth speakers: It uses Bluetooth.
    Wait, I know that sounds obvious, but bear with me because this is actually a new and intriguing change to the way soundbars work.

    Recommended Videos

    Marshall Heston 120
    Marshall
    First, a quick 101 on the Heston 120. It’s priced at $1,000, which should tell you right away that Marshall isn’t messing around. That’s the same price as the Sonos Arc Ultra and Bowers & Wilkins Panorama 3, and only $100 more than the Bose Smart Ultra Soundbar.
    It packs 11 drivers, including two dedicated subwoofers, and can process both Dolby Atmos and DTS:X in a 5.1.2-channel configuration. It has onboard mics that are used for room calibration, and it supports a wide array of protocols, including Apple AirPlay, Google Cast, Spotify Connect, and Tidal Connect. On the back panel, you get an Ethernet jack, an HDMI passthrough input with 4K/120Hz/Dolby Vision support, stereo RCA analog jacks (for a turntable or other gear), and a dedicated subwoofer output — something you rarely find on soundbars.
    Marshall has redesigned its mobile app to give people deep controls over the Heston as well as the company’s full range of existing headphones, earbuds, and speakers.
    Expansion via Bluetooth
    Marshall
    Where things get interesting is on the wireless side of the equation. The Heston 120 supports Wi-Fi 6 and Bluetooth 5.3. That’s not unusual — all three of its competitors I mentioned above have the same or similar specs. What *is* unusual is how it uses these connections, specifically Bluetooth.
    Marshall considers the Heston 120 an all-in-one speaker that’s designed to work equally well for movies and music. However, the company also recognizes that some people want even more immersion from their TV sound systems, so it offers expansion via wireless speakers.
    Normally, when a soundbar is expandable with additional speakers, those connections are made via Wi-Fi or dedicated onboard transmitter/receivers. Bluetooth has never been considered a viable option because of issues around latency and limitations on transmitting multiple audio channels simultaneously.
    However, the Heston 120 is compatible with Bluetooth Auracast, a technology that overcomes these traditional Bluetooth limitations — as far as I know, a first for a soundbar.
    Unlike earlier Bluetooth standards, which could create audio lag of 100-300 milliseconds, Auracast can achieve a latency of as little as 30 milliseconds. That should be almost imperceptible for dialogue synchronization, and even less noticeable for low-frequency bass or surround sound effects.
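One way to build intuition for those latency figures: an audio delay is acoustically equivalent to sitting farther from the speaker, since sound travels roughly 343 m/s in room-temperature air. A quick back-of-envelope conversion:

```python
SPEED_OF_SOUND_MS = 343.0  # metres per second in air at ~20 C

def delay_as_distance_m(latency_ms):
    """Extra listening distance equivalent to a given audio latency."""
    return round(SPEED_OF_SOUND_MS * latency_ms / 1000.0, 1)


print(delay_as_distance_m(30))   # Auracast: ~10.3 m
print(delay_as_distance_m(300))  # classic Bluetooth worst case: ~102.9 m
```

By that yardstick, 30 ms is like standing a row or two farther back at an outdoor concert, while 300 ms puts you at the far end of a football pitch, which is why older Bluetooth was never viable for surround duty.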
    Moreover, an Auracast device, like a TV or soundbar, can transmit multiple discrete broadcasts. In theory, it could handle multiple wireless subwoofers, two or four surround speakers, plus one or more wireless headphones or hearing aids — each with a dedicated sound stream.
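The multiple-discrete-broadcasts idea can be pictured as one transmitter maintaining independent named streams, each carrying its own channels for a different sink role. This is only a conceptual model in Python, not real Bluetooth LE Audio code; all names are invented:

```python
# Conceptual model of an Auracast transmitter carrying several
# discrete broadcasts, each dedicated to a different kind of sink.
# Hypothetical sketch -- not an actual Bluetooth API.

class AuracastTransmitter:
    def __init__(self):
        self.broadcasts = {}  # stream name -> list of audio channels

    def add_broadcast(self, name, channels):
        self.broadcasts[name] = channels

    def streams_for(self, name):
        return self.broadcasts.get(name, [])


tx = AuracastTransmitter()
tx.add_broadcast("subwoofer", ["LFE"])
tx.add_broadcast("surrounds", ["surround-L", "surround-R"])
tx.add_broadcast("hearing-aids", ["mono-dialogue"])

# Each sink subscribes only to its dedicated stream:
print(tx.streams_for("surrounds"))  # -> ['surround-L', 'surround-R']
```

The structural point is that the broadcasts are independent: the subwoofer never has to receive (or discard) the surround channels, which is what earlier point-to-point Bluetooth could not offer.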
    More choice, more flexibility
    Marshall Emberton III Marshall
    So what does this mean? Marshall’s ultimate goal is to let you use any pair of Auracast-capable Bluetooth speakers as your Heston 120 left/right surrounds, and an additional Auracast subwoofer for low-frequency effects.
    Initially, however, the plan is more conservative. At launch, the Heston 120 will support a single Marshall-built wireless subwoofer, and later in the year you’ll be able to add two Marshall Bluetooth speakers as left/right surrounds.
    You’ll have a lot of choice — all of Marshall’s third-gen Homeline Bluetooth speakers are Auracast-ready — from the small but mighty Emberton III to the 120-watt Woburn III. Once they receive a planned firmware update, you can expect them all to work with the Heston as satellite speakers via Bluetooth.
    Typically, wireless surround speakers and subwoofers need to be plugged into a wall at all times. That provides power to the built-in amplifiers and their Wi-Fi network connections. Bluetooth, as a wireless technology, requires way less power than Wi-Fi, so if your Marshall portable Bluetooth speaker has a 20-hour battery, that’s 20 hours of completely wire-free home theater listening.
    And if, for some reason, you don’t have a Wi-Fi network, you can still assemble a multi-speaker system.
    Marshall points out that while Auracast is an open standard, each company can implement it as it sees fit, which could mean that some Auracast speakers won’t work with the Heston 120. JBL’s Auracast speakers, like the Charge 6, for example, can only share and access audio from other JBL Auracast speakers.
    Still, Auracast-enabled soundbars like the Heston are opening up a new era in home theater technology, one where we’ll have a lot more freedom to choose the kind, number, and placement of speakers. It will also reduce the number of gadgets we buy: when your portable Bluetooth speaker can double as a surround speaker, that’s one less device in our ever-expanding world of tech.
    More options coming soon
    Auracast-enabled soundbars are the first step toward greater flexibility and choice in home theater. Soon, there will be more alternatives. Dolby has promised it will launch a soundbar alternative technology called Dolby Atmos FlexConnect, which will let a compatible TV send multichannel audio to a variety of wireless speakers that you’ll be able to place almost anywhere in your room.
    Fraunhofer IIS, the entity that gave us the MP3 file format, has its own version of FlexConnect — the somewhat awkwardly named UpHear Flexible Rendering. We haven’t seen any commercially available systems based on either Dolby’s or Fraunhofer’s tech so far, but I expect that to change in 2025.
    WWW.DIGITALTRENDS.COM
    Marshall’s first soundbar will change how we think about home theater
    With its gold accents, prominent control knobs, and guitar amp styling, Marshall’s hefty Heston 120 looks like no other soundbar on the planet. But what fascinates me about the company’s first TV speaker isn’t the styling (it looks exactly like I’d expect from a Marshall product); it’s how it’s been engineered to work with the company’s equally iconic portable Bluetooth speakers: It uses Bluetooth. Wait, I know that sounds obvious, but bear with me, because this is actually a new and intriguing change to the way soundbars work.

    First, a quick 101 on the Heston 120. It’s priced at $1,000, which should tell you right away that Marshall isn’t messing around. That’s the same price as the Sonos Arc Ultra and Bowers & Wilkins Panorama 3, and only $100 more than the Bose Smart Ultra Soundbar. It packs 11 drivers, including two dedicated subwoofers, and can process both Dolby Atmos and DTS:X in a 5.1.2-channel configuration. It has onboard mics for room calibration, and it supports a wide array of protocols, including Apple AirPlay, Google Cast, Spotify Connect, and Tidal Connect. On the back panel, you get an Ethernet jack, an HDMI passthrough input with 4K/120Hz/Dolby Vision support, stereo RCA analog jacks (for a turntable or other gear), and a dedicated subwoofer output — something you rarely find on soundbars. Marshall has also redesigned its mobile app to give people deep control over the Heston as well as the company’s full range of existing headphones, earbuds, and speakers.

    Expansion via Bluetooth

    Where things get interesting is on the wireless side of the equation. The Heston 120 supports Wi-Fi 6 and Bluetooth 5.3. That’s not unusual — all three of the competitors I mentioned above have the same or similar specs. What *is* unusual is how it uses these connections, specifically Bluetooth.
    Marshall considers the Heston 120 an all-in-one speaker that’s designed to work equally well for movies and music. However, the company also recognizes that some people want even more immersion from their TV sound systems, so it offers expansion via wireless speakers. Normally, when a soundbar is expandable with additional speakers, those connections are made via Wi-Fi (Sonos, Bluesound, Denon) or dedicated onboard transmitter/receivers (Bose, Sony, Klipsch). Bluetooth has never been considered a viable option because of issues around latency and limitations on transmitting multiple audio channels (e.g., low frequency, surround left, surround right) simultaneously.

    However, the Heston 120 is Bluetooth Auracast compatible — as far as I know, that’s a first for a soundbar — a technology that overcomes traditional Bluetooth limitations. Unlike earlier Bluetooth standards, which could create audio lag of 100-300 milliseconds, Auracast can achieve latency as low as 30 milliseconds. That should be almost imperceptible for dialogue synchronization, and even less noticeable for low-frequency bass or surround sound effects. Moreover, an Auracast device, like a TV or soundbar, can transmit multiple discrete broadcasts. In theory, it could handle multiple wireless subwoofers, two or four surround speakers, plus one or more sets of wireless headphones or hearing aids — each with a dedicated sound stream.

    More choice, more flexibility

    So what does this mean? Marshall’s ultimate goal is to let you use any pair of Auracast-capable Bluetooth speakers as your Heston 120 left/right surrounds, and an additional Auracast subwoofer for low-frequency effects. Initially, however, the plan is more conservative. At launch, the Heston 120 will support a single Marshall-built wireless subwoofer, and later in the year you’ll be able to add two Marshall Bluetooth speakers as left/right surrounds.
    You’ll have a lot of choice — all of Marshall’s third-gen Homeline Bluetooth speakers are Auracast-ready — from the small but mighty Emberton III to the 120-watt Woburn III. Once they receive a planned firmware update, you can expect them all to work with the Heston as satellite speakers via Bluetooth.

    Typically, wireless surround speakers and subwoofers need to be plugged into a wall at all times. That provides power for their built-in amplifiers and Wi-Fi network connections. Bluetooth requires far less power than Wi-Fi, so if your Marshall portable Bluetooth speaker has a 20-hour battery, that’s 20 hours of completely wire-free home theater listening. And if, for some reason, you don’t have a Wi-Fi network, you can still assemble a multi-speaker system.

    Marshall points out that while Auracast is an open standard, each company can implement it as it sees fit, and that could mean some Auracast speakers won’t work with the Heston 120. JBL Auracast speakers like the Charge 6, for example, can only share and access audio from other JBL Auracast speakers. Still, Auracast-enabled soundbars like the Heston are opening up a new era in home theater technology, one where we’ll have a lot more freedom to choose the kind, number, and placement of speakers. It should also reduce the number of gadgets we buy: when your portable Bluetooth speaker can double as a surround speaker, that’s one less device in our ever-expanding world of tech.

    More options coming soon

    Auracast-enabled soundbars are the first step toward greater flexibility and choice in home theater. Soon, there will be more alternatives. Dolby has promised it will launch a soundbar-alternative technology called Dolby Atmos FlexConnect, which will let a compatible TV send multichannel audio to a variety of wireless speakers that you’ll be able to place almost anywhere in your room.
Fraunhofer IIS, the entity that gave us the MP3 file format, has its own version of FlexConnect — the somewhat awkwardly named UpHear Flexible Rendering. We haven’t seen any commercially available systems based on either Dolby’s or Fraunhofer’s tech so far, but I expect that to change in 2025.
  • Beyond single-model AI: How architectural design drives reliable multi-agent orchestration


    We’re seeing AI evolve fast. It’s no longer just about building a single, super-smart model. The real power, and the exciting frontier, lies in getting multiple specialized AI agents to work together. Think of them as a team of expert colleagues, each with their own skills — one analyzes data, another interacts with customers, a third manages logistics, and so on. Getting this team to collaborate seamlessly is where the magic happens.
    But let’s be real: Coordinating a bunch of independent, sometimes quirky, AI agents is hard. It’s not just about building cool individual agents; it’s the messy middle bit — the orchestration — that can make or break the system. When you have agents that rely on each other, act asynchronously and potentially fail independently, you’re not just building software; you’re conducting a complex orchestra. This is where solid architectural blueprints come in. We need patterns designed for reliability and scale right from the start.
    The knotty problem of agent collaboration
    Why is orchestrating multi-agent systems such a challenge? Well, for starters:

    They’re independent: Unlike functions being called in a program, agents often have their own internal loops, goals and states. They don’t just wait patiently for instructions.
    Communication gets complicated: It’s not just Agent A talking to Agent B. Agent A might broadcast info that Agents C and D care about, while Agent B is waiting for a signal from E before telling F something.
    They need to have a shared brain (state): How do they all agree on the “truth” of what’s happening? If Agent A updates a record, how does Agent B know about it reliably and quickly? Stale or conflicting information is a killer.
    Failure is inevitable: An agent crashes. A message gets lost. An external service call times out. When one part of the system falls over, you don’t want the whole thing grinding to a halt or, worse, doing the wrong thing.
    Consistency can be difficult: How do you ensure that a complex, multi-step process involving several agents actually reaches a valid final state? This isn’t easy when operations are distributed and asynchronous.

    Simply put, the combinatorial complexity explodes as you add more agents and interactions. Without a solid plan, debugging becomes a nightmare, and the system feels fragile.
    Picking your orchestration playbook
    How you decide agents coordinate their work is perhaps the most fundamental architectural choice. Here are a few frameworks:

    The conductor (hierarchical): This is like a traditional symphony orchestra. You have a main orchestrator (the conductor) that dictates the flow, tells specific agents (musicians) when to perform their piece, and brings it all together.

    This allows for: Clear workflows, execution that is easy to trace, straightforward control; it is simpler for smaller or less dynamic systems.
    Watch out for: The conductor can become a bottleneck or a single point of failure. This scenario is less flexible if you need agents to react dynamically or work without constant oversight.
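    The conductor pattern can be sketched in a few lines. This is an illustrative toy, not any particular framework’s API; the Orchestrator, DataAgent, and CustomerAgent names are invented for the example.

```python
# Toy "conductor" orchestrator: one central component dictates the flow
# and calls each specialized agent in turn. All names are illustrative.

class DataAgent:
    """Analyzes the payload and annotates it."""
    def run(self, payload):
        return {**payload, "analyzed": True}

class CustomerAgent:
    """Handles the customer-facing step."""
    def run(self, payload):
        return {**payload, "customer_notified": True}

class Orchestrator:
    """The conductor: a single, easy-to-trace linear workflow."""
    def __init__(self, agents):
        self.agents = agents

    def execute(self, payload):
        for agent in self.agents:  # each agent performs its piece in order
            payload = agent.run(payload)
        return payload

result = Orchestrator([DataAgent(), CustomerAgent()]).execute({"order": 42})
```

    Note how the single loop makes execution trivial to trace, while also making the Orchestrator the bottleneck and single point of failure the caveat above warns about.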

    The jazz ensemble (federated/decentralized): Here, agents coordinate more directly with each other based on shared signals or rules, much like musicians in a jazz band improvising based on cues from each other and a common theme. There might be shared resources or event streams, but no central boss micro-managing every note.

    This allows for: Resilience (if one musician stops, the others can often continue), scalability, adaptability to changing conditions, more emergent behaviors.
    What to consider: It can be harder to understand the overall flow, debugging is tricky (“Why did that agent do that then?”) and ensuring global consistency requires careful design.
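    A minimal sketch of the jazz-ensemble style, assuming a simple in-process event bus: agents react to each other’s cues, with no central conductor calling the steps. The bus and agent names are invented for illustration.

```python
# Decentralized coordination: agents subscribe to a shared event stream
# and react to each other's signals. All names are illustrative.
from collections import defaultdict

class EventBus:
    """Shared signal stream the 'musicians' listen to."""
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

    def publish(self, topic, event):
        for handler in list(self.handlers[topic]):
            handler(event)

bus = EventBus()
log = []

def inventory_agent(event):
    # Reacts to a new order, then emits its own cue for whoever cares.
    log.append(("inventory_reserved", event["order_id"]))
    bus.publish("inventory.reserved", event)

def shipping_agent(event):
    # Improvises off the inventory agent's cue, not a central conductor.
    log.append(("shipped", event["order_id"]))

bus.subscribe("order.created", inventory_agent)
bus.subscribe("inventory.reserved", shipping_agent)
bus.publish("order.created", {"order_id": 7})
```

    The flow still completes end to end, but notice that no single component knows the whole workflow, which is exactly why debugging this style is trickier.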

    Many real-world multi-agent systems (MAS) end up being a hybrid — perhaps a high-level orchestrator sets the stage; then groups of agents within that structure coordinate decentrally.
    For agents to collaborate effectively, they often need a shared view of the world, or at least the parts relevant to their task. This could be the current status of a customer order, a shared knowledge base of product information or the collective progress towards a goal. Keeping this “collective brain” consistent and accessible across distributed agents is tough.
    Architectural patterns we lean on:

    The central library (centralized knowledge base): A single, authoritative place (like a database or a dedicated knowledge service) where all shared information lives. Agents check books out (read) and return them (write).

    Pro: Single source of truth, easier to enforce consistency.
    Con: Can get hammered with requests, potentially slowing things down or becoming a choke point. Must be seriously robust and scalable.

    Distributed notes (distributed cache): Agents keep local copies of frequently needed info for speed, backed by the central library.

    Pro: Faster reads.
    Con: How do you know if your copy is up-to-date? Cache invalidation and consistency become significant architectural puzzles.

    Shouting updates (message passing): Instead of agents constantly asking the library, the library (or other agents) shouts out “Hey, this piece of info changed!” via messages. Agents listen for updates they care about and update their own notes.

    Pro: Agents are decoupled, which is good for event-driven patterns.
    Con: Ensuring everyone gets the message and handles it correctly adds complexity. What if a message is lost?

    The right choice depends on how critical up-to-the-second consistency is, versus how much performance you need.
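    The three patterns above can be tied together in one sketch, assuming an in-process store: a central source of truth, a per-agent local cache for fast reads, and “shouted” change notifications that evict stale copies. All class names are invented for the example.

```python
# Toy shared-state setup: central store + local cache + invalidation
# messages. Illustrative only; a real system would use a database and
# a message broker rather than in-process callbacks.

class CentralStore:
    """Single source of truth that notifies listeners on every write."""
    def __init__(self):
        self.data = {}
        self.subscribers = []

    def write(self, key, value):
        self.data[key] = value
        for notify in self.subscribers:  # "shout" the update
            notify(key)

    def read(self, key):
        return self.data[key]

class AgentCache:
    """Per-agent local copy, kept honest by invalidation messages."""
    def __init__(self, store):
        self.store = store
        self.local = {}
        store.subscribers.append(self.invalidate)

    def invalidate(self, key):
        self.local.pop(key, None)        # drop the stale copy

    def get(self, key):
        if key not in self.local:        # cache miss: go to the library
            self.local[key] = self.store.read(key)
        return self.local[key]

store = CentralStore()
cache = AgentCache(store)
store.write("order:1", "pending")
first = cache.get("order:1")        # read-through fills the cache
store.write("order:1", "shipped")   # invalidation evicts the old copy
second = cache.get("order:1")       # next read sees the new truth
```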
    Building for when stuff goes wrong (error handling and recovery)
    It’s not if an agent fails, it’s when. Your architecture needs to anticipate this.
    Think about:

    Watchdogs (supervision): This means having components whose job it is to simply watch other agents. If an agent goes quiet or starts acting weird, the watchdog can try restarting it or alerting the system.
    Try again, but be smart (retries and idempotency): If an agent’s action fails, it should often just try again. But this only works if the action is idempotent. That means doing it five times has the exact same result as doing it once (like setting a value, not incrementing it). If actions aren’t idempotent, retries can cause chaos.
    Cleaning up messes (compensation): If Agent A did something successfully, but Agent B (a later step in the process) failed, you might need to “undo” Agent A’s work. Patterns like Sagas help coordinate these multi-step, compensable workflows.
    Knowing where you were (workflow state): Keeping a persistent log of the overall process helps. If the system goes down mid-workflow, it can pick up from the last known good step rather than starting over.
    Building firewalls (circuit breakers and bulkheads): These patterns prevent a failure in one agent or service from overloading or crashing others, containing the damage.
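    The retry-and-idempotency point is easy to see in code. In this toy, each action does its work but fails to acknowledge on the first two attempts, so the retry loop runs it three times: the idempotent write lands in the same state as a single run, while the non-idempotent increment silently triples. All names are invented for illustration.

```python
# Toy demo of why retries need idempotent actions. Illustrative only.
state = {"status": "new", "counter": 0}
calls = {"set": 0, "inc": 0}

def set_status_shipped():
    calls["set"] += 1
    state["status"] = "shipped"          # idempotent: same result every run
    if calls["set"] < 3:
        raise TimeoutError("ack lost")   # work succeeded, confirmation didn't

def increment_counter():
    calls["inc"] += 1
    state["counter"] += 1                # NOT idempotent
    if calls["inc"] < 3:
        raise TimeoutError("ack lost")

def retry(action, max_tries=5):
    """Naive retry loop: safe only when the action is idempotent."""
    for _ in range(max_tries):
        try:
            action()
            return
        except TimeoutError:
            continue
    raise RuntimeError("gave up")

retry(set_status_shipped)   # ran 3 times; state is as if it ran once
retry(increment_counter)    # ran 3 times; counter is 3, not 1 -- chaos
```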

    Making sure the job gets done right (consistent task execution)
    Even with individual agent reliability, you need confidence that the entire collaborative task finishes correctly.
    Consider:

    Atomic-ish operations: While true ACID transactions are hard with distributed agents, you can design workflows to behave as close to atomically as possible using patterns like Sagas.
    The unchanging logbook (event sourcing): Record every significant action and state change as an event in an immutable log. This gives you a perfect history, makes state reconstruction easy, and is great for auditing and debugging.
    Agreeing on reality (consensus): For critical decisions, you might need agents to agree before proceeding. This can involve simple voting mechanisms or more complex distributed consensus algorithms if trust or coordination is particularly challenging.
    Checking the work: Build steps into your workflow to validate the output or state after an agent completes its task. If something looks wrong, trigger a reconciliation or correction process.
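    The unchanging-logbook idea can be sketched with an append-only list of events and a replay function that rebuilds current state from history. Event names and the reducer are invented for the example.

```python
# Toy event sourcing: state changes are appended to an immutable log,
# and the current view is reconstructed by replaying it. Illustrative.

events = []   # append-only; entries are never edited in place

def record(event_type, **data):
    events.append({"type": event_type, **data})

def replay(log):
    """Reconstruct current state from the full history."""
    state = {"orders": {}}
    for e in log:
        if e["type"] == "order_created":
            state["orders"][e["id"]] = "pending"
        elif e["type"] == "order_shipped":
            state["orders"][e["id"]] = "shipped"
    return state

record("order_created", id=1)
record("order_created", id=2)
record("order_shipped", id=1)
state = replay(events)   # the perfect history doubles as an audit trail
```

    Because the log is never mutated, the same replay also powers auditing and debugging: any past state can be rebuilt by replaying a prefix of the log.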

    The best architecture needs the right foundation.

    The post office: This is absolutely essential for decoupling agents. They send messages to the queue; agents interested in those messages pick them up. This enables asynchronous communication, handles traffic spikes and is key for resilient distributed systems.
    The shared filing cabinet: This is where your shared state lives. Choose the right type based on your data structure and access patterns. This must be performant and highly available.
    The X-ray machine: Logs, metrics, tracing – you need these. Debugging distributed systems is notoriously hard. Being able to see exactly what every agent was doing, when and how they were interacting is non-negotiable.
    The directory: How do agents find each other or discover the services they need? A central registry helps manage this complexity.
    The playground: This is how you actually deploy, manage and scale all those individual agent instances reliably.
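    The post-office idea maps directly onto a queue. This sketch uses Python’s stdlib queue.Queue; the producer and consumer “agents” are plain functions for illustration, and in a real system they would run as separate processes or services on either side of a broker.

```python
# Toy message queue between agents: the sender never talks to the
# receiver directly, so a burst of work is simply absorbed by the queue.
import queue

mailbox = queue.Queue()
processed = []

def producer_agent():
    for order_id in (1, 2, 3):           # a burst of work
        mailbox.put({"order_id": order_id})

def consumer_agent():
    while not mailbox.empty():           # drains at its own pace
        msg = mailbox.get()
        processed.append(msg["order_id"])
        mailbox.task_done()

producer_agent()
consumer_agent()
```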

    How do agents chat?
    The way agents talk impacts everything from performance to how tightly coupled they are.

    Your standard phone call: This is simple, works everywhere and is good for basic request/response. But it can feel a bit chatty and can be less efficient for high volume or complex data structures.
    The structured conference call: This uses efficient data formats, supports different call types including streaming and is type-safe. It is great for performance but requires defining service contracts.
    The bulletin board: Agents post messages to topics; other agents subscribe to topics they care about. This is asynchronous, highly scalable and completely decouples senders from receivers.
    Direct line: Agents call functions directly on other agents. This is fast, but creates very tight coupling — agents need to know exactly who they’re calling and where they are.

    Choose the protocol that fits the interaction pattern. Is it a direct request? A broadcast event? A stream of data?
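    As a concrete instance of the bulletin-board style, here is a toy topic-based publish/subscribe exchange: publishers post to named topics without knowing the audience, fan-out to multiple subscribers is free, and a post to a topic nobody subscribed to is simply dropped. Topic and subscriber names are invented for the example.

```python
# Toy bulletin board: senders and receivers are fully decoupled.
from collections import defaultdict

board = defaultdict(list)               # topic -> subscriber callbacks
received = {"billing": [], "audit": []}

def subscribe(topic, name):
    board[topic].append(lambda msg: received[name].append(msg))

def publish(topic, msg):
    for deliver in board[topic]:        # publisher never sees who listens
        deliver(msg)

subscribe("invoice.created", "billing")
subscribe("invoice.created", "audit")   # fan-out: both get every post
publish("invoice.created", {"amount": 99})
publish("shipment.delayed", {"id": 5})  # no subscribers: silently dropped
```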
    Putting it all together
    Building reliable, scalable multi-agent systems isn’t about finding a magic bullet; it’s about making smart architectural choices based on your specific needs. Will you lean more hierarchical for control or federated for resilience? How will you manage that crucial shared state? What’s your plan for when an agent goes down? What infrastructure pieces are non-negotiable?
    It’s complex, yes, but by focusing on these architectural blueprints — orchestrating interactions, managing shared knowledge, planning for failure, ensuring consistency and building on a solid infrastructure foundation — you can tame the complexity and build the robust, intelligent systems that will drive the next wave of enterprise AI.
    Nikhil Gupta is the AI product management leader/staff product manager at Atlassian.

    VENTUREBEAT.COM
    Beyond single-model AI: How architectural design drives reliable multi-agent orchestration
    Join our daily and weekly newsletters for the latest updates and exclusive content on industry-leading AI coverage. Learn More We’re seeing AI evolve fast. It’s no longer just about building a single, super-smart model. The real power, and the exciting frontier, lies in getting multiple specialized AI agents to work together. Think of them as a team of expert colleagues, each with their own skills — one analyzes data, another interacts with customers, a third manages logistics, and so on. Getting this team to collaborate seamlessly, as envisioned by various industry discussions and enabled by modern platforms, is where the magic happens. But let’s be real: Coordinating a bunch of independent, sometimes quirky, AI agents is hard. It’s not just building cool individual agents; it’s the messy middle bit — the orchestration — that can make or break the system. When you have agents that are relying on each other, acting asynchronously and potentially failing independently, you’re not just building software; you’re conducting a complex orchestra. This is where solid architectural blueprints come in. We need patterns designed for reliability and scale right from the start. The knotty problem of agent collaboration Why is orchestrating multi-agent systems such a challenge? Well, for starters: They’re independent: Unlike functions being called in a program, agents often have their own internal loops, goals and states. They don’t just wait patiently for instructions. Communication gets complicated: It’s not just Agent A talking to Agent B. Agent A might broadcast info Agent C and D care about, while Agent B is waiting for a signal from E before telling F something. They need to have a shared brain (state): How do they all agree on the “truth” of what’s happening? If Agent A updates a record, how does Agent B know about it reliably and quickly? Stale or conflicting information is a killer. Failure is inevitable: An agent crashes. A message gets lost. 
An external service call times out. When one part of the system falls over, you don’t want the whole thing grinding to a halt or, worse, doing the wrong thing. Consistency can be difficult: How do you ensure that a complex, multi-step process involving several agents actually reaches a valid final state? This isn’t easy when operations are distributed and asynchronous. Simply put, the combinatorial complexity explodes as you add more agents and interactions. Without a solid plan, debugging becomes a nightmare, and the system feels fragile. Picking your orchestration playbook How you decide agents coordinate their work is perhaps the most fundamental architectural choice. Here are a few frameworks: The conductor (hierarchical): This is like a traditional symphony orchestra. You have a main orchestrator (the conductor) that dictates the flow, tells specific agents (musicians) when to perform their piece, and brings it all together. This allows for: Clear workflows, execution that is easy to trace, straightforward control; it is simpler for smaller or less dynamic systems. Watch out for: The conductor can become a bottleneck or a single point of failure. This scenario is less flexible if you need agents to react dynamically or work without constant oversight. The jazz ensemble (federated/decentralized): Here, agents coordinate more directly with each other based on shared signals or rules, much like musicians in a jazz band improvising based on cues from each other and a common theme. There might be shared resources or event streams, but no central boss micro-managing every note. This allows for: Resilience (if one musician stops, the others can often continue), scalability, adaptability to changing conditions, more emergent behaviors. What to consider: It can be harder to understand the overall flow, debugging is tricky (“Why did that agent do that then?”) and ensuring global consistency requires careful design. 
Many real-world multi-agent systems (MAS) end up being a hybrid: perhaps a high-level orchestrator sets the stage, and then groups of agents within that structure coordinate decentrally.

For agents to collaborate effectively, they often need a shared view of the world, or at least the parts relevant to their task. This could be the current status of a customer order, a shared knowledge base of product information, or the collective progress toward a goal. Keeping this "collective brain" consistent and accessible across distributed agents is tough. Architectural patterns we lean on:

The central library (centralized knowledge base): A single, authoritative place (like a database or a dedicated knowledge service) where all shared information lives. Agents check books out (read) and return them (write). Pro: a single source of truth makes consistency easier to enforce. Con: it can get hammered with requests, potentially slowing things down or becoming a choke point, so it must be seriously robust and scalable.

Distributed notes (distributed cache): Agents keep local copies of frequently needed information for speed, backed by the central library. Pro: faster reads. Con: how do you know whether your copy is up to date? Cache invalidation and consistency become significant architectural puzzles.

Shouting updates (message passing): Instead of agents constantly asking the library, the library (or other agents) shouts out "Hey, this piece of info changed!" via messages. Agents listen for the updates they care about and update their own notes. Pro: agents are decoupled, which is good for event-driven patterns. Con: ensuring everyone gets the message and handles it correctly adds complexity. What if a message is lost?

The right choice depends on how critical up-to-the-second consistency is versus how much performance you need.

Building for when stuff goes wrong (error handling and recovery)

It's not if an agent fails, it's when. Your architecture needs to anticipate this.
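Before digging into failure handling, the "shouting updates" pattern from the shared-state discussion above is worth a concrete sketch. This is an in-process stand-in for a real broker like Kafka or RabbitMQ; the class and topic names are hypothetical.

```python
# Sketch of "shouting updates": a knowledge store publishes change events,
# and agents keep local caches fresh by subscribing, instead of polling.
# Bus stands in for a real message broker; names here are assumptions.
from collections import defaultdict


class Bus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self.subscribers[topic]:
            handler(payload)


class KnowledgeStore:
    """The central library: writes also announce the change on the bus."""

    def __init__(self, bus):
        self.records, self.bus = {}, bus

    def write(self, key, value):
        self.records[key] = value
        self.bus.publish("record_changed", {"key": key, "value": value})


class CachingAgent:
    """Keeps 'distributed notes' that the bus keeps up to date."""

    def __init__(self, bus):
        self.cache = {}
        bus.subscribe("record_changed", self.on_change)

    def on_change(self, event):
        self.cache[event["key"]] = event["value"]


bus = Bus()
store = KnowledgeStore(bus)
agent = CachingAgent(bus)
store.write("order_42_status", "shipped")
print(agent.cache["order_42_status"])  # the agent saw the update without polling
```

A real broker adds the missing pieces this sketch glosses over: durable delivery, acknowledgements, and what happens when a subscriber is down while a message is published.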
Think about:

Watchdogs (supervision): Components whose only job is to watch other agents. If an agent goes quiet or starts acting weird, the watchdog can try restarting it or alert the rest of the system.

Try again, but be smart (retries and idempotency): If an agent's action fails, it should often just try again. But this only works if the action is idempotent, meaning that doing it five times has the exact same result as doing it once (like setting a value, not incrementing it). If actions aren't idempotent, retries can cause chaos.

Cleaning up messes (compensation): If Agent A did something successfully but Agent B (a later step in the process) failed, you might need to "undo" Agent A's work. Patterns like sagas help coordinate these multi-step, compensable workflows.

Knowing where you were (workflow state): Keeping a persistent log of the overall process helps. If the system goes down mid-workflow, it can pick up from the last known good step rather than starting over.

Building firewalls (circuit breakers and bulkheads): These patterns prevent a failure in one agent or service from overloading or crashing others, containing the damage.

Making sure the job gets done right (consistent task execution)

Even with individual agent reliability, you need confidence that the entire collaborative task finishes correctly. Consider:

Atomic-ish operations: While true ACID transactions are hard with distributed agents, you can design workflows to behave as close to atomically as possible using patterns like sagas.

The unchanging logbook (event sourcing): Record every significant action and state change as an event in an immutable log. This gives you a perfect history, makes state reconstruction easy, and is great for auditing and debugging.

Agreeing on reality (consensus): For critical decisions, you might need agents to agree before proceeding.
This can involve simple voting mechanisms or more complex distributed consensus algorithms when trust or coordination is particularly challenging.

Checking the work (validation): Build steps into your workflow to validate the output or state after an agent completes its task. If something looks wrong, trigger a reconciliation or correction process.

The best architecture needs the right foundation:

The post office (message queues/brokers like Kafka or RabbitMQ): Absolutely essential for decoupling agents. Senders put messages on the queue; agents interested in those messages pick them up. This enables asynchronous communication, handles traffic spikes, and is key for resilient distributed systems.

The shared filing cabinet (knowledge stores/databases): Where your shared state lives. Choose the right type (relational, NoSQL, graph) based on your data structure and access patterns. It must be performant and highly available.

The X-ray machine (observability platforms): Logs, metrics, tracing: you need all of these. Debugging distributed systems is notoriously hard, and being able to see exactly what every agent was doing, when, and how they were interacting is non-negotiable.

The directory (agent registry): How do agents find each other or discover the services they need? A central registry helps manage this complexity.

The playground (containerization and orchestration like Kubernetes): How you actually deploy, manage, and scale all those individual agent instances reliably.

How do agents chat? (Communication protocol choices)

The way agents talk impacts everything from performance to how tightly coupled they are.

Your standard phone call (REST/HTTP): Simple, works everywhere, and good for basic request/response. But it can feel a bit chatty and be less efficient for high volumes or complex data structures.

The structured conference call (gRPC): Uses efficient data formats, supports different call types including streaming, and is type-safe.
It is great for performance but requires defining service contracts.

The bulletin board (message queues; protocols like AMQP, MQTT): Agents post messages to topics; other agents subscribe to the topics they care about. This is asynchronous, highly scalable, and completely decouples senders from receivers.

Direct line (RPC, less common): Agents call functions directly on other agents. This is fast but creates very tight coupling: agents need to know exactly who they're calling and where they are.

Choose the protocol that fits the interaction pattern. Is it a direct request? A broadcast event? A stream of data?

Putting it all together

Building reliable, scalable multi-agent systems isn't about finding a magic bullet; it's about making smart architectural choices based on your specific needs. Will you lean more hierarchical for control or federated for resilience? How will you manage that crucial shared state? What's your plan for when (not if) an agent goes down? Which infrastructure pieces are non-negotiable?

It's complex, yes. But by focusing on these architectural blueprints, namely orchestrating interactions, managing shared knowledge, planning for failure, ensuring consistency, and building on a solid infrastructure foundation, you can tame the complexity and build the robust, intelligent systems that will drive the next wave of enterprise AI.

Nikhil Gupta is the AI product management leader/staff product manager at Atlassian.
• BOYAMIC 2 Rebuilds Mobile Audio with AI and Onboard Capture

    Wireless mics fail when they rely too much on perfect conditions.
    BOYAMIC 2 fixes that by making every part of the system self-contained.
    Each transmitter records on its own.
    Each receiver controls levels, backups, and signal without needing an app.
    Noise is filtered in real time.
    Recording keeps going even if the connection drops.
    Designer: BOYAMIC
    There’s no need for a separate recorder or post-edit rescue.
    The unit handles gain shifts, background interference, and voice clarity without user intervention.
    Everything shows on screen.
    Adjustments happen through physical controls.
    Files are saved directly to internal memory.
    This system is built to capture clean audio without depending on external gear.
    It records immediately, adapts instantly, and stores everything without breaking the workflow.
    Industrial Design and Physical Form
    Each transmitter is small but solid.
    It’s 40 millimeters tall with a ridged surface that helps with grip and alignment.
    The finish reduces glare and makes handling easier.
    You can clip it or use the built-in magnet.
    Placement is quick, and it stays put.
    The record button is recessed, so you won’t hit it by mistake.
    An LED shows when it’s active.
    The mic capsule stays exposed but protected, avoiding interference from hands or clothing.
    Nothing sticks out or gets in the way.
     
    The receiver is built around a screen and a knob.
    The 1.1-inch display shows battery, signal, gain, and status.
    The knob adjusts volume and selects settings.
    It works fast, without touchscreen lag.
    You can see and feel every change.
    Connections are spaced cleanly.
    One side has a USB-C port.
    The other has a 3.5 mm jack.
    A plug-in port supports USB-C or Lightning.
    The mount is fixed and locks into rigs without shifting.
    The charging case holds two transmitters and one receiver.
    Each has its own slot with magnetic contacts.
    Drop them in, close the lid, and they stay in place.
    LEDs on the case show power levels.
    There are no loose parts, exposed pins, or extra steps.
    Every shape and control supports fast setup and clear operation.
    You can press, turn, mount, and move without second-guessing.
    The design doesn’t try to be invisible; it stays readable, durable, and direct.
    Signal Processing and Audio Control
    BOYAMIC 2 uses onboard AI to separate voice from background noise.
    The system was trained on over 700,000 real-world sound samples.
    It filters traffic, crowds, wind, and mechanical hum in real time.
    Depending on the environment, you can toggle between strong and weak noise reduction.
    Both modes work directly from the transmitter or through the receiver.
    The mic uses a 6mm condenser capsule with a 48 kHz sample rate and 24-bit depth.
    The signal-to-noise ratio reaches 90 dB.
    Two low-cut filter options, at 75 Hz and 150 Hz, handle low-end rumble.
    These are effective against HVAC, engine hum, or low vibration.
    Gain is managed with automatic control.
    The system boosts quiet voices and pulls back when sound gets too loud.
    Built-in limiters stop clipping during spikes.
    A safety track records a second copy at -12 dB for backup.
    This makes it harder to lose a usable take even when volume jumps suddenly.
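The -12 dB figure is easy to put in perspective with a little arithmetic. The sketch below assumes the spec refers to amplitude gain, which BOYA's materials don't state explicitly.

```python
# Rough arithmetic for the -12 dB safety track: the backup copy is recorded
# at roughly a quarter of the main track's amplitude, so a peak that clips
# the main track can still fit on the backup with headroom to spare.
# (Assumption: the -12 dB figure is an amplitude gain, i.e. 20*log10 scale.)
scale = 10 ** (-12 / 20)
print(round(scale, 3))  # about 0.251, i.e. roughly 4x extra headroom
```

In other words, a transient about four times louder than full scale on the main track should still land within range on the safety copy.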
    Each setting is adjustable on screen.
    You don’t need a mobile app to access basic controls.
    Everything runs live and updates immediately.
    There are no delays or sync problems during capture.
    Recording and Storage
    Each transmitter records internally without needing the receiver.
    Files are saved in 32-bit float or 24-bit WAV formats.
    Internal storage is 8 GB.
    That gives you about ten hours of float audio or fifteen hours of 24-bit.
    When full, the system loops and overwrites older files.
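The quoted capacities are roughly consistent with uncompressed mono WAV at the stated sample rate. The sanity check below assumes mono audio, decimal gigabytes, and no container overhead; none of those details are published in the article.

```python
# Sanity check of the quoted storage figures (assumptions: mono audio,
# 8 GB = 8e9 bytes, no file-format overhead or safety-track duplication).
SAMPLE_RATE = 48_000           # Hz, per the spec
STORAGE_BYTES = 8 * 10**9      # 8 GB internal storage


def hours_of_audio(bytes_per_sample: int) -> float:
    bytes_per_second = SAMPLE_RATE * bytes_per_sample
    return STORAGE_BYTES / bytes_per_second / 3600


print(f"32-bit float: {hours_of_audio(4):.1f} h")  # roughly 11.6 h
print(f"24-bit PCM:   {hours_of_audio(3):.1f} h")  # roughly 15.4 h
```

The 24-bit result matches the quoted fifteen hours; the float result comes out a bit above the quoted ten, which could reflect overhead or the safety track eating into capacity.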
    Recording continues even if the connection drops.
    Every session is split into timestamped chunks for fast transfer.
    You can plug the transmitter into any USB-C port and drag the files directly.
    No software is needed.
    This setup protects against signal loss, battery drops, or app crashes.
    The mic stays live, and the recording stays intact.
Each transmitter runs for up to nine hours with noise cancellation and onboard recording switched off.
    With both features on, the runtime is closer to six hours.
    The receiver runs for about fifteen hours.
    The charging case holds enough power to recharge all three units twice.
    The system uses 2.4 GHz digital transmission.
    Its range can reach up to 300 meters in open areas.
    With walls or obstacles, it drops to around 60 meters.
    Latency stays at 25 milliseconds, even at long distances.
    You get reliable sync and stable audio across open ground or indoor spaces.
    Charging is handled through the included case or by direct USB-C.
    Each device takes under two hours to recharge fully.
    Compatibility and Multi-Device Support
    The system supports cameras, smartphones, and computers.
    USB-C and Lightning adapters are included.
    A 3.5 mm TRS cable connects the receiver to most cameras or mixers.
    While recording, you can charge your phone through the receiver, which is useful for long mobile shoots.
    One transmitter can send audio to up to four receivers at once, which helps with multi-angle setups or backup channels.
    The receiver also supports stereo, mono, and safety track modes.
    Based on your workflow, you choose how audio is split or merged.
    Settings can be changed from the receiver screen or through the BOYA app.
    The app adds firmware updates, custom EQ profiles, and gain presets for different camera brands.
But the core controls don't depend on it. The post BOYAMIC 2 Rebuilds Mobile Audio with AI and Onboard Capture first appeared on Yanko Design.
Source: www.yankodesign.com