• How libraries are becoming launchpads for music careers  

    In an era dominated by artificial intelligence and smartphones, one of the most overlooked engines of economic growth sits quietly at the heart of every neighborhood: the public library. 

    Gone are the days when libraries were sanctuaries reserved solely for reading and research. Today, they are being reimagined as dynamic hubs for workforce development, creative-sector support, and cultural exchange. Across the country, these reservoirs of knowledge are evolving into digital and physical beacons of community resilience. 

    Local access, global reach: A case study in artist empowerment 

    In Huntsville, where I serve as the city’s first music officer, we’ve partnered with our public library system to develop a multifunctional creative hub—with music at its core. A primary pillar of our collaboration is Blast Music, a digital streaming platform designed to showcase local talent. It’s a model other cities can and should replicate. 

    Through the Blast program, artists are paid, promoted, and added to a curated library collection—offering not only exposure, but bona fide industry credentials. Over 100 local artists are currently featured on the platform, and we will welcome up to 50 additional artists into the program annually. 

    The ripple effect of Blast is real. The free service empowers local listeners to discover homegrown talent while giving musicians tools to grow their fan base and attract industry attention. Perhaps most importantly, Blast provides emerging artists with resume-worthy recognition—essential for building sustainable careers in a tough industry. 

    But Blast isn’t just about digital reach—it’s embedded in Huntsville’s cultural DNA. From artist showcases like the Ladies of Blast event at the Orion Amphitheater, to community events like Hear to Be Seen (a portrait exhibition of Blast musicians), to stages designated exclusively for Blast artist performances at Camp to Amp, PorchFest, and more, Blast is bringing music into public spaces and cultivating civic pride. That’s the kind of community infrastructure that libraries are uniquely equipped to deliver. 

    There’s no such thing as too much visibility, and even artists with international acclaim see value in the platform. Huntsville native Kim Tibbs, a vocalist, songwriter, Alabama Music Hall of Fame honoree and UK chart-topper, submitted her album The Science of Completion Volume I to Blast—not only for more exposure, but to mentor and support the next generation of artists in her hometown.  

    Libraries as talent incubators 

    Huntsville is part of a broader national trend. In cities like Chicago, Nashville, and Austin, libraries are integrating creative labs, media production studios, and music education into their core services—functioning as public-sector incubators for the creative economy. 

    As technology continues to reshape traditional jobs, libraries are well-positioned to bridge skill gaps and fuel the rise of creative economies, including the vital but often overlooked non-performance roles in the music industry. 

    Huntsville is doubling down on this approach. We’re investing millions into programs that bring interactive music technology workshops to teens at the local library—focusing on hands-on training in production, recording, and audio engineering. With professional equipment, studio spaces, and expert instruction, we’re preparing the next generation for careers both onstage and behind the scenes. 

    Local industry is stepping up too. Hear Technologies, a global leader in sound and AV production, has been designing cutting-edge audio devices for years. They’re now part of a dynamic team collaborating with city leaders to help develop the library’s music maker space, nurture new talent and accelerate our region’s creative growth. 

    This matters now, more than ever 

    Libraries have always been entry points for education, employment, and exploration. But today, they’re more than just information access points—they are gateways to opportunity and launchpads for industries that define the future. By utilizing public space and collaborating with local talent, libraries can become platforms for economic mobility and cultural innovation. This investment isn’t a feel-good gesture. It’s a smart, strategic move for any city building a future that works—for everyone. 

    The playlist is simple: Invest in creative ecosystems, embed them in trusted community institutions like public libraries, and treat music as critical infrastructure.  

    Matt Mandrella is music officer for the City of Huntsville, Alabama. 
  • The World’s First ‘Autofocus’ Spectacles: Hands-on with the ViXion 01S at BEYOND Expo 2025

    You walk into a tech expo expecting the usual suspects – glasses with cameras, wearables that whisper AI prompts in your ear, maybe even a pair of glasses that let you take and make calls hands-free. But tucked away at BEYOND Expo was a pair of spectacles that did something very simple but extremely revolutionary – it used algorithms to improve your vision.
    The ViXion 01S is arguably the world’s first pair of ‘autofocus’ spectacles, designed to work for virtually any vision ailment that calls for corrective lenses. The product’s creator, initially working with visually impaired children, saw how frustrating it was for them to constantly switch between glasses for reading and distance. That pain point sparked a concept: what if eyewear could adapt the way eyes naturally do? None of that ‘let ChatGPT identify objects for me’ – just a pair of spectacles that enables you to see better.
    Designer: Nendo for ViXion

    And that idea, or rather that phrase, stopped me dead in my tracks – autofocus glasses. Possibly the world’s first. But not in the way a camera might autofocus on a face. These use a depth-perception sensor embedded subtly between the lenses, analyzing how far away you’re looking and adjusting the focus of the lenses in real time. The result is magic in the truest sense: your focus shifts from a book in your hand to a sign across the hall, and the glasses reshape their optics in under a second. Block the sensor, and the illusion becomes obvious – your vision blurs instantly, reminding you that these glasses are doing some serious computing… in split-second moments too.

    Forget bifocals or progressive lenses. The ViXion 01S behaves like multifocal glasses with a brain. It doesn’t rely on zones etched into the lens. Instead, it features dual variable lenses that morph their curvature to suit your focal length, from up-close reading at 10 inches to a clear view across a room. Whether you’re myopic, hyperopic, presbyopic, or dealing with messier combinations like anisometropia, the ViXion adjusts. It covers powers all the way from -10 to +10 diopters, spanning practically the entire gamut.
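
    To put those numbers in perspective, the optics behind any focus-tunable lens reduce to a simple rule of thumb: refocusing from far away to an object d meters away takes roughly 1/d additional diopters of power on top of the wearer’s distance prescription. The sketch below is plain illustrative arithmetic, not ViXion’s actual control logic, and the figures in it are assumptions.

```python
# Rough illustration of the diopter arithmetic behind "autofocus" eyewear.
# This is NOT ViXion's algorithm -- just the standard rule of thumb that
# focusing at a distance of d meters demands roughly 1/d extra diopters
# on top of the wearer's distance prescription. Numbers are illustrative.

def required_lens_power(distance_prescription_d: float, object_distance_m: float) -> float:
    """Approximate total lens power (diopters) needed to focus at a given distance."""
    near_add = 1.0 / object_distance_m   # extra power demanded by near focus
    return distance_prescription_d + near_add

# A -6.5 D wearer reading at 10 inches (~0.25 m): the lens must relax by ~4 D.
print(round(required_lens_power(-6.5, 0.254), 2))   # -2.56
# The same wearer looking across a room (~4 m): close to the full distance correction.
print(round(required_lens_power(-6.5, 4.0), 2))     # -6.25
```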

    For me, a guy who’s had specs since 1997, it felt incredible. I’ve got a prescription of nearly -6.5, a little too strong for most tech devices. I can’t vision-correct images in most VR headsets, for example, because their built-in adjustment doesn’t go as high as -6.5; the average wearer needs no more than about -2 or -3, or +1 or +2.

    Setup is easy enough. A short calibration lets you fine-tune your pupillary distance and correction strength – anywhere from -10 to +10 diopters – via a simple switch on either side of the glasses. From there, it’s mostly hands-free. The battery runs over 15 hours on a full charge and tops up via USB-C, making it an all-day companion that recharges while you sleep.

    The aesthetic comes courtesy of Nendo, Japan’s minimalism maestros. Lightweight at just 55 grams, the frame wears its technology like a tailored suit – sharp, unobtrusive, refined. The fact that such an elegant design houses motorized lenses and a depth sensor almost feels like a flex.
    Awards followed, naturally. The ViXion 01S has been recognized at CES, IFA, and the Good Design Awards in Japan. Most recently, it clinched the Beyond Award this year, validating both its design and innovation chops.

    At $500, this isn’t an impulse buy, but consider the math. If you’re someone juggling reading glasses, computer glasses, and regular prescription lenses (not to mention the cumulative cost of eye exams and replacements), it starts to look a lot more reasonable. Especially when one device replaces all the rest.
  • How to design characters for animation that fit a story

    Pro artist Omar Gomet reveals how to create characters that push a story's narrative.
  • Google’s Android Chief Hopes Its ‘New Era’ Will Get People to Ditch Their iPhones

    Android is getting a design refresh, launching a mixed reality platform for smart glasses, and Gemini is expanding to cars and watches. Can it entice the overwhelmingly dominant iPhone-owning youth?
  • Macworld Podcast: The state of the Mac and macOS


    WWDC is coming soon, and on this episode of the Macworld Podcast, we talk about the current state of Mac hardware and macOS, and what that tells us about what Apple could be doing at WWDC.

    This is episode 935 with Jason Cross, Michael Simon, and Roman Loyola. 

    Watch episode 935 on YouTube

    Listen to episode 935 on Apple Podcasts

    Listen to episode 935 on Spotify

    Get info

    Click on the links below for more information on what was discussed on the show. 

    macOS 16: Everything we know so far about the next Mac update

    WWDC 2025: Everything you need to know before Apple’s big event

    Subscribe to the Macworld Podcast

    You can subscribe to the Macworld Podcast—or leave us a review!—right here in the Podcasts app. The Macworld Podcast is also available on Spotify and on the Macworld Podcast YouTube channel. Or you can point your favorite podcast-savvy RSS reader at: https://feeds.megaphone.fm/macworld

    To find previous episodes, visit Macworld’s podcast page or our home on MegaPhone.

  • The making of Enemies: The evolution of digital humans continues with Ziva

    From The Heretic’s Gawain to Louise in Enemies, our Demo team continues to create real-time cinematics that push the boundaries of Unity’s capabilities for high-fidelity productions, with a special focus on digital humans.

    The pursuit to create ever more realistic digital characters is endless. Since the launch of Enemies at GDC 2022, we have continued our research and development into solutions for better and more believable digital human creation, in collaboration with Unity’s Graphics Engineering team and commercially available service providers specializing in that area.

    At SIGGRAPH 2022, we announced our next step: replacing the heavy 4D data playback of the protagonist’s performance with a lightweight Ziva puppet. This recent iteration integrates Ziva animation technology with the latest in Unity’s graphics advancements, including the High Definition Render Pipeline (HDRP) – all with the aim of further developing an end-to-end pipeline for character asset creation, animation, and authoring.

    Along with the launch of a new strand-based Hair Solution and an updated Digital Human package, the Enemies real-time demo is now available to download. You can run it in real time and experience it for yourself, just as it was shown at Unite 2022.

    While the cinematic may not appear too different from the original, its final rendered version shows how the integration of Ziva technology has brought a new dimension to our protagonist.

    Ziva brings decades of experience and pioneering research from the VFX industry to enable greater animation quality for games, linear content production, and real-time projects. Its machine learning (ML)-based technology helps achieve extraordinary realism in facial animation, as well as in body and muscle deformations.

    To achieve the level of realism in Enemies, Ziva used machine learning and 4D data capture, which goes beyond the traditional process of 3D-scanning actors. The static, uneditable 4D-captured facial performance has now been transformed into a real-time puppet with a facial rig that can be animated and adjusted at any time – all while maintaining high fidelity.

    Our team built on that 4D capture data and trained a machine-learned model that could be animated to create any performance. The end result is a 50 MB facial rig that has all the detail of the 4D-captured performance, without having to carry its original 3.7 GB of weight.

    This technology means that you can replicate the results with a fraction of the animation data, creating real-time results in a way that 4D playback does not typically allow.

    In order to achieve this, Unity’s Demo team focused on the following areas.

    Creating the puppet

    To create this new version of Louise, we worked with the Ziva team. They handled the machine learning workflow using a preexisting 4D data library. Additional 4D data was collected from a new performance by the original Enemies actor (we only needed to collect a few additional expressions). This is one of the unique advantages of our machine learning approach.

    With this combined dataset, we trained a Ziva puppet to accurately reproduce the original performance. We could then alter this performance in any way, ranging from tweaking minute details to changing the entire expression.

    Using the 4D data capture through machine learning, we could enable any future performance to run on any 3D head, demonstrated by applying a single performance to multiple faces of varying proportions. This makes it easier to expand the range of performances to multiple actors and real-time digital humans for any future editions.
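
    The 50 MB rig versus 3.7 GB of raw capture is easier to appreciate with some back-of-the-envelope arithmetic: 4D playback stores a position for every vertex of every frame, while the trained puppet only needs a few hundred parameter values per frame plus the learned model itself. The sketch below is illustrative only – mesh density, frame rate, and clip length are assumptions, not figures from the Enemies production.

```python
# Back-of-the-envelope comparison: raw 4D playback vs. a parameter-driven puppet.
# Vertex count, frame rate, and clip length are ASSUMED for illustration; the
# article only gives the real-world outcome (3.7 GB of 4D data vs. a 50 MB rig).

VERTICES = 55_000   # assumed face-mesh density
FPS = 30            # assumed capture/playback rate
SECONDS = 180       # assumed length of the captured performance
PARAMETERS = 250    # the article cites 200-300 learned parameters

frames = FPS * SECONDS
raw_bytes = VERTICES * 3 * 4 * frames      # xyz float32 per vertex, every frame
puppet_bytes = PARAMETERS * 4 * frames     # one float32 per parameter, every frame

print(f"raw 4D playback : {raw_bytes / 1e9:.2f} GB")    # ~3.56 GB with these assumptions
print(f"parameter track : {puppet_bytes / 1e6:.2f} MB")  # ~5.4 MB; the 50 MB rig size
                                                         # is mostly the learned model
```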

    The puppet’s control scheme

    Once the machine learning was completed, we had 200–300 parameters that, used in combination and at different weights, could recreate everything we had seen in the 4D data with incredible accuracy. We didn’t have to worry about a hand-animated performance looking different in the hands of different animators: the persona and idiosyncrasies of the original actor come through no matter how we choose to animate the face.

    Because Ziva is based on deformations rather than an underlying facial rig, we could manipulate even the smallest detail; the trained face uses a control scheme developed to take full advantage of the fidelity of the machine-learned parameters and data.

    At this point, creating a rig is a relatively flexible process, as we can simply tap into those machine-learned parameters – which, in turn, deform the face. There are no joints in a Ziva puppet besides the basic logical face and neck joints.

    So what does this all mean?

    There are many advantages to this new workflow. First and foremost, we now have the ability to dynamically interact with the performance of the digital human in Enemies.

    This allows us to change the character’s performance after it has already been delivered. Digital Louise can now say the same lines as before, but with very different facial expressions. For example, she can be friendlier or angrier, or convey any other emotion the director envisions.

    We are also able to manually author new performances with the puppet – facial expressions and reactions that the original actress never performed. If we wanted to develop the story into an interactive experience, it would be important to expand what the digital character can react to, such as a player’s chess moves, with nuances of approval or disapproval.

    For the highest level of fidelity, the Ziva team can even create a new puppet with its own 4D dataset. Ziva also recently released a beta version of Face Trainer, a product built on a comprehensive library of 4D data and ML algorithms. It can be used to train any face mesh to perform the most complex expressions in real time, without any new 4D capture.

    Additionally, it is possible to create new lines of dialogue at a fraction of the time and cost that the first line required. We can do this either by having the original actress perform additional lines with a head-mounted camera (HMC) and using that HMC data to drive the puppet, or by having another performer deliver the new lines and retargeting their HMC data to the existing puppet.

    At SIGGRAPH Real-Time Live! we demonstrated how to apply the original performance from Enemies to the puppet of another actress – ultimately replacing the protagonist of the story with a different person, without changing anything else.

    This performance was then shown during the Unite 2022 keynote (segment 01:03:00), where Enemies ran on an Xbox Series X with DX12 and real-time ray tracing.

    To further enhance the visual quality of Enemies, a number of HDRP systems were leveraged, including Shader Graph motion vectors, Adaptive Probe Volumes (APV), and, of course, hair shading.

    Enemies also makes use of real-time ray tracing in HDRP and Unity’s native support for NVIDIA DLSS 2.0 (Deep Learning Super Sampling), which enable it to run at 4K image quality comparable to native resolution. All of these updated Unity features are now available in Unity 2022 LTS.
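
    To make the “deformations, not joints” idea concrete, here is a deliberately simplified sketch. Ziva’s production model is proprietary, nonlinear machine learning; the linear, blendshape-style stand-in below only illustrates how a compact parameter vector can drive dense per-vertex deformation of a neutral mesh, and every name and size in it is a hypothetical placeholder.

```python
import numpy as np

# Simplified stand-in for a parameter-driven facial puppet. Ziva's real model
# is learned and nonlinear; this linear, blendshape-style version only shows
# the general idea: a small parameter vector expands into dense per-vertex
# offsets applied to a neutral mesh. All sizes are illustrative assumptions.

N_VERTICES = 10_000   # a production face mesh would be far denser
N_PARAMS = 250        # the article cites 200-300 learned parameters

rng = np.random.default_rng(0)
neutral_mesh = rng.standard_normal((N_VERTICES, 3)).astype(np.float32)
# "Learned" deformation basis: one per-vertex offset field per parameter.
basis = rng.standard_normal((N_PARAMS, N_VERTICES, 3)).astype(np.float32)

def pose_face(params: np.ndarray) -> np.ndarray:
    """Blend the deformation basis by the control parameters and apply it."""
    offsets = np.tensordot(params, basis, axes=1)  # -> (N_VERTICES, 3)
    return neutral_mesh + offsets

# One animation frame: a handful of controls active, the rest at rest.
frame_params = np.zeros(N_PARAMS, dtype=np.float32)
frame_params[:5] = [0.8, -0.2, 0.1, 0.0, 0.4]  # e.g., brow, jaw, and lip controls
posed = pose_face(frame_params)
print(posed.shape)  # (10000, 3)
```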

    The brand-new strand-based Hair Solution, developed during the creation of the Enemies demo, can simulate individual hairs in real time. This technology is now available as an experimental package via GitHub (requires Unity 2020.2.0f1 or newer), along with a tutorial to get started.

    By integrating a complete pipeline for authoring, simulation, shading, and rendering hair in Unity, this solution is applicable to digital humans and creatures, in both realistic and stylized projects. The development work continues with a more performant solution for hair rendering enabled by the upcoming Software Rasterizer in HDRP. We are also diversifying the authoring options by adopting and integrating the Wētā Wig tool for more complex grooms, as showcased in the Lion demo.

    Expanding on the technological innovations from The Heretic, the updated Digital Human package provides a realistic shading model for the characters rendered in Unity. The updates include:

    - A better 4D pipeline
    - A more performant Skin Attachment system on the GPU for high-density meshes
    - More realistic eyes with caustics on the iris (available in HDRP as of Unity 2022.2)
    - A new skin shader, built with the available Editor technology
    - Tension tech for blood-flow simulation and wrinkle maps, eliminating the need for a facial rig

    And as always, there is more to come.

    Discover how Ziva can help bring your next project to life. Register your interest to receive updates or get early access to future Ziva beta programs. If you’d like to learn more, you can contact us here.
  • Apple iPhone still dominates consumer smartphone brand loyalty despite modest drop

    Apple iPhone owners are still sticking with the brand at rates that shame every other smartphone producer, and a small dip in loyalty doesn't mean much without better data. Apple iPhone owners aren't quite as loyal as they used to be: new data shows that 89% bought another iPhone when upgrading. That figure, known as the loyalty rate, measures how often customers stick with the same brand. Though 89% still stay with the company when upgrading, that's down from a 94% high in 2021, according to new data from Consumer Intelligence Research Partners (CIRP).
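
    For clarity, the loyalty rate is simply the share of upgraders who buy the same brand again. A minimal sketch of that arithmetic (illustrative only, not CIRP’s methodology):

```python
# Minimal sketch of the loyalty-rate arithmetic described above.
# The helper and the sample figures are illustrative, not CIRP's methodology.

def loyalty_rate(same_brand_upgrades: int, total_upgrades: int) -> float:
    """Share of upgraders who stayed with the same brand."""
    return same_brand_upgrades / total_upgrades

print(f"{loyalty_rate(89, 100):.0%}")  # 89%, the rate cited for recent iPhone upgrades
```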
  • Google’s AI agents will bring you the web now

    For the last two decades, Google has brought people a list of algorithmically-selected links from the web for any given search query. At I/O 2025, Google made clear that the concept of Search is firmly in its rearview mirror.
    On Tuesday, Google CEO Sundar Pichai and his executives presented new ways to bring users the web, this time intermediated through a series of AI agents.
    “We couldn’t be more excited about this chapter of Google search where you can truly ask anything […] your simplest and hardest questions, your deepest research, your personalized shopping needs,” said Google’s VP of Search, Liz Reid, onstage at I/O. “We believe AI will be the most powerful engine for discovery that the web has ever seen.”
    The largest announcement of I/O was that Google now offers AI mode to every Search user in the United States. This gives hundreds of millions of people a button to converse with an AI agent that will visit web pages, summarize them any way they’d like, or even help them shop. With Project Mariner, Google is delivering an even more hands-off AI agent to its Ultra subscribers. That agent will handle 10 different tasks simultaneously, visiting web pages and clicking around on those pages while users are free to plug away on something else altogether.
    Google is also making its Deep Research agent, which visits dozens of relevant websites and generates thorough research reports, more personalized, and is connecting it to your Gmail and Drive. In parallel, the company is further integrating Project Astra, its multimodal, real-time AI experience, into Search and Gemini, giving users more ways to speak with an AI agent and let it see what they see.
    I could go on, but you get the idea — AI agents dominated I/O 2025.
    The rise of ChatGPT has forced an AI reckoning at Google, causing the company to rethink how it brings users information from the web. This reckoning really started at last year’s I/O, when Google introduced AI overviews into Search, a launch that was overshadowed by its embarrassing hallucinations. The rollout of AI overviews made it seem as if AI was not ready for primetime, and that Search as we know it was here to stay.

    But at I/O 2025, Google presented a more compelling, fleshed-out approach to how AI would reshape Search, and thus, the web. The company’s new vision suggests that the future of the web, and the company, involves AI agents fetching information from the web and presenting it to users in whatever way they’d like.
    The idea that Google’s AI agents could replace Search is a compelling one, especially because Google is trying to lay an infrastructure for AI agents. Google announced on Tuesday that the SDK for Gemini models will now natively support Anthropic’s MCP, an increasingly popular standard for connecting agents to data sources across the internet.
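    For a sense of what “connecting agents to data sources” through MCP looks like in practice, here is a minimal tool server sketched with the FastMCP helper from the open-source MCP Python SDK, following its quickstart pattern. Treat the import path, decorator, and run call as assumptions that may change between SDK versions; this is a generic MCP example, not Google’s Gemini SDK integration itself.

```python
# Minimal MCP tool server (patterned on the MCP Python SDK quickstart; APIs may
# differ by version). An MCP-aware agent can discover and call the tool below.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def lookup_order_status(order_id: str) -> str:
    """Return the status of an order (placeholder logic for illustration)."""
    # A real server would query a database or an internal API here.
    return f"Order {order_id}: shipped"

if __name__ == "__main__":
    # Serve over the default stdio transport so a local client can connect.
    mcp.run()
```

    The point of the standard is exactly this separation: the tool author exposes capabilities once, and any compliant agent, whether built on Gemini’s SDK or another stack, can discover and invoke them.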
    Google isn’t alone in this shift. At a different tech conference this week, Microsoft CTO Kevin Scott laid out his own vision for an “open agentic web,” in which agents take actions on users’ behalf across the internet. Scott noted that a key feature to make this possible would be the plumbing that connects these agents to each other and data sources — namely, Google’s Agent2Agent protocol and Anthropic’s MCP.
    Despite the enthusiasm, as Ben Thompson notes in Stratechery, the agentic web has its problems. For instance, Thompson notes that if Google sends AI agents to websites instead of people, that largely breaks the ad-supported model of the internet.
    The impacts could vary across industries. Agents may not be a problem for companies that sell goods or services on the internet, such as DoorDash or Ticketmaster — in fact, these companies are embracing agents as a new platform to reach customers. However, the same can’t be said for publishers, which are now fighting with AI agents for eyeballs.
    During I/O, a Google communications leader told me that “human attention is the only truly finite resource,” and the company’s launch of AI agents aims to give users more of their time back. That may all pan out, but AI summaries of articles seem likely to take dollars away from publishers — and potentially devastate the very content creation on which these AI systems depend.
    Further, there’s a lingering problem with AI systems around hallucinations — their tendency to make stuff up and present it as fact — which became embarrassingly clear with Google’s launch of AI overviews. Speaking onstage Tuesday, DeepMind CEO Demis Hassabis even raised concerns about the consistency of AI models.
    “You can easily, within a few minutes, find some obvious flaws with [AI chatbots] — some high school math thing that it doesn’t solve, some basic game it can’t play,” said Hassabis. “It’s not very difficult to find those holes in the system. For me, for something to be called AGI, it would need to be much more consistent across the board.”
    The consequences could be far-reaching. Widespread hallucinations could lead users to become more distrustful of information they encounter on the web. They could also sow misinformation among users. Neither outcome is ideal.
    Google doesn’t seem to be waiting for ad-supported businesses or AI models to catch up — the company is pushing ahead with AI agents anyway. Google has likely done more than any other company to steward the web as we know it. But in what could prove a major turning point, the company’s conception of the web seems to be reorienting around AI agents.
  • Nvidia provides Omniverse Blueprint for AI factory digital twins

    Nvidia today announced a significant expansion of the Nvidia Omniverse Blueprint for AI factory digital twins, now available as a preview. Read more on VentureBeat.
  • OpenAI acquires Io, the startup launched by Jony Ive, the designer of the iPhone

    This is big news: America's AI darling has just officially announced its acquisition of the startup Io, founded a year ago by... Read more at usine-digitale.fr.