Google I/O 2025: Android Takes A Back Seat To AI And XR
Google CEO Sundar Pichai talking about Google Beam, formerly known as Project Starline, at Google I/O 2025 (Photo: Anshel Sag)
Google used its annual I/O event this week to put the focus squarely on AI — with a strong dash of XR. While there’s no doubt that Google remains very committed to Android and the Android ecosystem, it was more than apparent that the company’s work on AI is only accelerating. Onstage, Google executives showed how its Gemini AI models have seen a more than 50x increase in monthly token usage over the past year, with the major inflection point clearly being the release of Gemini 2.5 in March 2025.
I believe that Google’s efforts in AI have been supercharged by Gemini 2.5 and the agentic era of AI. The company also showed its continued commitment to getting Android XR off the ground with the second developer preview of Android XR, which it also announced at Google I/O.
Google’s monthly tokens processed (Chart: Anshel Sag)
Incorporating Gemini And AI Everywhere
For Google, the best way to justify its long-term, continuous investment in Gemini is to make it accessible in as many ways as possible. That includes expanding into markets beyond the smartphone and browser. That’s why Gemini is already replacing Google Assistant in most areas. This is also a necessary move because Google Assistant’s functionality has regressed to the point of frustration as the company has shifted development resources to Gemini. This means we’re getting Gemini via Google TV, Android Auto and Wear OS. Let’s not forget that Android XR is the first operating system from Google built from the ground up during the Gemini era, which means that most of Google’s XR experiences are grounded in AI from the outset, making the most of agents and multimodal AI to improve the user experience.
To accelerate the adoption of on-device AI, Google also announced improvements to LiteRT, its runtime for running AI models locally, with a heavy focus on maximizing the use of on-device NPUs. It also introduced the AI Edge Portal to enable developers to test and benchmark their on-device models. These models will be crucial for enabling low-latency, secure experiences when connectivity is challenged or when data simply cannot leave the device. While I believe that on-device AI performance is going to be important to developers going forward, it is also important to recognize that hybrid AI — mixing on-device and cloud AI processing — is likely here to stay for a very long time.
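For readers curious what that on-device path looks like in practice, below is a minimal sketch of local inference using LiteRT’s published Python Interpreter API. The model file name is a hypothetical placeholder, and nothing here reflects code Google showed at I/O; it simply illustrates the load-allocate-invoke pattern that keeps data on the device.

```python
# A minimal sketch of on-device inference with LiteRT (formerly TensorFlow Lite).
# Assumptions: the ai-edge-litert Python package is installed, and
# "vision_model_quantized.tflite" is a hypothetical placeholder model file.
import numpy as np
from ai_edge_litert.interpreter import Interpreter

# Load the compiled model and allocate its tensors once at startup.
interpreter = Interpreter(model_path="vision_model_quantized.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed an input matching the model's expected shape and dtype.
dummy_input = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy_input)

# Run inference locally; no data leaves the device.
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]["index"])
print(result.shape)
```

On Android itself, the same pattern applies through LiteRT’s Kotlin and C++ APIs, with hardware delegates routing work to a GPU or NPU where one is available.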
Android XR, Smart Glasses And The Xreal Partnership
Because Google introduced most of its Android updates in a separate “Android Show” a week before Google I/O, the Android updates during I/O mostly applied to Android XR. The new Material 3 Expressive design system will find its way across Google’s OSes and looks set to deliver snappier, more responsive experiences at equal or better performance. I wrote extensively about Google’s Android XR launch in December 2024, explaining how it would likely serve as Google’s tip of the spear for enabling new and unique AI experiences. At Google I/O, the company showed the sum of these efforts in terms of both creating partnerships and enabling a spectrum of XR devices from partners.
Google’s Shahram Izadi, vice president and general manager of Android XR, talking about Project Moohan onstage at Google I/O 2025 (Photo: Anshel Sag)
In this vein, Google reiterated its commitment to Samsung and Project Moohan, which Google now says will ship this year. The company also talked about other partnerships in the ecosystem that will enable new form factors for the AI-enabled wearable XR operating system. Specifically, it will be partnering with Warby Parker and Gentle Monster to develop smart glasses. In a press release, Google said it has allotted $150 million for its partnership with Warby Parker, with $75 million already committed to product development and commercialization and the remaining $75 million dependent on reaching certain milestones.
I believe that this partnership is akin to the one Meta established with EssilorLuxottica, leaving the design, fit and retail presence to the eyeglasses experts. Warby Parker is such a good fit because the company is already very forward-thinking on technology, and I believe this partnership can enable Google to make some beautiful smart glasses to compete with the Ray-Ban Meta line. While I absolutely adore my Ray-Ban Metas, I do think they would be considerably more useful if they were running Gemini 2.5, even the Flash version of the model. Gentle Monster is also a great fit for Google because it helps Google better capture the Asian market, and because its oversized designs give Google plenty of room to work with.
Many people have written about their impressions of Project Moohan and the smart glasses from Google I/O, but the reality is that these were not new — or final — products. So, I hope that these XR devices are as exciting to people as they were to me back in December.
Google announces Project Aura onstage during the Google I/O developer keynote (Photo: Anshel Sag)
For me, the more important XR news from the event was the announcement of the Project Aura headset in partnership with Xreal. Project Aura, while still light on details, does seem to indicate that there’s a middle ground for Google between the more immersive Moohan headset and lightweight smart glasses, and it’s evident that Google wants to capture this sweet spot with Xreal’s help. Also, if you know anything about Xreal’s history, it makes sense that it would be the company Google works with to bring 3-D AR to market. Project Aura feels like Google’s way to compete with Meta’s Orion in terms of field of view, 3-D AR capabilities and standalone compute. While many people think of Orion as a pair of standalone glasses, the glasses in fact depend on an external compute puck; with Qualcomm’s help, Project Aura will also use a wired compute puck, though I would love to see that wire disappear in subsequent versions.
The Xreal One and One Pro products already feel like moves in the direction Google is leaning, but with Project Aura it seems that Google wants more diversity within Android XR — and it wants to build a product with the company that has already shipped more AR headsets than anyone else. The wider 70-degree field of view should do wonders for the user experience, and while the price of Project Aura is still unclear, I would expect it to be much more expensive than most of Xreal’s current offerings. Google and Xreal say they will disclose more details about Project Aura at the AWE 2025 show in June, which I will be attending — so look for more details from me when that happens.
Project Starline Becomes Google Beam
Google also updated its XR conferencing platform, formerly called Project Starline, which it has been building with HP. With the introduction of Google Beam, the project has now graduated into a product. While not that much has changed since I last tried out Project Starline at HP’s headquarters last September, the technology is still quite impressive — and still quite expensive. One of the new capabilities for Google Beam, also being made available as part of Google Meet, is near-real-time translated conversation that preserves a person’s tone, expression and accent while translating their speech. I got to experience this at Google I/O, and it was extremely convincing, not to mention a great way to enhance the already quite impressive Beam experience. It really did sound like the translated voice was the person’s own voice speaking English; this was significant on its own, but achieving it with spatial video at fairly low latency was even better. I hope that Google will one day be able to do the translations in real time, synced with the user’s speech.
Google says that it and HP are still coming to market with a Google Beam product later this year and will be showing it off at the InfoComm conference in June. Google has already listed some lead customers for Google Beam, including Deloitte, Salesforce, Citadel, NEC, Hackensack Meridian Health, Duolingo and Recruit. This is a longer list than I expected, but the technology is also more impressive than I initially anticipated, so I am happy to see it finally come to market. I do believe that with time we’ll probably see Google Beam expand beyond the 65-inch screen, but for now that’s the best way to attain full immersion. I also expect that sooner or later we could see Beam working with Android XR devices as well.
Analyst Takeaways From Google I/O
I believe that Google is one of the few companies that genuinely understands the intersection of AI and XR — and that has the assets and capabilities to leverage that understanding. Other companies may have the knowledge but lack the assets, capabilities or execution. I also believe that Google finally understands the “why” behind XR and how much AI helps answer that question. Google’s previous efforts in XR pursued XR for its own sake and didn’t really align well with the rest of the company’s efforts. Especially given the growth of AI overall and the capabilities of Gemini in particular, AR glasses are now one of the best ways to experience AI. Nobody wants to hold their phone up to something for a multimodal AI to see it, and no one wants to type long AI prompts into their phone. They want to interact with AI in the context of more natural visual and auditory experiences. Although smartphones can deliver a fairly good experience for this, they pale in comparison to having the microphones and cameras closer to your eyes and mouth. The more you use AI this way, the less you find yourself needing to pull out your phone. I certainly don’t think smartphones are going to disappear, but I do think they will decline as the place where most of an individual’s AI computing and connectivity happens.
All of this is why I’m much more confident in Google’s approach to XR this time around, even though the company has burned so many bridges with its previous endeavors in the space. More than that, I believe that Google’s previous absence in the XR market has impeded the market’s growth. Now, however, the company is clearly investing in partnerships and ecosystem enablement. It will be important for the company to continue to execute on this and enable its partners to be successful. A big part of that is building a strong XR ecosystem that can compete with the likes of Apple and Meta. It won’t happen overnight, but the success of that ecosystem will be what makes or breaks Google’s approach to XR beyond its embrace of Gemini.
Moor Insights & Strategy provides or has provided paid services to technology companies, like all tech industry research and analyst firms. These services include research, analysis, advising, consulting, benchmarking, acquisition matchmaking and video and speaking sponsorships. Of the companies mentioned in this article, Moor Insights & Strategy currently has a paid business relationship with Google, HP, Meta, Qualcomm, Salesforce and Samsung.