All the Biggest News and Features Announced During Google I/O 2025
It should have been obvious that Google I/O 2025 would be jam-packed, considering the company felt the need to hold a separate event to cover all of its Android news. But color me shocked that Google pulled off a nearly two-hour-long presentation full of announcements and reveals, mostly about AI. Not all AI announcements are equal, of course. Some of the news was geared towards enterprise users, and some towards developers. But many of the features discussed are on their way to consumers' devices too, some as soon as today. These are the updates I'm going to focus on here—you can expect to try out these features today, in the coming weeks, or at some point in the near future.

Gemini Live is coming to the iPhone

Earlier this year, Google rolled out Gemini Live for all Android users via the Gemini app. The feature lets you share your camera feed or screen with Gemini, so it can help answer questions about what you're seeing. As of today, Google is bringing the feature to iPhones via the Gemini app as well. As long as you have the app, you can share your camera and screen with the AI, no matter what platform you're on.

AI Mode is the future of Google Search

Google has been testing AI Mode in Search since March. The feature essentially turns Google Search into more of a Gemini experience, allowing you to stack multiple questions into one complex request. According to Google, its AI can handle breaking down your query and searching the web for the most relevant sources. The result, in theory, is a complete report answering all aspects of your search, including links to sources and images. AI Mode is rolling out to all users—not just testers—over the coming weeks. But it's not just the AI Mode experience that Google has been testing. The company also announced new AI Mode features at I/O.

Cram multiple searches into one

First, there's Deep Search, which multiplies the number of searches AI Mode would typically make for your query and generates an "expert-level fully-cited report" for you. I would still fact-check it thoroughly, seeing as AI has a habit of hallucinating. AI Mode is also getting Gemini Live access, so you can share your screen or camera in Search.

Use "Agent Mode" as a real-world personal assistant

Project Mariner is also coming to AI Mode. Google says you'll have access to "agentic capabilities," which basically means you can rely on the AI to complete tasks for you. For example, you'll be able to ask AI Mode to find you "affordable tickets for this Saturday's Reds game in the lower level," and not only will the bot do the searching for you, it'll fill out the necessary forms. Google says that functionality will apply to event tickets, restaurant reservations, and local appointments.

You can see that in action with Agent Mode, which will theoretically be able to execute complex tasks on your behalf. We don't know a lot about how that will work yet, but we do have a clear example from the Google I/O stage. During the presentation, Alphabet CEO Sundar Pichai tasked Gemini's Agent Mode with finding an apartment with in-unit laundry while keeping to a certain budget. Gemini then got to work, opening the browser, pulling up Zillow, searching for apartments, and booking a tour. AI Mode will also pull from your previous search history to deliver more relevant results. That includes results that apply to your whereabouts—say, local recommendations for an upcoming trip—as well as your preferences.
New Gemini features coming to Workspace

Google announced a number of new Gemini features at I/O, some of which are coming to Workspace. One of the features Google focused on most was personalized smart replies in Gmail. While Gmail already has an AI-powered smart reply feature, this one goes a step further, basing its responses on all of your Google data. The goal is to generate a reply that sounds like you wrote it, and includes all the questions or comments you might reasonably have about the email in question. In practice, I'm not sure why I'd want to let AI do all of my communicating for me, but the feature will be available later this year, for paid subscribers first.

If you use Google Meet with a paid plan, expect to see live speech translation start to roll out today. The feature automatically dubs over speakers on a call in a target language, like an instant universal translator. Let's say you speak English and your meeting partner speaks Spanish: You hear them begin to speak in Spanish, before an AI voice takes over with the English translation.

'Try it on'

Google doesn't want you returning the clothes you order online anymore. The company announced a new feature called "try it on" that uses AI to show you what you'd look like wearing whatever clothing item you're thinking about buying. This isn't a mere concept, either: Google is rolling out "try it on" today to Google Search Labs users. If you want to learn more about the feature and how to use it, check out our full guide.

Android XR

As the rumors suggested, Google talked a bit about Android XR, the company's software experience for glasses and headsets. Most of the news it shared was previously announced, but we did see some interesting features in action. For example, when using one of the future glasses with Android XR built in, you'll be able to access a subtle HUD that can show you everything from photos to messages to Google Maps. On stage, we also saw a live demo of speech translation, with Android XR overlaying an English translation on screen as two presenters spoke in different languages. While there's no firm timeline for when you can try Android XR, Google's big news is that it is working with both Warby Parker and Gentle Monster on making glasses with the software built in.

Veo 3, Imagen 4, and Flow

Google unveiled two new AI generation models at I/O this year: Imagen 4 and Veo 3. Imagen 4 generates higher-quality images with more detail than Imagen 3, Google's previous image generation model. However, the company specifically highlighted Imagen 4's improvements with text generation. If you ask the model to generate a poster, for example, Google says the text will be both accurate to the request and stylistically appropriate.

Google kicked off the show with videos generated by Veo 3, so it's safe to say the company is quite proud of its video generation model. While the results are crisp, colorful, and occasionally jam-packed with elements, it definitely still suffers from the usual quirks and issues of AI-generated video. But the bigger story here is "Flow," Google's new AI video editor. Flow uses Veo 3 to generate videos, which you can then assemble like in any other non-linear editor. You can use Imagen 4 to generate an element you want in a shot, then ask Flow to add it to the next clip. In addition to cutting or extending a shot, you can control the camera movement of each shot independently.
It's the most "impressive" this tech has seemed to me, but outside of a high-tech storyboard, I can't imagine the use for this. Maybe I'm in the minority, but I certainly don't want to watch AI-generated videos, even if they are created with tools similar to the ones human video creators use. Veo 3 is only available to Google AI Ultra subscribers, though Flow is available in a limited capacity with Veo 2 to AI Pro subscribers.

Two new Chrome features

Chrome users can look forward to two new features following Google I/O. First, Google is bringing Gemini directly to the browser—no need to open the Gemini site. Second, Chrome can now update your old passwords on your behalf. This feature is launching later this year, though you'll need to wait for the websites themselves to offer support.

A new way to pay for AI

Finally, Google is offering new subscriptions to access its AI features. Google AI Premium is now Google AI Pro, and remains largely the same, aside from the new ability to access Flow and Gemini in Chrome. It still costs $19.99 per month.

The new subscription is Google AI Ultra, which costs a whopping $249.99 a month. For that price, you get everything in Google AI Pro, but with the highest limits for all of the AI models, including Gemini, Flow, Whisk, and NotebookLM. You also get access to Gemini 2.5 Pro Deep Think, Veo 3, Project Mariner, YouTube Premium, and 30TB of cloud storage. What a deal.