Google I/O 2025: Everything announced at this year’s developer conference

Google I/O 2025, Google’s biggest developer conference of the year, takes place Tuesday and Wednesday at the Shoreline Amphitheatre in Mountain View. We’re on the ground bringing you the latest updates from the event. 
I/O showcases product announcements from across Google’s portfolio. We’ve got plenty of news relating to Android, Chrome, Google Search, YouTube, and — of course — Google’s AI-powered chatbot, Gemini.
Google hosted a separate event dedicated to Android updates: The Android Show. The company announced new ways to find lost Android phones and other items, additional device-level features for its Advanced Protection program, security tools to protect against scams and theft, and a new design language called Material 3 Expressive.
Here are all the things announced at Google I/O 2025.
Gemini Ultra
Gemini Ultra (only in the U.S. for now) delivers the “highest level of access” to Google’s AI-powered apps and services, according to Google. It’s priced at $249.99 per month and includes Google’s Veo 3 video generator, the company’s new Flow video editing app, and a powerful AI capability called Gemini 2.5 Pro Deep Think mode, which hasn’t launched yet.
AI Ultra comes with higher limits in Google’s NotebookLM platform and Whisk, the company’s image remixing app. AI Ultra subscribers also get access to Google’s Gemini chatbot in Chrome; some “agentic” tools powered by the company’s Project Mariner tech; YouTube Premium; and 30TB of storage across Google Drive, Google Photos, and Gmail.
Deep Think in Gemini 2.5 Pro
Deep Think is an “enhanced” reasoning mode for Google’s flagship Gemini 2.5 Pro model. It allows the model to consider multiple answers to questions before responding, boosting its performance on certain benchmarks.

Google didn’t go into detail about how Deep Think works, but it could be similar to OpenAI’s o1-pro and upcoming o3-pro models, which likely use an engine to search for and synthesize the best solution to a given problem.
Deep Think is available to “trusted testers” via the Gemini API. Google said that it’s taking additional time to conduct safety evaluations before rolling out Deep Think widely.
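For readers who want a sense of what Gemini API access looks like in practice, the sketch below uses Google’s google-genai Python SDK. The generate_content call is the SDK’s standard pattern; the comment marks where a Deep Think variant would be selected, since Google hasn’t published a model identifier for it.

# Minimal sketch of a Gemini API request with the google-genai Python SDK.
# Deep Think is limited to "trusted testers," so no public model identifier
# exists yet; the model below is the standard Gemini 2.5 Pro endpoint.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.5-pro",  # a Deep Think variant would be selected here once released
    contents="Consider several candidate approaches before answering: "
             "what is the most efficient sorting strategy for nearly sorted data?",
)
print(response.text)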
Veo 3 video-generating AI model
Google claims that Veo 3 can generate sound effects, background noises, and even dialogue to accompany the videos it creates. Veo 3 also improves upon its predecessor, Veo 2, in terms of the quality of footage it can generate, Google says.
Veo 3 is available beginning Tuesday in Google’s Gemini chatbot app for subscribers to Google’s $249.99-per-month AI Ultra plan, where it can be prompted with text or an image.
Imagen 4 AI image generator
According to Google, Imagen 4 is fast — faster than Imagen 3. And it’ll soon get faster. In the near future, Google plans to release a variant of Imagen 4 that’s up to 10x quicker than Imagen 3.
Imagen 4 is capable of rendering “fine details” like fabrics, water droplets, and animal fur, according to Google. It can handle both photorealistic and abstract styles, creating images in a range of aspect ratios and up to 2K resolution.
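As a rough illustration of how those aspect-ratio options surface to developers, here is a sketch using the same google-genai Python SDK, assuming Imagen 4 is exposed through the SDK’s existing generate_images call; the model identifier shown is a placeholder, not a published name.

# Hedged sketch: image generation via the google-genai Python SDK.
# "imagen-4.0-generate-preview" is a hypothetical identifier; substitute the
# model name Google actually publishes for Imagen 4.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_images(
    model="imagen-4.0-generate-preview",
    prompt="Macro photograph of water droplets on animal fur",
    config=types.GenerateImagesConfig(
        number_of_images=1,
        aspect_ratio="16:9",  # the SDK already accepts common ratios such as 1:1, 9:16, 16:9
    ),
)

# Write the returned image bytes to disk.
with open("imagen_sample.png", "wb") as f:
    f.write(response.generated_images[0].image.image_bytes)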
Both Veo 3 and Imagen 4 will be used to power Flow, the company’s AI-powered video tool geared towards filmmaking. 
A sample from Imagen 4. Image Credits: Google
Gemini app updates
Google announced that the Gemini apps now have more than 400 million monthly active users. 
Gemini Live’s camera and screen-sharing capabilities will roll out this week to all users on iOS and Android. The feature, powered by Project Astra, lets people have near-real-time verbal conversations with Gemini, while also streaming video from their smartphone’s camera or screen to the AI model.
Google says Gemini Live will also start to integrate more deeply with its other apps in the coming weeks: It will soon be able to offer directions from Google Maps, create events in Google Calendar, and make to-do lists with Google Tasks.
Google says it’s updating Deep Research, Gemini’s AI agent that generates thorough research reports, by allowing users to upload their own private PDFs and images.
Stitch
Stitch is an AI-powered tool to help people design web and mobile app front ends by generating the necessary UI elements and code. Stitch can be prompted to create app UIs with a few words or even an image, providing HTML and CSS markup for the designs it generates.
Stitch is a bit more limited in what it can do compared to some other vibe-coding products, but it offers a fair number of customization options.
Google has also expanded access to Jules, its AI agent aimed at helping developers fix bugs in code. The tool helps developers understand complex code, create pull requests on GitHub, and handle certain backlog items and programming tasks.
Project Mariner
Project Mariner is Google’s experimental AI agent that browses and uses websites. Google says it has significantly updated how Project Mariner works, allowing the agent to take on nearly a dozen tasks at a time, and is now rolling it out to users.
For example, Project Mariner users can purchase tickets to a baseball game or buy groceries online without ever visiting a third-party website. People can just chat with Google’s AI agent, and it visits websites and takes actions for them.
Project Astra
Google’s low latency, multimodal AI experience, Project Astra, will power an array of new experiences in Search, the Gemini AI app, and products from third-party developers. 
Project Astra was born out of Google DeepMind as a way to showcase nearly real-time, multimodal AI capabilities. The company says it’s now building Project Astra-powered smart glasses with partners including Samsung and Warby Parker, but it doesn’t have a set launch date yet. 

AI Mode
Google is rolling out AI Mode, the experimental Google Search feature that lets people ask complex, multi-part questions via an AI interface, to users in the U.S. this week.
AI Mode will support the use of complex data in sports and finance queries, and it will offer “try it on” options for apparel. Search Live, which is rolling out later this summer, will let you ask questions based on what your phone’s camera is seeing in real-time. 
Gmail is the first app to be supported with personalized context.
Beam 3D teleconferencing
Beam, previously called Starline, uses a combination of software and hardware, including a six-camera array and custom light field display, to let a user converse with someone as if they were in the same meeting room. An AI model converts video from the cameras, which are positioned at different angles and pointed toward the user, into a 3D rendering.
Google’s Beam boasts “near-perfect” millimeter-level head tracking and 60fps video streaming. When used with Google Meet, Beam provides an AI-powered real-time speech translation feature that preserves the original speaker’s voice, tone, and expressions.
And speaking of Google Meet, Google announced that Meet is getting real-time speech translation.
More AI updates
Google is launching Gemini in Chrome, which will give people access to a new AI browsing assistant that will help them quickly understand the context of a page and get tasks done. 
Gemma 3n is a model designed to run “smoothly” on phones, laptops, and tablets. It’s available in preview starting Tuesday; it can handle audio, text, images, and videos, according to Google.
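The announcement doesn’t include an official recipe for running Gemma 3n locally, but assuming Google publishes the weights on Hugging Face as it has for earlier Gemma releases, a quick text-only test could look like the sketch below; the checkpoint name is a placeholder.

# Hedged sketch: trying a Gemma-family checkpoint locally with Hugging Face
# Transformers. "google/gemma-3n-e4b-it" is a placeholder for whatever
# identifier Google publishes for Gemma 3n.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="google/gemma-3n-e4b-it",  # hypothetical identifier
    device_map="auto",               # uses a GPU if present, otherwise CPU
)

result = pipe("In one sentence, why do on-device models matter?", max_new_tokens=64)
print(result[0]["generated_text"])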
The company also announced a ton of AI Workspace features coming to Gmail, Google Docs, and Google Vids. Most notably, Gmail is getting personalized smart replies and a new inbox-cleaning feature, while Vids is getting new ways to create and edit content.
Video Overviews are coming to NotebookLM, and the company rolled out SynthID Detector, a verification portal that uses Google’s SynthID watermarking technology to help identify AI-generated content. Lyria RealTime, the AI model that powers its experimental music production app, is now available via an API.
Wear OS 6
Wear OS 6 brings a unified font to tiles for a cleaner app look, and Pixel Watches are getting dynamic theming that syncs app colors with watch faces. 
The core promise of the new design reference platform is to let developers build better customization in apps along with seamless transitions. The company is releasing a design guideline for developers along with Figma design files.
Google Play
Google is beefing up the Play Store for Android developers with fresh tools to handle subscriptions, topic pages so users can dive into specific interests, audio samples to give folks a sneak peek into app content, and a new checkout experience to make selling add-ons smoother.
“Topic browse” pages for movies and shows (U.S. only for now) will connect users to apps tied to tons of shows and movies. Plus, developers are getting dedicated pages for testing and releases, and tools to keep an eye on and improve their app rollouts. Developers using Google Play can also now halt live app releases if a critical problem pops up.
Subscription management tools are also getting an upgrade with multi-product checkout. Devs will soon be able to offer subscription add-ons alongside main subscriptions, all under one payment.
Android Studio
Android Studio is integrating new AI features, including “Journeys,” an “agentic AI” capability that coincides with the release of the Gemini 2.5 Pro model. And an “Agent Mode” will be able to handle more-intricate development processes.
Android Studio will receive new AI capabilities, including an enhanced “crash insights” feature in the App Quality Insights panel. This improvement, powered by Gemini, will analyze an app’s source code to identify potential causes of crashes and suggest fixes.