

Google Says Gemini's Agent Mode Will Finally Turn Its AI into a Real Personal Assistant

It's 2025, and Google is now bringing its own agentic AI feature to the Gemini app. While the company has discussed agentic AI prototypes before, it now seems ready to take them mainstream. At the Google I/O 2025 keynote, Google discussed how the new feature can go out on the web on its own and perform tasks for you. Just like OpenAI's Operator, it can take a prompt, make a checklist of things that need to be done, and then do them for you. According to Google, Agent Mode combines features like live web browsing and deep research with data integration from Google apps to accomplish its online tasks. The model is supposedly capable of executing multistep actions, start to finish, with minimal oversight from the user.

Credit: Google

We still don't know a lot about how exactly the feature will work, but Google gave us an example on stage. Here, Sundar Pichai asked Gemini to find a new apartment for rent within a limited budget and with built-in laundry. Gemini then made a task list of things to do, like opening a browser, navigating to Zillow, searching for matching listings, and even booking a tour. All of this is possible because Google is using MCP in the background. Model Context Protocol, introduced by Anthropic, is a new industry-wide protocol that web developers and apps can use to integrate directly with AI tools. In this example, Google can search through Zillow and book a tour using the protocol, which is much more reliable than spinning up a web browser and asking an AI to click buttons for you.

Agentic capabilities aren't limited to the Gemini app's Agent Mode. Google is also bringing a more modest version of them to Chrome and Google Search. For example, agentic features in AI Mode can help you search for game tickets in the background.

Credit: Google
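Google hasn't shared the details of its MCP integration, but to make the idea concrete, here's a minimal sketch of what a listings site could expose to an agent through MCP, using the official MCP Python SDK's FastMCP helper. The server name and the tools (search_listings, book_tour) are hypothetical stand-ins, not anything Zillow or Google has announced.

```python
# Hypothetical MCP server a rental-listings site might expose to AI agents.
# Requires the official MCP Python SDK: pip install "mcp[cli]"
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("rental-listings")  # hypothetical server name

@mcp.tool()
def search_listings(max_rent: int, in_unit_laundry: bool = False) -> list[dict]:
    """Return listings at or under max_rent, optionally requiring in-unit laundry."""
    # Placeholder data; a real server would query the site's own backend.
    listings = [
        {"id": "apt-101", "rent": 1850, "laundry": True},
        {"id": "apt-204", "rent": 2100, "laundry": False},
    ]
    return [
        l for l in listings
        if l["rent"] <= max_rent and (l["laundry"] or not in_unit_laundry)
    ]

@mcp.tool()
def book_tour(listing_id: str, date: str) -> str:
    """Book a tour for a listing and return a confirmation message."""
    # Placeholder; a real server would create the booking before confirming.
    return f"Tour booked for {listing_id} on {date}"

if __name__ == "__main__":
    mcp.run()  # serves the tools over stdio so an agent can discover and call them
```

An agent like Gemini can discover tools like these and call them directly with structured arguments, which is why this approach is more dependable than having the AI drive a browser and click through a website on its own.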

According to Google, Agent Mode will be coming soon to the US as an early preview for the new Google AI Ultra plan, which costs $250 per month. There's no word on wider availability yet.