Editor's take: Google's push to make Gemini a more interactive AI assistant could revolutionize how we use the technology. However, as users share their camera and screen with Gemini, what are the privacy implications? While the benefits of real-time assistance are clear, the potential for data misuse or overreach is also a concern.

Google took the stage at the Mobile World Congress (MWC) in Barcelona on Monday to showcase the latest enhancements to its AI assistant, Gemini. The company revealed two new features designed to make Gemini more interactive and context-aware: real-time video analysis and screen sharing.

The first upgrade to Gemini Live lets users fire up their smartphone camera and point it at objects, their surroundings, or even their computer screen for instant analysis and feedback (above). Whether identifying an item, explaining something technical, or helping troubleshoot a problem, Google wants Gemini to be more than just a chatbot; it aims to be a hands-on AI assistant that actually sees what's happening.

The second is a new screen-sharing feature that allows users to show Gemini Live their screen (below). The AI can then guide them through tasks, provide app-specific help, or summarize information from displayed content. Google aims to make digital assistance feel less like a chatbot and more like an ever-present AI helper that can interpret and respond to on-screen elements in real time.

However, these features won't come free. Google is locking real-time video analysis and screen sharing behind its AI Premium plan, which costs $20 per month. The move follows the industry trend of placing advanced AI capabilities behind paywalls, like OpenAI's GPT-4.5 access through ChatGPT Plus. There's also the question: how much do you trust Google with access to your phone's camera?

Google previously demonstrated these capabilities last year for MWC 2024 attendees (below), although the effort was called Project Astra back then. Through the camera, Gemini could identify landmarks and objects and remember where the demonstrator's glasses were. With screen sharing enabled, Gemini could assist with tasks like shopping or provide technical support with a simple camera scan.

While the demo was impressive, Ars Technica notes that the current AI struggles with video analysis under less ideal (read: unscripted) conditions. Still, the update is more evolved, and the early response has been positive, with beta users praising the potential of an AI assistant that can see and respond to its environment. The rest of the world can soon see for themselves: Google confirmed that the updates will roll out to the Gemini app on Android later this month, with iOS availability expected soon after.