

The latest Google Gemma AI model can run on phones

Google’s family of “open” AI models, Gemma, is growing.
During Google I/O 2025 on Tuesday, Google took the wraps off Gemma 3n, a model designed to run “smoothly” on phones, laptops, and tablets. Available in preview starting Tuesday, Gemma 3n can handle audio, text, images, and videos, according to Google.
Models efficient enough to run offline and without the need for computing in the cloud have gained steam in the AI community in recent years. Not only are they cheaper to use than large models, but they preserve privacy by eliminating the need to transfer data to a remote data center.
During a keynote at I/O, Gemma Product Manager Gus Martins said that Gemma 3n can run on devices with less than 2GB of RAM. “Gemma 3n shares the same architecture as Gemini Nano, and is engineered for incredible performance,” he added.
In addition to Gemma 3n, Google is releasing MedGemma through its Health AI Developer Foundations program. According to the company, MedGemma is its most capable open model for analyzing health-related text and images.
“MedGemma [is] our […] collection of open models for multimodal [health] text and image understanding,” Martins said. “MedGemma works great across a range of image and text applications, so that developers […] can adapt the models for their own health apps.”
Also on the horizon is SignGemma, an open model to translate sign language into spoken-language text. Google says that SignGemma will enable developers to create new apps and integrations for deaf and hard-of-hearing users.


“SignGemma is a new family of models trained to translate sign language to spoken-language text, but it’s best at American Sign Language and English,” Martins said. “It’s the most capable sign language understanding model ever, and we can’t wait for you — developers and deaf and hard-of-hearing communities — to take this foundation and build with it.”
Worth noting is that Gemma has been criticized for its custom, non-standard licensing terms, which some developers say have made using the models commercially a risky proposition. That hasn’t dissuaded developers from downloading Gemma models tens of millions of times collectively, however.
Updated 2:40 p.m. Pacific: Added several quotes from Gemma Product Manager Gus Martins.

TECHCRUNCH.COM