Google Expands Gemini's Multimodal AI Features to All Android Users
Google is officially rolling out powerful new capabilities for its AI assistant, Gemini, to all eligible Android users. Previously available only to select Pixel, Samsung, and Gemini Advanced users, the latest update, powered by the Gemini 2.5 models, brings two major features to a broader audience: screen sharing and real-time camera input.
Enhanced Interactivity with Multimodal AI
These features let users interact with Gemini in a more natural and intelligent way — simply by sharing what's on their screen or pointing the camera at something in real time. Gemini is Google's AI-powered assistant, developed as a replacement for the traditional Google Assistant across Android devices.
The Power of Multimodal AI
Unlike its predecessor, Gemini uses multimodal AI, which means it can understand and respond to:
- Visual inputs
- Real-time camera interactions
- Contextual information
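For developers curious what a multimodal input looks like under the hood, here is a minimal sketch of how a text prompt and an image can be combined into a single request body, following the `text` + `inline_data` "parts" structure used by Gemini-style `generateContent` endpoints. This is an illustrative payload builder only — it does not call any service, and the function name and placeholder image bytes are our own:

```python
import base64
import json

def build_multimodal_request(prompt: str, image_bytes: bytes,
                             mime_type: str = "image/png") -> dict:
    """Combine a text prompt and an image into one request body,
    mirroring the text + inline_data "parts" layout of
    Gemini-style generateContent requests."""
    return {
        "contents": [
            {
                "parts": [
                    # The text portion of the prompt.
                    {"text": prompt},
                    # The visual input, base64-encoded as the API expects.
                    {
                        "inline_data": {
                            "mime_type": mime_type,
                            "data": base64.b64encode(image_bytes).decode("ascii"),
                        }
                    },
                ]
            }
        ]
    }

# Example: a question about a (placeholder) screenshot.
payload = build_multimodal_request("What is shown on this screen?", b"\x89PNG...")
print(json.dumps(payload)[:40])
```

The key idea is that text and images travel in the same `parts` list, which is what lets the model reason over both modalities in a single turn.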
Capabilities of Gemini 2.5 Pro
With this update, Gemini becomes more than just a voice assistant — it transforms into a smart, context-aware helper that can troubleshoot tech issues, interpret visual content, or diagnose simple mechanical problems. The update is backed by Gemini 2.5 Pro and Gemini 2.5 Flash, both of which are currently experimental.
Deep Research with 2.5 Pro
Google also announced Deep Research with 2.5 Pro, a new capability geared toward advanced users that enables more in-depth assistance and task handling. Key new capabilities include:
- Enhanced assistant interactions
- Advanced troubleshooting support
- Improved task handling
Support for Education
In a move to support education, Google has made its Gemini Advanced subscription — normally priced at $20/month — available for free to college students in the United States. This subscription includes:
- Access to advanced AI capabilities
- Specialized educational tools
- Exclusive features for students
Expanding Reach to iPhone Users
iPhone users can also access Gemini’s latest multimodal features — but only through the Gemini Advanced subscription. Once subscribed, they gain the same functionality as Android users, including visual and real-time video input support.
Getting Started with Gemini Live
To use the new Gemini Live capabilities:
- Open Gemini on your Android device
- Select the Live feature
- Follow on-screen instructions for visual assistance
The Future of AI Assistance
Whether you're trying to troubleshoot your phone, understand what's happening on your laptop screen, or need help identifying something through your camera, Gemini is now equipped to assist — visually and contextually.

Google's expansion of Gemini's capabilities signals a big leap in how AI assistants can support users in their daily lives. With multimodal understanding, real-time visual input, and advanced model access now available to more people, Gemini is well on its way to becoming one of the most powerful AI tools in the hands of everyday users. As Assistant is slowly phased out, Gemini is set to redefine the future of AI assistance on mobile devices.