Google Announces On-Device Gemini Robotics Model
The first truly portable version of Google DeepMind's Vision-Language-Action engine now runs entirely on the robot itself, opening the door to warehouse bots and factory cobots that keep working even when Wi-Fi dies.

Google DeepMind's new Gemini Robotics On-Device model is a small powerhouse. The company says it retains "almost" the same prowess as the hybrid cloud version launched in March, yet it is lean enough to live entirely on a robot's onboard computer. In internal tests, robots running the model zipped zippers, folded shirts, and sorted unseen parts, all without sending a single packet to the cloud.
Local Processing for Robotics
This is a significant shift for Google, which has been pushing cloud-connected robotics through its RT-1 and RT-2 models. The new Gemini Robotics On-Device model acknowledges what anyone who has dealt with spotty Wi-Fi knows: sometimes you just can't rely on the internet.
Vision-Language-Action Models
Google's solution builds on what the company calls Vision-Language-Action (VLA) models: AI systems that can see their environment, understand natural-language commands, and translate both into physical actions. The on-device model inherits the dexterity of Google's flagship cloud-based Gemini Robotics but compresses it to run on the robot's local hardware. In tests, it performed complex manipulation tasks such as folding origami and preparing salads.
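To make the idea concrete, here is a minimal sketch of what a VLA control loop looks like: a camera frame and a natural-language instruction go in, and a low-level motor command comes out. The class and method names below are illustrative placeholders, not Google's published API, and the model itself is stubbed out.

```python
# A minimal conceptual sketch of a Vision-Language-Action (VLA) control loop.
# The names here are hypothetical stand-ins, not the actual Gemini Robotics
# On-Device API, which Google has not published in full.

import numpy as np


class OnDeviceVLAPolicy:
    """Stand-in for a locally running VLA model: image + instruction -> action."""

    def predict_action(self, image: np.ndarray, instruction: str) -> np.ndarray:
        # A real model would fuse the camera frame with the text command and
        # output a low-level command (e.g., a 7-DoF end-effector delta).
        # Here we return zeros so the sketch runs without any model weights.
        return np.zeros(7)


def control_loop(policy: OnDeviceVLAPolicy, instruction: str, steps: int = 10) -> None:
    for _ in range(steps):
        frame = np.zeros((224, 224, 3), dtype=np.uint8)  # placeholder camera frame
        action = policy.predict_action(frame, instruction)
        # send_to_robot(action)  # hardware-specific; omitted in this sketch
        print(action)


if __name__ == "__main__":
    control_loop(OnDeviceVLAPolicy(), "fold the shirt on the table")
```

The point of running this loop on the robot's own computer, rather than round-tripping each frame to a data center, is that it keeps working when the network does not.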
Generalization and Privacy Concerns
Google says the model can learn new tasks from as few as 50 demonstrations, and it can even transfer its skills to entirely different robot bodies. But this local-first approach comes with trade-offs: on-device processing means working with far less computational power than Google's massive cloud infrastructure provides.
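Learning a task from roughly 50 demonstrations typically means supervised fine-tuning on recorded (observation, action) pairs, often called behavior cloning. The sketch below shows that idea in miniature, with made-up tensor shapes and a small adapter network; it is an assumption-laden illustration, not Google's actual training recipe.

```python
# Illustrative sketch of adapting a policy from a small set of demonstrations
# (behavior cloning). Dataset sizes, feature dimensions, and the adapter
# architecture are all invented for illustration.

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Pretend we have 50 demonstrations, each reduced to a 512-d observation
# embedding and a 7-d action target (hypothetical dimensions).
observations = torch.randn(50, 512)
actions = torch.randn(50, 7)
loader = DataLoader(TensorDataset(observations, actions), batch_size=8, shuffle=True)

# A small adapter head trained on the demos, standing in for task fine-tuning.
adapter = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 7))
optimizer = torch.optim.Adam(adapter.parameters(), lr=1e-4)

for epoch in range(20):
    for obs, act in loader:
        loss = nn.functional.mse_loss(adapter(obs), act)  # match demonstrated actions
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```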
Trusted Tester Program
Google is rolling out the technology through a trusted tester program along with a Gemini Robotics SDK that lets developers fine-tune the model for their specific use cases. It's a careful, controlled launch that suggests Google learned from the sometimes chaotic rollouts of consumer AI products.
Whether this represents the future of robotics or just one approach among many remains to be seen. Google's move to bring serious AI capability directly onto robots without the internet umbilical cord could accelerate the deployment of useful robots in settings where connectivity isn't guaranteed. In a world where even our cars are getting smarter, that might be exactly what robotics needs to finally leave the lab.