- By Prateek Levi
- Fri, 26 Sep 2025 06:53 PM (IST)
- Source: JND
Google DeepMind is pushing robotics into a new era with its latest AI upgrades. The company says its new models can help robots handle complex, multi-step tasks — moving beyond simple actions like unzipping a bag to making decisions that require real-world reasoning.
The update introduces two new systems: Gemini Robotics 1.5 and Gemini Robotics-ER 1.5. The latter is an embodied reasoning model designed to give robots the ability to plan ahead, interpret their surroundings, and carry out tasks with a deeper level of understanding. Both are evolutions of the original Gemini Robotics models first unveiled in March.
Carolina Parada, head of robotics at Google DeepMind, explained just how far these capabilities have come. Robots can now separate laundry by colour or even pack a suitcase based on the weather forecast in London.
Even more striking, these robots can now rely on digital tools such as Google Search to get information in real time. For instance, a robot might look up local waste management rules online before sorting trash, compost, and recyclables.
Parada contrasted this with earlier systems: older robots were adept at following one instruction at a time, but their understanding was narrow. “With this update, we’re now moving from one instruction to actually genuine understanding and problem-solving for physical tasks,” she said, as quoted by The Verge.
How the system works
When given a command, Gemini Robotics-ER 1.5 first interprets the environment and gathers any extra information it needs using digital tools. It then translates that into natural-language steps for Gemini Robotics 1.5, which executes the action.
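That hand-off — a reasoning model that plans in natural language, feeding an action model that executes — can be sketched roughly as below. This is a minimal, hypothetical illustration of the described control flow, not Google's actual API; all function names and the example command are invented for clarity.

```python
def plan_steps(command: str, environment: dict) -> list[str]:
    """Stand-in for the Gemini Robotics-ER 1.5 role: interpret the
    surroundings (and any looked-up information) and break the command
    into natural-language steps. Logic here is purely illustrative."""
    if command == "sort the trash":
        # The article's example: consult local rules before sorting.
        rules = environment.get("local_rules", "general recycling rules")
        return [
            f"Look up {rules}",
            "Pick up each item",
            "Place each item in the compost, recycling, or trash bin",
        ]
    # Fall back to treating the whole command as one step.
    return [command]

def execute(step: str) -> str:
    """Stand-in for the Gemini Robotics 1.5 role: turn one
    natural-language step into a (here, simulated) physical action."""
    return f"executed: {step}"

def run(command: str, environment: dict) -> list[str]:
    # The reasoning model plans first; the action model then
    # executes the steps one by one.
    return [execute(step) for step in plan_steps(command, environment)]
```

In this sketch the two roles only communicate through plain-language step descriptions, mirroring the article's description of the pipeline.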
A model that works across robots
Google has also highlighted another breakthrough: these models are not limited to a single type of robot. Skills learnt by one machine can transfer to another, regardless of shape or design. In testing, tasks performed on the twin-armed ALOHA2 robot could be replicated just as effectively on the humanoid Apptronik Apollo.
“This enables two things for us. One is to control very different robots—including a humanoid—with a single model,” said Google DeepMind engineer Kanishka Rao, quoted by The Verge. “And secondly, skills that are learnt on one robot can now be transferred to another robot.”