For the First Time – A Robot Has Learned To Imagine Itself
A robot created by Columbia engineers learns to understand itself rather than the environment around it.
Our perception of our bodies is not always correct or realistic, as any athlete or fashion-conscious person knows, but it’s a crucial factor in how we behave in society. Whether you’re playing ball or getting dressed, your brain is constantly planning ahead so that you can move your body without bumping, tripping, or falling.
Humans develop their body models as infants, and robots are starting to do the same. A team at Columbia Engineering revealed today that it has developed a robot that, for the first time, can learn a model of its whole body from scratch, without any human aid. In a recent paper published in Science Robotics, the researchers explain how their robot built a kinematic model of itself and then used that model to plan movements, accomplish goals, and avoid obstacles in a range of scenarios. It even automatically detected and compensated for damage to its body.
The robot watches itself like an infant exploring itself in a hall of mirrors
The researchers placed a robotic arm within a circle of five streaming video cameras. The robot watched itself through the cameras as it undulated freely. Like an infant exploring itself for the first time in a hall of mirrors, the robot wiggled and contorted to learn how exactly its body moved in response to various motor commands. After about three hours, the robot stopped. Its internal deep neural network had finished learning the relationship between the robot’s motor actions and the volume it occupied in its environment.
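The paper’s implementation is not reproduced here, but the description above maps naturally onto an implicit occupancy model: a network that takes the robot’s motor (joint) commands together with a 3D query point and predicts whether that point lies inside the robot’s body. Below is a minimal PyTorch sketch of that idea; the architecture, names, and hyperparameters are illustrative assumptions, not the authors’ actual implementation.

```python
# Minimal sketch of a full-body self-model as an implicit occupancy
# network: inputs are the robot's joint commands plus an (x, y, z)
# query point, and the output is the probability that the robot's
# body occupies that point. All names and sizes are illustrative.
import torch
import torch.nn as nn

class SelfModel(nn.Module):
    def __init__(self, num_joints: int = 4, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_joints + 3, hidden),  # joint angles + query point
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # logit: is this point inside the body?
        )

    def forward(self, joint_angles: torch.Tensor, query_points: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([joint_angles, query_points], dim=-1)).squeeze(-1)

model = SelfModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(joint_angles, query_points, occupied):
    # One supervised step: each sample pairs a motor command with a
    # point labeled occupied/free (e.g., derived from the camera views).
    optimizer.zero_grad()
    loss = loss_fn(model(joint_angles, query_points), occupied)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example batch with random stand-in data (shapes only).
angles = torch.randn(32, 4)                      # 4 joint angles per sample
points = torch.randn(32, 3)                      # one 3D query point per sample
labels = torch.randint(0, 2, (32,)).float()      # occupied (1) or free (0)
print(train_step(angles, points, labels))
```

Trained on enough (command, point, occupied/free) samples, in this case derived from the five camera views, such a model answers exactly the question the next section raises: what volume do I occupy, and how does that volume change as I move?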
Self-modeling robots will lead to more self-reliant autonomous systems
The ability of robots to model themselves without assistance from engineers is important for many reasons: not only does it save labor, but it also allows a robot to keep up with its own wear and tear, and even to detect and compensate for damage. The authors argue that this capability will only grow in importance as autonomous systems are expected to become more self-reliant. A factory robot, for instance, could notice that something isn’t moving right, and compensate or call for assistance.
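The article doesn’t spell out how such detection works, but one plausible mechanism, consistent with the self-model idea, is to flag damage whenever what the cameras observe stops matching what the self-model predicts. The sketch below, reusing the hypothetical SelfModel above, is an assumption rather than the authors’ method; the threshold in particular is arbitrary.

```python
# Hypothetical discrepancy-based self-monitoring: compare the
# self-model's predicted occupancy against what the cameras actually
# observe, and raise a flag when the mismatch exceeds a threshold.
# The threshold and observation pipeline are assumptions, not details
# from the paper.
import torch

@torch.no_grad()
def check_for_damage(model, joint_angles, query_points, observed_occupancy,
                     threshold: float = 0.2) -> bool:
    # Predicted probability that each query point is inside the body.
    predicted = torch.sigmoid(model(joint_angles, query_points))
    # Mean absolute disagreement with camera-derived occupancy labels.
    discrepancy = (predicted - observed_occupancy).abs().mean().item()
    return discrepancy > threshold  # True: body no longer matches the self-model
```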
“We humans clearly have a notion of self,” explained the study’s first author Boyuan Chen, who led the work and is now an assistant professor at Duke University. “Close your eyes and try to imagine how your own body would move if you were to take some action, such as stretch your arms forward or take a step backward. Somewhere inside our brain we have a notion of self, a self-model that informs us what volume of our immediate surroundings we occupy, and how that volume changes as we move.”
Self-awareness in robots
The work is part of a decades-long quest by Hod Lipson, professor of mechanical engineering and director of Columbia’s Creative Machines Lab, to find ways to grant robots some form of self-awareness. “Self-modeling is a primitive form of self-awareness,” he explained. “If a robot, animal, or human has an accurate self-model, it can function better in the world, it can make better decisions, and it has an evolutionary advantage.”