This year marks 20 years since IBM's Deep Blue computer beat chess world champion Garry Kasparov in a six-game match. It was undoubtedly a milestone: for years, chess had been the yardstick by which progress in artificial intelligence (AI) was measured.
There have long been predictions of a robo-future in which machines move like humans. In reality, in the two decades since Deep Blue, machine intelligence has advanced faster and further than machine mobility. A robot Olympics is probably still a long way off.
Alphabet, Google’s parent company, is one of the organizations sinking significant time and money into developing robots’ physical intelligence. Its AI subsidiary, DeepMind, has produced an artificially intelligent machine that can walk, run and jump in simulated environments. Crucially, it can learn to do this by itself, without prior guidance.
Walk, run, jump
In a series of three papers, DeepMind researchers have demonstrated how simulated robots can use AI to adapt and respond to various obstacles in a virtual environment.
In the first paper, the researchers show how they got a variety of simulated robot bodies to learn to jump, turn and crouch without specific instructions to do so. The simulations were given only high-level objectives, such as moving forwards without falling.
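The idea of a "high-level objective" can be sketched as a simple reward function: the agent is scored only on forward progress and on staying upright, with no explicit instructions for how to jump, turn or crouch. The function below is a minimal illustration of that idea; all names and thresholds are assumptions for this sketch, not DeepMind's actual code.

```python
def reward(forward_velocity: float, torso_height: float,
           min_height: float = 0.8) -> float:
    """Toy high-level reward: progress forwards without falling.

    Illustrative only; the threshold and scaling are assumed values.
    """
    if torso_height < min_height:
        # The torso has dropped too low: the agent has fallen, no reward.
        return 0.0
    # Otherwise, faster forward movement scores higher.
    return forward_velocity


# A reinforcement-learning agent maximizing this signal over many trials
# can discover jumping or crouching on its own, simply because those
# movements sometimes help it keep moving forwards.
```

In practice the trained behaviours emerge from trial and error against such a sparse objective, rather than from any demonstration of the movements themselves.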
The second paper shows how movement learning can be applied to more human-like robots, using motion capture data of human behaviour to pre-learn certain skills, such as walking, getting up from the ground, running and turning. These skills can then be applied to overcome other virtual obstacles, meaning the humanoid AI can learn how to climb stairs or navigate walled corridors.
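The reuse of pre-learned skills described above can be pictured as a high-level controller dispatching to low-level movement policies. The sketch below is an assumed structure for illustration only; the skill names and the dispatch mechanism are hypothetical, not taken from the paper.

```python
# Hypothetical low-level skills, standing in for policies pre-learned
# from motion capture data.
def walk(observation):
    return "walking"

def climb(observation):
    return "climbing stairs"

def get_up(observation):
    return "getting up"

# A high-level controller maps the situation it perceives to a skill.
SKILLS = {"flat ground": walk, "stairs": climb, "fallen": get_up}

def act(situation: str, observation=None) -> str:
    """Dispatch to the pre-learned skill matching the current situation."""
    return SKILLS[situation](observation)
```

The point of this decomposition is that the expensive part, learning each movement, is done once; new obstacles then require only learning when to deploy each skill.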
The final paper shows how the scientists developed a model that learns the relationships between particular behaviours and can imitate actions it is shown. This means, for example, that it can switch between different walking styles and adapt its movements, despite never having been shown how to.