A new generation of evolving robots leap, bounce, and jitter into being

  • Author: Leila Battison
  • Date: 24 Sep 2013
  • Copyright: Image appears courtesy of iStock Photo

The image of a sentient, adaptable, and indestructible robot has long been the preoccupation of science fiction writers and futurists alike. It is a dream for some, a nightmare for others, but von Neumann-style automata are still a distant prospect. However, recent game-changing steps in the field of evolutionary robotics have the potential to revolutionise the mathematics that could one day make such a pipe dream possible. In a paper recently presented at the Genetic and Evolutionary Computation Conference (GECCO) in the Netherlands, researchers from the Cornell Creative Machines Lab demonstrated how simple robots with rudimentary structures can adapt and evolve to produce forms that are strikingly similar to complex living organisms. And their tools? Just four simple materials and a life-like information encoding mechanism.

The concept of evolving robots is not a new one. Indeed, GECCO is a celebration of this uniquely interdisciplinary field, and one seminal work has formed the foundation on which subsequent evolutionary robotics has been built. In 1994, Karl Sims showed that robots could evolve through simple algorithms applied to an assemblage of predefined rigid sub-components. With this direct encoding, low-resolution ‘virtual creatures’ were able to adapt to certain environments, with various body shapes developing that were optimised for swimming, walking and even jumping.

Yet in the nearly 20 years of computational advance since Sims’ work, the expectations for adaptable robots have increased a great deal. Rapid and complex computation should allow for a significant increase in resolution, and the commercialisation of 3D printing technology makes such robots a tangible possibility. The team of researchers at Cornell, led by Nick Cheney, have built on Sims’ previous research, and used four ‘organic’ materials and a powerful generative encoding to focus on evolving elaborate robots that are really good at just one thing: going fast.

Cheney and his colleagues aimed to produce life-like robots using life-like tools. The four materials they chose mimic the structural and functional materials typically found in an animal body: two types of muscle that alternately expand and contract, a soft tissue like cartilage, and a hard tissue like bone. Similarly, the encoding of information from one generation to the next was, as Cheney explains, ‘directly inspired by developmental biology’.
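To make the analogy concrete, the sketch below (in Python, and not the authors' code) shows one way four such voxel materials might be represented: the two muscle-like materials actuate in opposite phases while the cartilage- and bone-like tissues stay passive. The names, amplitude and phase convention are illustrative assumptions.

```python
import math
from enum import Enum

class Material(Enum):
    MUSCLE_A = 1   # active tissue, expands and contracts in phase with a global clock
    MUSCLE_B = 2   # active tissue, driven in counter-phase to MUSCLE_A
    SOFT = 3       # passive soft tissue, analogous to cartilage
    HARD = 4       # passive stiff tissue, analogous to bone

def volume_change(material: Material, t: float, amplitude: float = 0.2) -> float:
    """Fractional volume change of a voxel of this material at time t (illustrative only)."""
    if material is Material.MUSCLE_A:
        return amplitude * math.sin(t)
    if material is Material.MUSCLE_B:
        return amplitude * math.sin(t + math.pi)  # opposite phase to MUSCLE_A
    return 0.0  # passive tissues do not actuate
```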

In contrast to the traditionally used direct encoding, which tends to produce bodies that lack regularity and coordination, this new work made use of CPPN-NEAT (Compositional Pattern-Producing Networks evolved with NeuroEvolution of Augmenting Topologies). A CPPN is an artificial neural network built from a variety of functions (sigmoidal, Gaussian and others) whose composition naturally produces symmetry, repetition, and interesting variation. Subsequent generations using this encoding can preserve successful, organised ‘zones’ that can effectively adapt to a selection pressure.
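A CPPN can be pictured as a small network of such pattern-producing functions that maps each voxel's coordinates to a material, so symmetry and repetition fall out of the geometry of the functions themselves. The hand-built Python sketch below is an illustrative assumption, not the CPPN-NEAT implementation used in the paper (which evolves the network's topology and weights rather than fixing them).

```python
import math

def cppn_material(x: float, y: float, z: float) -> int:
    """Return a material id (0 = empty, 1-4 = the four tissues) for a voxel
    at normalised coordinates in [-1, 1]. Functions and thresholds are toy choices."""
    d = math.sqrt(x * x + y * y + z * z)           # radial distance input gives symmetry
    gauss = math.exp(-(d * d) / 0.5)               # Gaussian node
    wave = math.sin(4 * math.pi * x)               # sine node gives repetition along x
    sig = 1.0 / (1.0 + math.exp(-5 * (y + 0.2)))   # sigmoid node
    presence = gauss + 0.3 * wave                  # decides whether the voxel is filled
    if presence < 0.4:
        return 0                                   # empty voxel
    if sig > 0.7:
        return 4 if wave > 0 else 3                # hard / soft passive tissue
    return 1 if wave > 0 else 2                    # the two counter-phased muscles

# Sample the pattern on a 10x10x10 voxel grid to obtain a body plan.
robot = [[[cppn_material(2 * i / 9 - 1, 2 * j / 9 - 1, 2 * k / 9 - 1)
           for k in range(10)] for j in range(10)] for i in range(10)]
```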

In this study, just one characteristic of the resultant robots was selected for, in contrast to a complex natural ecosystem, where many pressures could contribute to the natural selection of organisms. For simplicity, speed was needed for success – those robots able to cover the most lateral distance in the least time were the ones that survived to reproduce and improve. The statistical representations of these adaptations are simple monotonic loss functions, as outlined by Taguchi, mirroring natural optimising systems in which ‘bigger is better’, or in this case, ‘faster is better’.
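In pseudocode terms, that selection loop reduces to ranking each candidate by the lateral distance it covers in a fixed simulation window and keeping only the fastest. The Python sketch below is a hedged illustration of this single-objective selection, with a placeholder simulate() standing in for the soft-body physics evaluation.

```python
from typing import Callable, List, Tuple

def select_fastest(population: List[dict],
                   simulate: Callable[[dict], float],
                   survivors: int) -> List[Tuple[float, dict]]:
    """Rank genomes by lateral distance travelled in simulation and keep the top few."""
    scored = [(simulate(genome), genome) for genome in population]
    scored.sort(key=lambda pair: pair[0], reverse=True)  # 'faster is better'
    return scored[:survivors]                            # survivors seed the next generation
```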

Cheney and colleagues’ aim in this study was ‘to understand the algorithms underlying evolution, with a view to developing more complex robotic systems and informing both biological and engineering sciences’. Their results are robust and resilient models of evolutionary exploration, providing a basis for much further work. And along the way, they have produced some compelling, well-presented images and multimedia of surprisingly animal-like robots that have been celebrated by the international press.

These physical forms exhibit a variety of amusing shapes and locomotion types, from walking or trotting, to galloping, to simply dragging themselves along like wounded creatures. The researchers identified six dominant locomotion strategies:

• The ‘L-walker’, with alternating muscle contraction on one forward and one backward limb creating large, purposeful strides.

• The ‘Incher’, whose muscular tissue supports stumpy limbs that ‘inch’ along slowly and stably.

• A ‘Push-pull’ technique, which uses an anchor of soft tissue and a large powerful muscle to surge forward and drag its bulk along behind.

• The ‘Jitter’, which has hard tissue supports that help to maintain stability as the entire creature squirms chaotically forward.

• A ‘Jumper’, in which large, powerful areas of muscle-like tissue produce a momentum that drives an anchor of hard tissue forward.

• And ‘Wings’, strongly reminiscent of a stingray, consisting of large forward and back alternating muscle blocks driving synchronised undulation of wing- or flipper-like side limbs.


Using these four materials, interesting and effective forms were able to evolve at a range of voxel resolutions, from 5×5×5 to 20×20×20. Statistical analysis of the improvement of a ‘species’ over time showed that the greatest gain in evolutionary innovation came when the robots were provided with a second muscle-like material. Although hard tissue support appeals to our concept of ‘bone’ development (as was seen in the strengthening of limbs for a galloping gait), its introduction does relatively little to improve adaptability.
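One reason the same encoding scales across these resolutions is that a CPPN describes the body as a function of continuous coordinates, so the voxel grid size is just a sampling choice. The sketch below illustrates this with a toy pattern standing in for an evolved CPPN; the function and threshold are assumptions for illustration.

```python
import math

def pattern(x: float, y: float, z: float) -> bool:
    """True if a voxel at normalised coordinates in [-1, 1] is filled (toy stand-in for a CPPN)."""
    return math.exp(-(x * x + y * y + z * z)) + 0.3 * math.sin(4 * x) > 0.5

def sample_grid(resolution: int):
    """Query the same continuous pattern on an N x N x N voxel grid."""
    coords = [2 * i / (resolution - 1) - 1 for i in range(resolution)]
    return [[[pattern(x, y, z) for z in coords] for y in coords] for x in coords]

low_res = sample_grid(5)    # coarse body plan
high_res = sample_grid(20)  # the same body plan, sampled in finer detail
```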

The novel use of CPPN encoding allowed for large improvements early on in the robots’ evolution, as the simple forms explored new locomotion types. During this time, mirroring the real explosion of animal life 500 million years ago, strange experimental forms appeared and new strategies were innovated. After a successful body shape and movement type had been settled on, the complex life-like generative encoding allowed iterative refinement of timing and coordination. This is in stark contrast to the traditional direct encoding, which cannot coordinate change across the whole organism, forcing evolution to act piecemeal on random initial forms and yielding only minor improvements over time. The resulting system is reminiscent of ‘antifragility’, a concept introduced by Nassim Nicholas Taleb to describe phenomena that actively evolve and improve under unpredictable circumstances. Much like actual evolutionary dynamics, the CPPN encoding gives computational leeway for the genesis of unpredictably successful forms.
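The contrast can be shown with a toy one-dimensional example (an assumption for illustration, not the paper's representation): mutating a direct encoding changes one voxel at a time, whereas mutating a parameter of a generative encoding re-shapes the whole body in a coordinated way.

```python
import math
import random

def mutate_direct(voxels: list) -> list:
    """Direct encoding: flip the material of a single randomly chosen voxel."""
    out = list(voxels)
    out[random.randrange(len(out))] = random.randint(0, 4)
    return out

def mutate_generative(params: dict) -> dict:
    """Generative encoding: perturb one parameter; the decoded body shifts as a whole."""
    out = dict(params)
    out["wavelength"] *= random.uniform(0.8, 1.25)
    return out

def decode(params: dict, n: int = 10) -> list:
    """Expand generative parameters into a 1-D strip of alternating muscle voxels."""
    return [1 if math.sin(2 * math.pi * i / params["wavelength"]) > 0 else 2
            for i in range(n)]

body = decode({"wavelength": 4.0})                        # a coordinated, repeating body plan
body2 = decode(mutate_generative({"wavelength": 4.0}))    # one mutation rescales the whole pattern
```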

This fascinating study from the Cornell Creative Machines Lab is part of a wider project that aims to put in place the computational and practical foundations for creating robots that will be able to ‘perform autonomous tasks in unpredictable environments by developing adaptive behaviour’. Clear future goals include expanding the generative environment to develop cognitive behaviour, which will allow these artificial systems to react and adapt to more complex environments and multi-layered ecosystems, perhaps by employing the probabilistic approach to artificial intelligence pioneered by Judea Pearl. In addition, the team is developing techniques to three-dimensionally print solid, soft, and actuatable muscle-like materials to physically realise the resulting automatons.

What can we expect in the future? Developments in these computational and physical technologies could see robots inching closer and closer to real organic life forms. Could we see a modern reinterpretation of the Turing test, where instead of testing for intelligence, we test for naturalistic realism? Perhaps we will one day struggle to distinguish our adaptable robotic companions from their living counterparts.

As yet, adaptive and reactive machines exist only in the virtual world, but they are more complex than ever, and as we move into our sci-fi future, the first tentative steps are being made toward their inevitable and inexorable advance. Whilst we wait for flying cars and hoverboards, a multitude of adaptable robots are waiting in the wings to usher in a new age of evolutionary robotics.

Sources:

Link to images and videos:
http://creativemachines.cornell.edu/soft-robots/

Link to paper:
http://creativemachines.cornell.edu/sites/default/files/Cheney-MacCurdy-Clune-Lipson_UnshacklingEvolution_GECCO2013.pdf
