WATCH: Amazing Video Reveals Why Roaches Are So Hard to Squish

No door will stop them: American cockroaches can squeeze through a space just three millimeters high.
Photo Credit Tom Libby, Kaushik Jayaram and Pauline Jennings. Courtesy of PolyPEDAL Lab UC Berkeley


Have you ever stomped on a roach, only to have it skitter away unscathed?* Or seen one disappear into an impossibly small crack?

Now scientists have figured out how they do that, and the results are terrifying.

The American cockroach (Periplaneta americana, aka “the big ones”) can squeeze through a crack the height of two stacked pennies in about a second—a fact newly discovered by two brave scientists who are probably still seeing roaches squeezing under the doors of their nightmares.

See for yourself:

Not only can roaches fit through tight spaces by flattening their flexible exoskeletons and splaying their legs to the side, they can also keep running nearly as fast while squished, the team reports Monday in the Proceedings of the National Academy of Sciences. (In roach terms, top speed is 1.5 meters, or 50 body lengths, per second. Scaled up, that’s equivalent to a human running 200 miles per hour.)

Robert Full and Kaushik Jayaram at Berkeley built tiny tunnels and used a roach-squishing machine to test the animals’ limits. (No roaches were harmed—Full says “we only pushed them to 900 times their body weight, and they could still do that without being hurt.” In fact, they ran just as fast afterward.)

“We find them just as disgusting and revolting as everybody else,” Full says. But he also thinks they’re amazing, and is designing roachy robots that can squeeze and scuttle just like the real thing. The robots take inspiration from roaches’ jointed exoskeletons, with a design similar to folded origami.

A new compressible robot, nicknamed CRAM, is inspired by the flexible yet tough cockroach.
A new compressible robot, nicknamed CRAM, is inspired by the flexible yet tough cockroach.
Photograph by Tom Libby, Kaushik Jayaram and Pauline Jennings. Courtesy of PolyPEDAL Lab UC Berkeley

Full sees roaches and other arthropods—insects, spiders, and the like—as the next big thing in robots inspired by nature. Unlike other soft robots inspired by worms or octopuses, insect-bots with hard exoskeletons and muscles could run fast, jump, climb, and fly, while still remaining flexible.

“We know that cockroaches can go everywhere. They’re virtually indestructible,” Full says. For roaches, being able to scuttle quickly through small spaces has allowed them to spread into virtually every habitat imaginable and outrun their competition. Other insects probably have their own versions of these super-squishing superpowers, too, he says.

(For more on the positive side of roaches, learn why cockroaches made it onto our list of “All-Star Animal Dads.”)

The new roach study “transformed how I view a seemingly ‘hard’ animal,” says Daniel Goldman of Georgia Tech, who studies the physics of animal movement.  

“Their idea to create a ‘soft’ robot out of deformable ‘hard’ parts is great, and should transform how we think of creating all-terrain robots,” Goldman says.

*If you would never, ever, stomp on a roach, and are horrified at the suggestion, you’re a kind person and a sensitive soul. Keep watching the video though—it may surprise you.


Cave-Exploring Snake Robot Gets Inspiration From Sidewinders

Three years ago, a robotic snake called Elizabeth slithered into Egyptian caves to search for long-hidden ships.

The caves lie on Egypt’s east coast and contained the dismantled remnants of vessels that the Egyptians used to sail the Red Sea. They were discovered about a decade ago, and some have surrendered their secrets with relative ease. Others, however, are too dangerous and unstable for people to explore.

Enter Elizabeth. The serpentine robot, built by Howie Choset at Carnegie Mellon University and named after his wife*, was designed to explore spaces that humans cannot. She can slide over rough terrain, slink through tight cracks, and manoeuvre around rubble. During her Egyptian field test, she performed beautifully, with one major exception: When the team tried to drive her up sandy slopes, she slipped and slid.

Real snakes face the same problem, and many desert-dwelling species have solved it through a bizarre technique called sidewinding. It’s a very counter-intuitive style of movement. From above, it looks like the snake is travelling sideways in a beautiful undulating wave. But it leaves behind a series of straight tracks, each the length of its body.

The trick to understanding the technique is to realise that the snake never slides. Instead, it is constantly lifting itself off its current position and laying itself down in a new spot. The head goes first, and the rest of the body follows. But before the body catches up completely, the head is off again. At any point in time, the snake is touching the ground with only two short parts of its body. That’s why it moves in a wave, but leaves a straight track.
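If you like your intuitions backed by maths, sidewinding is often modelled as two body waves, one horizontal and one vertical, a quarter-phase apart; the parts of the body at the bottom of the vertical wave are the ones touching the sand. Here is a toy version in Python (my own illustrative numbers, not the researchers’ model):

```python
import numpy as np

# Toy model of sidewinding: a horizontal bending wave plus a vertical
# lifting wave, a quarter-phase apart. With roughly a wavelength and a
# half along the body, two separate patches touch the ground at any
# instant. Illustrative numbers only, not the researchers' model.

N = 30                                    # body segments
s = np.linspace(0, 3 * np.pi, N)          # position along the body (~1.5 wavelengths)

def body_waves(t, amp_h=1.0, amp_v=0.3, omega=2.0):
    horizontal = amp_h * np.sin(s - omega * t)             # side-to-side bend
    vertical = amp_v * np.sin(s - omega * t + np.pi / 2)   # lift, 90 degrees ahead
    return horizontal, vertical

_, vertical = body_waves(t=0.0)
grounded = vertical < 0                   # only the lowest parts touch the sand
patches = int(grounded[0]) + int(np.sum(np.diff(grounded.astype(int)) == 1))
print(f"{grounded.sum()} of {N} segments down, in {patches} contact patches")
```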

Sidewinding is perfect for negotiating dunes. Rather than pushing against slippery sand, the snake’s rolling motion keeps it mostly in static contact with the surface. Many species of snake can do this, but only two have truly mastered the technique—a rattlesnake from the US and Mexico, and a horned viper from Angola and Namibia. Confusingly, both are called sidewinders.

Choset’s robot Elizabeth could sidewind, but not very well. It was missing something that its real counterparts were doing.

Elizabeth, the robot snake. Credit: Nico Zevalios and Chaohui Gong.

To discover that mystery ingredient, Choset teamed up with Daniel Goldman from the Georgia Institute of Technology. For decades, Goldman has been fascinated by how animals move on and through sand. He has studied baby sea turtles as they clamber over a beach, and a pointy-nosed lizard called the sandfish as it swims through sand. And his team have built robots that emulate these animals, to reveal the physics behind their movements. Guy knows sand; guy knows robots. And as luck would have it, he was already starting to study sidewinders.

The team, led by postdoc Hamidreza Marvi and student Chaohui Gong, worked with six sidewinders (the American kind) from Zoo Atlanta. They put the snakes on a sandy trackway that could be inclined at different angles. They even trucked in sand from Arizona’s Yuma Desert to give the snakes material that they would normally face in the wild. “They’re excellent study subjects,” says Goldman. “They sidewind on command. Put them in a container and off they go.”

At first, the team assumed that as the track got steeper, the sidewinders would respond by digging their bodies more firmly into the ground, just as we would if we climbed a steep dune. They didn’t. Instead, they kept more of their body in contact with the ground, giving themselves more purchase on increasingly treacherous slopes. As the researchers raised the flat track to a 20-degree incline, the sidewinders compensated by laying down twice as much body.

The team also tested 13 other species of rattlesnake from Zoo Atlanta. None of them sidewind naturally, and none of them could negotiate the same slopes that the sidewinders could. They tried to climb straight up, and failed. “It was quite amusing,” says Goldman. “One of the comments we got from our reviewers was that it was obvious what the sidewinders do. Well, it wasn’t obvious to the other snakes!”

The team then programmed Elizabeth to mimic the sidewinders, and found that she suddenly became much better at moving up slopes. Her performance revealed that snakes have to stick within a certain range of contact lengths, and this range narrows as the slopes get steeper. If they don’t lay down enough body, they slip. If they lay down too much, they can’t lift the rest of themselves effectively, and run into the sand in front of them. They end up digging a hole, rather than making progress.

So, by playing with their robot, the team understood more about what the snakes do. And by studying the snakes, they improved their robot. “Using our understanding of fundamental engineering, we advanced these robots very far but we couldn’t get them up sandy hills,” says Choset. They only surmounted that final hurdle by studying nature.

Choset thinks that the snake-bots have many possible uses. They could search for survivors trapped in collapsed buildings. They could also inspect dangerous environments like nuclear storage facilities. And, of course, they could explore archaeological sites. “If we have the opportunity to return to Egypt, we’d use this capability,” he says. “Archaeology is like search and rescue except everyone’s been dead for thousands of years so there’s no rush.”

* Choset tells me that there was a second snake robot called Howard, but he was lost in some airline baggage mix-up. Samuel L. Jackson was unavailable for comment.

Reference: Marvi, Gong, Gravish, Astley, Travers, Hatton, Mendelson, Choset, Hu & Goldman. 2014. Sidewinding with minimal slip: Snake and robot ascent of sandy slopes. Science http://dx.doi.org/10.1126/science.1255718



Adaptive Colour-Changing Sheet Inspired By Octopus Skin

The most amazing skins in the world can be found in the sea, stretched over the bodies of octopuses, squid and cuttlefish. These animals, collectively known as cephalopods, can change the colour, shape and texture of their skin on a whim—just watch the ‘rock’ in the video above suddenly reveal its true nature. Their camouflage is also adaptive. Unlike, say, a stick insect or stonefish, which are limited to one disguise, an octopus’s shifting skin allows it to mimic a multitude of backgrounds. It sees, it becomes.

No man-made technology comes close. But one, at least, is nudging in the right direction.

A team of scientists led by Cunjiang Yu at the University of Houston and John Rogers at the University of Illinois at Urbana–Champaign have developed a flexible pixellated sheet that can detect light falling upon it and change its pattern to match. So far, its large pixels can change from black to white and back again. It’s a far cry from an octopus’s skin, but it does share some of the same qualities. For example, it changes colour automatically and relatively quickly—not cephalopod-quick, but within a second or so.

“This is by no means a deployable camouflage system but it’s a pretty good starting point,” says Rogers. Eventually, his team hopes to produce adaptive sheets that can wrap around solid objects and alter their appearance. These could be used to make military vehicles that automatically camouflage themselves, or clothes that change colour depending on lighting conditions.

Cephalopod skins have three layers. The top one consists of cells called chromatophores, which are sacs of coloured pigment, controlled by a ring of muscles. If the sac expands, it produces a pixel of colour; if it contracts, the pixel hides. These cells are responsible for hues like red, orange, yellow and black. The middle layer contains iridophores, cells that reflect the colours of the animal’s environment—they’re responsible for cooler colours like blues and greens. The bottom layer consists of leucophores, passive cells that diffuse white light in all directions, and act as a backdrop for the other colours.

The skin also contains light-sensitive molecules called opsins, much like those found in your retina. It’s still unclear what these do, but a reasonable guess is that they help cephalopods to “see” with their skin, and adapt their patterns very quickly without needing instructions from their brains.

The team drew inspiration from these skins when designing their own material. It consists of a 16-by-16 grid of squares, each made of several layers.

  • The top one contains a heat-sensitive dye that reversibly changes colour from black at room temperature to colourless at 47 degrees Celsius, and back again. This is the equivalent of an octopus’s chromatophores.
  • The next layer is a thin piece of silver, which creates a bright white background, like the leucophores.
  • Below that, there’s a diode that heats the overlying dye and controls its colour. This is the equivalent of the muscles that control the chromatophores.
  • Finally, there’s a layer with a light-detector in one corner, a bit like a cephalopod’s skin opsins. All the top-most layers—the dye and the silver—have little notches missing from their corners so that the light-detector always gets an unimpeded view of its surroundings.
  • And the whole thing sits on a flexible base so it can bend and flex without breaking.

So, the light-detectors sense any incoming light and tell the diodes in the illuminated panels to heat up. This turns the overlying dye from black to transparent. These pixels now reflect light from their silver layer, making them look white. You can see this happening in the videos below. Here, different patches of light are shining onto the material from below, and it’s responding very quickly.
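The per-pixel logic is simple enough to caricature in a few lines. Here is a toy simulation (my own sketch; the real device does all of this in hardware, and my light threshold is invented):

```python
import numpy as np

# Toy simulation of the sheet's logic: each cell in a 16x16 grid reads its
# photodetector and, if lit, heats its thermochromic dye past the clearing
# point (about 47 C in the paper), exposing the white silver layer beneath.
# The light threshold is invented; the real device does this in hardware.

LIGHT_THRESHOLD = 0.5
ROOM_TEMP, CLEARING_TEMP = 22.0, 47.0

incident_light = np.random.rand(16, 16)                  # stand-in environment
temperature = np.where(incident_light > LIGHT_THRESHOLD, CLEARING_TEMP + 2, ROOM_TEMP)
pixels = np.where(temperature >= CLEARING_TEMP, "white", "black")
print((pixels == "white").sum(), "of 256 pixels cleared to white")
```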

“There are analogies between layers of our system and those in the cephalopod skin, but all the actual function is achieved in radically different ways,” says Rogers. “The multi-layer architecture works really well, though. Evolution reached the same conclusion.”

“The most exciting thing about this is that it’s all automatic, without any external user input,” he adds.

There are obvious military applications for the device, and the work was funded by the Office of Naval Research. But Rogers notes that the sheets are designed to sense and adapt—they don’t necessarily have to blend in. “There are a lot of applications in fashion and interior design,” he says. “You could apply these flexible sheets to any surface and create something that’s visually responsive to ambient lighting conditions. But our goal is not to make adaptable wallpaper; it’s on the fundamentals.”

Obviously, the material will have to be improved. Since it relies on heat to change colour, it’s relatively slow, consumes a lot of power, and only works in a narrow range of temperatures. But the team used a heat-sensitive dye because it was easy; it gave them time to focus on the rest of the system.

Now that this framework is in place, they think they could improve it very easily. Rather than heating diodes, they could use components driven by changing electric fields. They could replace the dyes with other substances that offer a full range of colours, beyond just black and white. And they should be able to scale the sheet up easily—Rogers, after all, has a lot of experience in building flexible electronics from common materials like silicon, rather than fancy (and expensive) new ones.

But he doubts he’ll ever make something that truly matches a cephalopod’s skin. “As an engineer looking at movies of squid, octopuses, and cuttlefish, you just realise that you’re not going to get close to that level of sophistication,” he says. “We tried to abstract the same principles and do the best we can with what we’ve got.”

Does their artificial skin have any advantages over what an octopus or squid can do?

“Well, it works on dry land!” says Rogers.

Reference: Yu, Li, Zhang, Huang, Malyrchuk, Wang, Shi, Gao, Su, Zhang, Xu, Hanlon, Huang & Rogers. 2014. Adaptive optoelectronic camouflage systems with designs inspired by cephalopod skins. PNAS http://dx.doi.org/10.1073/pnas.1410494111


A Swarm of a Thousand Cooperative, Self-Organising Robots

In a lab at Harvard’s Wyss Institute, the world’s largest swarm of cooperative robots is building a star… out of themselves. There are 1024 of these inch-wide ‘Kilobots’, and they can arrange themselves into different shapes, from a letter to a wrench. They are slow and comically jerky in their movements, but they are also autonomous. Once they’re given a shape, they can recreate it without any further instructions, simply by cooperating with their neighbours and organising themselves.

The Kilobots are the work of Mike Rubenstein, Alejandro Cornejo and Radhika Nagpal, who were inspired by natural swarms, where simple and limited units can cooperate to do great things. Thousands of fire ants can unite into living bridges, rafts and buildings. Billions of unthinking neurons can create the human brain. Trillions of cells can fashion a tree or a tyrannosaur. Scientists have tried to make artificial swarms with similar abilities, but building and programming them is expensive and difficult. Most of these robot herds consist of a few dozen units, and only a few include more than a hundred. The Kilobots smash that record.

They’re still a far cry from the combiner robots of my childhood cartoons: they arrange themselves into two-dimensional shapes rather than assembling Voltron-style into actual objects. But they’re already an impressive achievement. “This is not only the largest swarm of robots in the world but also an excellent test bed, allowing us to validate collective algorithms in practice,” says Roderich Gross from the University of Sheffield, who has bought 900 of the robots himself to use in his own experiments.

“This is a staggering work,” adds Iain Couzin, who studies collective animal behaviour at Princeton University. “It offers a vision of the future where robot groups could form structures on demand as, for example, in search-and-rescue in dangerous environments, or even the formation of miniature swarms within the body to detect and treat disease.”

"And I'll form... the wrench!" Credit: Michael Rubenstein, Harvard University.
“And I’ll form… the wrench!” Credit: Michael Rubenstein, Harvard University.

To create their legion, the team had to rethink every aspect of a typical robot. “If you have a power switch, it takes four seconds to push that, so it’ll take over an hour to turn on a thousand robots,” says Rubenstein. “Charging them, turning them on, sending them new instructions… everything you do with a thousand robots has to be at the level of all the robots at once.”

They also have to be cheap. Fancy parts might make each bot more powerful, but would turn a swarm into a budget-breaker. Even wheels were out. Instead, the team used simpler vibration motors. If you leave your phone on a table and it vibrates, it will also slide slightly: that’s how the Kilobots move. They have two motors: if either vibrates individually, the robot rotates; if both vibrate, it goes straight.

Well, straight-ish, anyway. The tyranny of cost-efficiency meant that the team had to lose any sensors that might tell the robots their bearings or positions. They can’t tell where they are, or if they’re going straight. But each one can shoot infrared beams to the surface below it, and sense the beams reflecting from its neighbours. By measuring how bright the reflections are, it can calculate its distance from other Kilobots.
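Both tricks are easy to sketch. Here is a toy version in Python, with a hypothetical API and made-up calibration constants rather than the real Kilobot firmware:

```python
import math

# Sketch of the two Kilobot tricks described above. Hypothetical API and
# invented constants -- not the real firmware.

def drive(left_on: bool, right_on: bool) -> str:
    """Two vibration motors: one alone rotates the robot, both slide it forward."""
    if left_on and right_on:
        return "straight"
    if left_on or right_on:
        return "rotate"
    return "stop"

def distance_from_brightness(brightness: float, scale: float = 30.0) -> float:
    """Infrared intensity falls off roughly with distance squared, so invert
    that to estimate range in millimetres. The constant is invented; real
    robots would need per-unit calibration."""
    return scale / math.sqrt(max(brightness, 1e-6))

print(drive(True, True))                  # -> 'straight'
print(distance_from_brightness(0.25))     # brighter reflection = closer neighbour
```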

This combination of stilted motion and dulled senses keeps the cost of each robot to just $20. It also meant that “the robots were even more limited than we expected,” says Rubenstein. “The way they sense distance is noisy and imprecise. You can tell them to move and they won’t, and they’ll have no idea that they’re not moving.”

Fortunately, they have each other. A stuck Kilobot can’t tell if it’s stuck on its own, but it can communicate with its neighbours. If it thinks it’s moving but the distances from its neighbours change, it can deduce that something is wrong. And if neighbours estimate the distances between them and use the average, they can smooth out individual errors.

Using these principles, the team created a simple program that allows the robots to independently assemble into different shapes using just three behaviours. First, they move by skirting along the edges of a group. Second, they create gradients as a crude way of noting their position in the swarm. (A nominated source robot gets a gradient value of 0. Any adjacent robot that can see it sets its gradient value to 1. Any robot that sees 1 but not 0 sets its gradient to 2, and so on.) Finally, although they have no GPS, they can triangulate their position by talking to their neighbours. As long as the team nominates some robots as seeds, effectively turning them into the zero-point on an invisible graph, the rest of the swarm can work out where they are.
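The gradient behaviour, in particular, takes very little code. Here is a minimal version (a synchronous, idealised rendering; the real robots do this asynchronously over noisy infrared):

```python
# Minimal, synchronous sketch of the gradient behaviour: each robot's value
# is one more than the smallest value it can hear from its neighbours, with
# the seed at 0. The real Kilobots do this asynchronously over noisy infrared.

def update_gradients(neighbours: dict, seed: int) -> dict:
    """neighbours maps each robot's id to the ids it can hear."""
    gradient = {robot: None for robot in neighbours}
    gradient[seed] = 0
    frontier = [seed]
    while frontier:                              # breadth-first flood from the seed
        next_frontier = []
        for robot in frontier:
            for other in neighbours[robot]:
                if gradient[other] is None:
                    gradient[other] = gradient[robot] + 1
                    next_frontier.append(other)
        frontier = next_frontier
    return gradient

# A chain of five robots, each hearing only its immediate neighbours:
links = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(update_gradients(links, seed=0))           # {0: 0, 1: 1, 2: 2, 3: 3, 4: 4}
```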

Every Kilobot runs on the same program. The team only has to give them a shape and nominate four of them as seeds. Once that’s done, the rest slowly pour into the right pattern, in an endearingly life-like way. It takes them around 12 hours, but they do it all without any human intervention. And although the final shapes are always a little warped, that’s life-like too. Fire ants don’t have a Platonic ideal of what a bridge or raft should look like; they just work with their neighbours to get the job done.

Stills from movies showing the Kilobots assembling into a K and a star. Credit: Michael Rubenstein, Harvard University.

Scientists have long been able to simulate huge swarms of life-like virtual particles in computers, using very simple rules. But the real world is full of pesky physics, inconvenient noise, and temperamental circuitry. Stuff goes wrong. By building an actual swarm, the team can address these problems and make their programs more robust. They’ve already had to deal with a litany of failed motors, stalled robots, collisions, and traffic jams. “The more times you run it, the more likely some random thing will show up that you don’t expect,” says Rubenstein. “That’s the problem with 1,000 robots: even rare things can happen very frequently.”

The next step will be to build robots that actually self-assemble by attaching to each other, says Marco Dorigo from the Free University of Brussels. “We did so with tens of robots,” he says. “It will not be easy with one thousand.” Rubenstein agrees: “Physical connection is always difficult. If you have a dock, you tend to design the rest of the robot around that dock. It has a huge impact.”

Eventually, he also wants to get to a position where the robots can sense their environment and react accordingly, rather than just slide into some pre-determined shape. Like fire ants, when they get to a body of water, they wouldn’t have to be fed the image of a bridge; they would just self-assemble into one. “That’s a whole other level of intelligence, and it’s not really understood how to do that in robotics,” says Rubenstein. “But nature does it well.”

Reference: Rubenstein, Cornejo & Nagpal. 2014. Programmable self-assembly in a thousand-robot swarm. Science http://dx.doi.org/10.1126/science.1254295



Robots in disguise: soft-bodied walking machine can camouflage itself

None of our machines can do what a cuttlefish or octopus can do with its skin: change its pattern, colour, and texture to perfectly blend into its surroundings, in a matter of milliseconds. Take a look at this classic video of an octopus revealing itself.

But Stephen Morin from Harvard University has been trying to duplicate this natural quick-change ability with a soft-bodied, colour-changing robot. For the moment, it comes nowhere near its natural counterparts – its camouflage is far from perfect, it is permanently tethered to cumbersome wires, and its changing colours have to be controlled by an operator. But it’s certainly a cool (and squishy) step in the right direction.

The camo-bot is an upgraded version of a soft-bodied machine that strode out of George Whitesides’ laboratory at Harvard University last year. That white, translucent machine ambled about on four legs, swapping hard motors and hydraulics for inflatable pockets of air. Now, Morin has fitted the robot’s back with a sheet of silicone containing a network of tiny tubes, each less than half a millimetre wide. By pumping coloured liquids through these “microfluidic” channels, he can change the robot’s colour in about 30 seconds.



Cockroaches and geckos disappear by swinging under ledges… and inspire robots

One minute, a cockroach is running headfirst off a ledge. The next minute, it’s gone, apparently having plummeted to its doom. But wait! It’s actually clinging to the underside of the ledge! This cockroach has watched one too many action movies.

The roach executes its death-defying manoeuvre by turning its hind legs into grappling hooks and its body into a pendulum. Just as it is about to fall, it grabs the edge of the ledge with the claws of its hind legs, swings underneath the ledge and hangs upside-down. In the wild, this disappearing act allows it to avoid falls and escape from predators. And in Robert Full’s lab at the University of California, Berkeley, the roach’s trick is inspiring the design of agile robots.

Full studies how animals move, but his team discovered the cockroach’s behaviour by accident. “We were testing the animal’s athleticism in crossing gaps using their antennae, and were surprised to find the insect gone,” says Full. “After searching, we discovered it upside-down under the ledge. To our knowledge, this is a new behavior, and certainly the first time it has been quantified.”



How leaping lizards, dinosaurs and robots use their tails

What do a leaping lizard, a Velociraptor and a tiny robot at Bob Full’s laboratory have in common? They all use their tails to correct the angle of their bodies when they jump.

Thomas Libby filmed rainbow agamas – a beautiful species with the no-frills scientific name of Agama agama – as they leapt from a horizontal platform onto a vertical wall. Before they jumped, they first had to vault onto a small platform. If the platform was covered in sandpaper, which provided a good grip, the agama could angle its body perfectly. In slow motion, it looks like an arrow, launching from platform to wall in a smooth arc (below, left).

If the platform was covered in a slippery piece of card, the agama lost its footing and it leapt at the wrong angle. It ought to have face-planted into the wall, but Libby found that it used its long, slender tail to correct itself (below, right). If its nose was pointing down, the agama could tilt it back up by swinging its tail upwards.



Monkeys grab and feel virtual objects with thoughts alone (and what this means for the World Cup)

It's a ninja monkey that fires energy blasts... what could possibly go wrong?

This is where we are now: at Duke University, a monkey controls a virtual arm using only its thoughts. Miguel Nicolelis had fitted the animal with a headset of electrodes that translates its brain activity into movements. It can grab virtual objects without using its arms. It can also feel the objects without its hands, because the headset stimulates its brain to create the sense of different textures. Monkey think, monkey do, monkey feel – all without moving a muscle.
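At the heart of any such system is a decoder: a function from firing rates to intended movement. Here is a drastically simplified linear sketch (random stand-in weights; Nicolelis’s actual decoders are more sophisticated and are fitted to training data):

```python
import numpy as np

# Drastically simplified sketch of the decoding step in a brain-machine
# interface: intended velocity as a weighted sum of neural firing rates.
# The weights here are random stand-ins; a real decoder (a Wiener or
# Kalman filter, say) is fitted to training data and uses recent history.

n_neurons = 96                                   # illustrative electrode count
rng = np.random.default_rng(0)
W = rng.normal(size=(2, n_neurons))              # learned map: rates -> (vx, vy)

firing_rates = rng.poisson(lam=10, size=n_neurons)   # one time bin of spike counts
velocity = W @ firing_rates                          # decoded arm velocity
print(velocity)                                      # the virtual arm moves by this each bin
```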

And this is where Nicolelis wants to be in three years: a young quadriplegic Brazilian man strolls confidently into a massive stadium. He controls his four prosthetic limbs with his thoughts, and they in turn send tactile information straight to his brain. The technology melds so fluidly with his mind that he runs up and delivers the opening kick of the 2014 World Cup.

This sounds like a far-fetched dream, but Nicolelis – a big soccer fan – is talking to the Brazilian government to make it a reality. He has created an international consortium called the Walk Again Project, consisting of non-profit research institutions in the United States, Brazil, Germany and Switzerland. Their goal is to create a “high performance brain-controlled prosthetic device that enables patients to finally leave the wheelchair behind.”



Enter the nano-spiders – independent walking robots made of DNA


Two spiders are walking along a track – a seemingly ordinary scene, but these are no ordinary spiders. They are molecular robots and they, like the tracks they stride over, are fashioned from DNA. One of them has four legs and marches over its DNA landscape, turning and stopping with no controls from its human creators. The other has four legs and three arms – it walks along a miniature assembly line, picking up three pieces of cargo from loading machines (also made of DNA) and attaching them to itself. All of this is happening at the nanometre scale, far beyond what the naked eye can discern. Welcome to the exciting future of nanotechnology.

The two robots are the stars of two new papers that describe the latest advances in making independent, programmable nano-scale robots out of individual molecules. Such creations have featured in science-fiction stories for decades, from Michael Crichton’s Prey to Red Dwarf, but in reality, there are many barriers to creating such machines. For a start, big robots can be loaded with masses of software that guides their actions – no such luck at the nano-level.

The two new studies have solved this problem by programming the robots’ actions into their environment rather than their bodies. Standing on the shoulders of giants, both studies fuse two of the most interesting advances in nanotechnology: the design of DNA machines, fashioned from life’s essential double helix and possessing the ability to walk about; and the invention of DNA origami, where sets of specially constructed DNA molecules can be fused together into beautiful sheets and sculptures. Combine the two and you get a robot walker and a track for it to walk upon.



Robots evolve to deceive one another

In a Swiss laboratory, a group of ten robots is competing for food. Prowling around a small arena, the machines are part of an innovative study looking at the evolution of communication, from engineers Sara Mitri and Dario Floreano and evolutionary biologist Laurent Keller.

They programmed robots with the task of finding a “food source” indicated by a light-coloured ring at one end of the arena, which they could “see” at close range with downward-facing sensors. The other end of the arena, labelled with a darker ring, was “poisoned”. The bots get points based on how much time they spend near food or poison, which indicates how successful they are at their artificial lives.

They can also talk to one another. Each can produce a blue light that others can detect with cameras and that can give away the position of the food because of the flashing robots congregating nearby. In short, the blue light carries information, and after a few generations, the robots quickly evolved the ability to conceal that information and deceive one another.

Their evolution was made possible because each one was powered by an artificial neural network controlled by a binary “genome”. The network consisted of 11 neurons that were connected to the robot’s sensors and 3 that controlled its two tracks and its blue light. The neurons were linked via 33 connections – synapses – and the strength of each connection was controlled by a single 8-bit gene. In total, each robot’s 264-bit genome determined how it reacted to information gleaned from its senses.

In the experiment, each round consisted of 100 groups of 10 robots, each competing for food in a separate arena. The 200 robots with the highest scores – the fittest of the population – “survived” to the next round. Their 33 genes were randomly mutated (with a 1-in-100 chance of any bit flipping) and the robots were “mated” with each other to shuffle their genomes. The result was a new generation of robots, whose behaviour was inherited from the most successful representatives of the previous cohort.
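That procedure is a textbook genetic algorithm, and it takes remarkably little code. Here is a compact sketch of one generation, using the study’s numbers for genome size, mutation rate, and survivor count (the selection and pairing details are my simplification):

```python
import random

# Sketch of one generation of the robots' evolution, using the study's
# numbers: 33 synapse genes x 8 bits = 264-bit genomes, a 1-in-100 chance
# of each bit flipping, and the top 200 scorers surviving. Selection and
# pairing details are my simplification.

GENOME_BITS = 264
MUTATION_RATE = 0.01

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

def crossover(a, b):
    cut = random.randrange(GENOME_BITS)          # single-point crossover (assumed)
    return a[:cut] + b[cut:]

def next_generation(population, scores, n_survivors=200):
    ranked = sorted(zip(scores, population), key=lambda pair: pair[0], reverse=True)
    survivors = [genome for _, genome in ranked[:n_survivors]]
    return [mutate(crossover(random.choice(survivors), random.choice(survivors)))
            for _ in range(len(population))]

population = [[random.randint(0, 1) for _ in range(GENOME_BITS)] for _ in range(1000)]
scores = [sum(g) for g in population]   # stand-in fitness; the real score is time near food
population = next_generation(population, scores)
print(len(population), "robots in the next generation")
```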



Enter Adam, the Robot Scientist

In a laboratory at Aberystwyth University, Wales, a scientist called Adam is doing some experiments. He is trying to find the genes responsible for producing some important enzymes in yeast, and he is going about it in a very familiar way. Based on existing knowledge, Adam is coming up with new hypotheses and designing experiments to test them. He carries them out, records and evaluates the results, and comes up with new questions. All of this is part and parcel of a typical scientist’s life but there is one important difference that sets Adam apart – he’s a robot.
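In outline, that is the classic hypothetico-deductive loop. Here is a cartoon of it with a made-up gene hunt (entirely my abstraction, not Adam’s actual software):

```python
# Cartoon of the hypothesise-test-eliminate cycle: knock out candidate genes
# one at a time; if the knockout strain stops growing on the test medium,
# the knocked-out gene encoded the needed enzyme. My abstraction, not
# Adam's actual software.

SECRET_GENE = "gene_5"                   # ground truth the robot must discover

def run_experiment(knockout: str) -> bool:
    """True means the strain still grows, so the gene wasn't needed."""
    return knockout != SECRET_GENE

hypotheses = [f"gene_{i}" for i in range(8)]
while len(hypotheses) > 1:
    candidate = hypotheses[0]            # design step: pick a live hypothesis
    if run_experiment(candidate):        # run the growth assay
        hypotheses.remove(candidate)     # grew fine: rule this gene out
    else:
        hypotheses = [candidate]         # no growth: found the responsible gene
print("enzyme gene:", hypotheses[0])
```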

Adam is the brainchild of Ross King and colleagues at Aberystwyth, who have described it as a “Robot Scientist”. The name is “almost an acronym” for “A Discovery Machine”, and it also references Scottish economist Adam Smith and the obvious Biblical character. It has been loaded with equipment and software that allow it to independently design and carry out genetics experiments without any human intervention. And it has already begun to contribute to our scientific knowledge.

In a space the size of a small van, Adam contains a library of yeast strains in a freezer, two incubators, three pipettes for transferring liquid (one of which can manage 96 channels at once), three robot arms, a washer, a centrifuge, several cameras and sensors, and no fewer than four computers controlling the whole lot. All of this kit allows Adam to do his own research, and to do it tirelessly – carrying out over 1,000 experiments and making over 200,000 observations every day. All a technician needs to do is keep Adam stocked up with fresh ingredients, take away waste and run the occasional clean.

The fast and prolific nature of robotic research assistants like Adam will undoubtedly become more and more important. Even now, science finds itself in the odd position of having more data than it knows what to do with. Experimental technology is becoming quicker, cheaper and more powerful, and it’s generating a wealth of data that needs to be analysed – think of the flood of information coming in from genome sequencing projects alone. Data are being produced faster than they can be examined, but computers like Adam can play a significant role in coping with this glut.



Swimming, walking salamander robot reconstructs invasion of land


Moving robots are becoming more and more advanced, from Honda’s astronaut-like Asimo to the dancing Robo Sapien, a perennial favourite of Christmas stockings. But these advances are still fairly superficial. Most robots still move using pre-defined programmes, and making a single robot switch between very different movements, such as walking or swimming, is very difficult. Each movement type would require significant programming effort.

Robotics engineers are now looking to nature for inspiration. Animals, of course, are capable of a multitude of different styles of movement. They have been smoothly switching from swimming to walking for hundreds of millions of years, ever since our distant ancestors first invaded the land from the sea.

This ancient pioneer probably looked a fair bit like the salamanders of today’s rivers and ponds. On land, modern salamanders walk by stepping forward with diagonally opposite pairs of legs while their bodies sway about their hips and shoulders. In the water, they use a different tactic: their limbs fold back and they swim by rapidly sending S-like waves down their bodies.
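Conveniently for engineers, both gaits can be expressed as two settings of a single body-wave controller: a standing wave for walking and a travelling wave for swimming. Here is a minimal sketch (far simpler than the published central-pattern-generator models):

```python
import numpy as np

# Minimal sketch of the two salamander gaits as one body-wave controller:
# a standing wave for walking (the trunk sways in phase about hips and
# shoulders) and a travelling S-wave for swimming. Far simpler than the
# published central-pattern-generator models.

N = 10                                   # trunk joints
k = np.linspace(0, 2 * np.pi, N)         # phase offset along the body

def joint_angles(t, mode, amp=0.4, omega=3.0):
    if mode == "walk":
        return amp * np.sin(omega * t) * np.sin(k)   # standing wave
    return amp * np.sin(omega * t - k)               # travelling wave

print(joint_angles(0.5, "walk").round(2))
print(joint_angles(0.5, "swim").round(2))
```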



Robo-starfish learns about itself and adapts to injuries


I am walking strangely. About a week ago, I pulled something in my left ankle, which now hurts during the part of each step just before the foot leaves the ground. As a result, my other muscles are compensating to minimise the pain, and my gait has shifted to something subtly different from the norm. In similar ways, all animal brains can compensate for injuries by computing new ways of moving that are often very different. This isn’t a conscious process and as such, we often take it for granted.

But we can get a sense of how hard it actually is by trying to program a robot to do the same thing. It’s far from straightforward. Robots have been used for years to perform structured, repetitive tasks and as engineering has advanced, their movements have become more life-like and more stable. But they still have severe limitations, not the least of which is inflexibility in the face of injury or changes to their body shape. If a robot’s leg falls off, it becomes as useful as so much scrap metal.

So for robots, adaptiveness is a desirable virtue, especially if they are to be used in the field. Modern bots can independently develop complex behaviours without any previous programming, but usually this requires trial and error and lots of time. But not always. Josh Bongard and colleagues at Cornell University have developed an adaptable bot that’s programmed to continuously assess its body structure and develop new ways of moving if anything changes.

It differs from other models in that it has no built-in redundancy plans, no strategies for dealing with anticipated problems. It’s simply programmed to examine itself and adapt accordingly. The concept of a robot that can adapt to new situations is often the precursor to nightmare scenarios in many a science-fiction film. So it is fortunate that Bongard’s robot isn’t armed or threatening, but instead looks more like a four-armed starfish.
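The core loop behind that adaptiveness: act, compare what the sensors report against a population of candidate self-models, keep the best predictors, and plan with the winner. Here is a toy one-parameter version (my paraphrase of the idea, not the published implementation):

```python
import random

# Toy, one-parameter version of self-modelling: act, keep the candidate
# self-models that best predict the sensor readings, perturb the survivors,
# repeat. A paraphrase of the idea, not the published implementation.

def best_self_model(true_response, n_models=16, n_trials=20):
    models = [random.uniform(0.0, 2.0) for _ in range(n_models)]  # guesses at a body parameter
    for _ in range(n_trials):
        action = random.uniform(-1.0, 1.0)            # exploratory motor command
        sensed = true_response(action)                # what the body actually did
        models.sort(key=lambda m: abs(m * action - sensed))   # rank by prediction error
        keep = models[:n_models // 2]                 # survivors...
        models = keep + [m + random.gauss(0, 0.1) for m in keep]  # ...plus mutated copies
    return models[0]          # current best self-model, used to plan the next gait

# Pretend the robot's leg scales motor commands by 1.3 -- until it breaks
# and the factor changes, at which point the same loop re-learns it.
print(round(best_self_model(lambda command: 1.3 * command), 2))
```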



Our brains react differently to artificial vs human intelligence

With their latest film WALL-E, Pixar Studios have struck cinematic gold again, with a protagonist who may be the cutest thing to have ever been committed to celluloid. Despite being a blocky chunk of computer-generated metal, it’s amazing how real, emotive and characterful WALL-E can be. In fact, the film’s second act introduces an entire swarm of intelligent, subservient robots, brimming with personality.

Whether or not you buy into Pixar’s particular vision of humanity’s future, there’s no denying that both robotics and artificial intelligence are becoming ever more advanced. Ever since Deep Blue trounced Garry Kasparov at chess in 1997, it’s been almost inevitable that we will find ourselves interacting with increasingly intelligent robots. And that brings the study of artificial intelligence into the realm of psychologists as well as computer scientists.

Jianqiao Ge and Shihui Han from Peking University are two such psychologists and they are interested in the way our brains cope with artificial intelligence. Do we treat it as we would human intelligence, or is it processed differently? The duo used brain-scanning technology to answer this question, and found that there are indeed key differences. Watching human intelligence at work triggers parts of the brain that help us to understand someone else’s perspective – areas that don’t light up when we respond to artificial intelligence.