Modules of genes involved in metabolism. Source: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3357342/

The Parts of Life

By Carl Zimmer
January 30, 2013

We’re made of parts. Our skull is distinct from our spine. Our liver does not grade subtly into our intestines. Of course, the parts have to be connected for us to work as a whole: a skull completely separated from a spine is not much good to anyone. But those connections between the parts are relatively few. Our liver is linked to the intestines, but only by a few ducts. That’s a far cry from the intimate bonds between all the cells that make up the liver itself, not to mention the membrane that wraps around it like an astronaut’s suit. The distinctness of the parts of our bodies is reflected in what they do. In the liver, all sorts of biochemical reactions take place that occur nowhere else. Our skull protects our brain and chews our food–jobs carried out by no other part of our body.

Biologists like to call these parts modules, and they call the “partness” of our bodies modularity. It turns out that we are deeply modular. Our brain, for example, is made up of 86 billion neurons linked together by perhaps 100 trillion connections. But they’re not linked randomly. A neuron is typically part of a dense network of neighboring neurons. Some of the neurons in this module extend links to other modules, creating bigger modules. The brain can link its modules together in different networks to carry out different kinds of thought.

The proteins that make up our cells work in modules, too. Some proteins can only work in collaboration with certain other proteins. They may need to join together to make a channel, for example, or they may help out in an assembly line of chemical reactions that breaks down a toxin. You can draw a map of these interactions by connecting lines between genes that are turned into proteins in the same situations. The modules look like dense nests of links, with a few links joining together one module to another.

We are not alone. Other animals are modular. So are plants, fungi, protozoans, and bacteria. It’s enough to make you wonder why life is universally made up of parts.

You may be able to think up plenty of reasons that seem obvious. Maybe modules do things more efficiently. Maybe too much multi-tasking slows life down too much. Maybe modules make it easier for life to adapt to new challenges, by letting one part of an organism evolve without affecting the other parts. Or maybe during evolution, modules can be easily duplicated and then tweaked to tackle a new job.

Maybe. Or maybe not. To judge the merit of such ideas, scientists put them to the test. Scientists can compare modules in real organisms to look for patterns their hypotheses predict. They can tinker with bacteria to make them more or less modular and see how they perform. Recently three scientists, Jeff Clune of the University of Wyoming, Jean-Baptiste Mouret of Pierre and Marie Curie University in Paris, and Hod Lipson of Cornell used another method that’s become increasingly popular among scientists who want to understand the parts of life: they evolved a computer network.

Clune and his colleagues created a network inspired by the network of neurons we use to see. For a retina, it has eight virtual neurons, arranged in a four-by-two grid. Each one either sees light or darkness. Like real neurons, Clune's virtual neurons can respond to these inputs by sending a signal to neurons in the layer below. A single neuron may receive inputs from all eight neurons, or just one. It uses certain rules to decide whether to send a signal of its own down to the next layer. Finally, the network funnels down to a single neuron–a virtual brain, if you will–that can switch on or off in response to information that makes its way down through all the layers.
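
To make the setup concrete, here is a rough sketch in Python of how such a layered network might pass a signal from the retina down to the final neuron. The layer sizes, thresholds, and random weights below are placeholders of my own, not the configuration used in the study.

```python
import numpy as np

def forward(retina, weights, thresholds):
    """Pass an 8-pixel input (1 = light, 0 = dark) down through the layers.

    `weights` is a list of matrices, one per layer; `thresholds` holds each
    neuron's firing threshold. Both are placeholders standing in for whatever
    connection pattern evolution has produced.
    """
    activity = np.asarray(retina, dtype=float)
    for w, t in zip(weights, thresholds):
        # Each neuron sums its weighted inputs and fires (1) only if the sum
        # clears its threshold -- a crude stand-in for the study's neuron rule.
        activity = (w @ activity > t).astype(float)
    return activity[0]  # the single "brain" neuron: 1 = TRUE, 0 = FALSE

# Example: an 8 -> 4 -> 2 -> 1 funnel with random links (the layer sizes are
# illustrative, not taken from the study).
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 8)), rng.normal(size=(2, 4)), rng.normal(size=(1, 2))]
thresholds = [np.zeros(4), np.zeros(2), np.zeros(1)]
print(forward([1, 0, 1, 1, 0, 0, 1, 0], weights, thresholds))
```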

The scientists made lots of different networks, varying which neurons were linked to which, as well as how strongly they influenced each other. And then they put these networks to a test. In effect, they asked if a specific pattern was present on the left side and whether a different pattern was present on the right side. If both were there, the eye needed to answer TRUE. Otherwise, it needed to respond FALSE.

They showed all 256 possible combinations to the networks and scored them for their accuracy. Not surprisingly, most were deeply awful. But a few were a little less awful, thanks only to chance.
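
With only eight binary pixels there are 2^8 = 256 possible inputs, so each network can be scored exhaustively. Here is a sketch of that scoring step; the left and right target patterns are invented stand-ins, since the post doesn't spell out which patterns the experiment used.

```python
from itertools import product

# Hypothetical target patterns for each four-pixel half; the post doesn't say
# which patterns the experiment actually used.
LEFT_TARGETS = {(1, 0, 1, 0), (0, 1, 0, 1)}
RIGHT_TARGETS = {(1, 1, 0, 0), (0, 0, 1, 1)}

def correct_answer(retina):
    """TRUE (1.0) only if the left half shows a left pattern AND the right half a right pattern."""
    left, right = tuple(retina[:4]), tuple(retina[4:])
    return 1.0 if (left in LEFT_TARGETS and right in RIGHT_TARGETS) else 0.0

def accuracy(network):
    """Score any network (a function from 8 pixels to 0 or 1) on all 256 possible inputs."""
    inputs = list(product([0, 1], repeat=8))
    hits = sum(network(retina) == correct_answer(retina) for retina in inputs)
    return hits / len(inputs)

# In this toy setup, a network that always answers FALSE already gets most
# inputs right, because only a few of the 256 combinations are actually TRUE.
print(accuracy(lambda retina: 0.0))
```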

Clune and his colleagues then mimicked natural selection. They selected the best-performing networks and duplicated them. They introduced a mutation-like feature to their program, randomly altering the links in the copies. Then the scientists tested the mutant networks again, and once again let the best ones produce new mutants.
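
That cycle of copying, mutating, and re-testing is the heart of the procedure. In outline it looks something like the loop below, where the population, the fitness function, and the mutation operator are all supplied by the caller; the study's actual algorithm was more elaborate than this bare-bones version.

```python
import copy
import random

def evolve(population, fitness, mutate, generations=25_000, keep=0.2):
    """A bare-bones evolutionary loop: score, keep the best, refill with mutants.

    `population` is a list of networks, `fitness` scores one network, and
    `mutate` returns a randomly altered copy. This is a generic sketch, not
    the study's exact procedure.
    """
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[: max(1, int(keep * len(population)))]
        # Refill the population with mutated copies of the survivors.
        population = survivors + [
            mutate(copy.deepcopy(random.choice(survivors)))
            for _ in range(len(population) - len(survivors))
        ]
    return max(population, key=fitness)
```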

Over 25,000 generations, some of the virtual eyes managed to get good–perfect in some cases. But then Clune and his colleagues threw another ingredient into the mix. They rewarded virtual eyes not only for becoming more accurate, but also for doing the job with as few links as possible. It's a plausible factor to include, because building and running more tangled networks can impose a higher cost on an organism. Neurons, for example, are big cells that require a lot of energy to build and also demand a lot of repair to keep running.
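
One simple way to express that extra ingredient is to charge each network a small penalty for every link it maintains. The sketch below folds accuracy and wiring cost into a single score; the penalty value is arbitrary, and the real experiment handled the two pressures in a more sophisticated way than this weighted sum.

```python
import numpy as np

def count_connections(weights):
    """Number of nonzero links in a network given as a list of weight matrices."""
    return sum(int(np.count_nonzero(w)) for w in weights)

def combined_fitness(task_accuracy, weights, penalty=0.01):
    # Reward getting the patterns right, but charge a small price for every
    # link the network maintains. The penalty value here is arbitrary.
    return task_accuracy - penalty * count_connections(weights)

# Example: two perfectly accurate eyes, one sparsely wired, one densely wired.
sparse = [np.eye(4, 8), np.eye(1, 4)]            # 5 links in total
dense = [np.ones((4, 8)), np.ones((1, 4))]       # 36 links in total
print(combined_fitness(1.0, sparse))             # 0.95
print(combined_fitness(1.0, dense))              # 0.64
```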

In this new environment, evolution operated differently. A lot more virtual eyes ended up recognizing patterns perfectly. They also became more adaptable. Clune and his colleagues turned the virtual eyes towards a new task: recognizing whether one particular pattern of four pixels was present on either the left or the right side. The minimal-wiring networks evolved skill at this new task much faster than the networks that had been rewarded for accuracy alone.

And there was one more difference between the two kinds of eyes–one that might tell us something about why life comes in parts. The minimal-wiring virtual eyes spontaneously evolved modules. The virtual neurons organized themselves into two networks–one on the left, and one on the right. Only at the final layer of the network did they combine their signals. In other words, a premium on minimally-linked networks spontaneously produces modules. (You can see this evolution in action in the video embedded at the end of the post.)
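
A crude way to see that split is to count how many first-layer links cross the left/right divide; in a cleanly modular eye, almost none do. The check below assumes the same weight-matrix representation as the earlier sketches.

```python
import numpy as np

def crossing_fraction(w):
    """Fraction of first-layer links that cross the left/right divide.

    `w` is the first layer's weight matrix (hidden neurons x 8 retina pixels).
    Each hidden neuron is assigned to whichever half of the retina it draws
    most of its links from; links to the other half count as "crossing".
    A low fraction means the network has split into left and right modules.
    """
    crossings, total = 0, 0
    for row in w:
        left_links = int(np.count_nonzero(row[:4]))
        right_links = int(np.count_nonzero(row[4:]))
        home_is_left = left_links >= right_links
        crossings += right_links if home_is_left else left_links
        total += left_links + right_links
    return crossings / total if total else 0.0

# A cleanly modular first layer: two neurons wired only to the left half,
# two wired only to the right half.
modular = np.array([[1, 1, 0, 0, 0, 0, 0, 0],
                    [0, 0, 1, 1, 0, 0, 0, 0],
                    [0, 0, 0, 0, 1, 1, 0, 0],
                    [0, 0, 0, 0, 0, 0, 1, 1]])
print(crossing_fraction(modular))  # 0.0
```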

A skeptic might argue that modules evolved in this experiment because the problem that the virtual eye had to solve was itself modular. Each side of the eye had to recognize its own pattern before the network could make a final judgment. To test this possibility, the scientists evolved eyes on problems that couldn't be broken down so neatly. For example, in one task, the eye had to determine whether there were four black squares anywhere in the eight-pixel grid. Even in these decidedly unmodular tasks, modules emerged.

Clune’s study suggests an evolutionary route to modules: as networks become more efficient, they become more modular. But once the parts of a system emerge, natural selection may then favor modules themselves, because they make living things more flexible in their evolution. Once life’s Legos get produced, in other words, evolution can start to play.

[Update 5:30 pm: Corrected description of first test]
