
How to Program One of the Gut’s Most Common Microbes

Last month, I wrote a feature for New Scientist about smart probiotics—bacteria that have been genetically programmed to patrol our bodies, report on what they find, and improve our health. Here’s how the piece began:

“[There’s a] growing club of scientists who are tweaking our microbiome—the microbes that live in or on our bodies—in pursuit of better health. They are stuffing bacteria with circuitry composed of new combinations of genes, turning them into precision-targeted micro-drones designed to detect and fix specific problems.

Some lie in wait for pathogens like P. aeruginosa or the cholera bacterium Vibrio cholerae, releasing lethal payloads when they have the enemy in sight. Some use the same tactics to attack cancer cells. Others can sense signs of inflammation and release chemicals that could help to treat chronic conditions like inflammatory bowel disease. And these tricks aren’t just confined to the lab. In the next 18 months, at least one start-up is expected to put its newly created synthetic bugs into clinical trials with real people. Welcome to the age of smart probiotics, where specially designed bacterial rangers patrol the gut, reporting on the state of the environment, eliminating weedy species, and putting out fires.”

This work is part of the growing field of synthetic biology, which brings the principles of engineering to the messy world of living things. Synthetic biologists treat genes as “parts”, which they can pick from a registry, combine into “circuits” or “modules”, and stuff into a living “chassis”. Rather than modestly modifying one or two genes, they remix large networks, to produce yeast that can brew antimalarial drugs instead of beer, cells that self-destruct if they turn cancerous, or microbes that can sense and quench inflammation in the gut.

These microbiome engineers started by modifying the obvious laboratory darlings, like Escherichia coli, or species used in probiotic yoghurts, like Lactobacillus. These bacteria have been studied for a long time and are easy to manipulate. But they are actually relatively rare in our guts. They also lack staying power, which is why the current generation of probiotics don’t colonise the people who swallow them, and rarely deliver on their fabled health promises.

If you want to turn a microbe into a gut ranger, you’re better off starting with a species that’s well-adapted there. And there are few better choices than Bacteroides thetaiotaomicron—B-theta to its friends. Collectively, the Bacteroides genus makes up between 30 and 50 per cent of the microbes in a Western person’s gut. They’re exquisitely attuned to that environment and they’re excellent colonisers. And B-theta is arguably the best-studied of them. It was an early star of the microbiome craze: by working on this microbe back in the 1990s, pioneers like Jeff Gordon began to understand how important gut bacteria are to our lives.

Now, Mark Mimee and Alex Tucker from MIT have hacked B-theta, creating a small library of biological parts that can be used to programme it.

They started by building circuits that can permanently activate a given gene, and then tune its activity to a specific level within a 10,000-fold range. They tested these circuits by hooking them up to a gene that makes a glowing enzyme, and showed that they could precisely set the brightness of the glow.

Next, they created inducible circuits, which would activate a target gene only when they receive some kind of external trigger, like a drug or a dietary nutrient. When the trigger arrives, the circuit produces an enzyme that cuts out a particular piece of DNA, flips it around, and glues it back into place. A microbe that carries this circuit has memory—by inverting its DNA, it permanently records its encounter with the triggering substance. Mimee and Tucker could then tell if the trigger was present by sequencing the right region and looking for the inversion. They had effectively turned B-theta into a journalist that could sense and report on the events in a gut.
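To make the logic of that memory switch concrete, here is a toy model in code. It is a sketch only: the genome is a plain string, the trigger molecule is a made-up placeholder, and the real circuit relies on a recombinase enzyme and specific inducers rather than anything shown here.

```python
# Toy model of the inversion-based memory switch described above.
# The "genome" is a string and "inducer-X" is a placeholder trigger,
# not one of the actual inducers used in the study.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

class MemorySwitch:
    def __init__(self, upstream, segment, downstream):
        self.upstream = upstream
        self.segment = segment        # the invertible stretch of DNA
        self.downstream = downstream
        self.flipped = False

    def sense(self, molecule, trigger="inducer-X"):
        # The inversion only happens when the trigger is present, and it
        # is one-way: once flipped, the record is permanent.
        if molecule == trigger and not self.flipped:
            # A biological inversion is a reverse complement.
            self.segment = "".join(COMPLEMENT[b]
                                   for b in reversed(self.segment))
            self.flipped = True

    def sequence(self):
        # "Sequencing" the region reveals whether the encounter happened.
        return self.upstream + self.segment + self.downstream

switch = MemorySwitch("AAA", "GATTACA", "TTT")
switch.sense("glucose")        # wrong molecule: nothing is recorded
print(switch.sequence())       # AAAGATTACATTT
switch.sense("inducer-X")      # the trigger arrives: the DNA flips
print(switch.sequence())       # AAATGTAATCTTT
```

Reading the switch works just as the team did it: sequence the region and check whether the marked segment sits in its original or inverted orientation.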

Finally, the team created circuits that can inactivate specific genes in B-theta. They used a powerful new technique called CRISPR interference, in which an enzyme called Cas9 is guided to a specific stretch of DNA. Cas9 normally acts like a pair of scissors that cuts whatever DNA it encounters. But in CRISPR interference, the scissors have been blunted. Rather than cutting a target gene, Cas9 just sits there, stopping other enzymes from activating it.

Mimee and Tucker connected Cas9 to genes that sense external triggers, so they could unleash it when they wanted. Then, they used different guide molecules to target Cas9 to specific genes. Now, they could inactivate those genes whenever they wanted, by delivering the right trigger. “It’s a flexible strategy for turning off any gene you want,” says Timothy Lu, who led the study.

A cynic might say that these circuits already existed, and the team just repurposed them for use in B-theta. But that was not easy. Unlike E. coli, which grows with ridiculous ease, B-theta is exquisitely sensitive to oxygen. To work with it, the team had to exclude the omnipresent gas by buying an anaerobic chamber. They also had to develop new ways of introducing foreign DNA into the bacterium—something that’s easy to do in E. coli, but harder in several other species.

Synthetic biology projects have often advanced to this point and then face-planted. Circuits that look good on paper and work in a dish will then fail when they’re incorporated into an actual cell or, in the case of gut microbes, when those cells are loaded into an animal. Pamela Silver from Harvard Medical School achieved one of the first successes last year by programming E. coli with a memory switch, and testing it in mice. Lu’s team have now done the same. When they gave their programmed microbes to mice, everything worked. The inducible memory switches turned on when the mice ate the right triggers, as did the Cas9 suppressors. “We were surprised at how well they did,” says Lu.

“This is a beautiful, elegant piece of work that shows the power of synthetic biology to make a previously challenging organism immediately accessible to the scientific community,” says Michael Fischbach from the University of California, San Francisco, who is also programming his own microbes. “Bacteroides is an ideal ‘chassis’: a friendly bacterium that colonizes the gut professionally.”

“This study provides a nice proof of concept that portable components can be combined and function in this gut commensal,” agrees Justin Sonnenburg from Stanford University, who has been working with B-theta for decades and is also engineering it. “This rapidly expanding direction for gut microbiota research will eventually give us new insight into microbiota-host interaction and medically useful microbes.”

By that, he means that programmed gut microbes could tell us a lot more about the gut than we currently know. The organ is still a bit of a black box. Food goes in and, some 8.5 metres later, waste comes out. Yes, we roughly understand what happens in the middle, but the details are still elusive. When Sonnenburg applied for his position at Stanford, an interviewer asked him: “What has a single cell experienced while transiting the digestive tract? If there’s a little inflammation, has it experienced that? Does it stick around eating plant polysaccharides? How could you tell?” Those are the kinds of questions that he, Lu, and others hope to address with their microbial reporters.

They also want to connect detection circuits to therapeutic ones, so that microbes can not only spot early signs of infections and chronic diseases, but also correct them. You could imagine handing out these sentinel microbes to people in the midst of epidemics, like the cholera outbreak that is still raging in Haiti. Alternatively, soldiers and tourists could take them before travelling abroad to regions with a high risk of diarrhoeal diseases. The possibilities are vast.

Reference: Mimee, Tucker, Voigt & Lu. 2015. Programming a Human Commensal Bacterium, Bacteroides thetaiotaomicron, to Sense and Respond to Stimuli in the Murine Gut Microbiota. Cell Systems http://dx.doi.org/10.1016/j.cels.2015.06.001

Genetically Engineering the Wild

Back in April, I wrote in National Geographic about the provocative idea of bringing extinct species back to life. In the five months that have passed since then, I haven’t spotted any mammoths or saber-tooth lions drifting through my front yard. If “de-extinction” ever does become real, it won’t be for quite a while.

What I have seen over the past five months is a new conversation. Part of it has revolved around the specifics of de-extinction. Some people are open to the possibilities of rebuilding genomes and embryos of vanished species. Others find it a flashy distraction from the real work of fighting the current wave of extinctions.

But the conversation is bigger than mammoths and saber-tooth lions. It makes us think about how much we could–or should–manipulate the DNA of wild animals and plants. This question applies not just to species that are already extinct, but to endangered species that are rolling down the road towards extinction. And with estimates that at least 15 to 40% of species will be effectively extinct by 2050, that road is wide indeed. Is it okay to use genetic engineering to save some of them?

In Nature today, a group of conservation biologists take this conversation much further. They report on a meeting they had this spring in New Mexico to discuss how the changing climate will push some species towards extinction and what can be done about it.

For a few years now, some conservation biologists have argued that we should move species to places where they’re more likely to survive. If Florida is too hot in 50 years for a tree to survive, move the tree to Virginia.

But what if we were to move genes instead? That’s the question that the scientists at the New Mexico meeting considered.

Their conversation was based on the fact that animals and plants have evolved genes that adapt them to their environments. As trees move into drought-stricken plains, natural selection may favor genes that help them conserve their water. When pathogens emerge, natural selection may favor genes that make hosts resistant. If Florida is going to become more like, say, Brazil, then maybe genes from Brazil will help species survive in Florida. (As for what genes we might give the species in Brazil…well, that’s hard to say.)

Farmers and livestock breeders have harnessed genetic variation for centuries. They’ve crossed different breeds to create a combination of traits they desire. Conservationists have sometimes used hybridization as well, to nurture endangered species.

In Florida, for example, the dwindling panther population became inbred, and the cats had less success producing cubs. Conservationists trucked in eight panthers from a related subspecies in Texas. It’s been a dozen years since this cross-breeding took place, and the Florida panthers’ gene pool now has more variation.

Hybridization can be very effective, but it’s also slow and inefficient. It jumbles together lots of DNA in lots of different ways; breeders then pick out the crosses that seem to perform best. In recent decades, genetic engineering has made it possible to move individual genes from one subspecies to another, or even one species to another. It might be possible to move genes into wild species to help them thrive. The scientists from the New Mexico meeting point to gene variants in rainbow trout, discovered earlier this year, that help the fish survive in warm water. Those variants could be inserted into other trout that are going to be threatened by rising river temperatures.

The scientists call wildlife genetic engineering “facilitated adaptation.” But while they’re ready to give it a name, they don’t want to launch into it without a lot of consideration. They want to make sure facilitated adaptation doesn’t cause harm to species that are already on the brink of extinction. Genes often carry out more than one function, and so even if an imported gene has one beneficial effect, it might have others that are dangerous.

The scientists also worry that facilitated adaptation might sap the energy for fighting the causes of today’s extinction crisis. If scientists tell us we can just engineer penguins to live in warm temperatures, then who needs to do anything about climate change?

Even if we stopped warming the planet tomorrow, though, endangered species would still face other threats, some of which genetic engineering might help address. We humans move pathogens around the planet, bringing new diseases to new places. A fungus from Europe has killed millions of bats in the United States and shows no sign of slowing down. If scientists can determine why bats in Europe don’t die of the fungus, they might be able to insert the protective gene variants into American bats and make them resistant.

And if this seems like wishful thinking, consider the case of the American chestnut. As I wrote on the Loom, another fungus has nearly annihilated the tree. Fungus-fighting genes from other plants are now bringing it back.

I’ll be very curious to see how this new stage of the conversation plays out in the weeks to come. (Feel free to leave your thoughts in the comments below.) But I also hope it doesn’t veer over ideological guard rails.

Opponents may argue that the very act of moving genes from one organism to another is a violation of nature’s diversity. But this is a romantic, pre-genomic view of life. Genes have flowed from species to species for billions of years.

Some supporters of genetic engineering may consider this an easy fix for our extinction crisis. But for many species, genetic engineering won’t help, I expect. You can’t tweak an elephant’s gene to make it bullet-proof. And even for those species that could be helped, scientists know precious little about the genes that could help them. Scientists have started to gather together what little they know about life’s genetic diversity, but they have only started. And unfortunately, for a lot of species, they’re running out of time.

Meet the Animats

Here is the story of how simple video-game creatures evolved a memory.

These simple creatures were devised by a group of scientists to study life’s complexity. There are lots of ways to define complexity, but the one that they were interested in exploring has to do with how organisms behave.

Every creature from a microbe to a mountain lion can respond to its surroundings. E. coli has sensors on its surface to detect certain molecules, and it processes those signals to make very simple decisions about how it will move. It travels in a straight line by spinning long twisted tails counterclockwise. If it switches to clockwise, the tails unravel and the microbe tumbles.

A worm with a few hundred neurons can take in a lot more information from its senses, and can respond with more behaviors. And we, with a hundred billion or so neurons in our brains, have a wider range of responses to our world.

A group of scientists from Caltech, the University of Wisconsin, Michigan State University, and the Allen Brain Institute wanted to better understand how this complexity changes as life evolves. Does life get more complex as it adapts to its environment? Is more complex always better? Or–judging from the abundance of E. coli and its fellow microbes on the planet today–is complexity overrated?

There are two massive problems with trying to answer these questions. One is that it’s hard to run an experiment on living things to watch them evolve different levels of complexity. The other is that it’s difficult to measure that complexity in a precise way. Simply counting the number of neurons in a brain isn’t good enough, for example. If a hundred billion neurons are joined together randomly, they won’t generate any useful behavior. How those neurons work together matters, too.

There are more precise ways to think about this complexity. William Bialek of Princeton has proposed that complexity is a measurement of how much of the future an organism can predict from the past. Giulio Tononi of the University of Wisconsin has proposed that complexity is a measure of how many parts of a brain can separately process information, and how well they combine that information into a seamless whole. (I wrote more about Tononi’s Integrated Information Theory in the New York Times.)

Both Bialek and Tononi have laid out their theories in mathematical terms, so that you can use them to measure complexity in terms of bits. You can say the complexity of a system is precisely 10 bits. You don’t have to just throw up your hands and say, “It’s complicated.”
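To give a flavour of what measuring information in bits looks like, here is a small sketch. It computes plain mutual information, a much simpler quantity than Tononi’s integrated information or Bialek’s predictive information, but it is built from the same ingredients: probabilities of states, turned into bits with logarithms.

```python
# An illustration of "information in bits": how much a simple sensor
# tells you about the world, measured as mutual information. This is
# NOT Tononi's or Bialek's measure, just the common building block.
from math import log2

def mutual_information(joint):
    # joint[(x, y)] = probability of sensor state x with world state y
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0) + p
        py[y] = py.get(y, 0) + p
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# A perfectly reliable sensor: on exactly when an obstacle is there.
perfect = {(1, 1): 0.5, (0, 0): 0.5}
print(mutual_information(perfect))   # prints 1.0 (one full bit)

# A sensor that fires at random tells you nothing about the world.
useless = {(1, 1): 0.25, (1, 0): 0.25, (0, 1): 0.25, (0, 0): 0.25}
print(mutual_information(useless))   # prints 0.0
```

The point is the one made above: the answer comes out as a number of bits, not as a shrug.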

Unfortunately, there’s still a catch. As powerful as these theories may be, they only allow scientists to calculate the complexity of a brain (or any other information-processing system) if they can measure all the information in it. There are so many bits of information flooding through our brains, and in such an inaccessible way, that it’s pretty much impossible to actually calculate their complexity.

Chris Adami of Michigan State and his colleagues decided to overcome these hurdles–of observing evolution and measuring information precisely–by programming a swarm of artificial creatures which they dubbed animats.

To create animats, the scientists first had to create the world in which they would struggle to survive, reproduce, and evolve. The scientists put them in a maze made of a series of walls. To move forward, the animats had to crawl along each wall to find a doorway. If they passed through the doorway, they could move forward to the next wall, where they could search for a new door. The animats that traveled through the most walls were then able to reproduce.

Here’s a diagram of the animat’s anatomy:

Edlund et al 2011. doi:10.1371/journal.pcbi.1002236.g002

The red triangles, marked 0 through 2, are simple eyes. All they do is sense whether they are next to an obstacle or not. The pink triangle marked 3 is a sensor that senses whether it’s in a doorway or not. Sensors 4 and 5 are collision detectors that sense whether the animat has crashed into the upper or lower borders of the maze. The information that these senses register is as simple as can be: they’re either on or off.

That information–on or off–flows from the senses to the animat’s brain–the circles marked 6, 7, 8, and 9. Each sense may be linked to one circle, or two, or all of them. The links may be strong or weak. If an eye has a strong connection to one of the circles, that circle may flip every time the eye senses an obstacle. A weak connection may mean that the circle flips only a quarter of the time the eye sees something. The parts of the brain can be linked to each other, too, helping to switch each other on and off.

Finally, the animat has legs, the green trapezoids marked 10 and 11. The brain can send signals to the legs, as can the sensors. The legs can respond in one of four ways–move left, move right, move forward, or do nothing.
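Here is one way to sketch that anatomy in code. It is a deliberate simplification: the wiring is deterministic (the real animats use probabilistic logic gates, so a "weak link" fires only some of the time), and the mapping from the two motor bits to the four actions is my own invented convention.

```python
# A stripped-down animat: twelve binary elements. Sensors are
# elements 0-5, brain nodes 6-9, motors 10-11, as in the diagram.
# Deterministic wiring only; the paper's gates are probabilistic.
import random

class Animat:
    N_SENSORS, N_BRAIN, N_MOTORS = 6, 4, 2

    def __init__(self, genome=None):
        # The "genome" here is one lookup table per non-sensor element,
        # mapping the previous full state (a 12-bit pattern) to 0 or 1.
        size = 2 ** (self.N_SENSORS + self.N_BRAIN + self.N_MOTORS)
        n_out = self.N_BRAIN + self.N_MOTORS
        self.genome = genome or [
            [random.randint(0, 1) for _ in range(size)]
            for _ in range(n_out)
        ]
        self.state = [0] * (self.N_SENSORS + n_out)

    def step(self, sensor_inputs):
        # Load the sensors, then update every brain node and motor
        # synchronously from the previous full state.
        self.state[:self.N_SENSORS] = sensor_inputs
        key = int("".join(map(str, self.state)), 2)
        for i, table in enumerate(self.genome):
            self.state[self.N_SENSORS + i] = table[key]
        # The two motor bits encode four actions (my own convention).
        return {(0, 0): "stay", (0, 1): "right",
                (1, 0): "left", (1, 1): "forward"}[tuple(self.state[-2:])]
```

A freshly randomized genome produces random wandering; the interesting behavior only appears once selection has shaped those lookup tables over many generations.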

To launch their experiment, the scientists created 300 animats with randomly generated instructions for how each part of their body worked. They then dropped each animat into an identical maze and ran their programs for 300 steps.

In those 300 steps, a lot of the animats just meandered up and down their first walls, making no progress. At the end of the run, the scientists grabbed the 30 animats that had gone the furthest and let them reproduce.

Each animat got to produce ten new offspring. Their offspring inherited the same code as their parent, but each position in the code had a small chance of mutating. These mutations could strengthen or weaken a link between an eye and a part of the brain, or could add an entirely new link. The parts of the brain could change the signals they sent to each other, or to the legs. Crucially, the scientists didn’t program in changes that would make the animats faster. The mutations dropped into the animat genomes at random.

The scientists then set the animats into their mazes again and let 300 steps pass by. Once again, they picked the 30 that managed to travel the furthest to reproduce. This process is natural selection in its essence. Some organisms have more offspring thanks to inherited variations in their genes, and new variation can arise through mutations. Over many generations, this process can spontaneously change how organisms work.
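The selection scheme itself is simple enough to sketch: 300 animats, keep the 30 that travel furthest, and give each of those 10 mutated offspring. Everything below is skeletal. The run_maze function stands in for the actual maze simulation, and the mutation rate and genome encoding are placeholders rather than the paper’s parameters.

```python
# The selection-and-mutation loop described above, in skeletal form.
# A genome is a list of lookup tables (lists of 0/1 bits); run_maze
# is a placeholder for the real 300-step maze simulation.
import random

POP, SURVIVORS, OFFSPRING, MUT_RATE = 300, 30, 10, 0.005

def mutate(genome):
    # Flip each bit with small probability. Mutations land at random,
    # with no bias toward making the animat faster.
    return [[bit ^ (random.random() < MUT_RATE) for bit in table]
            for table in genome]

def evolve(population, run_maze, generations):
    for _ in range(generations):
        # Score every animat by how far it travels, keep the best 30,
        # and rebuild the population from their mutated offspring.
        scored = sorted(population, key=run_maze, reverse=True)
        parents = scored[:SURVIVORS]
        population = [mutate(p) for p in parents
                      for _ in range(OFFSPRING)]
    return population
```

Dropping the `scored[:SURVIVORS]` step and picking 30 parents at random turns this into the no-selection control described later, which is exactly the comparison the researchers ran.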

One of the luxuries of digital evolution is that you can let it run practically ad infinitum. The scientists let the animats reproduce for 60,000 generations. And in that time, the animats evolved into much better maze-travelers.

Here, in glorious Pong-era video, is an animat from the 12th generation. The top panel shows it moving through the full maze, while the lower left panel zooms in on the animat. The lower right panel shows the activity in the animat’s brain. Note how it takes its own sweet time meandering up and down the walls:

And here is an animat from the 60,000th generation. It moves with assurance and swiftness. The researchers were able to calculate the perfect strategy for an animat, and this evolved specimen had reached 93% of the ideal performance. (The early animat from the 12th generation only performed at 6%.)

You might be wondering what those red arrows are in the doorways. They’re clues. Each arrow tells which direction to go to find the next doorway. The doorway sensor can respond to those signals, but at the start of the experiment, the animats have no way to use the information.

But after thousands of generations, some of the animats evolved the ability to pick up the clues. Their brains evolved wiring that allowed them to store the information they picked up in each doorway and use it to guide their movements until they got to the next doorway–whereupon they kicked out the old information and recorded the information in the new doorway. Once the animats evolved this simple memory, their performance skyrocketed.

It’s startling to open the virtual skull of the animats and look at their evolved brains. Here are two different animats after 49,000 generations.

Edlund et al 2011. doi:10.1371/journal.pcbi.1002236.g006

Neither animat uses three of the four parts of its brain–that’s why only the circle marked 9 is shown in the diagrams. Each one has evolved different patterns of inputs and outputs. It’s hard to break the systems apart into individual circuits and say that they do anything in particular. The behavior of the animat emerges from the whole network.

Thanks to the design of the experiment, the scientists could measure the complexity of the animats as they evolved–with a colossal amount of computing time. This graph shows the complexity of animats along the Y axis, using a measurement of Tononi’s integrated information. The color of the dots represents which generation each animat came from–blue is from the early generations, and red from the latest ones. And, finally, the X axis shows how fast the animats can travel, as a percentage of the highest possible speed an animat could possibly go.

Joshi et al., 2013 doi:10.1371/journal.pcbi.1003111.g005

There are two lessons from this graph, which can seem contradictory.

As the animats get better at getting through the maze, they get more complex. No 50% animat is less complex than a 20% animat.

To see whether selection really was essential to the rise of complexity, the scientists ran a test on some highly evolved, highly complex animats. At the end of each maze run, they didn’t pick out the top 10 percent of the animats as the parents of the next generation. They just picked 30 animats at random from all 300. After 1,000 generations without selection, the animats were pretty much hopeless, hardly able to find a single doorway. And their complexity crashed.

This research shows that an increase in complexity comes with adaptation–at least for animats. But look again at the graph. Look at the animats at any given level of fitness. Some of them are more complex than others. In other words, some animats can race along at the same speed as other animats that are twice as complex. All that extra complexity seems like a waste.

In the world of animats, evolving a better brain requires a minimal increase in complexity, so as to take in more information and make better use of it. Extra complexity doesn’t necessarily make an animat better at traveling the maze, although it may provide the raw material for further evolutionary advances.

It would be interesting to see what would happen if some of the rules for animats were changed. In this experiment there was no cost to extra complexity–something that may not be true in the real world. The human brain makes huge demands of energy–twenty times more than the same weight of muscle would. There’s lots of evidence that efficiency has a strong influence on the anatomy of our brains. Perhaps we would have more complex brains if energy weren’t at such a premium. And if the animats had to pay a cost for extra complexity, they would evolve only the bare minimum. That’s an experiment I’d like to see. And don’t forget that arcade music….


Edlund JA, Chaumont N, Hintze A, Koch C, Tononi G, et al. (2011) Integrated Information Increases with Fitness in the Evolution of Animats. PLoS Comput Biol 7(10): e1002236. doi:10.1371/journal.pcbi.1002236

Joshi NJ, Tononi G, Koch C (2013) The Minimal Complexity of Adapting Agents Increases with Fitness. PLoS Comput Biol 9(7): e1003111. doi:10.1371/journal.pcbi.1003111

Rewiring Life: Learning About Synthetic Biology In Debates, Videos, and Comic Books

Today scientists at Stanford University reported they had implanted transistor-like bundles of genes into E. coli, making it possible to transform cells into biological computers. At Download the Universe, a science ebook review where I’m an editor, I take a look at the history of synthetic biology that led up to this remarkable feat. I also reflect on how to help young people become both excited and wise about these new kinds of technology. Check it out!

Resurrecting A Forest

For the cover story in the April 2013 issue of National Geographic, I explore an idea that sounds like pure science fiction: bringing extinct species back to life. What was once purely the domain of Crichton and Spielberg is becoming a new field of research. Thanks to spectacular advances in cloning, reproductive technology, and DNA sequencing, scientists can now seriously explore the possibility of reviving some species from extinction. If not dinosaurs, then perhaps mammoths or passenger pigeons.

“De-extinction,” as its advocates sometimes call it, is part of a bigger trend these days in the world of conservation. Over the past five decades, conservation has usually taken the form of removing threats so that endangered species can recover–ban pollutants, protect habitats, stop hunting, and the like. Conservationists saved the brown pelican, for example, by protecting it from DDT and similar chemicals and by preserving the coastal wetlands where it lives. What they did not do, however, was tinker with brown pelican DNA to make the birds better able to survive. Indeed, the brown pelican gene pool–the product of millions of years of evolution before humans turned up–was ultimately what the scientists were trying to protect from oblivion.

Meanwhile, over those same five decades, molecular biologists have become adept at probing and manipulating genes. Sequencing genomes went from a dream to just another day’s work at the lab. In the 1970s, scientists began inserting genes from one species into another, and they can now build simple genetic circuits.

Conservation biologists have taken up many of these tools. They learned how to sequence DNA, for example, so that they could map populations of endangered species and track the flow of genes between them. They’ve used advanced reproductive technology to raise their success rate with captive breeding programs. The San Diego Zoo has frozen stem cells and tissues from thousands of species of animals to investigate new ways of conserving them in the wild.

But conservation biologists have also seen some risks to biotechnology. If a synthetic organism could establish itself in the wild, for example, it could become an invasive species, putting native species at risk. (It’s important to point out that there’s no evidence that such an invasion has happened yet.) If we think of biodiversity as the world’s storehouse of genetic variation, then biotechnology has the potential to drive it down. Genetically engineered plants or animals might interbreed with wild relatives and spread their modified genes into the environment, reducing genetic variation in the wild.

Despite the potential risks, a number of conservation biologists are gingerly considering making even greater uses of biotechnology in order to protect biodiversity. Next month, for example, the Wildlife Conservation Society is hosting a meeting called “How Will Synthetic Biology and Conservation Shape the Future of Nature?”

Here’s a passage from the meeting’s framing statement:

“Critics have focused on the threats posed by novel life forms released into the environment, but little attention is paid to potential opportunities–to reconstruct extinct species or create customized ecological communities designed to produce ecosystem services. They may change the public perception of what is “natural” and certainly challenge the notion of evolution as a process beyond human construction.”

One of the few surviving American chestnuts, located in Maine. Photo courtesy of William Powell

To me, there’s no better example of the ambiguous future of conservation biology than the story of the American chestnut.

When Europeans arrived in North America, they found forests filled with American chestnut trees. These mighty plants, which could grow to be 100 feet tall, were the most abundant trees in the forests, making up 25 percent of the standing timber of the eastern United States. In the summer, the peaks of Appalachian mountains appeared to be capped with snow, thanks to the explosion of white chestnut flowers. Chestnut trees anchored the ecosystems of eastern American forests, providing food and shelter to bears, Carolina parakeets, and a vast number of other species. They were also a mainstay of loggers, who could fill an entire train car with boards cut from a single tree.

In 1904, a scientist observed that a chestnut tree at the Bronx Zoo was dying. It turned out to be infected with a fungus that came to be known as chestnut blight. No one is quite sure how it got to the United States, but all the evidence we have indicates it hitch-hiked its way in the 1870s on chestnut trees imported from Japan.

Chestnut blight, while harmless to Asian trees, proved devastating to the American ones. The fungus releases a toxic substance called oxalic acid that kills off the tree’s tissue, allowing the fungus to feed on it. An infected tree develops cankers on its trunk, and once they spread around the full circumference, the tree can no longer carry water and nutrients from its roots to its branches.

A stand of blight-infected chestnuts in New York, 1915. Courtesy of William Powell

Over the course of about eighty years, the chestnut blight spread across almost the entire range of the American chestnut, from Maine to Mississippi. It conquered nine million acres and infected three billion trees. A few lone trees still survive unharmed here and there, but no one under the age of sixty has ever seen the forests of the eastern United States as they once were.

In the pantheon of extinction, American chestnuts are poised awkwardly at the door. Chestnut blight doesn’t kill the trees outright; as it spreads down to the roots, it encounters other microbes that outcompete it. As a result, infected trees become stumps. Sometimes they send up a new shoot, but once it reaches a few feet in height, the fungus attacks it again, and the shoot dies back.

“It’s basically functionally dead,” William Powell of SUNY College of Environmental Science and Forestry in Syracuse, New York, told me. “They sprout up, they get the blight again, and they are killed down to the ground. You know the story of Sisyphus? The guy who rolled the rock up the hill and it just kept rolling back down? Well, that’s kind of like what’s happening with the chestnut.”

It’s been a century since American foresters started trying to save the tree. They sprayed the trees with fungicidal chemicals, to no avail. They infected the blight with fungus-invading viruses, but resistant strains continued to kill trees. They tried burning down chestnut trees to create a fungal firebreak, only to discover that the blight could silently infect oak trees, too.

They did what conservationists have always done–try to remove the threat–but nothing worked.

In the 1980s, a group of scientists embarked on a different approach, one that is now showing signs of success. If they couldn’t stop the blight, they would help the trees defend themselves.

The reason that chestnut blight was able to come to America in the first place was that Asian chestnuts can fight the fungus. They have genes that allow them to hold the cankers in check and scar them over. The trees can continue to grow and produce pollen and seeds. American chestnuts, separated from the fungus by thousands of miles of Pacific Ocean, never had the opportunity to evolve defenses against the blight. So the American Chestnut Foundation, a non-profit established to save the tree, decided to start breeding the two species together, to see if they could provide the American chestnuts with Asian defenses.

When the foundation’s scientists interbred the American and Asian trees, the plants mixed together their genes in different combinations in their hybrid seeds. The scientists grew the seeds into saplings, and after a few years, it became clear that some of the hybrid chestnuts had inherited some of the Asian defense genes. The cankers grew more slowly on them than on their American ancestors.

But the trees were no longer recognizable as American chestnuts, since half of their DNA came from Asian chestnuts. Asian chestnuts are small, orchard-like trees, and so the hybrids were far smaller than their towering American ancestors. These hybrids were not the solution to the chestnut blight, in other words. Their defenses were still weak, and they would not survive in American forests in the shadow of oaks and other big trees.

So the scientists kept breeding the trees. They used another tried-and-true method, known as backcrossing. They bred the American-Asian hybrids with American chestnuts, producing trees with only a quarter of their DNA coming from Asian chestnuts. Again, some of the new trees could resist the blight, while the others couldn’t. That was because the quarter of their DNA from the Asian trees contained the genes essential for fighting the disease. At the same time, the trees more closely resembled American chestnuts, because three-quarters of their DNA now came from American stock.

From this generation, the scientists picked the best-defended trees and back-crossed them again. They also mated hybrids with one another, shuffling the genes into new combinations, and selectively breeding the chestnuts that were both more resistant and bigger. They’ve now got thousands of trees that are 15 parts American and one part Asian growing on their experimental farm in Virginia.
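The arithmetic behind those fractions is straightforward: each cross back to a pure American tree halves the expected Asian contribution. Here is a minimal sketch of that expectation (the function name is mine; the fractions are averages, since real inheritance varies chromosome by chromosome):

```python
# Expected fraction of Asian chestnut DNA after successive backcrosses
# to pure American trees. This is a simplification: each backcross halves
# the expected hybrid contribution on average.

def asian_fraction(backcrosses):
    """Expected Asian DNA fraction, starting from an F1 hybrid (1/2 Asian)
    backcrossed to American chestnuts the given number of times."""
    return 0.5 ** (backcrosses + 1)

for n in range(4):
    print(f"after {n} backcross(es): {asian_fraction(n):.4f}")
# Three backcrosses after the initial hybrid cross yield the
# one-sixteenth (0.0625) Asian fraction described above.
```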

A diagram of backcrossing experiments. American Chestnut Foundation

That one-sixteenth of Asian chestnut DNA may not sound like a lot, but it is. “There are thousands of genes in there,” says Powell. For all we know, some of those genes may impair the success of chestnuts in American forests. “It’s better to be precise about the genes you put in,” Powell argues. Working with the American Chestnut Foundation, he and his colleagues have developed a surgical approach to breeding resistant chestnut trees.

In 1990, Powell and some colleagues started investigating how to move single genes into American chestnuts. It took years to get the project off the ground. You can’t insert genes into a tree simply by sticking a needle into a trunk. Genes can only be inserted into individual cells. So Powell and his colleagues had to figure out how to rear chestnuts in their lab.

Some plants can survive as cells in a lab indefinitely. But chestnuts are not one of those plants. Powell and his colleagues found that they had to combine pollen and ovules to produce embryos. With just the right concentration of hormones, the embryos bud off more embryos, which bud off embryos in turn. The scientists can then pick off individual embryonic cells, insert genes into them, and then grow the cells into full-blown chestnut trees.

After figuring all of this out, the scientists began to search for genes to insert into the chestnut cells. At the time, no one had mapped Chinese chestnut genes, so Powell and his colleagues turned to better-studied plants. Plant scientists had figured out how wheat fights fungi: it makes enzymes that chop up oxalic acid into harmless byproducts. Powell and his colleagues inserted the wheat gene for the enzyme into chestnut cells and then grew the cells into trees.

At first the enzyme wasn’t much help, so the scientists fine-tuned the genes so that the chestnuts made more of it. The more of the enzyme the trees made, the better they fought the chestnut blight. The scientists eventually produced trees that could limit the cankers and heal them over.

William Powell inspects American chestnuts with blight-inhibiting enzymes. Photo: Syracuse University

Last spring, the New York Botanical Garden planted a few of the chestnuts for public display. (You can see the video of the ceremony here.) You can go to the garden now and see for yourself that the trees are growing and thriving, despite being exposed to chestnut blight spores wafting by. “We want to do everything transparently,” says Powell. “We don’t want people to think we’re hiding anything here.”

It may be five years or longer before these trees start growing in the wild. Powell and his colleagues need to spend a couple more years collecting data before submitting an application to the U.S. Department of Agriculture, and then the Environmental Protection Agency has to sign off on the project. Even the Food and Drug Administration will have to get in on the act, because the trees will produce nuts that people might eat.

But the trees growing in the Bronx are not the final version Powell hopes to see reviving America’s forests. It’s now finally possible for him and his colleagues to explore the Chinese chestnut tree genome, and so they’ve started  hunting for blight resistance genes. One gene for chopping up oxalic acid won’t be enough to provide full resistance, Powell suspects. He’s pretty sure that Chinese chestnut trees have evolved a number of genes that together render the blight harmless.

Adding in extra genes is essential, Powell believes, because the chestnut blight is not a fixed target. It is evolving, and it will probably be easy for it to evolve its way around just one line of defense. Each tree will need to be equipped for many attacks from evolved pathogens over the course of its lifetime, which can be as long as a century. Powell suspects a few genes will provide a durable defense, but he can’t say for sure which genes those are. So far, he and his colleagues have identified a list of candidate genes in Chinese chestnuts. “We’ve narrowed it down to about 900 now,” Powell told me with a laugh.

If, a century from now, Powell’s chestnuts tower once again over the eastern United States, how will we think of those forests? Will we think of them as nature restored to its former glory, ecosystems thriving once more? Or will we think of them as unnatural, the product of human tinkering? Or both? Given the past century of struggle to save the chestnut, the choice here is not natural versus unnatural. It’s chestnuts versus no chestnuts. “It’s not going to fix itself,” says Powell.


(Update, 3/12: I got a good question on Twitter when I pointed readers to this post:

There are indeed companies developing patented genetically modified trees. This article in the Guardian in November describes a company that has produced a eucalyptus tree that can grow faster and produce more wood, which could be raised in plantations. Environmental groups like the Sierra Club have criticized this research because of the potential environmental damage it could cause, and have called for a moratorium.

Powell and his colleagues have not patented their chestnut trees, however, nor do they have any plans to do so. As I wrote above, they’re searching for additional genes for resistance, and they’ve avoided patented ones as much as possible. Once they have figured out which genes they need to use, they will do a complete patent search. If the genes do turn out to be patented, they’ll ask the patent holders for a license for free use. “I view this as a not-for-profit endeavor,” says Powell.)


Powell and I will both be among the speakers at TEDxDeExtinction, taking place at the National Geographic Society in Washington DC this Friday. You can buy tickets to the all-day event here, or watch it livestreamed for free here. My story for National Geographic will be available online on Friday as well. For more information, visit National Geographic’s DeExtinction Hub.

Want to Get 70 Billion Copies of Your Book In Print? Print It In DNA

I have been meaning to read a book coming out soon called Regenesis: How Synthetic Biology Will Reinvent Nature and Ourselves. It’s written by Harvard biologist George Church and science writer Ed Regis. Church is doing stunning work on a number of fronts, from creating synthetic microbes to sequencing human genomes, so I’m definitely interested in what he has to say. I don’t know how many other people will be, so I have no idea how well the book will do. But in a tour de force of biochemical publishing, he has created 70 billion copies. Instead of paper and ink, or PDFs and pixels, he’s used DNA.

Much as PDFs are built on a digital system of 1s and 0s, DNA is a string of nucleotides, which can be one of four different types. Church and his colleagues turned the whole book, including illustrations, into a 5.27-megabit file, which they then translated into a sequence of DNA. They stored the DNA on a chip and then sequenced it to read the text back. The book is broken up into little chunks of DNA, each of which carries a portion of the book itself as well as an address indicating where it belongs. They recovered the book with only 10 wrong bits out of 5.27 million. Using standard DNA-copying methods, they duplicated the DNA into 70 billion copies.

Scientists have stored little pieces of information in DNA before, but Church’s book is about 1,000 times bigger. I doubt anyone would buy a DNA edition of Regenesis on Amazon, since they’d need some expensive equipment and a lot of time to translate it into a format our brains can comprehend. But the costs are crashing, and DNA is a far more stable medium than the hard drive on your desk that’s just waiting to die. In fact, Regenesis could endure for centuries in its genetic form. Perhaps librarians of the future will need to get a degree in biology…


(Link to Church’s paper)

Photo by Today is a good day – via Creative Commons


Synthetic XNA molecules can evolve and store genetic information, just like DNA

Out of all the possible molecules in the world, just two form the basis of life’s grand variety: DNA and RNA. They alone can store and pass on genetic information. Within their repetitive twists, these polymers encode the stuff of every whale, ant, flower, tree and bacterium.

But even though DNA and RNA play these roles exclusively, they’re not the only molecules that can. Vitor Pinheiro from the MRC Laboratory of Molecular Biology has developed six alternative polymers called XNAs that can also store genetic information and evolve through natural selection. None of them are found in nature. They are part of a dawning era of “synthetic genetics”, which expands the chemistry of life in new uncharted directions.


Parasite mind-control, ebooks, and killer flu: My first Google+ Hangout video

One of the most interesting features of Google’s new social media service, Google+, is Google+ Hangout On Air. A group of people get onto G+ all at once, fire up their computers’ cameras, and have a conversation. Google puts whoever is speaking at the moment on the main screen. You can join a hangout if it’s public or if you have an invitation, and–coolest of all–it automatically records the conversation and throws it onto YouTube.

Right now only a few people have access to this service. I jealously watched fellow Discover blogger Phil Plait talk about exoplanets last month. (You can too.) And then I got invited to join the folks at the Singularity Hub for a hangout, too. It’s up on YouTube, and you can also see it embedded here below. We talked about all sorts of things–from mind-controlling parasites to bird flu to using viruses to cure antibiotic-resistant bacteria to the future of ebooks and much more.

I deeply crave this technology. I used to participate in a primitive forerunner of this, known as Bloggingheads. I bowed out due to editorial differences, but I still think the basic system is an exciting medium. I hope Google opens up their Hangout On Air service to more people, because it could be a whole lot of fun.

Flu Fighters

Michael Osterholm, his face a pink-cheeked scowl, looked out across the table, beyond the packed room at the New York Academy of Sciences, and out through the windows. The New York Academy of Sciences is housed on the fortieth floor of 7 World Trade Center, and its endless bank of windows affords a staggering view of Manhattan, Brooklyn, and New Jersey. One reason that its view is so magnificent is that there’s a huge gap in the skyline–and a huge gouge in the ground–where the Twin Towers once stood.

Osterholm had come here from Minnesota, where he runs a research center for infectious diseases and terrorism, to talk Thursday night about the threat of a new kind of flu sitting in labs in the Netherlands and Wisconsin. In nature, it’s a flu that spreads easily between birds but doesn’t travel well from human to human. The Dutch and Wisconsin scientists had found ways to get this bird flu, known as H5N1, to move between ferrets. For Osterholm, ferrets were uncomfortably close to humans on the evolutionary tree. And so he, along with other members of an advisory board, issued a recommendation in December that key information in the papers about the research should be left out.

Osterholm looked out at the empty space beyond the windows. “Who would have imagined that you could use box cutters to take down the World Trade Center?” Osterholm asked. The risk from the new bird flu might seem equally unlikely, he warned, but it could end up being far more devastating. “We can’t afford to be wrong.”

The bird flu controversy first started to bubble up in September, when Ron Fouchier of the Erasmus Medical Center in Rotterdam described some of his unpublished results at a scientific meeting in Malta. It kicked into high gear when the National Science Advisory Board on Biosecurity issued their ruling, which Fouchier and Yoshihiro Kawaoka have agreed to. In January, the researchers agreed to stop doing any H5N1 research for two months, during which time the scientific community would try to come up with a plan about how to deal with such controversial research.

Viruses very often spark controversies, but usually the controversy is between the scientists who study them and groups of people beyond the academy. Think of HIV denialism, of the non-existent link between vaccines and autism, of the purported connection between the XMRV virus and chronic fatigue syndrome. The new bird flu controversy is different. It has split the scientific community wide open. I’ve written about this controversy in recent weeks over at Slate, as well as here at the Loom. Like most reporters covering the story, I’ve sampled the sharply opposing viewpoints of scientists over the phone or via email. But on Thursday night, we got to see this debate in person. The New York Academy of Sciences brought together a group of experts to talk about the new virus, and whether self-censorship is a prudent protection or a dangerous precedent. I wasn’t sure what to expect; I was a bit worried it might turn out to be a fairly dry discussion of how to inspect the hood equipment in virus labs. Instead, we witnessed an explosive confrontation between scientists who think we may be facing a world-destroying catastrophe, and others who think our fear of non-existent threats is going to destroy science’s power to help us out of clear and present dangers.

The panel included two members of the National Science Advisory Board on Biosecurity: Michael Osterholm and Arturo Casadevall of Albert Einstein College of Medicine. They both made it clear that they were speaking at the meeting as individuals, rather than as official spokesmen for the board. But they presented a fairly united front. The board has been around for eight years, and it has only considered issuing a recommendation twice. The first time was in 2005, when scientists unearthed the bodies of victims of the 1918 flu epidemic, which killed an estimated 50 million people. The researchers isolated the 1918 virus and sequenced its genes. The board decided they had no objections about publishing the research. But six years later, they decided that, as bad as the 1918 flu might have been, the risk of an H5N1 outbreak was worse.

One big factor in their recent decision was the mortality rate when H5N1 gets into people. The World Health Organization’s official estimate is 60%. The 1918 flu, by contrast, had a death rate of about two percent. If H5N1 could gain the ability to spread among humans–either naturally, or through a lab experiment–it could bring that fearsome death rate to the entire world. “It’s the lion king of infectious diseases,” Osterholm said, no doubt dismaying Disney lawyers across the country.

Sitting a few seats down the panel from Osterholm was Peter Palese, one of the world’s leading experts on flu, who works at Mount Sinai School of Medicine. Palese disputed Osterholm’s apocalyptic warnings. Where Osterholm burned hot, Palese kept cool, but he did not hide his utter rejection of the board’s decision. Just because a flu virus can be transmitted by another mammal species, he argued, doesn’t automatically mean it can spread among humans. In fact, ferrets are rather delicate in the face of a flu infection, easily suffering brain damage. Our closer relatives among the primates, by contrast, don’t get sick from flu at all. (Jon Cohen explores the ferret question in depth in a news article for Science.)

Palese also questioned whether H5N1 is all that dangerous. He argued that the World Health Organization based its mortality rate only on the people who came into hospitals and tested positive for H5N1. But this particular strain of bird flu mostly strikes people in poor countries, especially in southeast Asia, where medical services are scarce. The people who make it to a hospital could well be a small fraction of all the people who come down with H5N1.

“The asymptomatic people are not being counted,” Palese said. If those extra people only got sick for a few days and then got on with their lives, the true mortality rate might be far less than 60%. “It’s really much lower,” he said, pointing to surveys in Thailand and other countries that revealed evidence that a fair number of people had been exposed to H5N1 at some point in the past. (Palese recently published this same argument in the Proceedings of the National Academy of Sciences.)

This argument positively enraged Osterholm. He had clearly read Palese’s recent PNAS commentary and had prepared a rebuttal. “What you’re saying is just propaganda,” he told Palese. The trouble with Palese’s numbers was that they came from lousy studies, Osterholm argued. There are many ways to overestimate how many people have been exposed to a particular virus. A common test involves fishing for antibodies in blood samples. If your test isn’t precise enough, you may end up dredging up antibodies to other viruses. Osterholm had gone through surveys of H5N1 exposure, setting aside the lousy studies and tallying up the results from the best of the bunch. He came up with an estimate of 0.6% or less. If very few people have been exposed, the recorded deaths from H5N1 represent a frighteningly high rate.
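The whole dispute comes down to the denominator, and a quick back-of-the-envelope calculation shows why. The numbers below are illustrative placeholders, not real WHO case counts:

```python
# Why the exposure estimate matters so much: the case fatality rate
# depends entirely on how many infections went uncounted.
# The deaths/confirmed figures here are made up for illustration.

def mortality_rate(deaths, confirmed_cases, undercount_factor=1.0):
    """Fatality rate if true infections = confirmed cases * factor."""
    return deaths / (confirmed_cases * undercount_factor)

deaths, confirmed = 300, 500
# Counting only confirmed hospital cases, as in the WHO-style estimate:
print(f"{mortality_rate(deaths, confirmed):.0%}")      # prints 60%
# If mild, uncounted infections were 100 times the confirmed cases,
# the true rate would be 100 times lower:
print(f"{mortality_rate(deaths, confirmed, 100):.2%}")  # prints 0.60%
```

This is the crux of the panel's argument: Palese believes the undercount factor is huge, while Osterholm's survey of the better studies suggests it is close to 1.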

Casadevall granted that perhaps H5N1 wasn’t 60% fatal. But it could be half that and still be a planetary nightmare. Even if it was ten times lower, it would still be far worse than the 1918 flu. “The numbers are unbelievable, any way you look at it,” he said.

Palese was unmoved. The new H5N1 viruses might pose a risk–a small one, in Palese’s mind–but scientists could handle it. All the research that had triggered the controversy wasn’t conducted in someone’s backyard. It was carried out in well-protected labs. Palese noted that the board doesn’t seem to have any objections to the work that’s done these days on smallpox, a virus that killed millions of people every year until it was eradicated in the 1970s. If scientists can in fact safely experiment with dangerous viruses, there is no need to paralyze the scientific community over bird flu. “You can always assume the worst,” Palese said. “But where do we stop being afraid?”

Osterholm glowered at Palese. “You do not represent the mainstream of influenzologists when it comes to this issue on influenza,” he said. I glanced at some of the other journalists in the audience, wondering if Osterholm could see us scribbling notes.

Osterholm stressed that he was not against research on bird flu in general. He just wanted the scientific community to balance the potential costs and benefits. He didn’t see very much significance in the new bird flu work. It wouldn’t help public health workers monitoring H5N1 viruses for lineages that might be evolving into a human pathogen. Nor did he see any benefit for developing vaccines or antivirals. On the other hand, he saw a risk–a small one, possibly–of tremendous devastation.

But when it comes to viruses, can we really calculate such ratios of costs to benefits? Vincent Racaniello, a Columbia University virologist who was also on the panel, doesn’t think so. We’re bad at estimating risks. In 1981, for example, Racaniello and his colleagues pioneered a method for making polio viruses: they stuck the virus’s genes on a ring of DNA called a plasmid, which they then inserted into E. coli bacteria. The engineered E. coli spewed out polio genes, which Racaniello could insert into human culture cells, which then made full-blown polio viruses. People worried that Racaniello’s bacteria would get into people’s guts and start a polio epidemic. (It didn’t.)

We’re also bad at determining the benefits of research. Racaniello recalled how microbiologists in the 1950s discovered that E. coli defend themselves against invading viruses by chopping up their genes. Nobody thought much of that discovery for over a decade. But then in the late 1960s, a few researchers realized that they could use E. coli’s enzymes to cut DNA into pieces and then paste them into new combinations. The entire biotechnology industry was born from that late eureka.

“You could have never predicted that,” said Racaniello. “You never know who will do the right experiment. So that’s why you need to give the information to everyone.”

The way things stand right now, everyone will not be getting that information. I tried to follow the reasoning for holding back key parts of the studies, but, honestly, I can’t recount it in a way that makes sense. As far as I could tell, the thinking was that somebody just fooling around out of curiosity would be able to use the full information to create a deadly flu. But the fact is that the scientists who produced the new bird flu used standard methods that have been published many times over. I was also confused by how Nature and Science, the two journals where the redacted papers are to be published, will handle distributing the information to those who need to know about it. An editor from Nature talked about how hard it would be to set up a system. I had been expecting them to have a system to unveil for us.

“None of us ever wants to see a redaction again,” said Casadevall. The most sensible way to avoid that would be to figure out a way to make decisions about risks and benefits much earlier in the life cycle of an experiment. If the mission of an experiment is to create a deadly virus, just to see if it can be done, the panelists agreed that that is probably not a study to run. But what kind of system can stop not just these experiments, but other experiments that might present unexpected dangers? Casadevall worries that every graduate student may have to fill out 100-page forms for even the most harmless of experiments. “You’ll kill science,” he said.

Casadevall was expressing a concern that all the scientists on the panel shared: they worry that this affair will keep them from doing research. For now, they’re trying to work out a fairly self-regulating system to handle this sort of controversial research, perhaps in the hopes that the government won’t come sweeping in. But there was one non-scientist on the panel who did her best to make the scientists aware of the world outside their community.

Laurie Garrett, an award-winning health reporter who now works at the Council on Foreign Relations, pointed out that the flu is not just something that American scientists study in their labs. It’s a global problem. There’s a huge amount of resentment in poor countries where bird flu is the biggest threat, not just to humans, but to the poultry industry. “Poor people are killing their chickens for you,” Garrett said. “They’re going bankrupt.”

Making matters worse, as Garrett has recently written, is the distrust that has developed in the developing world towards Western medical research and the pharmaceutical industry. Indonesia, where many of the H5N1 deaths have occurred, has been reluctant to share bird flu samples with Western scientists, for fear that they would make huge profits from vaccines developed from them. The World Health Organization has set up an international agreement for the exchange of wild bird flu strains between different countries, but it’s in fragile shape.

So for all the sparks that flew in New York Thursday night, the real fireworks over the flu are yet to come.


[Update 2/3 9 am: Corrected description of Racaniello’s experiment. Thanks to Matt Frieman. 2:50 pm: Fixed Fouchier’s institution name and month of his talk. Thanks to Jon Cohen. 8 pm: Expanded Osterholm’s “mainstream of influenzologists” quote after seeing his objection to a similarly truncated version in Christine Gorman’s story for Scientific American and reviewing my own recording. It’s a valid clarification.]

Life with a capital L? (Like Zimmer with a capital Z?)

Over on Facebook, David Hillis, an evolutionary biologist at the University of Texas, took up my question as to whether anyone can define life in three words. His short answer was no, but his long answer, which I’ve stitched together here from a series of comments he wrote, was very interesting (links are mine):

Like all historical entities (including other biological taxa), it is only sensible to “define” Life ostensively (by pointing to it, noting when and where it began, and following its lineages from there) rather than intensionally (using a list of characteristics). This applies to the taxon we call Life (hence capitalized, as a formal name). You could define a class concept called life (not a formal taxon), but then that concept would clearly differ from person to person (whereas it is much less problematic to note examples of the taxon Life). So, I’d say that I can point to and circumscribe Life, and that is the appropriate way to “define” any biological taxon. A list of its unique characteristics is then a diagnosis, rather than a definition. So, I’d argue that any intensional definition of Life is illogical (does not recognize the nature of Life), no matter how many words are used.

Defining Life (the taxon) is like defining other particular historical entities. We don’t “define” Carl Zimmer or the United States of America by listing out their attributes. Instead, we point to their origin and history. The same should be true for Life. If we ever discover a Life2, we’ll have a new origin and history to point to.

The question people actually want to ask is “Are there entities in the universe that are similar to the Life we know about here on Earth?” The answer, of course, depends on what people mean by the arbitrary meaning of “similar”. One person might answer “I mean ‘self-replicating with variations’.” Then, the answer is yes: humans have created imperfectly self-replicating systems (“artificial life”) here on Earth. But then someone else says “But that is not what I meant by similar…I meant that they had to have metabolism and cellular structure and a nucleic-acid-based genetic system.” OK, then we have to keep looking to find something that similar. But then someone else says “But that’s pretty arbitrary…I’d still consider it alive if it didn’t have cellular structure.” Exactly…it is indeed arbitrary to argue over how similar something has to be to consider it “similar” to Life. So, in the end, we can ostensively define Life (by referencing its origin and history), and we can do the same for other historical entities that some people might also want to say are alive, but there can be no simple “right” answer that will satisfy everyone about which entities should be considered alive, because we all emphasize different characteristics in defining an arbitrary class concept of “life”.

Can you define life in three words?

We are all sure we know what life is, but if you try to actually define it, things get tricky fast. I wrote a feature about the scientific struggle to define life in 2007 for Seed, and I’ve been keeping tabs on the evolution of this metaphysical quandary ever since. I was particularly intrigued to discover recently that one scientist thinks he can define life–and do so in just three words. I’ve written an essay about his short and sweet definition for the web magazine Txchnologist. Check it out.

The two faces of E. coli: my article in Newsweek and interview with the BBC

On Friday, as the E. coli outbreak gained horrific speed in Germany, Newsweek asked me to write about how this epidemic came to be. Scientists still have a lot to figure out about it, but some things are clear–in particular, that the bacteria have great scope for evolution into new deadly strains, thanks in part to the shuttling of viruses between them. (In my book Microcosm, I explain how this is true not just for E. coli, but for much of life.) My piece appears in the new issue of Newsweek, which you can read online here. (One late-breaking piece of news that didn’t make it in, by the way, is the finding yesterday that the new outbreak appears to have come from bean sprouts.)

While I was working on my Newsweek piece, a reporter for the BBC called me up for an article on the good side of E. coli. I explained how much of what we understand about life itself came out of research on this typically harmless bug, and that the biotechnology industry was built upon its biology. That piece came out over the weekend. Check it out.

[Image: glass microbe by Luke Jerram]

Tomorrow: Synthetic Biology lecture in Manchester, Connecticut

If you live in central Connecticut, please consider coming to my public lecture tomorrow (Wednesday 4/12). It’s entitled, “Synthetic Biology: Playing God or Harnessing Nature?” The talk is sponsored by the Connecticut Association of Biology Teachers, the Connecticut Valley Branch of the American Society for Microbiology, and Manchester Community College.

Here are the details:

Where: Manchester Community College, Great Path Academy Building, Community Commons. (Here are directions and maps.)

When: 5:30 pm, Wednesday, April 12

More information here.

Copyright law meets synthetic life meets James Joyce

Last year I wrote about how Craig Venter and his colleagues had inscribed a passage from James Joyce into the genome of a synthetic microbe. The line, “To live, to err, to fall, to triumph, to recreate life out of life,” was certainly apropos, but it was also ironic, since it is now being defaced as Venter’s microbes multiply and mutate.

Turns out there’s an even weirder twist on this story. Reporting from SXSW, David Ewalt writes about a talk Venter just gave. Venter recounted how, after the news of the synthetic microbe hit, he got a cease-and-desist letter from the Joyce estate. Apparently, the estate claimed he should have asked permission before copying the language. Venter claimed fair use.

Man, do I wish this would go to court! Imagine the legal arguments. I wonder what would happen if the court found in the Joyce estate’s favor. Would Venter have to pay for every time his microbes multiplied? Millions of little acts of copyright infringement?

[Update: Looks like it wasn’t actually a cease-and-desist letter the Joyce estate sent–more an expression of disappointment. Ah, life’s grand game of telephone. Joyce would have loved it. After all, he was the sort of novelist who’d write:

“What has she in the bag? A misbirth with a trailing navelcord, hushed in ruddy wool. The cords of all link back, strandentwining cable of all flesh. That is why mystic monks. Will you be as gods? Gaze in your omphalos. Hello. Kinch here. Put me on to Edenville. Aleph, alpha: nought, nought, one. Spouse and helpmate of Adam Kadmon: Heva, naked Eve. She had no navel. Gaze. Belly without blemish, bulging big, a buckler of taut vellum, no, whiteheaped corn, orient and immortal, standing from everlasting to everlasting”]

DIY Tumors

Last month I wrote a piece for the New York Times about what ten scientists are looking forward to in 2011. One of the scientists, Rob Carlson, saw garage stem-cell research in our near future:

“It seems pretty likely within this year someone will show how to go from an adult peripheral blood draw to pluripotent stem cells. It means anyone who wants to try to make stem cells will be able to give it a whirl.”

Carlson took to his own blog to write at more length about what exactly he meant. For one thing, stem cell biohackers may want to think twice before sticking stem cells in their own bodies. They could end up with what Carlson calls DIY tumors. Check it out.