Way before people started inhaling nicotine in cigarette smoke, plants were using the chemical to defend themselves from animals. Nicotine is a poison, and an exceptionally deadly one. It targets proteins that tell our muscles to fire when they receive signals from our nerves. At high enough doses of nicotine, these proteins force muscles to constantly contract, leading to paralysis and death. And since the same proteins are found in every animal with muscles, nicotine can kill cows and caterpillars alike.
The tobacco hornworm is an exception. As a caterpillar, this moth specialises in eating tobacco leaves, because it can cope with doses of nicotine that would kill other species. It gets rid of most of the poison in its waste but, adding insult to injury, it also co-opts a small fraction for its own protection.
Pavan Kumar and colleagues from the Max Planck Institute for Chemical Ecology in Germany have shown that it exhales the poison through pores in its skin, creating a toxic miasma that deters hungry spiders. They call it a “defensive halitosis”.
In 2010, Kumar’s team, led by Ian Baldwin, raised tobacco hornworm caterpillars on genetically modified tobacco that doesn’t make much nicotine. They found that a gene called CYP6B46 was less active than usual in the guts of these insects, suggesting that it’s usually involved in resisting the effects of nicotine.
To test this idea, the team engineered tobacco plants that could deactivate the gene in any caterpillars that fed upon them, and planted them at a private ranch in Utah’s Great Basin Desert. They waited, and watched.
Soon, they noticed that hornworm caterpillars were more likely to die during the night if they ate the modified plants. A few nocturnal surveys revealed the cause of their deaths—wolf spiders. These powerful, fast-running hunters usually pose no threat to hornworms that eat nicotine-rich meals. However, they readily killed any caterpillars that ate the modified tobacco and had inactivated CYP6B46 genes. Why?
The answer seemed obvious at first. CYP6B46 is part of a large family of metabolic genes, which animals frequently use to detoxify the chemicals in the plants they eat. The team assumed that CYP6B46 was neutralising nicotine by breaking it down into safer substances. But, to their surprise, they couldn’t find any traces of these by-products in the caterpillars’ bodies or faeces.
Instead, they showed that CYP6B46 redirects a tiny amount of nicotine from the caterpillars’ guts to their haemolymph—the liquid that fills their bodies and acts as their bloodstream. From there, the caterpillars can vent the nicotine into the outside world by opening their spiracles—small breathing holes in their flanks, which allow air to enter and leave their bodies.
The caterpillars send just 0.65 percent of the nicotine they eat into their haemolymph. But even this tiny amount is enough to quadruple the concentration of nicotine in the air around them, creating an effective anti-spider spray.
When a wolf spider attacks, it first inspects its prey with chemically sensitive appendages. Here’s what happens when it approaches a caterpillar with a nicotine cloud.
And here’s what happens when it approaches a caterpillar with an inactivated CYP6B46 gene. The caterpillar can’t shunt nicotine from its gut to its haemolymph and can’t exhale the poison into the surrounding air. It pays the price for it.
The hornworm’s nicotine cloud probably works against other predators too. Earlier studies showed that ants are less likely to attack caterpillars reared on tobacco, and that parasitic wasp larvae are less likely to survive inside the caterpillars’ bodies, presumably because they are directly poisoned by the nicotine in the haemolymph. But the defence isn’t fool-proof. Kumar’s team showed that two predators—big-eyed bugs and antlions—will kill hornworms despite their halitosis. No one knows why.
Chemical theft is fairly common in the animal world, and many caterpillars store defensive poisons from the plants they eat. For example, the eastern tent caterpillar munches on plants that are loaded with hydrogen cyanide, which it then vomits onto marauding ants. But nicotine is too deadly to store. Instead, the tobacco hornworm has evolved a way of getting rid of it, which also doubles as a potent defence.
A couple of weeks ago, All Things Considered asked me to talk about the deaths in 2013 of three Nobel Prize-winning scientists: Francois Jacob, Frederick Sanger, and David Hubel. I had blogged about Jacob’s death in April, and reflecting on his career in conjunction with those of Sanger and Hubel was a thought-provoking experience. In some ways, these three scientists seemed worlds apart–Jacob poring over bacteria feeding on sugar, Sanger tearing apart insulin molecules, and Hubel using electrodes to eavesdrop on neurons in the brains of cats.
But what unites them all, I think, was their ability to use the very simple scientific tools available to scientists in the 1950s to open up vast realms of biological complexity–from the orchestral activity of the genome to the reality-building network of cells in our brains.
Here’s the story that NPR producer Rebecca Hersher put together for last night’s show. I’ve embedded it below:
Of course, there would have been plenty to say about many other troikas of scientists who passed away this year. On Twitter, ecologist Jacquelyn Gill reminded me of the pioneering ecologist Ruth Patrick, for example. Neuroscientist John Kubie pointed me to his homage to Robert Muller, who did ground-breaking work on memory. The Scientist has a longer list on their blog. While we mourn their loss, science preserves their memory in the research that goes on today, made possible by their earlier work.
I’ve been reading and thinking a lot lately about the process of aging. Many scientists who study it argue — quite convincingly — that it’s the most important scientific topic of our time. In his 1997 bestseller Time of Our Lives, biological gerontologist Tom Kirkwood writes that the science of human aging is “one of the last great mysteries of the living world.”
Over the past century, Kirkwood notes, developed countries have used preventative and offensive tactics to slash infant mortality, smoking, and accident rates, and to conquer most infections. In the 1880s, the top causes of death were respiratory diseases (like tuberculosis and influenza) and digestive diseases (like cholera and typhoid), and life expectancy was around 46 years. Today, we’re living three decades longer and dying of illnesses — such as cancer, stroke, and dementia — that most of our ancestors didn’t grow old enough to get.
Perhaps because people are living longer and longer, we tend to think about aging as a modern phenomenon. “Data from the Census Bureau tell us that there are currently around 39 million Americans age 65 and older, up from 25.5 million just 30 years ago,” notes the website of the National Institute on Aging. “This population explosion is unprecedented in history, and the resulting demographic shift is causing profound social and economic changes.”
Though it may be getting a surge of scientific and cultural attention, aging isn’t a new problem. Far from it: Philosophers have been fretting over old age for thousands of years, asking essentially the same thorny, metaphysical questions that get asked today. This became obvious to me this weekend while reading The Nature of Man, a fascinating and surprisingly eloquent book published in 1903 by Russian biologist Élie Metchnikoff.
The book’s basic premise — that science and reason can lead to optimism and happiness, despite religious arguments to the contrary — is interesting in its own right. And I’ll get into how it relates to aging. But Metchnikoff’s argument is even more interesting if you know a bit about his personal life.
When he was 18 years old, Metchnikoff married a woman with tuberculosis. She was sick enough on their wedding day to be carried to the church, and stayed sick for the next decade before dying in 1873. Devastated, Metchnikoff tried to kill himself with an opiate overdose. He married again in 1875, and five years after that his second wife caught typhoid fever. She almost died, and Metchnikoff again attempted suicide.
Metchnikoff’s depression lifted in 1883 with the discovery that would make him famous (and later earn him the Nobel Prize). He was the first to identify phagocytes, cells of the immune system that engulf and destroy invading microbes. He became friendly with Louis Pasteur, whose discoveries of microbes and vaccines had prevented all kinds of sickness and death. In 1888, Metchnikoff was given an appointment at Pasteur’s prestigious research institute, in Paris, where he worked until his death in 1916.
Given Metchnikoff’s life experiences, you can understand why he may have felt reverence and gratitude for science. This was the era, after all, when scientists like Metchnikoff and Pasteur and many others were figuring out how pathogens worked and, with that scientific understanding, developing methods to fight them off.
The premise of Metchnikoff’s The Nature of Man is underscored in its subtitle, “Studies in Optimistic Philosophy.” In it Metchnikoff explains his optimism not only about science’s ability to fight disease, but to ward off a much more menacing threat: aging.
The inevitable decline of aging, Metchnikoff notes, has long pushed people away from science and into the consoling hug of religion. He cites a 2,000-year-old sermon by the Buddha: “Behold, O monks, the holy truth as to suffering. Birth is suffering, old age is suffering, disease is suffering, and death is suffering.” And he notes the same fatalism in a slew of writings across the centuries, from Ecclesiastes (“He that increaseth knowledge, increaseth sorrow”) to Shakespeare (“Conscience does make cowards of us all”), Jean-Jacques Rousseau (“Know O people that nature has desired to preserve you from science as a mother tries to snatch a dangerous weapon from the hands of a child”), and Ferdinand Brunetière (“Science is powerless to resolve the sole problems that are essential, that concern the origin of man, the rules for his conduct, and his future destiny”).
But Metchnikoff seems especially irked by fellow Russian Leo Tolstoy’s thoughts on the inadequacies of science — perhaps because Tolstoy also struggled with depression and suicidal thoughts. Here’s a long snippet from his A Confession, published in 1884, in which he describes how he was satisfied with science until he began feeling the decline of old age and the reality of his own death:
My question — that which at the age of fifty brought me to the verge of suicide — was the simplest of questions, lying in the soul of every man from the foolish child to the wisest elder: it was a question without an answer to which one cannot live, as I had found by experience. It was: “What will come of what I am doing today or shall do tomorrow? What will come of my whole life?”
Differently expressed, the question is: “Why should I live, why wish for anything, or do anything?” It can also be expressed thus: “Is there any meaning in my life that the inevitable death awaiting me does not destroy?”
… From early youth I had been interested in the abstract sciences, but later the mathematical and natural sciences attracted me, and until I put my question definitely to myself, until that question had itself grown up within me urgently demanding a decision, I contented myself with those counterfeit answers which science gives.
Now in the experimental sphere I said to myself: ‘Everything develops and differentiates itself, moving towards complexity and perfection, and there are laws directing this movement. You are a part of the whole. Having learnt as far as possible the whole, and having learnt the law of evolution, you will understand also your place in the whole and will know yourself.’ Ashamed as I am to confess it, there was a time when I seemed satisfied with that. It was just the time when I was myself becoming more complex and was developing. My muscles were growing and strengthening, my memory was being enriched, my capacity to think and understand was increasing, I was growing and developing; and feeling this growth in myself it was natural for me to think that such was the universal law in which I should find the solution of the question of my life.
But a time came when the growth within me ceased. I felt that I was not developing, but fading, my muscles were weakening, my teeth falling out, and I saw that the law not only did not explain anything to me, but that there never had been or could be such a law, and that I had taken for a law what I had found in myself at a certain period of my life. I regarded the definition of that law more strictly, and it became clear to me that there could be no law of endless development; it became clear that to say, ‘in infinite space and time everything develops, becomes more perfect and more complex, is differentiated’, is to say nothing at all. These are all words with no meaning, for in the infinite there is neither complex nor simple, neither forward nor backward, nor better or worse.
Science and rationality, Tolstoy continued, are what make life insufferable. The only way to survive is to give in to an irrational faith: “Whatever the faith may be, and whatever answers it may give, and to whomsoever it gives them, every such answer gives to the finite existence of man an infinite meaning, a meaning not destroyed by sufferings, deprivations, or death.”
Metchnikoff, an atheist, is unsurprisingly critical of this outlook. His book (spoiler alert) doesn’t give the answer to the inevitability of death, but it does offer some hope regarding life’s sufferings and deprivations. Just as science had begun to unravel the mechanisms of microbial disease, Metchnikoff argues, so could it find the biological underpinnings of aging. And if aging could be understood, then its painful manifestations could be slowed, or even stopped. Released of the pain of growing old, there’d be no reason for anyone to be fearful or pessimistic about life, no reason to want to leave this earth.
It turns out that Metchnikoff’s specific ideas about what causes aging didn’t pan out.* But his ultimate claim — that science can help more of us live longer, and with less pain — has proven true, as evidenced by the last century’s rise in life expectancy and the increasing numbers of very old people. Obviously, today’s scientists have not yet figured out how to dramatically slow or stop the aging process. But there’s no inherent reason to think they won’t get there eventually — just as Pasteur’s work on microbes paved the way for treatments for the tuberculosis and typhoid that struck Metchnikoff’s wives.
“Scientists are accustomed to exploring the unknown,” Kirkwood writes in Time of Our Lives. “It serves no useful purpose to pretend that the deep secrets of aging will come easily. But the more we learn, the more reliably we will be able to anticipate future discoveries.”
*Metchnikoff believed that aging was caused in part by the distribution of gut bacteria, which, he wrote, “contributes nothing to the well-being of man” and “is the source of many poisons harmful to the body.” He drank sour milk every day, claiming that the lactic acid it contained would kill harmful gut bacteria. Some thirty years later, inspired by Metchnikoff’s book, Japanese scientist Minoru Shirota created a drink, called Yakult, made of a cultured strain of lactic acid bacteria. It was the world’s first commercial probiotic.
For my new “Matter” column for the New York Times, I take a look at a new idea to explain that mystery between our ears. Our brains are enormous for our body size, and our minds are capable of extraordinary feats of cognition. Two neuroscientists have offered up a hypothesis that links these two facts, suggesting how an increase in brain size could have led to a change in how the brain is networked. Check it out.
You may also want to check out P.Z. Myers’s critique of the “tether hypothesis” on his blog Pharyngula. He raises some important questions about the idea, based on his own experiences as a neuroscientist. I’m puzzled, though, why he decided to kick it off with this swipe at me:
I suppose it helps to be at Harvard. It also helps to have a combination of subjects — evolution and the human brain — that Zimmer has written about in the past. It helps to have a paper with lots of very pretty diagrams — the authors’ hypothesis is professionally illustrated. It’s also a good idea to have a vast sweeping explanation for the exceptionalism of the human brain…You know what you don’t need? Data, or a hypothesis that makes sense.
I had no idea that Harvard held such sway over my feeble powers of judgment. Or that I am so vulnerable to pretty pictures.
What I thought happened was this: the tether hypothesis comes from Randy Buckner and one of his postdoctoral researchers, Fenna Krienen. I was long familiar with their work on mapping human brain networks, having visited them a few years ago when I wrote a story about the aging brain. Buckner was new to Harvard when I visited him, having made a name for himself beforehand at Washington University–which mysteriously failed to prejudice me against him.
After my visit, Buckner and his colleagues went on to do other important studies on the structure of the human brain, which they published in leading neuroscience journals. When I saw Buckner and Krienen’s new paper in the journal Trends in Cognitive Sciences, I did not, in fact, say, “Ooh, pretty pictures, ooh Harvard!” I said, “Scientists with a proven track record expanding their work on human brains to a comparison to other species. Interesting.”
Since I’m a journalist and not a neuroscientist, I also contacted outside experts. For example, I contacted Chet Sherwood of George Washington University. Now, I suppose Myers would think I’d be scared away because Sherwood isn’t at Harvard, but I actually am capable of recognizing that he’s an expert on mammal brain evolution who is familiar with the tether hypothesis–and therefore someone whose opinion should matter to me.
It turned out, as I mention in my article, that Sherwood found the tether hypothesis to be an exciting idea. He is intrigued by how it can potentially explain a lot about the anatomy and function of the human brain in a relatively simple way. That’s the sort of comment that makes me think that a paper would make for an interesting column.
It doesn’t surprise me that another scientist–in this case, Myers–disagrees. That’s how science works; recognizing that, I’ve included plenty of critics in my articles over the years. What does surprise me is that Myers would use a scientific critique to impugn my capacity as a journalist.
By this time of the year, we are drowning in Best Of 2013 lists, featuring the most important events and discoveries of the year. I like to take a different tack here: I’ve deliberately strayed away from things that made other lists, like those from Wired, Nature, Scientific American, Science and, yes, National Geographic. There will be no Voyager, dark matter, olinguitos, BRAIN Initiative, or H7N9. For the same reason, and with sadness, I’m also omitting a few topics that I covered, including the oldest hominin DNA ever sequenced and lab-grown model brains.
Instead, this is a list of my favourite stories, compiled for no other reason than that I loved learning and writing about them. It’s a list of unexpected finds and underappreciated progress. Enjoy.
“The image above is an extreme close-up of a common British insect called a planthopper. You’re looking at it from below, at the point where its two hind legs connect to its body. In the middle, you can clearly see that the top of each leg has a row of small teeth, which interlock together. As the planthopper jumps, the teeth ensure that its legs rotate together and extend at the same time.”
“In the cow genome, one particular piece of DNA, known as BovB, has run amok. It’s there in its thousands. Around a quarter of a cow’s DNA is made of BovB sequences or their descendants. If you draw BovB’s family tree, it looks like you’ve entered a bizarre parallel universe where cows are more closely related to snakes than to elephants, and where one gecko is more closely related to horses than to other lizards… This jumping gene not only hops around genomes, but between them.”
“Animal mucus — whether from humans, fish or corals — is loaded with bacteria-killing viruses called phages. These protect their hosts from infection by destroying incoming bacteria. In return, the phages are exposed to a steady torrent of microbes in which to reproduce. It’s a unique form of symbiosis between animals and viruses.”
“Dominic Clarke and Heather Whitney from the University of Bristol have shown that bumblebees can sense the electric field that surrounds a flower. They can even learn to distinguish between fields produced by different floral shapes, or use them to work out whether a flower has been recently visited by other bees. Flowers aren’t just visual spectacles and smelly beacons. They’re also electric billboards.”
“This is one the most extraordinary and convoluted evolutionary tales that I have ever heard. It’s the origin story of a group of viruses called REVs. It’s the tale of how naturalists and scientists inadvertently created a bird virus out of a mammalian one through zoo-collecting and medical research.” It involves a turkey, a pheasant, vaccines, a vanishing malaria parasite, and a mongoose.
“Women are born with two copies of the X chromosome, while men have just one. This double dose of X-linked genes might cause problems, so women inactivate one copy of X in each cell [using] a gene called XIST (pronounced “exist”). Jun Jiang from the University of Massachusetts Medical School has [now] used XIST to shut down chromosome 21. “Most genetic diseases are caused by one gene, and gene therapies correct that gene,” says Jeanne Lawrence, who led the study. “In this case, we show that you can manipulate one gene and correct hundreds.” It’s chromosome therapy, rather than gene therapy.”
“From all across the galaxy, the light of billions of stars finds its way to Earth, passes through our atmosphere, and enters the eyes of a small South African beetle rolling a ball of dung. The beetle’s eyes are not sensitive enough to pick out individual stars but it can see the Milky Way as a fuzzy stripe, streaking across the night sky. With two of its four eyes, it gazes into the guts of our galaxy, and uses starlight to find its way home.”
“The organism was initially called NLF, for “new life form”. Jean-Michel Claverie and Chantal Abergel, evolutionary biologists at Aix-Marseille University in France, found it in a water sample collected off the coast of Chile, where it seemed to be infecting and killing amoebae. Under a microscope, it appeared as a large, dark spot, about the size of a small bacterial cell. Later, after the researchers discovered a similar organism in a pond in Australia, they realized that both are viruses — the largest yet found.”
“When I first read about thresher sharks as a kid, I imagined that they would swim towards their prey, bank sharply, and lash out sideways with their tails. Instead, here’s what usually happens. The thresher accelerates towards a ball of fish and brakes sharply by twisting its large pectoral fins. It lowers its snout, pitches its whole body forward, and flexes the base of its tail. This slings the tail tip over its head like a trebuchet, with an average speed of 30 miles per hour.”
“Every individual is a hermaphrodite with both male and female genitals. When they have sex, they can simultaneously penetrate each other, with penises that extend to their whole body length. The penises are also forked. [One] branch ends in a fiendish spine called the penile stylet. It stabs straight into the partner’s forehead, and pumps fluid from the prostate gland.”
“Here’s (a concise history of) what happened since a little creature called Alalcomenaeus died: Its body sinks to the ocean floor, gets covered in sediment and slowly turns into a stony fossil. Meanwhile, all the world’s land has time to glom together into a mega-continent called Pangaea before breaking up again. Life, once restricted to the oceans, invades the land. Plants and fungi go first, producing thin coverings of mosses and lichens and eventually giant forests. The insects appear, and take to the skies. Other marine animals evolve familiar traits like bones and jaws, and their descendants diversify across the land. Dinosaurs come, see and conquer, before (mostly) dying out. Mammals get their day and one of them, armed with technology and knowledge, unearths Alalcomenaeus from its ancient resting place in what is now China.
As I said: a vastly, hugely, mind-bogglingly big span of time. Lots happened. And through all of it, the nervous system of this buried animal remained intact.”
“On one hand, [the cause of narcolepsy] seems straightforward: people slowly lose a special group of neurons that produce hypocretin, a hormone that keeps us awake. But what kills the neurons? There’s been a lot of evidence to support the idea [that the immune system is responsible] but a team of scientists from Stanford University have finally found what they describe as a “smoking gun”. People with narcolepsy, and only people with narcolepsy, have a special group of immune cells that targets hypocretin. The study also helps to explain some puzzling quirks about narcolepsy, like why the 2009 swine flu pandemic led to a surge of cases in China, or why one particular vaccine against that strain did the same in Europe.”
“The citrus mealybug looks like a walking dandruff flake, or perhaps a woodlouse that’s been rolled in flour. It’s also the insect version of a Russian nesting doll. If you look inside its cells, you’ll find a bacterium called Tremblaya princeps. And if you look inside Tremblaya, you’ll find yet another bacterium called Moranella endobia. As if this wasn’t complicated enough, some of the molecular machines used in this partnership are built using genetic instructions borrowed from three other groups of bacteria. So, six different branches on the tree of life have come together to allow this three-way partnership to make the nutrients they need!”
“Mountain ranges and rivers can act as physical barriers that separate closely related species and keep them from cross-breeding. But the trillions of microbes in an animal’s guts could have the same role. Robert Brucker and Seth Bordenstein, biologists at Vanderbilt University in Nashville, Tennessee, have found that the gut bacteria of two recently diverged wasp species act as a living barrier that stops their evolutionary paths from reuniting. The wasps have subtly different collections of gut microbes, and when they cross-breed, the hybrids develop a distorted microbiome that causes their untimely deaths.”
“Sick of being caught on the back foot [by new epidemics], one team of scientists is spearheading a new approach to dealing with emerging diseases. The researchers want to catalog every single mammalian virus in the world, before they have a chance to spread to humans. Daszak’s team began by counting all the viruses in a single species, the Indian flying fox. Then, they extrapolated to include all 5,500 mammals, estimating that these animals harbor at least 320,000 viruses waiting to be discovered… “In my lifetime, we might be able to find every mammalian virus that might infect us,” said Daszak. “And once you know your enemies, you can start to do something about them.”
“Psychologists have been sailing through some pretty troubled waters of late. Critics, many of whom are psychologists themselves, say that these lines of evidence point towards a “replicability crisis”, where an unknown proportion of the field’s results simply aren’t true. To address these concerns, a team of scientists from 36 different labs joined together, like some sort of verification Voltron, to replicate 13 experiments from past psychological studies. They chose experiments that were simple and quick to do, and merged them into a single online package that volunteers could finish in just 15 minutes. This is Big Replication—scientific self-correction on a massive scale.”
“‘Doctors assume that after clinical death, the brain is dead and inactive,’ says Jimo Borjigin. ‘They use the term “unconscious” again and again. But death is a process. It’s not a black-or-white line.’ In a new study, Borjigin discovered that rats show an unexpected pattern of brain activity immediately after cardiac arrest. With neither breath nor heartbeats, these rodents were clinically dead but for at least 30 seconds, their brains showed several signals of conscious thought, and strong signals to boot. This suggests that our final journey into permanent unconsciousness may actually involve a brief state of heightened consciousness.”
“Two teams of scientists showed that three cancer treatments rely on gut bacteria to mobilise the immune system and kill tumour cells—not just in the gut, but also in the blood (lymphomas) and skin (melanomas). Remove the bacteria with antibiotics, and you also neuter the drugs.”
“Yoshiaki Yamaguchi and Toru Suzuki have engineered mice that are, with apologies for the awful word, unjetlaggable. If you change the light in their cages to mimic an 8-hour time difference, they readjust almost immediately. Put them on a red-eye flight from San Francisco to London and they’d be fine.”
I’m really optimistic about the future for long, deep, rich science reporting. There are more places that are publishing it, more ways of finding it, and a seemingly huge cadre of people who are writing it well. So without further ado, here’s a list of my top pieces of the year. It has blossomed to 15 from last year’s 12 because I was gripped by indecision and they’re all so good. In no particular order:
1) Bones of Contention, by Paige Williams for the New Yorker. The curious case of USA v. One Tyrannosaurus Bataar Skeleton frames this exquisitely crafted tale about a Florida man’s trade in Mongolian dinosaurs, and the amazing world of fossils, auctions, and private collectors.
“He sold sloth claws, elephant jaws, wolf molars, dinosaur ribs—a wide range of anatomical fragments that went, mostly, for between ten and fifty dollars. Increasingly, Florida Fossils got into triple digits, especially when Prokopi started selling dinosaur parts. In the fall of 2011, he sold two Mongolian oviraptor nests for more than three hundred and fifty dollars each, a tyrannosaurus ilium for five hundred and sixty-one dollars, a tyrannosaurus tooth for three hundred and twenty-five dollars, and a tyrannosaurus tail vertebra for four hundred and ten dollars.”
2) Imagining the Post-Antibiotics Future, by Maryn McKenna for Medium. The post-antibiotic world is much worse than you might imagine, and this sweeping piece takes us through the implications for medicine, agriculture & more.
““Many treatments require suppressing the immune system, to help destroy cancer or to keep a transplanted organ viable. That suppression makes people unusually vulnerable to infection. Antibiotics reduce the threat; without them, chemotherapy or radiation treatment would be as dangerous as the cancers they seek to cure… Similarly with transplantation. And severe burns are hugely susceptible to infection. Burn units would have a very, very difficult task keeping people alive.””
3) Uprooted, by Virginia Hughes for Matter. An incredible story about how DNA testing is changing the way people look at their genealogy, and revealing that some people aren’t who they thought they were. (And a special shout-out to Hughes’ piece on Romanian orphans—it was very hard to choose between these.)
“Searching your genetic ancestry can certainly be fun: You can trace the migration patterns of 10,000-year-old ancestors, or discover whether a distant relative ruled a continent or rode on the Mayflower. But the technology can just as easily unearth more private acts—infidelities, sperm donations, adoptions—of more recent generations, including previously unknown behaviors of your grandparents, parents, and even spouses. Family secrets have never been so vulnerable.”
4) The Social Life of Genes, by David Dobbs for Pacific Standard. A stunning piece about how our day-to-day lives quickly influence how our genes are deployed, and how cells are machines “for turning experience into biology”.
“Half were European honeybees, Apis mellifera ligustica, the sweet-tempered kind most beekeepers raise. The other half were ligustica’s genetically close cousins, Apis mellifera scutellata, the African strain better known as killer bees. Though the two subspecies are nearly indistinguishable, the latter defend territory far more aggressively. Kick a European honeybee hive and perhaps a hundred bees will attack you. Kick a killer bee hive and you may suffer a thousand stings or more. Two thousand will kill you.”
“His quest to save the orange offers a close look at the daunting process of genetically modifying one well-loved organism — on a deadline… Only in recent months has he begun to face the full magnitude of the gap between what science can achieve and what society might accept.”
“She sat in a Land Rover, 30 feet away, while three other males ganged up on C-Boy and tried to kill him. His struggle to survive against those daunting odds, dramatic in itself, reflected a larger truth about the Serengeti lion: Continual risk of death, even more than the ability to cause it, is what shapes the social behavior of this ferocious but ever jeopardized animal.”
7) A Life-Or-Death Situation, by Robin Marantz Henig, for the New York Times. A moving and compassionate piece about end-of-life care.
“Suffering, suicide, euthanasia, a dignified death — these were subjects she had thought and written about for years, and now, suddenly, they turned unbearably personal. Alongside her physically ravaged husband, she would watch lofty ideas be trumped by reality — and would discover just how messy, raw and muddled the end of life can be.”
8) Drive-Thru Astronomy, by Lee Billings for Aeon. Billings goes on a roadtrip through a scale model of the Solar System that spans the United States, and it’s a fantastic ride.
“McCartney half-jokingly asked if Pluto was still there. Its box is mounted at eye level, and the building is open to all comers, all day. McCartney said he had a half dozen ceramic dwarf planet replacements squirrelled away in a desk drawer for when the planet goes missing. I assured him that Pluto was in place. ‘Great. Call me when you’re at Uranus and I’ll meet you at Saturn,’ McCartney said, before hanging up abruptly.”
9) The Case of the Missing Ancestor, by Jamie Shreeve for National Geographic. In 2011, we discovered a new group of ancient humans by sequencing DNA from a tiny bone chip. This amazing piece, which reads like a thriller, tells the story.
“Krause himself recalls that Friday as ‘scientifically the most exciting day of my life.’ The tiny chip of a finger bone, it seemed, was not from a modern human at all. But it wasn’t from a Neanderthal either. It belonged to a new kind of human being, never before seen.”
10) Omens, by Ross Andersen for Aeon. A beautiful essay on extinctions and the end of humans, suffused with gorgeous poetry.
“It’s a sad story from the dinosaurs’ perspective, but there is symmetry to it, for they too rose to power on the back of a mass extinction. One hundred and fifty million years before the asteroid struck, a supervolcanic surge killed off the large crurotarsans, a group that outcompeted the dinosaurs for aeons. Mass extinctions serve as guillotines and kingmakers both.”
11) The Boy Whose Brain Could Unlock Autism, by Maia Szalavitz for Matter. This stunning piece looks at a neuroscientist’s quest to understand his autistic son, and the new “intense world” hypothesis of autism.
“Imagine being born into a world of bewildering, inescapable sensory overload, like a visitor from a much darker, calmer, quieter planet. Your mother’s eyes: a strobe light. Your father’s voice: a growling jackhammer. That cute little onesie everyone thinks is so soft? Sandpaper with diamond grit. And what about all that cooing and affection? A barrage of chaotic, indecipherable input, a cacophony of raw, unfilterable data.”
“He said to me, ‘Isn’t this an amazing result?’” Dubernard recalls of the patchwork of skin grafts used to remodel Woods’ face. “And I told him, ‘Yes, this is good. But you know what? A face transplant would be better.’”
13) Cows might fly, by Veronique Greenwood for Aeon. What begins with some quirky trivia about Swiss airlifted cows slowly reveals itself as a meditation on our environmental future in a world that’s running out of land. Relentlessly entertaining and fascinating.
“From time to time, a hiker through the Swiss Alps might witness a startling sight. First, the sound of a helicopter reverberates off the valley walls. Then the chopper appears, a long cable hanging from its belly. When the burden at the end of the cable heaves into view, it is not a rescued mountaineer, en route to the hospital. Nor is it a pot of cement or a pallet of planks, on the way to a high-mountain building project. It is a single cow, hanging gently from a harness, her dark eyes alert, her hooves high above the ground.”
14) Bad Blood, by Will Storr for Matter. This chronicle of the life and death of Russian dissident Alexander Litvinenko, and the poison that killed him, has won awards, and rightly so.
“As [uranium] throws out these chunks — cannonballs containing two protons and two neutrons, a combination known as an alpha particle — it cascades down the periodic table, transforming itself into a different element each time. Just before its arrival at lead-206, it becomes a substance called polonium-210. And it is at this point that the elements of science become the elements of murder.”
15) The Spy Who Loved Frogs, by Brendan Borrell for Nature. A gripping piece about a young scientist who must follow the jungle path of a herpetologist who led a secret double life.
“Before leaving for the Philippines as an undergraduate in 1992, Rafe Brown scoured his supervisor’s bookshelf to learn as much as he could about the creatures he might encounter. He flipped through a photocopy of a 1922 monograph by the prolific herpetologist Edward Taylor, and became mesmerized by a particular lizard, Ptychozoon intermedium, the Philippine parachute gecko. With marbled skin, webs between its toes and aerodynamic flaps along its body that allow it to glide down from the treetops, it was just about the strangest animal that Brown had ever seen.”
Here are three of my own longreads from this year.
Ant Farm (Aeon), about how ants are killing Ghana’s coffee supply, how plant diseases could bring the world to its knees, and why we’re woefully unprepared to stop them.
The Power of Swarms (Wired), about the surprising, amazing science of herds, shoals, flocks, tumours, brains, and other collectives.
Dynasty (Nature), about Bob Paine: a man who changed science not just through his own work, but by inspiring a legacy of other scientists.
In the spring of 1968, experimental psychologist Donald Lewis and his colleagues published a study about memory that was well ahead of its time. The researchers first trained rats to fear a particular sound. A day later, the animals heard the same sound followed immediately by an electroconvulsive shock to the head. After getting shocked, the animals forgot their fear of the sound. The old memory was gone.
The study was remarkable for its focus on memory retrieval, rather than memory formation. See, research up until then had suggested that a memory is only unstable (and thus vulnerable to change) in the minutes or hours after it’s first created. During this period, called ‘consolidation’, the memory moves into the brain’s long-term storage and, it was thought, into a stable and fixed molecular state. “For decades people had thought that once a memory is wired in the brain it stays there forever,” says Karim Nader, a neuroscientist at McGill University in Montreal. But Lewis’s study showed that wasn’t true: When a rat recalled a stored memory, the memory somehow became unstable again, making it vulnerable to erasure.
Lewis’s study contained a revolutionary idea, but it didn’t revolutionize the memory field. It was published in a top journal, Science, and followed up immediately by another group of researchers. But as that group reported the following year, also in Science, they couldn’t replicate Lewis’s findings. Over the next few decades, a couple of other studies came out suggesting that memories become unstable during recollection, but the idea never made it into the textbooks. “Part of the field believed it, but the other part of the field just didn’t believe it,” Nader says. “And the ones who didn’t believe it were the dominant ones.”
Nader gets credit for rekindling the idea in the late 1990s, while working as a postdoc in Joe LeDoux’s lab at New York University. Nader showed that, in rats, old memories can be erased by infusing a drug into the animal’s brain as it recalls the memory. Because the drug blocked protein synthesis, this experiment was evidence that memories go through a ‘reconsolidation’ process after being recalled, and that this process requires protein synthesis (just like the initial consolidation does).
Unlike Lewis’s study, Nader’s, published in Nature in 2000, did rock the field, not least because of its clinical implications. The results opened up the possibility that the frightening memories that haunt people with post-traumatic stress disorder could be erased — even long after they are formed.
A study out today in Nature Neuroscience takes this research a big step further by demonstrating not only that memory reconsolidation happens in people, but that it can be blocked with electroconvulsive therapy, or ECT.
Marijn Kroes and his colleagues at Radboud University Nijmegen, in the Netherlands, tested the memories of 39 people with severe depression who were already undergoing ECT. This rare treatment has an interesting cultural history. “People have the One Flew Over the Cuckoo’s Nest idea of it,” Kroes says, referring to the 1975 film in which Jack Nicholson’s character is forced — wide awake, whimpering and writhing in pain — to undergo ECT at a mental institution. “Luckily it’s nothing like that.”
Today ECT is used as a last resort for people with severe depression who do not respond to antidepressants, psychotherapy, or other treatments. It’s done in a hospital room, after the patient receives muscle relaxants and general anesthesia. A brief electric current passes through the brain, inducing a seizure. For reasons no one yet knows, it usually works: ECT had an 86 percent remission rate for people with major depression, according to one study.
Kroes wanted to find out whether ECT could erase memories during active recall, as it had for the rats in Lewis’s experiment nearly 50 years ago.
All of the participants first saw two narrated slide shows, each describing a different traumatic story with 11 pictures. In one of them, for example, two sisters leave their house to visit their brother at a nearby bar. As they walk past an alley, one of the sisters is kidnapped by a man and held at knife point:
The other story is similar in structure, but shows a boy getting hit by a car and then in surgery having his feet reattached to his legs. In the words of the paper, these are “high-arousing stories with negative valence.” I’ll say.
A week after the participants watched the slide shows, the researchers showed them the first slide from one of the stories as a memory trigger. A few minutes later, some participants were given ECT. After the treatment, they were given a memory test about the details of both stories.
Participants who did not undergo ECT got about half of the questions about the triggered memory correct, compared with 40 percent of questions about the other story (which they had not seen for a week).
In stark contrast, participants who received ECT seemed to have no memory of the story they had been reminded of — they scored a 25 percent on a four-answer multiple choice test, the same as guessing at random. The same participants showed significantly better recall — 35 percent — of the second, non-triggered story. ECT, in other words, selectively erased the memories that were being actively recalled, just as it had for Lewis’s rats.
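(A quick aside for the stats-minded: that 25 percent chance baseline is easy to verify for yourself. Here’s a purely illustrative simulation of random guessing on four-option questions — my own sketch, nothing to do with the study’s actual data:)

```python
import random

random.seed(0)

# Simulate a pure guesser on a four-option multiple-choice test:
# each guess has a 1-in-4 chance of landing on the correct answer.
n_questions = 10_000
correct = sum(1 for _ in range(n_questions) if random.randrange(4) == 0)
print(correct / n_questions)  # hovers around 0.25
```

So scoring 25 percent really is indistinguishable from having no memory of the story at all.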
“It’s a very elegant paper, compelling data, and it’s a difficult study to do,” says Daniela Schiller, a neuroscientist at Mount Sinai School of Medicine in New York, who was not involved in the work. “It’s very impressive they managed to do that, and that they even tried.”
Schiller’s experiments have also bolstered the reconsolidation hypothesis. She has shown, for example, that if people recall a fearful memory and then go through ‘extinction learning’ — meaning that they’re shown the fearful stimulus over and over again without any pain — they can erase the emotional sting of the memory. Other groups have shown something similar by giving people propranolol, a beta-blocker, immediately after recalling a memory.
The new study adds ECT to the list. There are still a lot of questions. For example, it’s not clear how ECT is disrupting reconsolidation. Or if it’s doing it at all: The effect could be partly or wholly due to anesthesia, though the researchers say this is unlikely. Most importantly, no one knows whether the procedure would work with old, real memories, as opposed to those artificially created in the lab.
Kroes and his colleagues are planning a clinical trial in which they will use ECT to lessen traumatic memories in people with PTSD. “Just to be clear: it’s a long way from being an actual clinical application,” he says. “A lot of experimental, fundamental science is often very difficult to translate into the real world.”
I asked all of these scientists a question that I’m sure they get every week: When are we going to be able to erase whatever memories we want, like in Eternal Sunshine of the Spotless Mind?
Kroes was quick to say, and rightly so, that ECT should only be used as a serious medical treatment. “You have to have some kind of disease state,” he says.
But as far as the technology goes, and what it could do, everybody told me that there’s no reason to think that we couldn’t play out a Spotless Mind scenario in the not-too-distant future.
“I wouldn’t have said so a few years ago, but there’s just more and more evidence, with different types of memories, different types of manipulations, and different species,” Schiller says.
Right or wrong, there’s certainly demand for it. Nader remembers the big public response to his 2000 Nature paper. “The day after it was published, a number of women emailed Joe [LeDoux] and asked, ‘Can you get rid of the memories of my ex-husband?'”
But would he be OK with that kind of application, should the technology advance as experts expect it will? “For me,” Nader says, “I don’t think that would be the end of the world.”
Veronique Greenwood’s piece on Swiss helicopter-flown cows is one of the most delightful things I’ve read this year. A perfect example of a piece that starts with something quirky and gradually reveals itself to be about something really important.
A sad tribute to the space robots we lost in 2013, by Adam Mann.
A child checks into hospital with mitochondrial disease. The hospital says it’s psychiatric. Her parents lose custody. This is “parent-ectomy”, where a disputed medical diagnosis leads to parents losing care of their child. An incredible, gut-wrenching story by Neil Swidey and Patricia Wen.
“‘Are you really Santa Claus?’ she asked. At first, [Colonel] Shoup was, understandably, confused.” A most wonderful story in which a typo leads to lots of kids calling a top secret military phone line. By Megan Garber.
Maybe some “terrible two” toddlers stay terrible? What then? David Dobbs considers.
Nathan Myhrvold—patent troll, food enthusiast and dinosaur hobbyist—challenges some high-profile papers on dinosaur growth rates. The author’s responses are thus far worrying, but the critique is long and more is sure to come.
Dust-borne bacteria from houses with dogs can prevent allergies (in mice) by changing their gut microbes. By me.
The title of my blog post is provocative, I know, but I’m actually just lifting it from the title of a new commentary in the journal Molecular Psychiatry by Thomas Insel, the director of the National Institute of Mental Health. In his piece, Insel expresses his excitement about a new way of thinking about how genes can contribute to our risk of psychiatric disorders such as schizophrenia. It’s based on an emerging understanding of the human genome that I explored in a recent story for the New York Times: each of us does not carry around a single personal genome, but many personal genomes.
When we start out as a single fertilized egg, we have a single genome. When the cell divides in two, there’s a tiny chance that any spot in the DNA will mutate. Over many divisions, the copies of that original genome accumulate mutations and become different from one another. Scientists only now have the tools to dig into this so-called mosaicism and see how different our genomes can become.
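To get a feel for how that divergence builds up, here’s a toy simulation — my own illustration, not from Insel’s commentary or the Times story — of a lineage of dividing cells in which each daughter occasionally picks up a new mutation:

```python
import random

random.seed(1)
MUTATION_CHANCE = 0.3  # chance a daughter cell gains a new mutation (a made-up rate)

def divide(genome, generations):
    """Recursively divide a cell; each daughter may pick up a fresh mutation."""
    if generations == 0:
        return [genome]
    daughters = []
    for _ in range(2):
        new = set(genome)
        if random.random() < MUTATION_CHANCE:
            new.add(random.randrange(10**9))  # tag the new mutation with a random ID
        daughters.extend(divide(frozenset(new), generations - 1))
    return daughters

cells = divide(frozenset(), 10)     # ten rounds of division: 1,024 descendants
print(len(cells), len(set(cells)))  # one starting genome, many distinct ones
```

Even with a modest per-division mutation rate, the 1,024 descendants of a single cell end up carrying many distinct genomes — which is all “mosaicism” means.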
Scientists have long known that mosaicism can be important for cancer, but it’s only recently that experts on other diseases have thought about it. Insel clearly has turned his mind in its direction. As he notes in his commentary, a number of studies have implicated genes in the risk of conditions such as autism. But the picture is still murky, as reflected by the fact that among identical twins, it’s often the case that one sibling will develop a mental disorder and the other will not.
Part of the solution to this mystery, he suggests, is that the brain is a mosaic.
“The brain’s genome, or more accurately genomes, may prove to be even stranger than we have imagined,” Insel writes.
What might be happening is this: when embryos are developing, the neurons of the brain are growing and dividing. A neuron may acquire a mutation, which it then passes down to daughter neurons. That new mutation alters how those neurons work and makes a person prone to developing a particular mental disorder. But you wouldn’t know that this mutation is playing a role if you just took a cheek swab from a patient and sequenced the DNA from the cells you retrieved. The mutation you need to see is locked away in the brain.
Scientists have already linked these late-arising mutations to a few brain disorders. One is hemimegalencephaly, in which one side of the brain becomes bigger than the other. Even though only a few percent of the neurons in the brain carry the mutation, they can still trigger large-scale changes to half of the brain. Some disorders seem to require a one-two punch, in which a child inherits a mutation from a parent, and then a new mutation arises on top of that in the brain.
Insel suspects that some mental disorders may have a similar origin. For example, males are more likely than females to develop most neurodevelopmental disorders. That may be because they’re especially vulnerable to late-arising mutations. While females have two X chromosomes, males have only one, the second X being replaced by a Y. If a mutation arises on the X chromosome as a male embryo develops, there isn’t a healthy back-up on another X chromosome to compensate.
As promising as this line of research may be, however, it won’t be easy to search for the brain’s mosaic. Cheek swabs won’t do. Scientists will need to look at individual neurons in the brain. As Insel notes, technology for probing single cells is improving enormously. But there’s still a needle-in-the-haystack quality to such a search. And the raw material for this kind of search is hard to come by. You can’t grab a few neurons from a living person with the ease that you can get cheek cells. You need autopsied brains donated to science.
So it’s unlikely that doctors would actually run a brain mutation test on patients to search for this mosaicism. Instead, understanding the mosaic brain could offer a more general insight: by identifying the late-arising mutations that lead to mental disorders, scientists will better understand their biology. And that knowledge could, some day, lead to better treatments.
The common cuckoo is famed for its knack for mooching off the parental instincts of other birds. It lays its eggs in the nests of at least 100 other species, turning them into inadvertent foster parents for its greedy chicks. For this reason, it’s called a brood parasite.
It’s not alone. Among the birds, the full list of brood parasites includes more than 50 members of the cuckoo family, cowbirds, honeyguides, several finches, and at least one duck.
Now, William Feeney from the Australian National University has found that this brand of reproductive cheating goes hand in hand with its polar opposite: cooperative breeding, where birds raise their young with help from siblings or offspring, often at the cost of the helpers’ own reproductive success.
The two strategies couldn’t be more different but Feeney found that each drives the evolution of the other. In places where one is common, the other is too. Exploitation goes hand-in-hand with cooperation.
Biologists have been exploring the origins of cooperative breeding for almost 140 years. “What I liked about the new paper is that it presents convincing evidence for an idea that wasn’t even really on the table—it is beneficial because groups reduce the risk of brood parasitism,” says Bruce Lyon from the University of California, Santa Cruz.
Similarly, scientists who study brood parasites have mostly focused on defences like spotting a cuckoo’s eggs. “For some reason the social system of the hosts was not really considered,” says Lyon.
The seeds of the discovery were planted in Australia, where Feeney’s group were studying the aptly named superb fairy-wren. It gets parasitised by Horsfield’s bronze cuckoo, but not without a fight. The team noticed that the fairy-wrens attacked incoming cuckoos with extreme prejudice, and that larger groups almost never got parasitised. They wondered if the two behaviours—cooperative breeding and brood parasitism—were connected in other parts of the world.
The answer was a resounding yes. Look at the maps below. The top one shows the global spread of cooperatively breeding passerines—the small perching birds that are the most frequent target of brood parasites. Red areas are rich in cooperative species, while white areas are devoid of them. The bottom map shows the spread of brood parasites. There’s a very strong match between the two. Even if you account for the total richness of species in a given area, places with lots of cooperative breeders also have lots of brood parasites. Africa and Australasia are particularly rich in both.
Of course, this connection could be due to some unrelated factor. For example, extreme parenting styles might just be more common in Africa and Australia, because these continents have harsh, variable environments. If the two strategies really are connected, then you’d expect that connection to hold within regions as well as between them.
That’s exactly what Feeney found. Even within Africa and southern Australia, brood parasites are much more likely to target cooperative breeders than other birds.
Here’s a family tree showing all the birds from southern Africa. The orange circles denote species that are cuckoo hosts, and the blue circles are the cooperative breeders. Around 28 percent of the hosts help each other out in the nest, compared to just 8 percent of the non-hosts.
And here are all the passerines in Australia. Again: an obvious connection. Around 53 percent of the hosts help each other out in the nest, compared to just 12 percent of the non-hosts.
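(For the statistically curious: “accounting for the total richness of species” typically means something like a partial correlation — strip the shared effect of richness out of both counts, then ask whether what’s left still co-varies. Here’s a sketch of that general idea on made-up numbers; this is not Feeney’s actual analysis:)

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up data: overall species richness inflates both counts,
# plus a genuine link from cooperative breeders to brood parasites.
richness = rng.normal(100, 20, 500)
cooperative = 0.10 * richness + rng.normal(0, 2, 500)
parasites = 0.05 * richness + 0.5 * cooperative + rng.normal(0, 2, 500)

def residuals(y, x):
    """Remove the linear effect of x from y."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

# Correlate what's left of each count after richness is accounted for.
r_partial = np.corrcoef(residuals(cooperative, richness),
                        residuals(parasites, richness))[0, 1]
print(round(r_partial, 2))  # clearly positive: the link survives the control
```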
There are two possible reasons for this correlation. Brood parasites might selectively target cooperative breeders because they’d provide the best care. Alternatively, birds might resort to teamwork because they can better defend their nest against parasites.
Feeney found that both answers are right.
His team returned to the superb fairy-wren—a bird where some parents get help in the nest, but others don’t. By comparing different nests over 6 years, they showed that cuckoo chicks grow faster and survive better if they’re raised by larger groups of fairy-wrens. So cooperative breeding can foster the rise of brood parasitism.
But cuckoos rarely get to realise this benefit, because larger groups of fairy-wrens are also better at fending them off. Collectively, they’re more vigilant around the nest. If one of them spots a cuckoo, it makes a cuckoo-specific alarm call and the entire group mobs the intruder. The larger the group, the more persistently they attack. So the presence of brood parasites can foster the rise of cooperative breeding.
Naomi Langmore, who led the study, thinks that the relative strength of these two effects probably changes over time. Cuckoos and other brood parasites often switch hosts. When this happens, it’s initially easier for them to reap the benefits of a larger surrogate family because the new hosts haven’t yet evolved to detect or repel their infiltrators. Over time, as such defences emerge, the balance shifts. For the fairy-wrens and bronze cuckoos, “cooperation protecting against parasitism is the stronger force,” says Langmore.
Of course, there are many reasons for species to evolve cooperative breeding, and the threat of parasites is just one of them. If there aren’t enough territories to go around, or if predators are particularly rampant, it will also benefit youngsters to stay near their families rather than strike out on their own. “Brood parasitism is not exclusive of other factors but may simply help tip the balance in favour of helping over dispersing,” says Lyon.
“The relative importance of brood parasitism in selecting for cooperative breeding is likely to vary from species to species,” says Langmore, “but our evidence suggests that, overall, it is one of the major selective forces favouring the evolution of cooperation.”
She’s not just talking about birds, either. There are also many brood parasites among the insects, including a group of over 3,000 cuckoo wasps. Many lay their eggs in the nests of other wasps, and their grubs devour the hosts’ own eggs and larvae. “The hosts of cuckoo wasps also mount highly aggressive colony attacks on the parasites,” says Langmore, “and hosts from parasitized populations have actually evolved larger bodies so they are better able to drive off the parasites.”
Reference: Feeney, Medina, Somveille, Heinsohn, Hall, Mulder, Stein, Kilner & Langmore. 2013. Brood Parasitism and the Evolution of Cooperative Breeding in Birds. Science http://dx.doi.org/10.1126/science.1240039
HIV is a virus that kills by crippling our defences against other infections. It sends our immune system into a creeping decline. Germs that were once easy to fight off now become debilitating and lethal threats. A simple cold can kill. Tumours start to grow.
This is AIDS. It was formally described in 1981 and now, over 30 years later, we’re finally starting to understand why it happens.
HIV can infect many different types of white blood cell, but chief among them are the CD4 T-cells. These are the bugle-players of the immune system—they mobilise other immune cells, which actively kill viruses and other invaders. HIV prevents these troops from entering the fray, because it slowly destroys the CD4 T-cells.
Only a minority fall to the virus directly. More than 95 percent don’t seem to be infected, but die anyway. This collateral damage is what leads to the symptoms of AIDS; it’s what makes HIV so lethal. If we want to know why this virus has killed 34 million people since its discovery, we need to know why these bystander CD4 cells die… and we don’t. “In many ways, the question of why these cells die after HIV infection has been neglected, and it’s at the heart of what the virus does—it kills CD4 cells,” says Gary Nabel, Chief Scientific Officer at Sanofi.
Warner Greene from the Gladstone Institute of Virology and Immunology has been trying to solve this mystery for years, and he thinks he has finally cracked it. In two papers, published simultaneously in Science and Nature, his team lays out why HIV kills so many bystander cells and, better still, a possible way of stopping it.
In 2010, Greene’s team, led by Gilad Doitsh, showed that HIV actually tries to infect the bystander CD4 cells, but fails. Ironically, it’s their botched attempt that kills the cell.
During an infection, HIV fuses with a CD4 cell, and releases its genetic material, in the form of RNA molecules. These are converted into DNA, and inserted into the cell’s genome. When the cell divides, it copies its own genes and duplicates the hitchhiking viral DNA too. But in the bystander CD4 cells, which are in a resting state, the process that converts RNA into DNA repeatedly stalls. Rather than producing the complete HIV genome, it churns out small fragments of viral DNA, and the infection can’t continue.
That’s great, except the cell now has bits of viral DNA floating about. Three years back, the team suggested that some sensor inside the CD4 cells detects this DNA and triggers a self-destruction programme.
Now, Kathryn Monroe at the Gladstone Institutes has discovered the sensor. She used a piece of HIV DNA to fish for molecules in CD4 cells that might stick to it. She caught several bites, but the most enticing one was a protein called IFI16. When Monroe removed this protein from resting CD4 cells, they didn’t overreact to the DNA pieces left behind by the virus’s bungled attempts at infection. They didn’t die.
IFI16 evolved as an antiviral DNA sensor. It’s meant to launch a defensive programme that kills infected cells before they can contaminate their neighbours. But when it comes to HIV, this protective response just kills the host faster. IFI16 turns into a general who gets false intelligence, panics, and pushes the big, red button anyway. “CD4 cell death is more a suicide than a murder,” says Greene.
The cells don’t go out quietly either.
In many cases, cells commit suicide through a gentle process called apoptosis. They shrink and break up into neat parcels, which are tidied away by cleaner cells. They die with a whimper; they don’t leave a mess. Everyone assumed that bystander CD4 cells die in this way.
Instead, Doitsh, together with student Nicole Galloway, showed that they die through a more violent process called pyroptosis. They swell instead of shrinking. Their membranes rupture, and their innards leak out through the holes.
These escaping molecules include interleukin-1 beta (IL1β), which summons more CD4 cells to the site of infection. The result is a massive amount of inflammation, and a vicious cycle—emphasis on vicious. HIV tries to infect a few CD4 cells, which go through pyroptosis in response. Their leaked remains summon more CD4 cells, which also get abortively infected, and also go through explosive suicide. Their deaths summon yet more cells, and so on.
“We think this is the major driver that depletes the CD4 T-cells,” says Greene. “It’s at the heart of AIDS.”
“The two papers provide substantial insights into how HIV depletes CD4 T-cells,” says Dan Barouch from Harvard University. “We didn’t have a clear mechanism for how that happened before, and it’s a central aspect of HIV pathogenesis.”
Greene thinks that pyroptosis (or the lack of it) could explain why HIV usually causes AIDS in humans but its relatives, the SIVs, barely sicken the apes and monkeys that they infect. SIVs can kill CD4 cells directly, but they can’t trigger the same pyroptosis response in other primates. They kill a few cells but the majority survive, and the immune system stays strong. “That’s the evolutionary solution—not to control the virus but to control the host response,” says Greene. “I think if we had another million years, we’d evolve in the same way.”
Thankfully, his team is working to a tighter schedule. They’ve already found a molecule that can stop pyroptosis, at least in lab-grown cells.
The whole messy process depends on a protein called caspase-1. Without it, you don’t get any mature IL1β, and without that, you don’t trigger the vicious cycle of CD4 cell death. Caspase-1 plays many other roles in the body, and several pharmaceutical companies have tried to make drugs that block it, for the purposes of treating other diseases. One of these, VX-765, was developed to treat chronic epilepsy and autoimmune diseases.
Greene’s team showed that it completely prevents HIV from killing the bystander CD4 cells. No caspase-1 activity. No IL1β signals. No inflammation. No mass cell death.
No AIDS? That remains to be seen. These are only lab experiments, after all, and the drug still needs to be tested in actual HIV patients.
Encouragingly, it has already gone through early phase II clinical trials, which means that we know it’s safe and well-tolerated. “Maybe it could be repurposed for HIV infection,” says Greene. He imagines a joint attack: current antiretroviral treatments would target HIV itself, while caspase-1 blockers would stop the patient’s immune system from overreacting to the virus.
Greene is now in talks with the drug’s manufacturer—Vertex Pharmaceuticals—about launching a proper HIV trial. There are other options too—several other caspase-1 inhibitors have been developed, although they haven’t done enough in their respective diseases to justify taking them to market and seeking FDA approval. If Greene can’t get the go-ahead for VX-765, he’ll just look elsewhere.
He also wants to see if caspase-1 blockers could have other benefits. Since they target the host rather than the virus, he thinks it’s less likely that you’d get resistance to them. They could also give people more time while they wait for antiretrovirals. “For every 10 people we put on antiretrovirals today, 16 more become infected,” says Greene. “There are 16 million people who should be on these drugs but aren’t, and are progressing to AIDS and dying. Maybe these caspase-1 inhibitors could be used as a bridge therapy while they wait.”
And, in the lab experiments, the caspase-1 blockers also prevented the inflammation that goes hand-in-hand with CD4 cell death. Greene suspects that this inflammation accelerates the ageing process in HIV patients. “It’s why they’re dying of heart attacks, liver diseases, dementia and cancer at an earlier age than anticipated,” he says. “Maybe we could restore their normal lifespan or improve their quality of life?”
Meanwhile, other scientists have discovered more cellular sensors that detect HIV in other types of cells. Nabel’s team showed that a protein called DNA-PK senses HIV DNA once it has been inserted into a CD4 cell’s genome, which triggers a different self-destruct sequence. But this only happens in the small proportion of CD4 cells where the infection process is truly underway.
Another protein called cGAS can also detect HIV DNA, but in a different group of white blood cells. It’s not found in the CD4 cells that Greene examined.
This baffling variety comes as no surprise to Andrew Bowie from Trinity College Dublin, who studies how the immune system detects viruses. He was the one who discovered that IFI16 is a DNA sensor back in 2010. “Since then, we suspected that these sensors would have very cell-type specific roles in sensing viruses,” he says.
And scientists have made tremendous strides in understanding these roles just this year. The cGAS discovery was announced in February, DNA-PK in June, and now IFI16 in December! “We’re seeing a renaissance of our understanding of the fundamentals of HIV infection,” says Nabel. “The more we know, the better off we’ll be with controlling it.”
References: Monroe, Yang, Johnson, Geng, Doitsh, Krogan & Greene. 2013. IFI16 DNA Sensor Is Required for Death of Lymphoid CD4 T Cells Abortively Infected with HIV. Science. Tbc.
Doitsh, Galloway, Geng, Monroe, Zepeda, Yang, Hunt, Hatano, Sowinski & Greene. 2013. Pyroptosis drives depletion of CD4 T cells in HIV-infected lymphoid tissues. Nature. Tbc.
There’s more news on the ancient human DNA front: as I report in my new “Matter” column in the New York Times, scientists have now reconstructed the genome of a Neanderthal with exquisite accuracy. The genome sequence is as good as what you’d get if you had your own genome sequenced with the finest equipment available today. And yet the DNA comes from a fossil that’s approximately 130,000 years old.
You can read more about this remarkable feat–and what it implies–in my column. But there’s something more that I didn’t have room to discuss that I found really intriguing. Here’s the tree of human evolution that scientists have generated from the Neanderthal genome in comparison with other human DNA:
Now zoom out to the tree of all living things*:
Evolution is a mixture of flows—the cascade of genes from parents to offspring, and the criss-cross movement of genes between populations and species. It has made us who we are, over just the past 60,000 years and over the past four billion.
[Note: The image at the top of this post comes from the Neanderthal Museum in Germany. I have never been there, but I can only guess that it’s fantastic.]
[*This tree is somewhat out of date. Eukaryotes now look to be just one branch of the Archaea, for example, rather than a third domain. But the criss-crossing remains.]
CAUTION: THE RESULTS DESCRIBED IN THIS POST ARE BASED ON A PAPER THAT HAS SINCE BEEN RETRACTED. SEE MORE DETAILS HERE.
Narcolepsy is a mysterious disorder that involves sudden, uncontrollable sleepiness, among many other symptoms. On one hand, its cause seems straightforward: people slowly lose a special group of neurons that produce hypocretin, a hormone that keeps us awake.
But what kills the neurons?
Many scientists have long suspected that the immune system is responsible. That would make narcolepsy an autoimmune disease–one in which a person’s immune system turns on their own healthy cells.
There’s been a lot of evidence to support this idea, but a team of scientists from Stanford University have finally found what they describe as a “smoking gun”. People with narcolepsy, and only people with narcolepsy, have a special group of immune cells that targets hypocretin. These cells might be attacking the neurons directly, or acting through an intermediary, or something else altogether. Whatever the case, it’s the first clear, direct sign of autoimmunity.
The study also helps to explain some puzzling quirks about narcolepsy, like why the 2009 swine flu pandemic led to a surge of cases in China, or why one particular vaccine against that strain did the same in Europe.
A year ago today, Phenomena was launched, and I just wanted to take a moment to thank all of you for reading the work of Virginia Hughes, Brian Switek, Ed Yong, and myself over these past 365 days. The Loom has had many homes over its ten years, but Phenomena has been the best, I must say, from its delightful design to the support of people at National Geographic such as Jamie Shreeve and Brian Howard.
In case you’re curious, here are the ten most-read posts I wrote here over the past year:
Today marks the first anniversary of Phenomena, the blog network that I started with Carl Zimmer, Virginia Hughes and Brian Switek a year ago. It’s been a privilege and a delight to be part of National Geographic. Our benevolent overlords—special thanks to Jamie Shreeve and Brian Howard—have treated us very well, given us a lot of exposure, and (as is important with blog networks) relaxed and let us do our thing without any impositions. And my esteemed co-bloggers, Carl, Ginny and Brian, continue to set the bar for what good science writing should look like; they inspire me to do better.
I’ve felt under much less pressure here, so I’ve felt able to devote more care and attention to each individual post. Excluding the Saturday link-fests, the occasional personal posts, and brief pointers to other work, I have written 191 pieces since joining Phenomena. That’s fewer than the previous year’s total of 231, but I’ve tried to pack a lot more depth and reporting into each post, so hopefully the trade-off is a positive one.
Below, you’ll find a list of my 20 most popular posts, by traffic, since joining Phenomena. A few things of note:
As usual, even though I have covered stories of more scientific import, discoveries with more obvious practical implications, and studies with potential for affecting people’s lives, they are largely unrepresented here. Instead, we have a motley collection of sex, violence, and quirkiness. I love my readers.
Also, I’d say that my average post is around 1,000 words long, but several were much longer. The one about the frog resurrection (#6) is around 1,900 words long. The one about Lilly Grossman (#14) is 3,100 words long. Remind me again how long content doesn’t work online and how blog posts need to be pithy and short?