A Swarm of a Thousand Cooperative, Self-Organising Robots

In a lab at Harvard’s Wyss Institute, the world’s largest swarm of cooperative robots is building a star… out of themselves. There are 1024 of these inch-wide ‘Kilobots’, and they can arrange themselves into different shapes, from a letter to a wrench. They are slow and comically jerky in their movements, but they are also autonomous. Once they’re given a shape, they can recreate it without any further instructions, simply by cooperating with their neighbours and organising themselves.

The Kilobots are the work of Mike Rubenstein, Alejandro Cornejo and Radhika Nagpal, who were inspired by natural swarms, where simple and limited units can cooperate to do great things. Thousands of fire ants can unite into living bridges, rafts and buildings. Billions of unthinking neurons can create the human brain. Trillions of cells can fashion a tree or a tyrannosaur. Scientists have tried to make artificial swarms with similar abilities, but building and programming them is expensive and difficult. Most of these robot herds consist of a few dozen units, and only a few include more than a hundred. The Kilobots smash that record.

They’re still a far cry from the combiner robots of my childhood cartoons: they arrange themselves into two-dimensional shapes rather than assembling Voltron-style into actual objects. But they’re already an impressive achievement. “This is not only the largest swarm of robots in the world but also an excellent test bed, allowing us to validate collective algorithms in practice,” says Roderich Gross from the University of Sheffield, who has bought 900 of the robots himself to use in his own experiments.

“This is a staggering work,” adds Iain Couzin, who studies collective animal behaviour at Princeton University. “It offers a vision of the future where robot groups could form structures on demand as, for example, in search-and-rescue in dangerous environments, or even the formation of miniature swarms within the body to detect and treat disease.”

"And I'll form... the wrench!" Credit: Michael Rubenstein, Harvard University.
“And I’ll form… the wrench!” Credit: Michael Rubenstein, Harvard University.

To create their legion, the team had to rethink every aspect of a typical robot. “If you have a power switch, it takes four seconds to push that, so it’ll take over an hour to turn on a thousand robots,” says Rubenstein. “Charging them, turning them on, sending them new instructions… everything you do with a thousand robots has to be at the level of all the robots at once.”

They also have to be cheap. Fancy parts might make each bot more powerful, but would turn a swarm into a budget-breaker. Even wheels were out. Instead, the team used simpler vibration motors. If you leave your phone on a table and it vibrates, it will also slide slightly: that’s how the Kilobots move. They have two motors: if either vibrates individually, the robot rotates; if both vibrate, it goes straight.

Well, straight-ish, anyway. The tyranny of cost-efficiency meant that the team had to lose any sensors that might tell the robots their bearings or positions. They can’t tell where they are, or if they’re going straight. But each one can shoot infrared beams to the surface below it, and sense the beams reflecting from its neighbours. By measuring how bright the reflections are, it can calculate its distance from other Kilobots.

This combination of stilted motion and dulled senses meant that each robot costs just $20. It also meant that “the robots were even more limited than we expected,” says Rubenstein. “The way they sense distance is noisy and imprecise. You can tell them to move and they won’t, and they’ll have no idea that they’re not moving.”

Fortunately, they have each other. A stuck Kilobot can’t tell if it’s stuck on its own, but it can communicate with its neighbours. If it thinks it’s moving but the distances from its neighbours change, it can deduce that something is wrong. And if neighbours estimate the distances between them and use the average, they can smooth out individual errors.

Using these principles, the team created a simple program that allows the robots to independently assemble into different shapes using just three behaviours. First, they move by skirting along the edges of a group. Second, they create gradients as a crude way of noting their position in the swarm. (A nominated source robot gets a gradient value of 0. Any adjacent robot that can see it sets its gradient value to 1. Any robot that sees 1 but not 0 sets its gradient to 2, and so on.) Finally, although they have no GPS, they can triangulate their position by talking to their neighbours. As long as the team nominates some robots as seeds, effectively turning them into the zero-point on an invisible graph, the rest of the swarm can then work out where they are.
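The gradient rule is simple enough to sketch in a few lines. Here’s a toy simulation in Python — the neighbour graph, robot IDs and synchronous update loop are my own illustrative simplifications, since real Kilobots compute this asynchronously from local infrared messages alone:

```python
# A toy simulation of the gradient rule described above.

def form_gradient(neighbours, source):
    """neighbours maps each robot id to the set of ids it can see.
    Returns each robot's gradient value: its hop count from the source."""
    gradient = {robot: None for robot in neighbours}
    gradient[source] = 0
    changed = True
    while changed:  # keep sweeping until no robot updates its value
        changed = False
        for robot, nbrs in neighbours.items():
            if robot == source:
                continue
            seen = [gradient[n] for n in nbrs if gradient[n] is not None]
            if seen and gradient[robot] != min(seen) + 1:
                gradient[robot] = min(seen) + 1  # one more than the lowest neighbour
                changed = True
    return gradient

# Five robots in a chain: 0 - 1 - 2 - 3 - 4
chain = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(form_gradient(chain, source=0))  # {0: 0, 1: 1, 2: 2, 3: 3, 4: 4}
```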

Every Kilobot runs on the same program. The team only has to give them a shape and nominate four of them as seeds. Once that’s done, the rest slowly pour into the right pattern, in an endearingly life-like way. It takes them around 12 hours, but they do it all without any human intervention. And although the final shapes are always a little warped, that’s life-like too. Fire ants don’t have a Platonic ideal of what a bridge or raft should look like; they just work with their neighbours to get the job done.

Stills from movies showing the Kilobots assembling into a K and a star. Credit: Michael Rubenstein, Harvard University.

Scientists have long been able to simulate huge swarms of life-like virtual particles in computers, using very simple rules. But the real world is full of pesky physics, inconvenient noise, and temperamental circuitry. Stuff goes wrong. By building an actual swarm, the team can address these problems and make their programs more robust. They’ve already had to deal with a litany of failed motors, stalled robots, collisions, and traffic jams. “The more times you run it, the more likely some random thing will show up that you don’t expect,” says Rubenstein. “That’s the problem with 1,000 robots: even rare things can happen very frequently.”

The next step will be to build robots that actually self-assemble by attaching to each other, says Marco Dorigo from the Free University of Brussels. “We did so with tens of robots,” he says. “It will not be easy with one thousand.” Rubenstein agrees: “Physical connection is always difficult. If you have a dock, you tend to design the rest of the robot around that dock. It has a huge impact.”

Eventually, he also wants to get to a position where the robots can sense their environment and react accordingly, rather than just slide into some pre-determined shape. Like fire ants, when they get to a body of water, they wouldn’t have to be fed the image of a bridge; they would just self-assemble into one. “That’s a whole other level of intelligence, and it’s not really understood how to do that in robotics,” says Rubenstein. “But nature does it well.”

Reference: Rubenstein, Cornejo & Nagpal. 2014. Programmable self-assembly in a thousand-robot swarm. Science. http://dx.doi.org/10.1126/science.1254295

In Defense of Brain Imaging

Brain imaging has fared pretty well in its three decades of existence, all in all. A quick search of the PubMed database for one of the most popular methods, functional magnetic resonance imaging (fMRI), yields some 22,000 studies.  In 2010 the federal government promised $40 million for the Human Connectome Project, which aims to map all of the human brain’s connections. And brain imaging will no doubt play a big part in the president’s new, $4.5 billion BRAIN Initiative. If you bring up brain scanning at a summer BBQ party, your neighbors may think you’re weird, but they’ll be somewhat familiar with what you’re talking about. (Not so for, say, calcium imaging of zebrafish neurons…)

And yet, like any youngster, neuroimaging has suffered its share of embarrassing moments. In 2008, researchers from MIT reported that many high-profile imaging studies used statistical methods resulting in ‘voodoo correlations’: artificially inflated links between emotions or personality traits and specific patterns of brain activity. The next year, a Dartmouth team put a dead salmon in a scanner, showed it a bunch of photos of people, and then asked the salmon to determine what emotion the people in the photos were feeling. Thanks to random noise in the data, a small region in the fish’s brain appeared to “activate” when it was “thinking” about others’ emotions. Books like Brainwashed, A Skeptic’s Guide to the Mind, Neuro: The New Brain Sciences and the Management of the Mind, and the upcoming The Myth of Mirror Neurons have all added fuel to the skeptical fire.

There are many valid concerns about brain imaging — I’ve called them out, on occasion. But a new commentary in the Hastings Center Report has me wondering if the criticism itself has gone a bit overboard. In the piece, titled “Brain Images, Babies, and Bathwater: Critiquing Critiques of Functional Neuroimaging,” neuroscientist Martha Farah makes two compelling counterpoints. One is that brain imaging methods have improved a great deal since the technology’s inception. The second is that its drawbacks — statistical pitfalls, inappropriate interpretations, and the like — are not much different from those of other scientific fields.

First, the improvements. At the dawn of brain imaging, Farah notes, researchers were concerned largely with mapping which parts of the brain light up during specific tasks, such as reading words or seeing colors. This garnered criticism from many who said that imaging was just a flashy, expensive, modern phrenology. “If the mind happens in space at all, it happens somewhere north of the neck. What exactly turns on knowing how far north?” wrote philosopher Jerry Fodor in the London Review of Books.

But the purpose of those early localization experiments, according to Farah, was mostly to validate the new technology — to make sure that the areas that were preferentially activated in the scanner during reading, say, were the same regions that older methods (such as lesion studies) had identified as being important for reading. Once validated, researchers moved on to more interesting questions. “The bulk of functional neuroimaging research in the 21st century is not motivated by localization per se,” Farah writes.

Researchers have developed new ways of analyzing imaging data that don’t have anything to do with matching specific regions to specific behaviors. Last year, for example, I wrote about a method developed by Farah’s colleague Geoffrey Aguirre that allows researchers to study how a brain adapts to seeing (or hearing or smelling or whatever) the same stimulus again and again, or how the brain responds to a stimulus differently depending on what it experienced just before.

Other groups are using brain scanners to visualize not the activity of a single region, but rather the coordinated synchrony of many regions across the entire brain. This method, called ‘resting-state functional connectivity’, has revealed, among other things, that there is a network of regions that are most active when we are daydreaming, or introspecting, not engaged in anything in particular.

All that is to say: Today’s neuroimaging is more sophisticated than it used to be. But yes, it still has problems.

Its statistics, for one thing, are complicated as hell. Researchers divide brain scans into tens of thousands of ‘voxels’, or three-dimensional pixels. And each voxel gets its own statistical test to determine whether its activity really differs between two experimental conditions (reading and not-reading, say). Most statistical tests are considered legit if they reach a ‘significance level’ of .05 or less, meaning there’s a 5 percent or smaller probability that the apparent effect arose by random chance. But if you have 50,000 voxels, then a significance level of .05 means that 2,500 of them would look significant by chance alone!

This problem, known as ‘multiple comparisons’, is what caused a dead salmon to show brain activity. “There’s no simple solution to it,” Farah writes. The salmon study, in fact, used a much more stringent significance level of .001, meaning that there was just a .1 percent chance that any given voxel’s activity was due to chance. And yet, that cut-off would still mean a brain of 50,000 voxels would have 50 spurious signals.
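You can watch that arithmetic play out in a few lines of code. Here’s a quick simulation (a sketch of the multiple-comparisons problem itself, unrelated to any of the studies discussed): generate pure noise for 50,000 voxels and count how many cross each threshold.

```python
# Pure noise across 50,000 voxels, thresholded at .05 and at the
# salmon study's stricter .001.
import random

random.seed(1)
n_voxels = 50_000

# Under the null hypothesis, each voxel's p-value is uniform on [0, 1].
p_values = [random.random() for _ in range(n_voxels)]

for alpha in (0.05, 0.001):
    hits = sum(p < alpha for p in p_values)
    print(f"threshold {alpha}: {hits} voxels look 'significant' "
          f"(expected ~{n_voxels * alpha:.0f}) from noise alone")
```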

Researchers can control for multiple comparisons by focusing on a smaller region of interest to begin with, or by using various statistical tricks. Some studies don’t control for it properly. But then again — and here’s Farah’s strongest point — the same could be said for lots of other fields. To make this point, a 2006 study in the Journal of Clinical Epidemiology compared the astrological signs and hospital diagnoses for all 10.7 million adult residents of Ontario, finding that “residents born under Leo had a higher probability of gastrointestinal hemorrhage, while Sagittarians had a higher probability of humerus fracture.”

A different statistical snag led to the aforementioned voodoo correlations. These false associations between brain and behavior arose because researchers used the same dataset to both discover a trend and to use the newly discovered trend to make predictions. It’s obviously a problem that many headline-grabbing studies (several were published in top journals) made this mistake. Here again, though, the error is not unique to brain imaging. The same kind of double-dipping happens in epidemiology, genetics, and finance. For example, some economists will use a dataset to group assets into portfolios and then use the same dataset to test pricing models of said assets.

Perhaps the stickiest criticism lodged against brain imaging is the idea that it is more “seductive” to the public than other forms of scientific data. One 2008 study reported that people are more likely to find news articles about cognitive neuroscience convincing if the text appears next to brain scans, as opposed to other images or no image. “These data lend support to the notion that part of the fascination, and the credibility, of brain imaging research lies in the persuasive power of the actual brain images themselves,” the authors wrote. Farah points out, however, that four other laboratories (including hers) have tried — and failed — to replicate that study.

Anecdotally, I’ve certainly noticed that my non-scientist friends are often awe-struck by brain imaging in a way that they aren’t with, oh, optogenetics. But even if that’s the case, and brain imaging is especially attractive to the public, why would that be a valid argument against its continued use? It would be like saying that because the public is interested in genetic testing, and because genetic testing is often misinterpreted, scientists should stop studying genetics. It doesn’t make much sense.

Brain imaging isn’t a perfect scientific tool; nothing is. But there are many good reasons why it has revolutionized neuroscience over the past few decades. We — the media, the public, scientists themselves — should always be skeptical of neuroimaging data, and be quick to acknowledge shoddy statistics and hype. Just as we should for data of any kind.

Now THIS Is a Synapse

Every time I read about the synapse, the all-important junction between two neurons, the cartoon above pops into my head. It shows the gist of how a synapse works: An electrical pulse enters the cell on the left and activates those little blue balls, called vesicles, to release their chemical contents, called neurotransmitters. The neurotransmitters spill out into the space between the cells, called the cleft, and activate those blue rectangles, called ion channels. The channels trigger the cell on the right to fire its own electrical pulse, or action potential, and this message travels on to the next cell. It’s pretty neat. Our brains are full of trillions of synapses, each with the capability of converting an electrical signal into a chemical one and back again.

My doodle is conceptually useful for understanding many neuroscience studies. It helped me visualize, for example, how researchers record the messages of brain cells, and how the synapse plays a role in developmental disorders, and how the firing patterns of all of these synapses provide our brains with a sophisticated coding scheme.

The downside of the cartoon synapse is that it gives a false impression. It makes it seem as if the synapse is simple and all figured out, when actually it’s mostly baffling. I was reminded of its complexity by a study published in today’s issue of Science. Researchers in Germany used an array of techniques — including Western blot, mass spectrometry, electron microscopy, and super-resolution fluorescence microscopy — to create a three-dimensional model of a typical synapse in the adult rat brain. You’ll see in the video below that their new model doesn’t look much like my drawing:

To get the most out of the video, click on the white arrows in the lower right hand corner, which will expand it to full screen. The video shows the synaptic bouton, which is the left part of my cartoon. The glowing red “active zone” at the bottom is where the neurotransmitters get dumped into the cleft. Toward the end of the video you can see a close-up of a vesicle releasing its contents and then being recycled by the cell.

The model shows some 300,000 individual proteins, and remember — they’re all hanging out at a single synapse! The image below shows a cross-section of the bouton; each color corresponds to a different kind of protein. The active zone is again the glowing red part at the bottom.

Wilhelm et al., Science 2014

More often than not, neuroscientists (and therefore, science writers covering neuroscience) tend to focus on a single protein at a time. For instance, I’ve written about that green guy, parvalbumin, because in certain neurons the protein seems to trigger high-frequency brain waves that have been linked to cognition. And that red SNAP-25 has been linked to ADHD, and the yellow VDAC has been proposed as a good target for chemotherapy drugs.

The only way to untangle this complex picture is to focus on its individual components, figuring out one piece at a time. But the next time you read about one of those pieces, recall how it fits into the whole, and be wowed.

Videos: A (Very) Close Look Inside the Zebrafish Brain

About a year ago I wrote a story about the hottest new animal model in neuroscience: baby zebrafish. The critters are not much to look at. They’re the size and shape of a curled eyelash, with big bulging eyes. But when some neuroscientists look at the fish, they see a lot of potential. The fish have around 300,000 neurons — enough to perform relatively complex behaviors, such as swimming in different directions and learning to fear certain stimuli. And most importantly, the embryonic fish are transparent, making it easy to watch their brain cells in action, all at once.

This week Eric Betzig of HHMI’s Janelia Farm Research Campus in Ashburn, Virginia, reports in Nature Methods a new technology that dramatically sharpens those microscopic images. You can see for yourself in the video below, which shows neurons deep in the midbrain of a living, 3-day-old zebrafish:

(Credit: Betzig Lab, HHMI’s Janelia Farm Research Campus)

The new microscopy can show not only the activity patterns of groups of neurons, but also tiny structures within each cell. For example, the video below zooms into a neuron in the deep hindbrain of a 4-day-old fish. Mitochondria, the structures that provide the cell’s energy, are in pink, and the plasma membrane, or outer covering, is in green:

(Credit: Betzig Lab, HHMI’s Janelia Farm Research Campus)

I won’t pretend to totally understand the physics of how the microscope works, but apparently the researchers borrowed a technique from astronomy called “adaptive optics.” Here’s a good explanation from HHMI (which funds Betzig’s work):

Over the last decade, Betzig and others have taken a cue from astronomers in using adaptive optics to correct for the light-bending heterogeneity of biological tissues. Astronomers apply adaptive optics by shining a laser high in the atmosphere in the same direction as an object they want to observe, Betzig explains. The light returning from this so-called guide star gets distorted as it travels through the turbulent atmosphere back to the telescope. Using a tool called a wavefront sensor, astronomers measure this distortion directly, then use the measurements to deform a telescope mirror to cancel out the atmospheric aberrations. The correction gives a much clearer view of the target object they want to observe.

A microscopy technique that Betzig developed in 2010 with Na Ji, who is now also a group leader at Janelia, achieves similar results by using an isolated fluorescent object such as a cell body or an embedded bead in the tissue as the “guide star.”

…The team created a guide star by focusing light from the microscope into a glowing point within the sample. Using a technique called two-photon excitation, they could penetrate infrared light deep within the tissue and illuminate a specific point. The wavefront sensor would then determine how the light that returned from this guide star had warped as it passed through the tissue, so that the appropriate correction could be applied.

Go read the whole article to find out more.

An Electric Sock For the Heart

The titles of scientific papers can be a bit intimidating. For example, I’m currently reading “3D multifunctional integumentary membranes for spatiotemporal cardiac measurements and stimulation across the entire epicardium”.

In other words: electric heart socks.

A team of scientists led by John Rogers at the University of Illinois at Urbana-Champaign has created a web of electronics that wraps around a living heart and measures everything from temperature to electrical activity. It’s an ultra-thin and skin-like sheath, which looks like a grid of tiny black squares connected by S-shaped wires. Its embrace is snug and form-fitting, but gentle and elastic. It measures the heart’s beats, without ever impeding them.

Electronic cardiac sock

Its goal is to monitor the heart in unprecedented detail, and to spot the unusual patterns of electrical activity that precede a heart attack. Eventually, it might even be able to intervene by delivering its own electrical bursts.

Cardiac socks have been around since the 1980s but the earliest ones were literal socks—fabric wraps that resembled the shape of the heart, with large electrodes sewn into place. They were crude devices, and the electrodes had a tough time making close and unchanging contact with the heart. After all, this is an organ known for constantly and vigorously moving.

The new socks solve these problems. To make one, graduate students Lizhi Xu and Sarah Gutbrod scan a target heart and print out a three-dimensional model of it. They mould the electronics to the contours of the model, before peeling them off, and applying them to the actual heart. They engineer the sock to be ever so slightly smaller than the real organ, so its fit is snug but never constraining.

This is all part of Rogers’ incredible line of flexible, stretchable electronics. His devices are mostly made of the usual brittle and rigid materials like silicon, but they eschew the right angles and flat planes of traditional electronics for the curves and flexibility of living tissues. I’ve written about his tattoo-like “electronic-skin”, curved cameras inspired by an insect’s eye, and even electronics that dissolve over time.

The heart sock is typical of these devices. The tiny black squares contain a number of different components: sensors that detect temperature, pressure, pH and electrical activity, as well as LEDs. (The LEDs shine onto voltage-sensitive dyes, which emit different colours of light depending on the electrical activity of the heart.) Meanwhile, the flexible, S-shaped wires that connect them allow the grid to stretch and flex without breaking. As the heart expands and contracts, the web does too.

So far, the team have tested their device on isolated rabbit hearts and on a human heart from a deceased organ donor. Since these organs are hooked up to artificial pumps, the team could wilfully change their temperature or pH to see if the sensors could detect the changes. They could. They could sense when the hearts switched from steady beats to uncoordinated quivers.

Rogers thinks that tests in live patients are close. If anything, the doctors he is working with are more eager to push ahead. “We’re scientists of a very conservative mindset. They have patients who are dying,” he says. “They have a great appetite for trying out good stuff.”

The main challenge is to find a way of powering the device independently, and communicating with it wirelessly, so that it can be implanted for a long time. Eventually, Rogers also wants to add components that can stimulate the heart as well as recording from it, and fix aberrant rhythms rather than just divining them.

It’s a “remarkable accomplishment” and a “great advance in materials science”, says Ronald Berger at Johns Hopkins Medicine, although he is less sure that the device will be useful in diagnosing or treating heart disease. “I don’t quite see the clinical application of these sensors. There might be some therapy that is best implemented with careful titration using advanced sensors, but I’m not sure what that therapy is.”

But Berger adds that the sock has great promise as a research tool, and a couple of other scientists I contacted agree. After all, scientists can use the device to do what other technologies cannot: measure and match the heart’s electrical activity and physical changes, over its entire surface and in real time.

For more on John Rogers’ flexible electronics, check out this feature from Discover that I co-wrote with Valerie Ross.

Reference: Xu, Gutbrod, Bonifas, Su, Sulkin, Lu, Chung, Jang, Liu, Lu, Webb, Kim, Laughner, Cheng, Liu, Ameen, Jeong, Kim, Huang, Efimov & Rogers. 2014. 3D multifunctional integumentary membranes for spatiotemporal cardiac measurements and stimulation across the entire epicardium. Nature Communications. http://dx.doi.org/10.1038/ncomms4329

Cyborg Bladders Stop Incontinence In Rats After Spine Damage

Implants that read and decipher our brain activity have allowed people to control computers, robotic limbs or even remote-controlled helicopters, just by thinking about it. These devices are called BMIs, short for brain-machine interfaces.

But our cyborg future isn’t limited to machines that hook up to our brains. At the University of Cambridge, James Fawcett has created a BMI where the B stands for bladder.  The implanted machine senses when a bladder is full, and automatically sends signals that stop the organ from emptying itself.

So far, it works in rats. It will take a lot of work to translate the technique into humans, but it could give bladder control back to people who have lost it through spinal injuries.

As the bladder fills up, its walls start to stretch. Neurons in the bladder wall detect these changes and send signals to the dorsal root, a structure at the back of the spinal cord. If left to themselves, these signals trigger a reflex that empties the bladder. That doesn’t usually happen because of neurons that travel in the opposite direction, descending from the ventral root at the front of the spine into the bladder. These counteract the emptying reflex and allow us to void the bladder when we actually want.

Spinal injuries often rob people of that control, by damaging the ventral neurons. “Take those away and you dribble all over your clothes every half hour,” says Fawcett.

There are two fixes. The first was developed by an eccentric British neuroscientist called Giles Brindley in the 1970s. Brindley is infamous for a lecture in which he demonstrated the effectiveness of a treatment for erectile dysfunction, by dropping his trousers and showing his erect penis to the audience. But his real claim to fame is an implant that stimulates the ventral root directly, allowing people with spinal injuries to urinate on demand.

There’s a catch—it only works if surgeons cut the neurons in the dorsal root so the bladder can’t spontaneously empty itself. This has severe side effects: men can’t get erections, women have dry vaginas, and both sexes end up with weak pelvic muscles.

The second is to paralyse the bladder with botox. Now it can’t contract at all, and people have to empty it by sticking a catheter down their urethra. That’s expensive, difficult, unpleasant, and comes with a high risk of infection.

Fawcett’s team, led by Daniel Chew and Lan Zhu, have developed a better way.

First, they hack into the bladder’s communication lines. Rather than cutting through the dorsal root, they tease out fine strands of neurons called dorsal rootlets, and thread them into tiny sheaths called microchannels. The channels record the signals going from the bladder to the spine, revealing what the organ is up to.

When the bladder is ready to empty itself, the channels detect a big spike in activity. They react by sending signals to a stimulator that’s hooked up to the nerves leading into the bladder’s muscles. The stimulator hits these nerves with a high-frequency electric pulse that stops them from firing naturally. The bladder’s muscles don’t contract, and no unwanted urine is spilled. When the user actually wants to wee, they just push a button and the stimulator delivers a low-frequency pulse instead. Only then does the bladder contract.
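The logic of that loop is simple, even if the engineering isn’t. Here’s a minimal sketch of it in Python — the threshold and pulse frequencies are hypothetical placeholders, not parameters from the study:

```python
# A minimal sketch of the closed-loop control described above.

SPIKE_THRESHOLD = 100  # assumed activity level meaning "bladder about to empty"
BLOCKING_HZ = 1_000    # assumed high-frequency pulse: stops the nerves firing
VOIDING_HZ = 10        # assumed low-frequency pulse: triggers contraction

class Stimulator:
    def pulse(self, hz):
        print(f"stimulating bladder nerves at {hz} Hz")

def control_step(dorsal_activity, button_pressed, stimulator):
    """One pass of the loop: read the rootlet recording, decide what to send."""
    if button_pressed:
        stimulator.pulse(VOIDING_HZ)    # the user has chosen to urinate
    elif dorsal_activity > SPIKE_THRESHOLD:
        stimulator.pulse(BLOCKING_HZ)   # suppress the emptying reflex
    # otherwise do nothing: the bladder keeps filling quietly

stim = Stimulator()
control_step(dorsal_activity=140, button_pressed=False, stimulator=stim)
control_step(dorsal_activity=140, button_pressed=True, stimulator=stim)
```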

This device does everything that a normal bladder does, but uses electronics to stand in for damaged nerves. It works on a closed loop, so users should be able to go about their day to day lives without worrying about incontinence. And it doesn’t sever the dorsal root, so it carries none of the side effects of the Brindley method.

“That would be a major advance,” says Kenneth Gustafson, a biomedical engineer from Case Western Reserve University. “Restoration of bladder control is one of the most important problems of individuals with spinal cord injuries.”

“The quality of the neural recordings that they’re showing with their channel electrodes is really very impressive and convincing,” says Robert Gaunt from the University of Pittsburgh, who has also worked on neural prosthetics for the bladder.

The team have successfully tested their device in rats, and they’re working on scaling it up to humans. “We haven’t actually tried dissecting human dorsal roots into rootlets but the anatomy’s quite similar,” says Fawcett.

“It’s good to see this has come to fruition,” says Clare Fowler from University College London, who studies ways of solving incontinence in people with neurological problems. “There have been a lot of very clever developments to get this working, and they are to be congratulated.” However, she adds that the device is “many years away from translation into human usefulness.”

Gaunt adds that the nerves that control the bladder muscles are near to those that control its sphincter. If you shut down the former with high-frequency pulses, you might risk accidentally shutting down the sphincter too—it would then relax, and the bladder might empty.

But the main problem is longevity. The device needs to be turned into something like a pacemaker, which can be implanted reliably for long periods of time. Currently, that’s impossible because the rootlets can only survive for 18 months in the microchannels before they build up fatal amounts of scar tissue. “That’s not long enough to be useful,” says Fawcett, who is working on ways of extending their lifespan.

Fawcett adds that his work isn’t just about the bladder. His microchannels offer a new way of effectively recording signals from nerves outside the brain—a goal that has historically been very difficult. Tap into the right nerves, and the device could potentially be used to control everything from prosthetic limbs to immune reactions to the digestive system.

Again, that’s a far-off goal. “We’re not sure that outside the dorsal root, we can tease the peripheral nerves into rootlets,” says Fawcett. “They weave around a lot more, so you’d risk damaging them. We’re looking into that currently.”

Reference: Chew, Zhu, Delivopoulos, Minev, Musick, Mosse, Craggs, Donaldson, Lacour, McMahon & Fawcett. 2013. A Microchannel Neuroprosthesis for Bladder Control After Spinal Cord Injury in Rat. Science Translational Medicine 5(210): 210ra155.

Smart Knife Helps Surgeons Cut Cancer

Cancer is one slippery SOB. It can appear mysteriously, then hang out, thrive, and grow for ages without being spotted. When it is detected, cancer is often unpredictable: It can be fatal, it can be harmless. When under attack, it doesn’t easily fall. Radiation, chemotherapy, and other drugs might curb its spread or substantially shrink it, but they rarely wipe it out. Even under the swift and steady surgeon’s knife, bits of cancer manage to escape.

Take breast cancer. Nearly 700,000 women are diagnosed with it every year in the United States and Europe. About half undergo breast conserving surgery, in which surgeons attempt to excise the tumor while preserving as much healthy tissue as possible. The procedure is tougher than it sounds. Surgeons rely on images of the tumor to guide their cuts, but often have trouble determining its precise borders while the patient is on the table. If they’re not sure whether a piece of tissue is cancerous, they can snip it out and send it to a nearby laboratory for analysis. Then they wait, anywhere from 20 minutes to an hour, all while the patient is still under anesthesia, to get the results. It’s no wonder that even the best surgeons can miss part of the tumor; an estimated 20 percent of women who go through the surgery end up repeating it.

Those numbers may improve thanks to a new surgical tool called the iKnife (i for intelligent). As described in today’s issue of Science Translational Medicine, the tool does sophisticated chemical fingerprinting to help surgeons identify — in real time, right there in the operating room — whether tissue is cancerous or healthy.

“With our technology, identification takes a second — actually, 0.7 seconds,” says Zoltán Takáts, an analytical chemist at Imperial College London who invented the new tool. “One can sample thousands of points during a surgical intervention and it still wouldn’t increase the length of the surgery.”

The iKnife in action. Image courtesy of Science Translational Medicine/AAAS.

Takáts’s story begins more than a decade ago, when as a postdoctoral fellow at Purdue University he came up with an important innovation in mass spectrometry. Mass spec, as it’s often called, is a common method for determining the chemical make-up of a substance. Mass spec can measure traces of illicit drugs in an athlete’s blood, for example, or the type of pesticide residue covering an apple, or the amount of caffeine in a cup of coffee. Before going through mass spec, samples have to be converted from atoms or molecules into ions — a process that used to require vacuum chambers and tedious preparations. But in 2004, Takáts’s team published in Science a way to ionize samples by simply putting them under a gas jet.

Takáts was immediately interested in applying the new technique to the identification of biological tissues during surgery. The method turned out not to work for this purpose, but while he was figuring that out, he generated a lot of excitement from the surgical community. “They were chasing us,” he says, laughing. “They were saying, ‘OK, we understand that you can’t use this method, but can’t you come up with something else which would work?'”

The idea for the iKnife came from the realization that there’s no need to create a tool that ionizes tissue — surgeons already have one that’s used all the time. Its technical name is an “electrosurgical device”; a more descriptive one is “flesh vaporizer”. Since 1925, doctors have been using these small electric wands for cauterizing wounds and performing dissections. “Pretty much in every surgical theater all over the planet you can find it being used on an everyday basis,” Takáts says.

The worst part about these devices is the smoke they produce. “This is really a smoke of burnt flesh. It’s as nasty as it sounds,” Takáts says. But that smelly smoke contains ionized tissue that’s perfect for mass spec analysis.

Over the past several years, Takáts and his colleagues have conducted a series of rodent experiments testing the new tool, which is essentially an electrosurgical wand hooked up to a rolling mass spec machine (see top photo). They found that smoke produced from burning one type of tissue has a different mass-spec signature than does smoke coming from another kind of tissue. More importantly, they found that cancerous tissue leaves a different chemical trace than healthy tissue does.

That makes sense. Most of the chemicals sensed with this method are phospholipids, fat molecules that line the membrane of each cell. Cancer cells are constantly dividing into new cells, which means they’re constantly synthesizing phospholipids. “So the membrane lipid composition of tumor cells will be quite different from healthy cells. This is what allows us to differentiate them,” Takáts says.

In the new study, the researchers applied the technology to human tissues for the first time. They first created a database of chemical signatures gleaned from doing mass spec on thousands of stored tumor samples. The researchers then put all of that data through a complicated statistical analysis to find patterns that reliably distinguished one tissue type from another. Finally, they tested whether those algorithms could correctly identify cancerous tissue as it was being removed, right in the operating room. It worked: The iKnife was used in 91 surgeries, and in every single one, the tissue identification it made in the operating room matched the one made by traditional laboratory methods after the surgery.
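The study’s actual statistics are far more elaborate, but the core matching step — compare a fresh spectrum against a database of reference signatures — can be sketched with a toy nearest-centroid classifier. The spectra, labels and values below are made up for illustration; real signatures span thousands of mass-to-charge values:

```python
# Toy nearest-centroid matching of a new spectrum against references.
import math

reference_spectra = {
    "healthy": [[0.9, 0.1, 0.3], [0.8, 0.2, 0.4]],
    "tumour":  [[0.2, 0.9, 0.7], [0.3, 0.8, 0.6]],
}

def centroid(spectra):
    """Average the reference spectra for one tissue type."""
    return [sum(vals) / len(vals) for vals in zip(*spectra)]

centroids = {label: centroid(s) for label, s in reference_spectra.items()}

def classify(spectrum):
    """Label a new spectrum by whichever centroid it sits closest to."""
    return min(centroids, key=lambda label: math.dist(spectrum, centroids[label]))

print(classify([0.25, 0.85, 0.65]))  # -> tumour
```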

The paper is “a tour de force,” says Nick Winograd, a professor of chemistry at Penn State who was not involved in the research. Winograd is an expert in using mass spec to identify biological samples.

Over the years there have been many attempts to get this technology into a clinical setting, Winograd notes. “But so far there hasn’t really been anything that you can really raise your flag about.” Part of the problem, he says, is that these chemical signatures are only subtly different from one another. They’re all made of the same molecules, just in slightly different combinations. “So you really have to be clever in your data analysis if you’re going to find [patterns] that differ in a systematic way from tissue type to tissue type,” he says.

The iKnife is just one example of a bigger trend of metabolic profiling in medical science. Just as geneticists have done oodles of association studies pairing specific genetic variants to this or that disease, researchers hope to find links between chemical signatures and disease. Proponents say that metabolic studies will give even more information than their genomic counterparts, because they are influenced by both genetic and environmental factors. The foods, medications, and chemical exposures we take in every day don’t change our DNA code, but they do leave a chemical imprint in our tissues. “Understanding that interface of genes and environment is absolutely critical for the future,” says Jeremy Nicholson, a chemist who leads the department of surgery and cancer at Imperial, where the iKnife work took place.

Just last month, Imperial launched a multi-million dollar “Phenome Centre” aimed at conducting chemical studies at many levels — on the scale of the individual patient, like the iKnife work, but also at a population scale, comparing chemicals from the blood, urine, and even microbial communities of groups of people over time.

The Centre has 19 high-tech spectroscopy machines, giving it the capacity to perform more than a million discrete assays a year, Nicholson says. The machines are hand-me-downs from the 2012 Olympic Games, held in London. The U.K. spent around $30 million for equipment to screen the thousands of Olympians for illicit drug use, Nicholson says. “They only found 12 people who were cheating.”

Genomes for the Curious

So jealous: Science journalist Eliza Strickland not only had her genome sequenced but she got to write a long feature about the experience. Her story just came out in IEEE Spectrum and I’d highly recommend giving it a read. It includes a mini-profile of Jonathan Rothberg, the CEO of DNA sequencing company Ion Torrent and one of the biggest names in genetics. But Strickland’s personal story is what really drives the narrative. (Spoiler alert: This post will mention highlights of her story, so go read it first if you want to be surprised.)

Strickland went to the commercial sequencing lab of Baylor College of Medicine in Houston. Doctors from all over the country have sent samples of their patients’ blood to Baylor for exome sequencing. (The exome includes all of the sections of the genome that code for proteins. It’s only a fraction of the whole genome, but it’s the only part that scientists know how to interpret, for now.) The test costs $7,000 and requires a doctor’s referral.

Strickland writes that she was Baylor’s first “merely curious” patient, meaning that she didn’t have anything wrong with her. She just wanted to look at her potential risks. Technically, she told me, she was referred by a doctor, Baylor’s own Jim Lupski, an MD/PhD. So does that mean Baylor is taking orders from the merely curious? Yep, pretty much. “If you had a family doctor who was on board with a curiosity-driven exome scan, you could get it done. I don’t believe Baylor would raise any objections,” she says.

In her story, Strickland describes how the whole process worked, including important conversations she had with her family before and after the test. Lupski and his team interpreted her results, creating a six-page report of all of the variants in her genome that might be medically relevant. (Baylor told her from the get-go that they wouldn’t be interpreting non-medical variants, such as hair or eye color.) Strickland carries several potentially scary genomic blips, such as those linked to Parkinson’s and kidney failure, but nothing that requires immediate action.

At first, she was disappointed that the findings weren’t more informative. But then she found something interesting about Usher syndrome, a recessive disease that she’s a carrier for:

Then comes a surprise that casts doubt on my first judgment and forces me to see exome sequencing in a new light. In the weeks following my meeting at Baylor I idly Google the various conditions listed in the report the doctors gave me. One afternoon I type in “Usher syndrome” and follow a link to a National Institutes of Health Web page about the disorder. A few sentences in, I feel a shock of recognition. The syndrome, I read, is associated not just with deafness but also with night blindness and severe balance problems. My mother has been completely unable to see in the dark for as long as she can remember, and both she and her older brother have gotten dangerously wobbly on their feet over the past decade.

There are many bioethicists and researchers who would argue that this information isn’t worth knowing, either, because there is no treatment for Usher syndrome. I totally disagree with this stance (as I’ve argued elsewhere), and I think Strickland’s story shows why even ambiguous or “non-actionable” information can be powerful, assuming you want to hear about it. Strickland’s mother won’t benefit from a treatment, but she now has an explanation for otherwise mysterious symptoms. And if Strickland wanted to have children someday (I’m not sure if she does), her husband could be screened to see if he’s also an Usher carrier.

But most interesting to me is the power of genome scans to help everyday people better understand what their genes do and do not say about their future. As the costs come down, and more and more curious people have experiences like Strickland’s, I think (and hope) that our culture will gradually stop being afraid of the big bad genome. It’s a potentially useful medical tool, like a cholesterol test or CAT scan, that may or may not lead to medical insights. But you can’t find out what it means unless you look.

Shakespeare’s Sonnets and MLK’s Speech Stored in DNA Speck

When Nick Goldman first opened the package, he couldn’t quite believe that it contained anything at all, much less all of Shakespeare’s sonnets. The parcel had come from a facility in the US and arrived at the European Bioinformatics Institute in the UK, in March 2012. It contained a series of small plastic vials, at the bottom of which was… apparently nothing. It was Goldman’s colleague Ewan Birney who showed him the tiny dust-like specks that he had missed.

These specks were DNA, and they contained:

  • All of the Bard’s 154 sonnets
  • A 26-second clip of Martin Luther King’s legendary “I have a dream” speech
  • A PDF of James Watson and Francis Crick’s classic paper detailing the structure of DNA
  • A JPEG photo of Goldman and Birney’s institute
  • A code that converted all of that into DNA in the first place

The team sent the vials off to a facility in Germany, where colleagues dissolved the DNA in water, sequenced it, and reconstructed all the files with 100 percent accuracy. It vindicated the team’s efforts to encode digital information into DNA using a new technique—one that could be easily scaled up to global levels. And it showed the potential of the famous double-helix as a way of storing our growing morass of data.

In cold, dark facilities like Svalbard's Global Seed Vault (which is unstaffed), DNA files could last for tens of thousands of years. Credit: Svalbard Global Seed Vault/Mari Tefre

A better format

DNA has several big advantages over traditional storage media like CDs, tapes or hard disks. For a start, it takes up far less space. Goldman’s files came to 757 kilobytes and he could barely see them. For a more dramatic comparison, CERN, Europe’s big particle physics laboratory, currently stores around 90 petabytes of data (a petabyte is a million gigabytes) on around 100 tape drives. Goldman’s method could fit that into 41 grams of DNA. That’s a cupful.

DNA is also incredibly durable. As long as it is kept in cold, dry and dark conditions, it can last for tens of thousands of years with minimal care. “The experiment was done 60,000 years ago when a mammoth died and lay there in the ice,” says Goldman. Readable DNA fragments have been recovered from such mammoths, as well as a slew of other prehistoric creatures. “And those weren’t even carefully prepared samples. If you did that under controlled circumstances, you should be good for more than 60,000 years.”

(For those of you wondering if the information would mutate, it can’t. It’s not inside a living thing, and not being copied. It’s just the isolated non-living molecule.)

And using DNA would finally divorce the thing that stores information from the things that read it. Time and again, our storage formats become obsolete because we stop making the machines that read them—think about video tapes, cassettes, or floppy disks. That’s a faff—it means that archivists have to constantly replace all their equipment, and laboriously rewrite their documents in the new format du jour, all at great expense. But we will always want to read DNA. It’s the molecule of life. Biologists will always study it. The sequencers may change, but as Goldman says, “You can stick it in a cave in Norway, leave it there in a thousand years, and we’ll still be able to read that.”

Credit: Goldman et al., Nature

The code

DNA has a proven track record for storing information. It already stores all the instructions necessary to build one of you, or a giraffe, or an oak tree, or a beetle (oh so many beetles). To exploit it, all you need to do is to convert the binary 1s and 0s that we currently use into the As, Gs, Cs and Ts of DNA.

A Harvard scientist called George Church did exactly that last year. He used a simple cipher, where A and C represented 0, and G and T represented 1. In this way, he encoded his new book, some image files, and a Javascript programme, amounting to 5.2 million bits of information.
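Here’s a sketch of that cipher in Python. The alternation within each base pair is my own illustrative choice, and Church’s published scheme had more machinery (addressing, error handling) than this core bit-to-base idea:

```python
# A or C encodes 0, G or T encodes 1; alternate within each pair
# so the output mixes all four bases.

def bits_to_dna(bits):
    pairs = {"0": "AC", "1": "GT"}
    return "".join(pairs[b][i % 2] for i, b in enumerate(bits))

def dna_to_bits(dna):
    return "".join("0" if base in "AC" else "1" for base in dna)

message = "0100100001101001"  # the ASCII bits of "Hi"
strand = bits_to_dna(message)
print(strand)
assert dna_to_bits(strand) == message  # round-trips exactly
```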

Goldman and Birney have encoded the same amount, but with a more complex scheme. In their system, every byte—a string of 8 ones or zeroes—is converted into five DNA letters. These strings are designed so that there are never any adjacent repeats. This makes it easier for sequencing machines to read and explains why they had a far lower error rate (that is, none) compared to Church’s method.

Using their cipher, they converted every stream of data into a set of DNA strings. Each one is exactly 117 letters long and contains indexing information to show where it belongs in the overall code. The strings also overlap, so that every bit is covered by four separate strings. Again, this reduces error. Any mistake would have to happen on four separate strings, which is very unlikely.
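The repeat-avoiding step can be sketched too: each base-3 digit picks the next base from the three bases that differ from the previous one, so the same letter never appears twice in a row. Goldman’s full scheme also converts the bytes to base-3 first and slices the result into those 117-letter indexed, overlapping strings; the digits below are made up for illustration.

```python
# Encoding base-3 digits so that no base ever repeats.

BASES = "ACGT"

def trits_to_dna(trits, prev="A"):
    strand = []
    for t in trits:  # each t is 0, 1 or 2
        choices = [b for b in BASES if b != prev]  # three options, never a repeat
        prev = choices[t]
        strand.append(prev)
    return "".join(strand)

def dna_to_trits(dna, prev="A"):
    trits = []
    for base in dna:
        choices = [b for b in BASES if b != prev]
        trits.append(choices.index(base))
        prev = base
    return trits

digits = [0, 2, 1, 1, 0, 2]
strand = trits_to_dna(digits)
print(strand)  # CTCGAT -- no two adjacent bases are identical
assert dna_to_trits(strand) == digits
```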

Accuracy aside, Goldman’s coding system has a more fanciful advantage—it should be apocalypse-proof. Imagine that there’s a calamity that wrecks human civilisation, creating a huge discontinuity in our technology. The survivors rebuild and eventually relearn what DNA is and how to decode it. Maybe they find some of these stores, locked away in a vault. “They’d quickly notice that this isn’t DNA like anything they’ve seen,” says Goldman. “There are no repeats. Everything’s the same length. It’s obviously not something from a bacterium or a human. Maybe it’s worth investigating. Of course you’d need to send some sort of Rosetta stone to tell people how to decode the message…”

"Well, isn't it lucky we stored our cat photos as DNA before all this happened?" (Scene from The Road, 2929 Productions)

Scaling up

Goldman calculated that this method could be feasibly scaled up to cover all of the world’s data (which currently stands at around 3 zettabytes—3 million million gigabytes). For now, the big problems are cost and speed. It’s still expensive to read DNA, and really expensive to write it. The team estimate that you would pay $12,400 to encode every megabyte of data, and $220 to read it back, based on current costs. But those costs are falling exponentially, far faster than those of other electronics.

If you use DNA, you face a steep one-time cost of writing the data. If you use other technologies, you face the recurring costs of having to re-write the data into whatever new format has arrived. It’s the ratio between these two prices that drives the economics of DNA storage.

At the moment, DNA only becomes cost-effective if you want to store things for 600 to 5,000 years—that’s the threshold where the one-time cost outweighs all the constant re-writing. But if the price of writing DNA falls by 100 times in the next decade, as it assuredly will, then DNA becomes a cost-effective option for storing anything beyond 50 years. “Maybe you’d store your wedding videos,” says Goldman.

DNA technology is also getting faster, but for now, it only makes sense to use it for data that you want to keep for a very long time but aren’t going to access very often.

CERN’s a good example. By 2015, the Large Hadron Collider will be collecting around 50 to 60 petabytes every year—that’s a lot of tape! They also have to migrate their entire archives to new media every four to five years, to save space and avoid the cost of maintaining old equipment. And although people rarely use old data, it has to be kept for at least 20 years, and probably even longer. DNA could be a perfect means of storing these archives (although CERN’s senior computer scientist German Cancio tells me that it will still have to be read and verified every 2 years).

Reference: Goldman, Bertone, Chen, Dessimoz, LeProust, Sipos & Birney. 2013. Towards practical, high-capacity, low-maintenance information storage in synthesized DNA. Nature. http://dx.doi.org/10.1038/nature11875


Tick Tock

Ginny and her niece

I’d like to be a mother—someday. Now is not a good time. I’m 28 years old, unmarried, and trying to build a freelance writing business from a small New York apartment.

I grew up in the wake of the feminist movement, and boy am I glad about that. Gender inequalities still exist, of course (ahem). But since grade school, my parents, teachers and favorite after-school-TV-show characters have encouraged me to invest in my education and career, just like any ambitious man. And I have.

Alas, biology still holds a trump card: my closing fertility window. By the time I’m 38, my bank account may be pregnant, but my eggs will be fossils. In last week’s issue of New Scientist, I wrote about a far-out experimental solution: freezing pieces of my ovary. The premise of the story was that if this technology ever gets off the ground, it could fulfill the original promise of the birth control pill, allowing women to make career decisions without the pressure of a ticking clock.

And it’s such a satisfying premise, isn’t it, especially for science-loving feminists like me. But after five months of airing it, triumphantly, to everyone I know, and thinking about their responses, my enthusiasm has waned. The cultural limits on the age of motherhood, I’m afraid, are far stronger than the biological ones.

Correcting Hollywood Science: The Microexpressions of Mike Daisey Edition

This past weekend I spent too many hours on Netflix watching Lie to Me, the Fox television drama that ran from 2009 to 2011. It’s a crime procedural (my favorite genre) about Dr. Cal Lightman, a psychologist who can spot liars by analyzing their body language and super-fast facial tics, called microexpressions.

On the show, Lightman’s obsession with faces stems from a decades-old film of his mother recorded by her therapist. She had been institutionalized for depression, but on the film, she tells the therapist how good she feels after treatment, and how she longs to see her children. The therapist is convinced, allows her to go home, and she promptly commits suicide. After years of analyzing the footage, Lightman discovers that his mother’s face had shown flashes of agony while she lied about her happiness. He goes on to create a system for coding subtle facial expressions and launches a consulting firm, The Lightman Group, that helps police (and all sorts of other clients) detect when individuals are lying, and why.

It’s one of those shows that sticks with you, or with me, anyway. For the past few days I’ve been surreptitiously scrutinizing the faces of everyone I see—people exchanging small talk at a birthday party, people telling outrageous true stories on stage, my longtime friends, even my fiancé. Could I discover their hidden feelings just by paying closer attention? It’s tricky, of course, when you don’t know if someone is lying. But what about when you do know, like in the sad case of Mike Daisey?

Yesterday I hatched a plan: Learn the basics of the real science behind Lie to Me, then watch a bunch of old Daisey clips on YouTube and root out the signs of his deception.

Sperm Waves

Some 40 years ago, researchers at the University of Missouri were searching for an alternative to the condom — a cheap, trustworthy and reversible form of male birth control.

For their first study, published in 1975, they strapped anesthetized rats, face-down, to a plexiglass platform with a cut-out cup full of water for their dangling scrota. The scientists then exposed the animals’ testicles to a variety of things.

Heat, for example, can kill sperm (which is thought to explain why the testes hang outside of the body). So some of the animals got a 140-degree Fahrenheit water bath for 15 minutes. Others received a dose of infrared radiation, or short blasts of microwaves or ultrasound. After treatment, the animals had constant access to females until they impregnated them.

Dry Spells

In the spring of the year 73, thousands of Roman soldiers raided Masada, a fortress on top of a cliff in the Judean Desert. For seven years, the Jews had tried, unsuccessfully, to split from the Roman empire, and Masada was the last holdout. According to the ancient historian Josephus, when the Romans breached Masada’s walls, they found 960 dead bodies of Jewish extremists, called Sicarii, who had killed themselves to avoid the inevitable enslavement. Because of Masada’s remote location and harsh, dry climate, nothing much happened to the site for the next 2,000 years, until archaeologists started digging it up in 1963. They found attack ramps and siege towers (some of the best examples we have, apparently, of Roman war technologies), palaces, cisterns, swimming pools, 27 human skeletons and, deep under the rubble, a handful of seeds.


Breaking Through

This past summer, I spent two weeks sitting, working and, once, sleeping next to a hospital bed, trying and failing to communicate with my father.

He had called for an ambulance on the evening of July 25 because he couldn’t breathe. With end-stage emphysema, he often couldn’t breathe, but apparently that night he was frightened enough to call for help. At the hospital, the doctors intubated him and doused him with the sedatives one needs to withstand a hard plastic tube down the throat. My sister and I never knew if he had agreed to the intubation, or if he was too weak or panicked to voice a clear opinion. Over the next few days in the ICU, although still heavily sedated, he sometimes acted in ways that seemed deliberate: he would open his eyes wide, or furrow his brow, or nod to a question or squeeze my hand. But I was never really sure. I wasn’t sure if he would have wanted us to agree to the tracheostomy procedure, on August 2, or remove the ventilator, on August 9.

What if I could have been more sure?

I couldn’t help but think about that a couple of weeks ago while having coffee with Jon Bardin at the Society for Neuroscience meeting in Washington, D.C. A few years back, Jon left the science magazine where we both worked to pursue a PhD in neuroscience. He joined the lab of Nicholas Schiff, an expert on the neural basis of consciousness, and began studying the brain activity of people with severe brain injury. And now at the conference, Jon told me, he would be presenting a poster of unpublished data suggesting that brain waves can reveal whether a somewhat conscious person is tuning in when other people speak.