
Type Me a Tower: Assembling Real Structures With Only a Keyboard

This was the dream, right? To go from bits, from 101001110011110000, to actual atoms, to sit at your computer and—just by typing—manipulate things in the real world. Like, for example, putting blocks on top of blocks, building bridges, toppling towers, just by tapping commands on a keyboard.

Eleven years ago, science fiction writer Bruce Sterling imagined how, from a distance, we might stack, assemble, and disassemble not just tinker toys but also “big hefty skull-crackingly solid things that you can pick up and throw.” Making digital information physical, he said, is “the world that needs conquering.”

Well, sound the trumpets. Or maybe the piccolos. The conquering has begun. And in the most charming way.

Dancing Floors

This month engineers at the Tangible Media Group at MIT unveiled a new way to stack blocks. It’s so cool to see. It’s like you’re sitting in your living room and suddenly the floor magically starts pirouetting up and down, gently moving the furniture, stacking the furniture, toppling the furniture—and with such grace! If Fred Astaire were to come back as a floor with Ginger Rogers as a block, they’d look like this:

Kinetic Blocks from Tangible Media Group on Vimeo.

Five engineers did this. They are led by the MIT Media Lab’s Professor Hiroshi Ishii, who wanted “to give kinetic ability to otherwise inanimate objects.” They did it by pushing pins. The pins, in turn, were driven by “tangible user interfaces,” software programs that created all those thrusts, leaps, and—my favorite—the “shadowing” exercise, where a hand moves cubes in one place and the movement is mirrored at a distance. Pretty elegant engineering.

Does this mean that one day somebody far off (maybe on a separate planet) will be able to “build” an identical structure remotely? Or move it? Or take it apart? Or, instead of a floor or carpets or cushions, maybe one day even air can be pushed and pulled to rearrange a distant object? I don’t know where all this leads, but clearly playing with blocks is not what it used to be.

Kinematic? What’s Kinematic?

At the end of the video, having tried ordinary blocks and magnetic blocks, the team switches to what are called “kinematic” blocks. I’d never heard of those. I looked them up.

They aren’t the future. They’re already here, little modules that attach, twist, and wriggle—no cables necessary. They’re suitable for five-year-olds and totally delightful. Adding them to the mix, the team says, creates new “degrees of freedom” for potential users. With pins pushing below and levers moving within, building blocks will soon move like animals.

You don’t see that in the MIT video; their kinematic blocks stay mostly quiet and mysterious, but in German kindergartens, you can see what future blocks might do. Somewhere in space R2D2 is weeping. Take a look:


This isn’t the first time the Tangible Media Group has worked with pins. They have an amazing video that shows how you can sit in one location and use the pin interface to move distant objects: your real hands get digitally turned into ‘ghost’ hands, and you can move things in a room you are nowhere near! I think you ought to take a peek … here.

When my producer, Becky, read this post, she told me about a movie, “Big Hero 6,” where the build-anything-anytime-anywhere notion becomes a glorious movie fantasy. A boy named Hiro gets to walk on air, because he “imagines steps,” and once imagined, they spring into being, right under his feet. This isn’t bits to atoms. This is neurons to atoms … and even the Media Lab isn’t doing that.


My Manic-Depressive Cereal Spoon Just Lost Consciousness

I’ll get to the spoon in a minute. But first I’d like to mention zippers. Because the guy who made the spoon once had a problem with zippers. He thought he could make a better zipper. Here’s what he came up with:

Illustration of a man walking out of a door with his fly zipper down, and then a beep going off reminding him to zip it up
Illustration by Dominic Wilcox

OK, the advantage gained may be awkwardly small (or just awkward), but that’s Dominic Wilcox. He’s part artist, part satirist, part engineer, part maniac. He likes to make things better, though “better” to him may feel suspiciously un-better to you.

Illustration of an engagement ring flanked by two other rings holding signs that point to the engagement ring, to draw attention to it
Illustration by Dominic Wilcox

Still, his ideas keep coming. I’ve got two favorites. The first is his GPS Shoe, a gorgeous pair of real soft leather shoes with teeny LED lights embedded in the leatherwork. Dominic says he “thought about The Wizard of Oz and how Dorothy could click her shoes together to go home,” and in this video he shows us a pair of self-directing shoes that will take someone “home” (or anywhere else they might want to go). He went to a Northamptonshire shoemaker, then to a computer-savvy engineer, and together they came up with a pair that, like Dorothy’s ruby slippers, lets you click your heels; the click links your shoes to GPS. If you’ve told your computer the street address of where you’re going, all you have to do is walk outside and look down.

Video still showing a pair of shoes with red, blinking lights that tell directions embedded in the toes
Video Still from “No Place Like Home,” by Dominic Wilcox

Your left shoe points (with a teeny winking light) in the proper direction; your right shoe indicates how far you have to go. In the video Dominic’s shoes take him across Northamptonshire straight to the gallery where they will be displayed. He comes to several corners and park paths that fork, and his shoes make all the choices. He just walks. He calls this project “No Place Like Home.”

Video still of a man at a fork in the road, looking down at his shoes for direction
Video Still from “No Place Like Home,” by Dominic Wilcox

Dominic Wilcox works mostly in London, takes commissions, and hires his designing brain out to big companies for what I imagine are big bucks. There is something deeply radical about this man. I can’t put my finger on it, but his inventions are in no sense tame. When a company hires him, he delivers their message, but he does it with such crazy power, such force, that instead of letting you giggle and move on, it makes you wonder: Are they mad? What were they thinking? The messages hit, but a little too hard. Which is his secret power. Dominic is so good, he’s subversive.

OK, now we’re ready for the spoon.

Dominic made it for the Kellogg’s cereal company. It’s a spoon with two googly eyes. Cute to look at and designed to be adorable, it’s a breakfast spoon for eating cornflakes or Rice Krispies. He calls it the Get Enough Robot Spoon.

But here’s the thing about this spoon. He’s given it moods. It starts sleepy, with its eyes closed. When you put it to use, when you scoop it into a bowl, it seems to awaken, drawing power from repeated scooping. The more you eat, the more awake it gets, to the point that—at the height of breakfast—its eyes start to roll in its head. It seems to be on a crazy cereal high, driven wild by consumption. But once you put it down—or should you choose to carry it with you all day long (yup, dedicated cereal eaters must always be prepared; see the video below)—the spoon grows quiet from disuse, falls eventually into a haze, then a heavy-lidded quiet, and then into something that feels like a depressive sleep. I may be reading too much into this, but take a look. See what you think.

Do you get the feeling that if you stop eating cereal, you may be killing your pet spoon? I’m just asking.

The double-edgedness of his work doesn’t seem to hurt. Companies love him. He keeps getting commissions, keeps getting attention, and keeps producing new, startling experiments. He’s come up with a way to switch how his ears work, so the left one hears what the right one should hear, and the right one the left. He’s imagined a hotel elevator like no elevator in the world; he’s created the world’s first upside-down bungee jump, where instead of leaping off a cliff attached to a cord, the cliff … wait, I don’t want to tell you. I want to show you. His blog is where you can find most of his inventions, but probably the most pleasing way to discover Dominic is to walk straight into this short, beautiful video from Liam Saint-Pierre. But I’d avoid the square peas.

If you want more (and I’m thinking you do), there’s a Dominic Wilcox book, chock-full of drawings and imaginings, called Variations on Normal and that’s where you can find another part of him: his gift for getting even. While gentle in appearance, Dominic has a little Chuck Norris or Arnold Schwarzenegger in him; it comes out when he’s punishing people in his mind. Check out how he’d solve the guy-who-doesn’t-shut-off-his-cell-phone problem, and how he’d punish a litterer. He can be clever. Even fiendish.


Injecting Electronics Into Brain Not as Freaky as it Sounds

No need to wait for the cyborg future—it’s already here. Adding to a growing list of electronics that can be implanted in the body, scientists are working to perfect the ultimate merger of mind and machine: devices fused directly to the brain.

A new type of flexible electronics can be injected through a syringe to unfurl and implant directly into the brains of mice, shows a study published Monday in Nature Nanotechnology. Researchers injected a fine electronic mesh and were able to monitor brain activity in the mice.

“You’re blurring the living and the nonliving,” says Charles Lieber, a nanoscientist at Harvard and co-author of the study. One day, he says, electronics might not only monitor brain activity but also deliver therapeutic treatments for Parkinson’s disease, or even act as a bridge over damaged areas of the brain. Deep brain stimulation is already used for Parkinson’s, but uses relatively large probes, which can cause formation of scar tissue around the probe.

The tiny size (just a couple of millimeters unfurled) of the new devices allows them to be placed precisely in the brain while minimizing damage, a separate team of Korean researchers notes in an accompanying article. Ultimately, the goal is to interweave the electronics so finely with brain cells that communication between the two becomes seamless.

And that’s just the latest in the merging of electronics into the human body. While Lieber envisions using the implants in science and medicine—for example, to monitor brain activity and improve deep-brain stimulation treatment for Parkinson’s disease—others are already using non-medical electronic implants to become the first generation of cyborgs. These do-it-yourselfers call themselves biohackers, and they aren’t waiting for clinical trials or FDA approval to launch the cybernetic future.

At the website Dangerous Things, you can buy a kit—complete with syringe, surgical gloves and Band-Aid—to inject a small electronic device into your own body. The kits use a radio-frequency ID tag, or RFID, similar to the chips implanted to identify lost dogs and cats. These can be scanned to communicate with other devices. The site warns that implanting the chips should be done with medical supervision and “is strictly at your own risk.”

An X-ray image of Amal Graafstra’s hands shows the two electronic tags he had implanted. Image: Dangerous Things

The website’s charismatic founder, Amal Graafstra, has RFID implants in each hand, and can use them to unlock doors and phones, log into computers, and start his car by waving a hand.

“One of the holy grails of biohacking is the brain-computer interface,” Graafstra says. He likens brain-wiring efforts so far to eavesdropping on neural activity with a glass to our ears and then shouting back with a bullhorn; electronics simply overwhelm the subtle communication between brain cells. “The ultimate goal, I think, would be a synthetic synapse,” he says, in which nanomaterials would function much like living brain cells, allowing far more nuanced communication between mind and machine.

An article in the Telegraph in October 2014 sums up today’s state of the art in brain-hacking:

“Quietly, almost without anyone really noticing, we have entered the age of the cyborg, or cybernetic organism: a living thing both natural and artificial. Artificial retinas and cochlear implants (which connect directly to the brain through the auditory nerve system) restore sight to the blind and hearing to the deaf. Deep-brain implants, known as “brain pacemakers,” alleviate the symptoms of 30,000 Parkinson’s sufferers worldwide. The Wellcome Trust is now trialling a silicon chip that sits directly on the brains of Alzheimer’s patients, stimulating them and warning of dangerous episodes.”

The goal of a complete merger of biology and technology is exciting to champions of transhumanism, which aims to enhance human intelligence, abilities, and longevity through technology.

But not everyone is thrilled about a future filled with genetic engineering, artificial intelligence, and cyborg technology. Implanting electronics in the brain, more so than in the hands or even the eye, goes directly to one of the biggest fears about cyborgs: a threat to free will. Could someone hijack an implant to control its user’s thoughts or actions? Or read their minds?

That’s unrealistic, at least with current technology. The kinds of electronics that Lieber and others are working on have inherently limited use—such as delivering a small electric pulse to a particular spot—and would be useful only to people with a serious medical condition.

“Some people think we’re going to implant a microprocessor in people’s heads,” Lieber says, “but that has to interface to something.” And a tiny electronic device attached to one part of the brain simply cannot take over a person’s thoughts. “There’s always going to be someone interested in doing something bad,” he adds, so it’s important to monitor the technology as it becomes more sophisticated.

Graafstra says biohacking has “some maturing to do,” and studies like Lieber’s are a good step in bringing scientific rigor to what has at times been a Wild West.

“I think the biohacker understands that we are our brains,” he says. “You are your mind, and the body is the life support system for the mind. And like an SUV, it’s upgradeable now.”



New Microscope Puts the Life Back in Biology (with Videos!)

Life moves.

Or more precisely, as neuroscientist Eric Betzig and his colleagues put it in today’s issue of Science: “Every living thing is a complex thermodynamic pocket of reduced entropy through which matter and energy flow continuously.”

Betzig’s name may sound familiar. Two weeks ago he won the 2014 Nobel Prize in Chemistry for developing fancy microscopes. In today’s Science paper he shows off the latest tech, dubbed ‘lattice light-sheet microscopy’, which captures not only the physical structure of a biological sample but also the way it changes in space over time.



Adaptive Colour-Changing Sheet Inspired By Octopus Skin

The most amazing skins in the world can be found in the sea, stretched over the bodies of octopuses, squid and cuttlefish. These animals, collectively known as cephalopods, can change the colour, shape and texture of their skin at a whim—just watch the ‘rock’ in the video above suddenly reveal its true nature. Their camouflage is also adaptive. Unlike, say, a stick insect or stonefish, which are limited to one disguise, an octopus’s shifting skin allows it to mimic a multitude of backgrounds. It sees, it becomes.

No man-made technology comes close. But one, at least, is nudging in the right direction.

A team of scientists led by Cunjiang Yu at the University of Houston and John Rogers at the University of Illinois at Urbana–Champaign have developed a flexible pixellated sheet that can detect light falling upon it and change its pattern to match. So far, its large pixels can change from black to white and back again. It’s a far cry from an octopus’s skin, but it does share some of the same qualities. For example, it changes colour automatically and relatively quickly—not cephalopod-quick, but within a second or so.

“This is by no means a deployable camouflage system but it’s a pretty good starting point,” says Rogers. His team are working towards adaptive sheets that can wrap around solid objects and alter their appearance. These could be used to make military vehicles that automatically camouflage themselves, or clothes that change colour depending on lighting conditions.

Cephalopod skins have three layers. The top one consists of cells called chromatophores, which are sacs of coloured pigment, controlled by a ring of muscles. If the sac expands, it produces a pixel of colour; if it contracts, the pixel hides. These cells are responsible for hues like red, orange, yellow and black. The middle layer contains iridophores, cells that reflect the colours of the animal’s environment—they’re responsible for cooler colours like blues and greens. The bottom layer consists of leucophores, passive cells that diffuse white light in all directions, and act as a backdrop for the other colours.

The skin also contains light-sensitive molecules called opsins, much like those found in your retina. It’s still unclear what these do, but a reasonable guess is that they help cephalopods to “see” with their skin, and adapt their patterns very quickly without needing instructions from their brains.

The team drew inspiration from these skins when designing their own material. It consists of a 16 by 16 grid of squares, each of which is built from several layers.

  • The top one contains a heat-sensitive dye that reversibly changes colour from black at room temperature to colourless at 47 degrees Celsius, and back again. This is the equivalent of an octopus’s chromatophores.
  • The next layer is a thin piece of silver, which creates a bright white background, like the leucophores.
  • Below that, there’s a diode that heats the overlying dye and controls its colour. This is the equivalent of the muscles that control the chromatophores.
  • Finally, there’s a layer with a light-detector in one corner, a bit like a cephalopod’s skin opsins. All the topmost layers—the dye and the silver—have little notches missing from their corners so that the light-detector always gets an unimpeded view of its surroundings.
  • And the whole thing sits on a flexible base so it can bend and flex without breaking.

So, the light-detectors sense any incoming light, and tell the diodes in the illuminated panels to heat up. This turns the overlying dye from black to transparent. These pixels now reflect light from their silver layer, making them look white. You can see this happening in the videos below. Here, different patches of light are shining onto the material from below, and it’s responding very quickly.
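For the curious, here is a minimal sketch of that sense-and-respond loop in Python. It is a reconstruction from the description above, not the team’s actual control code: the 16-by-16 grid comes from the paper, but the threshold, names, and data format are my assumptions.

```python
GRID = 16          # the device is a 16-by-16 grid of squares
THRESHOLD = 0.5    # hypothetical normalised light level that triggers heating

def update_sheet(light_levels):
    """Map a GRID x GRID array of detector readings (0.0 to 1.0) to colours:
    a lit cell 'heats' its dye past the transition point and turns white."""
    return [['white' if level > THRESHOLD else 'black' for level in row]
            for row in light_levels]

# Example: shine a bright patch on the top-left corner of the sheet.
readings = [[1.0 if (r < 4 and c < 4) else 0.0 for c in range(GRID)]
            for r in range(GRID)]
pattern = update_sheet(readings)
print(pattern[0][0], pattern[15][15])   # -> white black
```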

“There are analogies between layers of our system and those in the cephalopod skin, but all the actual function is achieved in radically different ways,” says Rogers. “The multi-layer architecture works really well, though. Evolution reached the same conclusion.”

“The most exciting thing about this is that it’s all automatic, without any external user input,” he adds.

There are obvious military applications for the device and the work was funded by the Office of Naval Research. But Rogers notes that the sheets are designed to sense and adapt—they don’t necessarily have to blend in. “There are a lot of applications in fashion and interior design,” he says. “You could apply these flexible sheets to any surface and create something that’s visually responsive to ambient lighting conditions. But our goal is not to make adaptable wallpaper; it’s on the fundamentals.”

Obviously, the material will have to be improved. Since it relies on heat to change colour, it’s relatively slow, consumes a lot of power, and only works in a narrow range of temperatures. But the team used a heat-sensitive dye because it was easy; it gave them time to focus on the rest of the system.

Now that this framework is in place, they think they could improve it very easily. Rather than heating diodes, they could use components that use changing electric fields. They could replace the dyes with other substances that offer a full range of colours, beyond just black and white. And they should be able to scale the sheet up easily—Rogers, after all, has a lot of experience in building flexible electronics using commonly used substances like silicon, rather than fancy (and expensive) new materials.

But he doubts he’ll ever make something that truly matches a cephalopod’s skin. “As an engineer looking at movies of squid, octopuses, and cuttlefish, you just realise that you’re not going to get close to that level of sophistication,” he says. “We tried to abstract the same principles and do the best we can with what we’ve got.”

Does their artificial skin have any advantages over what an octopus or squid can do?

“Well, it works on dry land!” says Rogers.

Reference: Yu, Li, Zhang, Huang, Malyrchuk, Wang, Shi, Gao, Su, Zhang, Xu, Hanlon, Huang & Rogers. 2014. Adaptive optoelectronic camouflage systems with designs inspired by cephalopod skins. PNAS http://dx.doi.org/10.1073/pnas.1410494111


A Swarm of a Thousand Cooperative, Self-Organising Robots

In a lab at Harvard’s Wyss Institute, the world’s largest swarm of cooperative robots is building a star… out of themselves. There are 1024 of these inch-wide ‘Kilobots’, and they can arrange themselves into different shapes, from a letter to a wrench. They are slow and comically jerky in their movements, but they are also autonomous. Once they’re given a shape, they can recreate it without any further instructions, simply by cooperating with their neighbours and organising themselves.

The Kilobots are the work of Mike Rubenstein, Alejandro Cornejo and Radhika Nagpal, who were inspired by natural swarms, where simple and limited units can cooperate to do great things. Thousands of fire ants can unite into living bridges, rafts and buildings. Billions of unthinking neurons can create the human brain. Trillions of cells can fashion a tree or a tyrannosaur. Scientists have tried to make artificial swarms with similar abilities, but building and programming them is expensive and difficult. Most of these robot herds consist of a few dozen units, and only a few include more than a hundred. The Kilobots smash that record.

They’re still a far cry from the combiner robots of my childhood cartoons: they arrange themselves into two-dimensional shapes rather than assembling Voltron-style into actual objects. But they’re already an impressive achievement. “This is not only the largest swarm of robots in the world but also an excellent test bed, allowing us to validate collective algorithms in practice,” says Roderich Gross from the University of Sheffield, who has bought 900 of the robots himself to use in his own experiments.

“This is a staggering work,” adds Iain Couzin, who studies collective animal behaviour at Princeton University. “It offers a vision of the future where robot groups could form structures on demand as, for example, in search-and-rescue in dangerous environments, or even the formation of miniature swarms within the body to detect and treat disease.”

"And I'll form... the wrench!" Credit: Michael Rubenstein, Harvard University.

To create their legion, the team had to rethink every aspect of a typical robot. “If you have a power switch, it takes four seconds to push that, so it’ll take over an hour to turn on a thousand robots,” says Rubenstein. “Charging them, turning them on, sending them new instructions… everything you do with a thousand robots has to be at the level of all the robots at once.”

They also have to be cheap. Fancy parts might make each bot more powerful, but would turn a swarm into a budget-breaker. Even wheels were out. Instead, the team used simpler vibration motors. If you leave your phone on a table and it vibrates, it will also slide slightly: that’s how the Kilobots move. They have two motors: if either vibrates individually, the robot rotates; if both vibrate, it goes straight.

Well, straight-ish, anyway. The tyranny of cost-efficiency meant that the team had to lose any sensors that might tell the robots their bearings or positions. They can’t tell where they are, or if they’re going straight. But each one can shoot infrared beams to the surface below it, and sense the beams reflecting from its neighbours. By measuring how bright the reflections are, it can calculate its distance from other Kilobots.
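To make those hardware constraints concrete, here is a hedged Python sketch of the two primitives just described. The motor-to-motion mapping and the inverse-square reflection model are my assumptions; the real firmware is surely messier.

```python
import math

def motion(left_on, right_on):
    """Two vibration motors: both together slide the robot roughly
    straight; either one alone makes it rotate (mapping assumed)."""
    if left_on and right_on:
        return "straight-ish"
    if left_on or right_on:
        return "rotate"
    return "stopped"

def estimate_distance(reflected_brightness, k=1.0):
    """If reflected infrared intensity falls off as k / d**2 (an assumed
    model), distance can be recovered as sqrt(k / brightness)."""
    return math.sqrt(k / reflected_brightness)

print(motion(True, True), estimate_distance(0.25))   # straight-ish 2.0
```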

This combination of stilted motion and dulled senses meant that each robot cost just $20. It also meant that “the robots were even more limited than we expected,” says Rubenstein. “The way they sense distance is noisy and imprecise. You can tell them to move and they won’t, and they’ll have no idea that they’re not moving.”

Fortunately, they have each other. A stuck Kilobot can’t tell if it’s stuck on its own, but it can communicate with its neighbours. If it thinks it’s moving but the distances from its neighbours change, it can deduce that something is wrong. And if neighbours estimate the distances between them and use the average, they can smooth out individual errors.

Using these principles, the team created a simple program that allows the robots to independently assemble into different shapes using just three behaviours. First, they move by skirting along the edges of a group. Second, they create gradients as a crude way of noting their position in the swarm. (A nominated source robot gets a gradient value of 0. Any adjacent robot that can see it sets its gradient value to 1. Any robot that sees 1 but not 0 sets its gradient to 2, and so on.) Finally, although they have no GPS, they can triangulate their position by talking to their neighbours. As long as the team nominates some robots as seeds, effectively turning them into the zero-point on an invisible graph, the rest of the swarm can then work out where they are.
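The gradient rule is simple enough to capture in a few lines. Here is a toy Python version of that one behaviour, reconstructed from the description above; the real robots compute it with noisy, asynchronous message-passing rather than a tidy global loop.

```python
def form_gradient(neighbours, source):
    """neighbours maps each robot to the set of robots it can see.
    The source holds 0; everyone else settles on one more than the
    smallest value visible among its neighbours."""
    gradient = {robot: None for robot in neighbours}
    gradient[source] = 0
    changed = True
    while changed:                # iterate until no value updates
        changed = False
        for robot, nbrs in neighbours.items():
            seen = [gradient[n] for n in nbrs if gradient[n] is not None]
            if seen:
                candidate = min(seen) + 1
                if gradient[robot] is None or candidate < gradient[robot]:
                    gradient[robot] = candidate
                    changed = True
    return gradient

# Five robots in a line: A - B - C - D - E
links = {'A': {'B'}, 'B': {'A', 'C'}, 'C': {'B', 'D'},
         'D': {'C', 'E'}, 'E': {'D'}}
print(form_gradient(links, 'A'))   # A:0, B:1, C:2, D:3, E:4
```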

Every Kilobot runs on the same program. The team only has to give them a shape and nominate four of them as seeds. Once that’s done, the rest slowly pour into the right pattern, in an endearingly life-like way. It takes them around 12 hours, but they do it all without any human intervention. And although the final shapes are always a little warped, that’s life-like too. Fire ants don’t have a Platonic ideal of what a bridge or raft should look like; they just work with their neighbours to get the job done.

Stills from movies showing the Kilobots assembling into a K and a star. Credit: Michael Rubenstein, Harvard University.

Scientists have long been able to simulate huge swarms of life-like virtual particles in computers, using very simple rules. But the real world is full of pesky physics, inconvenient noise, and temperamental circuitry. Stuff goes wrong. By building an actual swarm, the team can address these problems and make their programs more robust. They’ve already had to deal with a litany of failed motors, stalled robots, collisions, and traffic jams. “The more times you run it, the more likely some random thing will show up that you don’t expect,” says Rubenstein. “That’s the problem with 1,000 robots: even rare things can happen very frequently.”

The next step will be to build robots that actually self-assemble by attaching to each other, says Marco Dorigo from the Free University of Brussels. “We did so with tens of robots,” he says. “It will not be easy with one thousand.” Rubenstein agrees: “Physical connection is always difficult. If you have a dock, you tend to design the rest of the robot around that dock. It has a huge impact.”

Eventually, he also wants to get to a position where the robots can sense their environment and react accordingly, rather than just slide into some pre-determined shape. Like fire ants, when they get to a body of water, they wouldn’t have to be fed the image of a bridge; they would just self-assemble into one. “That’s a whole other level of intelligence, and it’s not really understood how to do that in robotics,” says Rubenstein. “But nature does it well.”

Reference: Rubenstein, Cornejo & Nagpal. 2014. Programmable self-assembly in a thousand-robot swarm. Science. http://dx.doi.org/10.1126/science.1254295



A Bird-Like Flock of Autonomous Drones

This story appears in shorter form at Nature News.

In a field outside Budapest, Hungary, ten quadcopter drones are flying as a coordinated flock. They zip through the great outdoors, fly in formation, or even follow a leader.

The little machines are the work of Hungarian scientists led by physicist Tamas Vicsek from Eotvos University in Budapest. They’re autonomous, meaning that they compute their flight plans on their own, without any central control. They can follow instructions, but they work out their own paths using GPS signals to navigate and radio signals to talk to one another. They’re the closest thing we have to an artificial flock of birds.

The copter flock is a real-life version of an influential computer programme called Boids, created by Craig Reynolds in 1986. He programmed virtual flying objects—the eponymous Boids—to move according to three simple rules. They aligned with the average heading of their neighbours; they were attracted to each other; and they also repulsed each other to keep some personal space. These three simple rules were enough to simulate a realistic bird-like flock.
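For anyone curious how little code a flock needs, here is a minimal update step in Python in the spirit of Reynolds’s three rules. The weights and radii are arbitrary placeholders, and real implementations add speed limits and smoother steering.

```python
ALIGN, ATTRACT, REPEL = 0.05, 0.01, 0.05   # arbitrary rule weights
NEIGHBOUR_R, CROWD_R = 50.0, 10.0          # sensing and personal-space radii

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def step(boids):
    """One tick: each boid is a dict with 'pos' and 'vel' (x, y) tuples."""
    new = []
    for b in boids:
        nbrs = [o for o in boids
                if o is not b and dist(b['pos'], o['pos']) < NEIGHBOUR_R]
        vx, vy = b['vel']
        if nbrs:
            # 1. alignment: steer toward the neighbours' average heading
            vx += ALIGN * (sum(o['vel'][0] for o in nbrs) / len(nbrs) - vx)
            vy += ALIGN * (sum(o['vel'][1] for o in nbrs) / len(nbrs) - vy)
            # 2. attraction: steer toward the neighbours' centre of mass
            vx += ATTRACT * (sum(o['pos'][0] for o in nbrs) / len(nbrs) - b['pos'][0])
            vy += ATTRACT * (sum(o['pos'][1] for o in nbrs) / len(nbrs) - b['pos'][1])
            # 3. repulsion: back away from anyone inside personal space
            for o in nbrs:
                if dist(b['pos'], o['pos']) < CROWD_R:
                    vx += REPEL * (b['pos'][0] - o['pos'][0])
                    vy += REPEL * (b['pos'][1] - o['pos'][1])
        new.append({'pos': (b['pos'][0] + vx, b['pos'][1] + vy), 'vel': (vx, vy)})
    return new

# A small flock starting in a line, all heading right.
flock = [{'pos': (i * 5.0, 0.0), 'vel': (1.0, 0.0)} for i in range(10)]
flock = step(flock)
```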

Boids was massively influential for Hollywood animators looking to depict swarms of bats or stampeding wildebeest. But it also showed scientists that the behaviour of animal collectives could arise from individuals obeying similar simple rules, rather than hewing to some master plan or communicating telepathically.

Vicsek is one of several pioneers of collective motion, who have expanded on these principles over the last few decades. And for five years, he has been trying to apply them to actual robots.

It hasn’t been easy. Alignment, attraction and repulsion can keep a virtual flock together, but in the world of wires and rotors, they aren’t enough. “The big enemies are noise and delay,” says Vicsek. The GPS signals that the copters rely on are very noisy, making it hard for them to accurately discern their position. They also need time to receive and process those signals, and these lags mean they often get dangerously close to one another or overshoot their mark.

It took close to five years to solve these problems. The team even had to build their own bespoke electronics lab to make their own copters, since store-bought ones were too unstable and kept on crashing. “When they crash, they crash very quickly,” says Vicsek. “We could only do experiments in areas without people or animals.”

In the meantime, his competitors were building their own mechanical flocks but Vicsek says that most of the reported successes have cheated in critical ways. Some, for example, could only fly indoors. Others communicated with a central supercomputer that did all their processing for them and gave them precise flight commands.

Only Dario Floreano, based in Switzerland, has come close to a truly autonomous flock. He created a group of fliers that can move together in outdoor environments, but they were hardly manoeuvrable. They could only move at the same constant (and slow) speed, and they avoided crashing into one another by flying at fixed (and different) heights. They were autonomous and impressive, but they were a pale imitation of the dynamic flights of birds or Boids.

By contrast, Vicsek’s drones are free in their movements. Tell them to form a rotating ring, or a straight line, and they’ll coordinate themselves into the right position. Tell them that they’re heading towards an imaginary alleyway, and they’ll queue up to squeeze through a gap.

This isn’t just about aesthetics. “Like natural groups, this flock of robots is very robust to the failure or death of individuals, changing group size, and environmental perturbations such as sudden gusts of wind,” says Iain Couzin, another leader in the study of collective behaviour. “These emergent properties make self-organized robot flocks ideally suited to a wide range of applications involving efficient search and object delivery, especially in inhospitable environments.”

There are obvious military applications too, but Vicsek prefers to focus on peaceful ones. His son envisions a flock of sprayer drones that eliminate pools of stagnant water where mosquitoes breed. Vicsek himself likes to imagine quadcopters as artificial pollinators. “I think of these as future bees,” he says.

Reference: Vicsek has submitted a paper about this work as a presentation at the upcoming IEEE/RSJ International Conference on Intelligent Robots and Systems in Chicago, Illinois.

For more on collective motion, check out my big Wired piece on the science of swarms.


An Electric Sock For the Heart

The titles of scientific papers can be a bit intimidating. For example, I’m currently reading “3D multifunctional integumentary membranes for spatiotemporal cardiac measurements and stimulation across the entire epicardium”.

In other words: electric heart socks.

A team of scientists led by John Rogers at the University of Illinois at Urbana-Champaign has created a web of electronics that wraps around a living heart and measures everything from temperature to electrical activity. It’s an ultra-thin and skin-like sheath, which looks like a grid of tiny black squares connected by S-shaped wires. Its embrace is snug and form-fitting, but gentle and elastic. It measures the heart’s beats, without ever impeding them.

Electronic cardiac sock

Its goal is to monitor the heart in unprecedented detail, and to spot the unusual patterns of electrical activity that precede a heart attack. Eventually, it might even be able to intervene by delivering its own electrical bursts.

Cardiac socks have been around since the 1980s but the earliest ones were literal socks—fabric wraps that resembled the shape of the heart, with large electrodes sewn into place. They were crude devices, and the electrodes had a tough time making close and unchanging contact with the heart. After all, this is an organ known for constantly and vigorously moving.

The new socks solve these problems. To make one, graduate students Lizhi Xu and Sarah Gutbrod scan a target heart and print out a three-dimensional model of it. They mould the electronics to the contours of the model, before peeling them off, and applying them to the actual heart. They engineer the sock to be ever so slightly smaller than the real organ, so its fit is snug but never constraining.

This is all part of Rogers’ incredible line of flexible, stretchable electronics. His devices are mostly made of the usual brittle and rigid materials like silicon, but they eschew the right angles and flat planes of traditional electronics for the curves and flexibility of living tissues. I’ve written about his tattoo-like “electronic-skin”, curved cameras inspired by an insect’s eye, and even electronics that dissolve over time.

The heart sock is typical of these devices. The tiny black squares contain a number of different sensors, which detect temperature, pressure, pH and electrical activity, as well as LEDs. (The LEDs shine onto voltage-sensitive dyes, which emit different colours of light depending on the electrical activity of the heart.) Meanwhile, the flexible, S-shaped wires that connect them allow the grid to stretch and flex without breaking. As the heart expands and contracts, the web does too.

So far, the team have tested their device on isolated rabbit hearts and one from a deceased organ donor. Since these organs are hooked up to artificial pumps, the team could wilfully change their temperature or pH to see if the sensors could detect the changes. They could. They could sense when the hearts switched from steady beats to uncoordinated quivers.

Rogers thinks that tests in live patients are close. If anything, the doctors he is working with are more eager to push ahead. “We’re scientists of a very conservative mindset. They have patients who are dying,” he says. “They have a great appetite for trying out good stuff.”

The main challenge is to find a way of powering the device independently, and communicating with it wirelessly, so that it can be implanted for a long time. Eventually, Rogers also wants to add components that can stimulate the heart as well as recording from it, and fix any aberrant problems rather than just divining them.

It’s a “remarkable accomplishment” and a “great advance in materials science”, says Ronald Berger at Johns Hopkins Medicine, although he is less sure that the device will be useful in diagnosing or treating heart disease. “I don’t quite see the clinical application of these sensors. There might be some therapy that is best implemented with careful titration using advanced sensors, but I’m not sure what that therapy is.”

But Berger adds that the sock has great promise as a research tool, and a couple of other scientists I contacted agree. After all, scientists can use the device to do what other technologies cannot: measure and match the heart’s electrical activity and physical changes, over its entire surface and in real-time.

For more on John Rogers’ flexible electronics, check out this feature from Discover that I co-wrote with Valerie Ross.

Reference: Xu, Gutbrod, Bonifas, Su, Sulkin, Lu, Chung, Jang, Liu, Lu, Webb, Kim, Laughner, Cheng, Liu, Ameen, Jeong, Kim, Huang, Efimov & Rogers. 2014. 3D multifunctional integumentary membranes for spatiotemporal cardiac measurements and stimulation across the entire epicardium. Nature Communications. http://dx.doi.org/10.1038/ncomms4329


Feel the Noise

If you’ve ever clenched up at the sound of nails on a chalkboard, or felt a pleasant chill when listening to an opera soprano, then you have an intuitive sense of the way our brains sometimes mix information from our senses. For the latest issue of Nautilus magazine I wrote a story about a woman whose brain mixes more than most, allowing her to feel many types of sounds on her skin.

Over the past decade or so, neuroscientists have revamped their view of how the brain processes sensory information. According to the traditional model, the cortex, or outer layers of the brain, processes only one sense at a time. For example, the primary visual cortex at the back of the head was thought to process only input from the eyes, while the auditory cortex above the ears dealt with information from the ears and the somatosensory cortex near the top of the head took in signals from the skin. But a growing number of studies have found that these cortical areas actually integrate information from many senses at once.

One of the most fascinating examples of this line of work, just published in Psychological Science, took advantage of a technology called transcranial direct current stimulation, or tDCS. This tool essentially gives researchers a safe, non-invasive way to activate specific parts of the human brain. Pretty wild, right? Here’s how it works. Researchers place two electrodes in various positions on a volunteer’s scalp. A small electric current passes between the electrodes, stimulating the neurons underneath.

In the new study (cleverly named “Feeling Better”) neuroscientist Jeffrey Yau of Johns Hopkins University used tDCS to stimulate the brain as volunteers performed two different tasks related to touch perception. One task is similar to reading Braille: blindfolded volunteers placed their fingers over gratings of bars of varied widths and spacing. The closer the bars, the more difficult it is to determine whether there is one bar or two. The smallest distance at which the volunteer can correctly make this call is called the “spatial acuity.”

The second task measures the frequency of vibrations, similar to the different kinds of rumblings you might feel while waiting on a subway platform. On a given trial, volunteers use their index fingers to feel vibrations produced by a metal probe. They feel two vibrations back to back and then judge which one has the higher frequency.

Yau ushered participants through each of these tasks before and after stimulating their brains. He found that activating volunteers’ primary visual cortex improved their tactile acuity, whereas stimulating their primary auditory cortex improved their ability to discriminate between different tactile frequencies.

What does this mean?

These findings make sense, Yau says, if you reframe the traditional view of how the cortex is organized. As I mentioned, the primary visual cortex has typically been thought of as the region that processes input from the eyes. But what if instead it was a region that processed information about shape, no matter what organ that information came from? Most of the time, shape information comes from the eyes, but sometimes—such as in this experiment—it can come from touch. Similarly, the primary auditory cortex might not be tailored for interpreting sounds, per se, but rather frequency information of any kind, including but not limited to sounds.

Yau speculates that we should be thinking differently about the other senses, too. The somatosensory cortex might process skin input, sure, but also other information related to keeping track of our body in physical space.

Yau’s study is one of many to reveal so-called multi-sensory processing in the brain. (Check out my Nautilus piece to learn about similar findings from other labs.)

“Within the last six or seven years, so much evidence has emerged that shows that early sensory cortex is not modality specific,” Yau says. Nevertheless, because subfields have built up around particular senses, Yau says it will probably take a while before the traditional theory of uni-sensory processing is dethroned. “That’s the idea that is always pushed in the textbooks. I think it’s hard to fight that dogma.”


Cyborg Bladders Stop Incontinence In Rats After Spine Damage

Implants that read and decipher our brain activity have allowed people to control computers, robotic limbs or even remote-controlled helicopters, just by thinking about it. These devices are called BMIs, short for brain-machine interfaces.

But our cyborg future isn’t limited to machines that hook up to our brains. At the University of Cambridge, James Fawcett has created a BMI where the B stands for bladder. The implanted machine senses when a bladder is full, and automatically sends signals that stop the organ from emptying itself.

So far, it works in rats. It will take a lot of work to translate the technique into humans, but it could give bladder control back to people who have lost it through spinal injuries.

As the bladder fills up, its walls start to stretch. Neurons in the bladder wall detect these changes and send signals to the dorsal root, a structure at the back of the spinal cord. If left to themselves, these signals trigger a reflex that empties the bladder. That doesn’t usually happen because of neurons that travel in the opposite direction, descending from the ventral root at the front of the spine into the bladder. These counteract the emptying reflex and allow us to void the bladder when we actually want.

Spinal injuries often rob people of that control, by damaging the ventral neurons. “Take those away and you dribble all over your clothes every half hour,” says Fawcett.

There are two fixes. The first was developed by an eccentric British neuroscientist called Giles Brindley in the 1970s. Brindley is infamous for a lecture in which he demonstrated the effectiveness of a treatment for erectile dysfunction, by dropping his trousers and showing his erect penis to the audience. But his real claim to fame is an implant that stimulates the ventral root directly, allowing people with spinal injuries to urinate on demand.

There’s a catch—it only works if surgeons cut the neurons in the dorsal root so the bladder can’t spontaneously empty itself. This has severe side effects: men can’t get erections, women have dry vaginas, and both sexes end up with weak pelvic muscles.

The only other alternative is to paralyse the bladder with botox. Now, it can’t contract at all, and people have to empty it by sticking a catheter down their urethra. That’s expensive, difficult, unpleasant, and comes with a high risk of infection.

Fawcett’s team, led by Daniel Chew and Lan Zhu, have developed a better way.

First, they hack into the bladder’s communication lines. Rather than cutting through the dorsal root, they tease out fine strands of neurons called dorsal rootlets, and thread them into tiny sheaths called microchannels. The channels record the signals going from the bladder to the spine, revealing what the organ is up to.

When the bladder is ready to empty itself, the channels detect a big spike in activity. They react by sending signals to a stimulator that’s hooked up to the nerves leading into the bladder’s muscles. The stimulator hits these nerves with a high-frequency electric pulse that stops them from firing naturally. The bladder’s muscles don’t contract, and no unwanted urine is spilled. When the user actually wants to wee, they just push a button and the stimulator delivers a low-frequency pulse instead. Only then does the bladder contract.
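In control-loop terms the logic is easy to sketch. Here is a schematic Python version of one update step; the threshold, pulse frequencies, and Stimulator interface are hypothetical stand-ins, not details from the paper.

```python
SPIKE_THRESHOLD = 5.0    # assumed level of dorsal-root activity that
                         # signals the bladder is about to empty itself

class Stimulator:
    """Stand-in for the stimulator wired to the bladder's nerves."""
    def pulse(self, frequency_hz):
        print(f"stimulating at {frequency_hz} Hz")

def control_step(dorsal_activity, button_pressed, stimulator):
    if button_pressed:
        # Low-frequency pulse lets the bladder contract: voluntary voiding.
        stimulator.pulse(frequency_hz=20)
    elif dorsal_activity > SPIKE_THRESHOLD:
        # High-frequency pulse blocks the nerves from firing naturally,
        # suppressing the emptying reflex.
        stimulator.pulse(frequency_hz=1000)

control_step(dorsal_activity=7.2, button_pressed=False, stimulator=Stimulator())
```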

This device does everything that a normal bladder does, but uses electronics to stand in for damaged nerves. It works on a closed loop, so users should be able to go about their day to day lives without worrying about incontinence. And it doesn’t sever the dorsal root, so it carries none of the side effects of the Brindley method.

“That would be a major advance,” says Kenneth Gustafson, a biomedical engineer from Case Western Reserve University. “Restoration of bladder control is one of the most important problems of individuals with spinal cord injuries.”

“The quality of the neural recordings that they’re showing with their channel electrodes is really very impressive and convincing,” says Robert Gaunt from the University of Pittsburgh, who has also worked on neural prosthetics for the bladder.

The team have successfully tested their device in rats, and they’re working on scaling it up to humans. “We haven’t actually tried dissecting human dorsal roots into rootlets but the anatomy’s quite similar,” says Fawcett.

“It’s good to see this has come to fruition,” says Clare Fowler from University College London, who studies ways of solving incontinence in people with neurological problems. “There have been a lot of very clever developments to get this working, and they are to be congratulated.” However, she adds that the device is “many years away from translation into human usefulness.”

Gaunt adds that the nerves that control the bladder muscles are near to those that control its sphincter. If you shut down the former with high-frequency pulses, you might risk accidentally shutting down the sphincter too—it would then relax, and the bladder might empty.

But the main problem is longevity. The device needs to be turned into something like a pacemaker, which can be implanted reliably for long periods of time. Currently, that’s impossible because the rootlets can only survive for 18 months in the microchannels before they build up fatal amounts of scar tissue. “That’s not long enough to be useful,” says Fawcett, who is working on ways of extending their lifespan.

Fawcett adds that his work isn’t just about the bladder. His microchannels offer a new way of effectively recording signals from nerves outside the brain—a goal that has historically been very difficult. Tap into the right nerves, and the device could potentially be used to control everything from prosthetic limbs to immune reactions to the digestive system.

Again, that’s a far-off goal. “We’re not sure that outside the dorsal root, we can tease the peripheral nerves into rootlets,” says Fawcett. “They weave around a lot more, so you’d risk damaging them. We’re looking into that currently.”

Reference: Chew, Zhu, Delivopoulos, Minev, Musick, Mosse, Craggs, Donaldson, Lacour, McMahon & Fawcett. 2013. A Microchannel Neuroprosthesis for Bladder Control After Spinal Cord Injury in Rat. Science Translational Medicine 5(210): 210ra155.


The End of Family Secrets?

I’ve been tied to the genealogy community ever since I can remember. My dad was always into it — for decades, he collected old documents and photos, and went on fact-finding trips to libraries and cemeteries, all to fill in the holes of his ever-expanding family tree. As I’ve written about before, I’ve had trouble wrapping my head around his obsession. Why spend so much time digging up the past?

But I may be in the minority. Genealogy is a booming business, with an estimated 84 million people worldwide spending serious money on the hobby.

As it turns out, the industry owes a big part of its recent success to a technology that I’m quite invested in: genetic testing. Several dozen companies now sell DNA tests that allow customers to trace their ancestry. This technology can show you, for example, how closely you’re related to Neanderthals, or whether you’re part Native American or an Ashkenazi Jew. But the technology can just as easily unearth private information—infidelities, sperm donations, adoptions—of more recent generations, including previously unknown behaviors of your grandparents, parents, and even spouses. Family secrets have never been so vulnerable.

My latest story is about the rise of this so-called “genetic genealogy” and how it has forced some people to confront painful questions about privacy, identity, and family. The story is out today in MATTER, a new publication for long-form narratives about big ideas in science and technology.

The star of my story is Cheryl Whittle, a 61-year-old from eastern Virginia who graciously invited me into her home and into her extended family. Cheryl took her first DNA test in 2009, and what happened after that is a story with lots of twists and turns, joys and sorrows. You’ll have to go read the story to see what I mean — here you can read a teaser, or buy the whole 10,000 words for just $0.99.

One of the things I loved about reporting this story was seeing how genetic technology is being integrated into the lives of people who aren’t all that interested in science. Here’s a quick video of Cheryl, for example, explaining — in fluent genetic lingo — how to compare her 23 pairs of chromosomes to someone else’s using the online service of 23andMe, a popular genetic testing company:

Thanks to genealogy hobbyists like Cheryl, genetic databases are growing larger every day. And this raises some important issues regarding privacy and ethics. It’s plausible that in the not-too-distant future, we’ll all be identifiable in genetic databases, whether through our personal contribution or that of our relatives. Is that a good thing? A bad thing?

I’ve heard a wide range of answers to these questions. A couple of months ago I asked my father’s first cousin, John Twist, who has been an avid genealogist for decades, whether he had bought any DNA tests to further his research. Genealogists tend to have a sharing mentality, so his response surprised me. He wrote:

I have NOT sent in my DNA.  I would have, perhaps, earlier, but now with the revelations of Big Brother, I just don’t want to. I read that they caught the BTK killer in Kansas City (?) through his daughters pap smear.  AND, in an article I read yesterday from MIT (?) a fellow said he’d rec’d an anonymous DNA sample and was able to identify the person who’d given it through Ancestry – well, something like that.

My own views tend to fall on the other side of the spectrum. I’m keen on the potential benefits of direct-to-consumer genetic testing, whether it’s used for estimating your medical risks or unearthing family secrets. That said, the full range of its legal and ethical implications has not yet come to light.

Dov Fox, an assistant professor of law at the University of San Diego who specializes in genetic and bioethical issues, told me that it’s only a matter of time before genetic genealogy leads to lawsuits regarding fidelity, paternity, and inheritance. But it’s unclear, for now, how the law will handle those cases.

Here in the U.S., there aren’t any federal privacy statutes that would apply, Fox says. The U.S. Genetic Information Nondiscrimination Act (GINA), passed in 2008, says that health insurers and employers cannot use an individual’s genetic information to deny medical coverage or to make employment decisions. But genetic genealogy doesn’t have anything to do with medical risks. That means lawyers will have to get creative in how they present their cases.

“What happens often with advances in science and technology is that we try to shoehorn new advances into ill-fitting existing statutes,” Fox says. So genetic genealogy cases might hinge upon laws originally written for blackmail, libel, or even peeping Tom violations.

Maybe it’s not all that surprising that genetic genealogy, a new technology, hasn’t ironed out its privacy standards yet.

“When telephones were first becoming widely adopted, you couldn’t just dial someone directly. An operator would put your call through and often listen to the call,” says J. Bradley Jansen, Director of the Center for Financial Privacy and Human Rights in Washington, D.C., and the founder of the Genealogical Privacy blog. When a technology is new, its novelty trumps any privacy worries. That was true for Facebook, too: At the beginning, everyone shared everything with abandon. “But as technologies mature, privacy, which had been a luxury, becomes an essential commodity,” Jansen says.

I hope he’s right.

And I hope you like the story, now up at MATTER.


How Forensic Linguistics Outed J.K. Rowling (Not to Mention James Madison, Barack Obama, and the Rest of Us)

Earlier this week, the UK’s Sunday Times rocked the publishing world by revealing that Robert Galbraith, the first-time author of a new crime novel called The Cuckoo’s Calling, is none other than J.K. Rowling, the superstar author of the Harry Potter series. Then the New York Times told the story of how the Sunday Times’s arts editor, Richard Brooks, had figured it out.

One of Brooks’s colleagues got an anonymous tip on Twitter claiming that Galbraith was Rowling. The tipster’s Twitter account was then swiftly deleted. Before confronting the publisher with the question, Brooks’s team did some web sleuthing. They found that the two authors shared the same publisher and agent. And, after consulting with two computer scientists, they discovered that The Cuckoo’s Calling and Rowling’s other books show striking linguistic similarities. Satisfied that the Twitter tipster was right, Brooks reached out to Rowling. Finally, on Saturday morning, as the New York Times reports, “he received a response from a Rowling spokeswoman, who said that she had ‘decided to fess up’.”

While the literary world was buzzing about whether that anonymous tipster was actually Rowling’s publisher, Little, Brown and Company (it wasn’t), I wanted to know how those computer scientists did their mysterious linguistic analyses. I called both of them yesterday and learned not only how the Rowling investigation worked, but about the fascinating world of forensic linguistics.

With computers and sophisticated statistical analyses, researchers are mining all sorts of famous texts for clues about their authors. Perhaps more surprising: They’re also mining not-so-famous texts, like blogs, tweets, Facebook updates and even Amazon reviews for clues about people’s lifestyles and buying habits. The whole idea is so amusingly ironic, isn’t it? Writers choose words deliberately, to convey specific messages. But those same words, it turns out, carry personal information that we don’t realize we’re giving out.

“There’s a kind of fascination with the thought that a computer sleuth can discover things that are hidden there in the text. Things about the style of the writing that the reader can’t detect and the author can’t do anything about, a kind of signature or DNA or fingerprint of the way they write,” says Peter Millican of Oxford University, one of the experts consulted by the Sunday Times.

Cal Flyn, a reporter with the Sunday Times, sent email requests to Millican and to Patrick Juola, a computer scientist at Duquesne University in Pittsburgh. Flyn told them the hypothesis — that Galbraith was Rowling — and gave them the text of five books to test that hypothesis. Those books included Cuckoo, obviously, as well as a novel by Rowling called The Casual Vacancy. The other three were all, like Cuckoo, British crime novels: The St. Zita Society by Ruth Rendell, The Private Patient by P.D. James, and The Wire in the Blood by Val McDermid.

Juola ran each book (or, more precisely, the sequence of tens of thousands of words that make up a book) through a computer program that he and his students have been working on for more than 10 years, dubbed JGAAP (the Java Graphical Authorship Attribution Program). He compared Cuckoo to the other books using four different analyses, each focused on a different aspect of writing.

One of those tests, for example, compared all of the word pairings, or sets of adjacent words, in each book. “That’s better than individual words in a lot of ways because it captures not just what you’re talking about but also how you’re talking about it,” Juola says. This test could show, for example, the types of things an author describes as expensive: an expensive car, expensive clothes, expensive food, and so on. “It might be that this is a word that everyone uses, like expensive, but depending on what you’re focusing on, it [conveys] a different idea.”
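If you're curious what that looks like in practice, here's a toy version in Python. It's my own sketch of the general technique, not Juola's actual JGAAP code: count the adjacent word pairs in each text, then score how much the two frequency profiles overlap.

```python
# A minimal sketch of the word-pairing (bigram) test -- not JGAAP itself,
# just the general idea: count adjacent word pairs in each text and
# score how similar the two frequency profiles are.
from collections import Counter
import math
import re

def word_pairs(text):
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(zip(words, words[1:]))  # e.g. ("expensive", "car")

def cosine_similarity(a, b):
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical usage: a higher score means more similar pairing habits.
# cosine_similarity(word_pairs(cuckoo_text), word_pairs(vacancy_text))
```

An author who writes "expensive car" over and over leaves a different profile than one who prefers "expensive clothes," even though both lean on the same word.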

Juola also ran a test that searched for “character n-grams”, or sequences of adjacent characters. He focused on 4-grams, or four-letter sequences. For example, a search for the sequence “jump” would bring up not only jump, but jumps, jumped, and jumping. “That lets us look at concepts and related words without worrying about tense and conjugation,” he says.
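The character n-gram test is even easier to sketch. Again, this is a hypothetical illustration of the idea rather than JGAAP itself: slide a four-character window along the text and count everything that falls inside it.

```python
# Sketch of the character 4-gram test: slide a 4-character window across
# the text, so "jumped" and "jumping" both contribute the 4-gram "jump".
from collections import Counter

def char_ngrams(text, n=4):
    cleaned = " ".join(text.lower().split())  # collapse whitespace
    return Counter(cleaned[i:i + n] for i in range(len(cleaned) - n + 1))

grams = char_ngrams("He jumped while she was jumping.")
print(grams["jump"])  # 2 -- one from "jumped", one from "jumping"
```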

Those two tests turn up relatively rare words. But even a book’s most common words — words like a, and, of, the — leave a hidden signature. So Juola’s program also tallied the 100 most common words in each book and compared the small differences in frequency. One book might use the word “the” six percent of the time, while another uses it only four percent.
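A toy version of that tally might look like this (my sketch, not the program's code):

```python
# Sketch of the common-words test: find each book's 100 most frequent
# words and express each as a fraction of all words in the book.
from collections import Counter
import re

def top_word_frequencies(text, k=100):
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words)
    return {w: count / total for w, count in Counter(words).most_common(k)}

# freqs.get("the") might come out near 0.06 for one book and 0.04 for
# another; across 100 words, those small gaps add up to a signature.
```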

Juola’s final test completely separates words from their meaning by sorting them simply by length. What fraction of a book is made of three-letter words, or eight-letter words? These distributions are fairly similar from book to book, but statistical analyses can dig into the subtle differences. And this particular test “was very characteristically Rowling,” Juola says. “Word lengths was one of the strongest pieces of evidence that [Cuckoo] was Rowling.”
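Here's the word-length idea in the same sketchy spirit: reduce each book to the fraction of its words that are one, two, three (and so on) letters long, then measure how far apart two books' profiles sit.

```python
# Sketch of the word-length test: reduce each book to the fraction of its
# words that are 1, 2, 3, ... letters long, then compare distributions.
from collections import Counter
import re

def length_distribution(text, max_len=15):
    lengths = [min(len(w), max_len)
               for w in re.findall(r"[a-z']+", text.lower())]
    total = len(lengths)
    counts = Counter(lengths)
    return [counts.get(n, 0) / total for n in range(1, max_len + 1)]

def total_variation(p, q):
    # Smaller distance means the two books' length profiles look more alike.
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))
```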

It took Juola about an hour and a half to do all of these word-crunchings, and all four tests suggested that Cuckoo was more similar to Rowling’s Casual Vacancy than to the other books. And that’s what he relayed back to Flyn. Still, he wasn’t totally confident in the result. After all, he had no way of knowing whether the real author was somebody outside the comparison set who happened to write like Rowling. “It could have been somebody who looked like her. That’s the risk with any police line-up, too,” he says.

Meanwhile, across the pond, Peter Millican was running a parallel Rowling investigation. After getting Flyn’s email, Millican told her he needed more comparison data, so he ended up with an additional book from each of the four known authors (using Harry Potter and the Deathly Hallows as the second known Rowling book). He ran those eight books, plus Cuckoo, through his own linguistics software program, called Signature.

Signature uses a fancy statistical method called principal component analysis to compare all of the books on six features: word length, sentence length, paragraph length, letter frequency, punctuation frequency, and word usage.
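Principal component analysis sounds fancy, but the idea is just to squash many measurements down to a couple of axes that capture most of the variation, so that similar books land near each other on a plot. Here's a rough sketch of that step, assuming each book has already been boiled down to a six-number feature vector; it's a generic illustration, not Millican's Signature code.

```python
# Sketch of the principal-component step. Assumes each book has already
# been reduced to one row of six numeric features (word length, sentence
# length, paragraph length, letter frequency, punctuation frequency,
# word usage). Not Signature's actual code -- just the standard method.
import numpy as np

def project_to_2d(features):
    X = features - features.mean(axis=0)   # center each feature
    X = X / X.std(axis=0)                  # put features on a common scale
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:2].T                    # coordinates on the top two axes

# One row per book; books by the same author should cluster together
# when the two output columns are plotted against each other.
```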

Word frequency tests can be done in different ways. Juola, as I described, looked at word pairings and at the most common words. Another approach that can be quite definitive, Millican says, is a comparison of rare words. The classic example concerns the Federalist Papers, a series of essays written by Alexander Hamilton, James Madison, and John Jay to promote ratification of the U.S. Constitution. In 1963, researchers used word counts to determine the authorship of 12 of these essays that were written by either Madison or Hamilton. They found that Madison’s essays tended to use “whilst” and never “while”, and “on” rather than “upon”. Hamilton, in contrast, tended to use “while”, not “whilst”, and used “on” and “upon” at the same frequency. The 12 disputed papers never used “while” and rarely used “upon”, pointing strongly to Madison as the author.
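The marker-word idea is simple enough to sketch in a few lines. (The 1963 Federalist study, by Frederick Mosteller and David Wallace, used far more elaborate Bayesian statistics; this only shows the underlying counting.)

```python
# Sketch of a marker-word test: count telltale words per thousand words.
# Only an illustration of the idea, not the 1963 study's actual method.
import re

MARKERS = ("while", "whilst", "on", "upon")

def marker_rates(text):
    words = re.findall(r"[a-z]+", text.lower())
    per_thousand = 1000.0 / len(words)
    return {m: words.count(m) * per_thousand for m in MARKERS}

# A disputed essay with zero uses of "while" and almost no "upon" looks,
# by this measure, much more like Madison than like Hamilton.
```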

Millican found a few potentially distinctive words in his Rowling investigation. The other authors tended to use the words “course” (as in, of course), “someone” and “realized” a bit more than Rowling did. But the difference wasn’t statistically significant enough for Millican to run with it. So, like Juola, he turned to the most common words. Millican pulled out the 500 most common words in each book, and then went through and manually removed the words that were subject-specific, such as “Harry”, “wand”, and “police”.

Of all of the tests he can run with his program, Millican finds these word usage comparisons most compelling. “You end up with a graph, and on the graph it’s absolutely clear that Cuckoo’s Calling is lining up with Harry Potter. And it’s also clear that the Ruth Rendell books are close together, the Val McDermid books are close together, and so on,” he says. “It is identifying something objective that’s there. You can’t easily describe in English what it’s detecting, but it’s clearly detecting a similarity.”

On all of Millican’s tests, Cuckoo turned out to be most similar to a known Rowling book, and on four of them, both Rowling books were closer than any of the others. Millican got the files around 8pm on Friday night. Five hours later, he emailed the Sunday Times. “I said, ‘I’m pretty certain that if it’s one of these four authors, it’s Rowling.'”

This isn’t the first time that Millican has found himself in the middle of a high-profile authorship dispute. In the fall of 2008, just a couple of weeks before the U.S. presidential election, he got an email from the brother-in-law of a Republican congressman from Utah. The man told Millican that they had used his Signature software (which is downloadable from his website) to show that Barack Obama’s book, Dreams from My Father, could have been written by Bill Ayers, a domestic terrorist. “They were planning to have a press conference in Washington to expose Obama one week before the election and got in touch with me,” Millican recalls, chuckling. “It was quite a strange situation to be in.”

Millican re-ran the analysis and definitively showed that Dreams was not, in fact, written by Ayers (you can read more about what he did here).

Juola told me some crazy stories, too. He once worked on a legal case in which a man had written a set of anonymous newspaper articles critical of a foreign government. The man was facing deportation proceedings in the United States, and knew that if he were deported, that government’s secret police would be waiting for him at the airport. Juola’s analyses confirmed that the anonymous articles were, in fact, written by the man. And because of that, he was permitted to stay in the U.S. “We were able to establish his identity to the satisfaction of the judge,” Juola says.

That story, he adds, shows how powerful this kind of science can be. “There are a lot of real controversies with real consequences for the people involved that are a lot more important than just, did this obscure novel get written by this particular famous author?”

The words of many of us, in fact, are probably being mined at this very moment. Some researchers, Juola told me, are working on analyzing product reviews left on websites like Amazon.com. These investigations could root out phony glowing reviews left by company representatives, for example, or reveal valuable demographic patterns.

“They might say, hmmm, that’s funny, it looks like all of the women from the American West are rating our product a star and a half lower than men from the northeast, so obviously we need to do some adjustment of our advertisements,” he says. “Not many companies are going to admit to doing this kind of thing, but anytime you’ve got some sort of investigation going on, whether police or security clearance or a job application, one of the things you’re going to look at is somebody’s public profile on the web. Anything is fair game.”

In fact, it was a good thing the original tipster of the Rowling news deleted his or her Twitter account, Juola says. “If we still had the account, we could have looked at the phrasings to see if it corresponded to anyone who works at the publishing house.”


Scientists Legitimize My Match.com Marriage!

When I started internet dating, in December of 2006, I was embarrassed about it. Why would any self-respecting 22-year-old, after all, want to wade through a virtual pool of creeps and weirdos? I would talk about it in a jokey tone, as if I were only after the avant-garde experience. And that was part of the appeal. But thinking back on it now, my online search for love was far more earnest than I ever admitted to my friends or to myself.

The NYC bar scene (like most bar scenes, right?) is not a great place to spark a serious relationship. I’d meet someone and know immediately whether we had chemistry; I could see how a guy dressed, talked, smiled, maybe even how he danced. But I knew little about all the other things that matter, like his career goals, political ideology, religious background, or whether he could write a coherent sentence. I didn’t know, in other words, if we shared values.

Internet dating is the opposite. The click of a mouse gives you a comprehensive profile of a potential date: age, religion, political party, degrees, hobbies, profession, income (yes, really, many people post their stats); whether he smokes, drinks, or wants marriage, children, or pets. Exchange a few emails with somebody and you know pretty quickly whether they’re illiterate or funny or brooding. What you don’t know is whether you’re going to click.

Eight months after signing up with Match.com, I had been out with 30 guys. The vast majority were awkward flops. And yet they all had seemed so great in their profiles! I was frustrated, and ready to give up on the internet. Chemistry is king, I thought, and chemistry is what online dating will never have.

Then (after ignoring several of his initial emails) I went out with a great guy. A year ago today, we left for our honeymoon.

Because it worked out for me, I often get asked about internet dating — Did I like it? Isn’t it strange? Why did it work? And I always just shrug, chalking up my (eventual) success to dumb luck. But a study out today on internet-based marriages has got me thinking more deeply about my experience. I think it’s just as difficult to meet someone online as it is offline. But if you do meet someone online and end up tying the knot, then your marriage has slightly better odds of working out than if you had met your mate offline. And that might be because of those shared values.

The study: Researchers gave an online survey about marital satisfaction to 19,131 Americans who were married once between 2005 and 2012. As published today in the Proceedings of the National Academy of Sciences, about 35 percent of respondents met their spouse online — not only through an online dating site, but via social networks, chat rooms, blogs, and even virtual worlds — and 65 percent met IRL, at work, at church, in bars, through friends or blind dates.

People who fall under certain demographic categories — men, people aged 30-49, Hispanics, people who are wealthy, and people who are employed — are more likely to meet their spouse online than off, the study found.

But here’s the part that will make headlines today. Of the entire sample, 7.44 percent were separated or divorced, and these came disproportionately from the group who met their spouses offline. More specifically, 7.67 percent of the met-offline group had separated or divorced, compared with 5.96 percent of the met-online folks, a small yet statistically significant difference. What’s more, people who met their spouse online reported higher marital satisfaction than did those who met offline. Both of these findings held after the researchers controlled for sex, age, education, ethnicity, income, religion, employment status, and year of marriage.
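For the statistically curious, here's a back-of-the-envelope way to check that a gap like that is unlikely to be chance: a standard two-proportion z-test. The study's own analysis modeled many covariates at once, and I'm approximating the group sizes from the 65/35 split, so treat this as a rough illustration only.

```python
# Back-of-the-envelope two-proportion z-test on the reported split rates.
# Group sizes are approximated from the 65/35 offline/online breakdown of
# the 19,131 respondents; the study itself used models controlling for
# demographics, so this is only illustrative.
import math

n_offline, n_online = int(19131 * 0.65), int(19131 * 0.35)
p_offline, p_online = 0.0767, 0.0596

pooled = (p_offline * n_offline + p_online * n_online) / (n_offline + n_online)
se = math.sqrt(pooled * (1 - pooled) * (1 / n_offline + 1 / n_online))
z = (p_offline - p_online) / se
print(f"z = {z:.1f}")  # about 4.4, well past the usual 1.96 threshold
```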

The researchers offer several possible explanations. The difference could stem from the larger pool of eligible mates available online, which allowed respondents to be more selective. Or it could be “the nature of the users who are attracted to and gain access to that site,” the authors write. For example, people who sign up for online dating may carry some important personality trait, like impulsiveness, or may be more motivated to get married in the first place.

But I’m most swayed by what I’m calling the “shared values” explanation. As the researchers point out, other studies have found that people are more likely to share authentic information about themselves online than in face-to-face settings. And in the new study, people who met offline in venues related to their shared interests — such as in church, school, or social gatherings — reported higher marital satisfaction than did those who met through family, bars, or blind dates.

In the end, I guess I’m arguing a cliché. The search for a life partner is never easy, online or off. Either way, you can only be happy with somebody else if you know what makes you happy alone.


Update, 6/3/13, 3:45pm: I should have noted, as fab science writer Maia Szalavitz did over at Time.com, that the study was funded by the dating website eHarmony.


Slideshow: Seven Sweet Maps of Health Data

When I hear the terms geographic information system (GIS) or global positioning system (GPS), it usually makes me think of government agents spying on us, or of particularly hellish road trips. But these technologies could also be incredibly useful for public health and medical research, as I learned from an interesting commentary in today’s issue of Science.

The idea hasn’t quite caught on yet in the medical research community. “Most health research has yet to take full advantage of the latest developments in geospatial data collection, analysis, and modeling,” Douglas Richardson of the Association of American Geographers told the Science podcast. Richardson and the other authors of the new commentary gave many reasons why more health researchers should get excited about GIS.

GIS can help manage the ever-growing heaps of health-related data. Like the six billion letters in one person’s complete genome sequence, or the location of methyl groups scattered on that genome, or the medical history (hospital visits, prescriptions, allergies) and daily habits (food intake, exercise, environmental exposures) that can help make sense of that genome. GIS technologies can also help track disease (flu, HIV, or addiction, say) across time and space, to help researchers see how transmission patterns are influenced by social or environmental changes (like hand-washing protocols, antiretroviral access, or unemployment).

This is obviously a field where pictures matter, so I put together the slideshow above to give you a sense of the kind of data mash-ups that are possible with GIS techniques. Here’s a bit more information about each slide:


1. This is a remake of the map John Snow published in 1855 showing the extent of the 1854 cholera outbreak in London’s Soho neighborhood. Black circles show deaths; black Xs, water pumps. The map revealed that the source of the outbreak was a water pump on Broad Street.


2. This is a hypothetical graph showing one individual’s environmental exposures over time (vertical axis) and space (horizontal axis). The blue lines represent an individual’s path over time. The bottom plane represents the transportation network of the study area. The orange horizontal plane represents the spatial distribution of a risk factor (like air pollution or liquor stores, say) for one time point. Image courtesy of Mei-Po Kwan at the University of Illinois at Urbana-Champaign.


3. This shows data from a pilot study conducted by the Intramural Research Program at the National Institute on Drug Abuse. The researchers tracked 27 individuals over 100 days, comparing neighborhood violence (left panel) with individuals’ GPS locations (middle), self-reported stress (right), and urine drug tests. Image courtesy of Bethany Deeds, NIDA.


4. GIS data can be used (even without GPS) to define neighborhoods for specific people (red dots). In this 2007 study, Canadian researchers used telephone surveys to ask people about their walking patterns. Based on respondents’ postal codes and the geography and urban layout of their area, the researchers were able to define the boundaries (red lines) of each participant’s typical walking space. Image: Oliver et al., 2007.


5. Last year, Caroline Buckee of Harvard and her colleagues published a study in Science that used GIS to quantify the impact of human movements on the spread of malaria in Kenya. The researchers used maps of malaria prevalence (left) and cell phone towers (middle) to define regions of interest for further study (right). Images: Wesolowski et al., 2012.


6. These maps of California show the spread of meth abuse over time. As published last year, Paul Gruenewald from the Prevention Research Center in Berkeley, California, and colleagues put these maps together by analyzing hospital discharges for meth abuse by zip code. Image courtesy of Bethany Deeds, NIDA. (Incidentally, when I was working at SEED several years ago, I wrote a story about geographers tracking meth labs that may have the best lede I’ve ever written.)


7. This shows the size of countries relative to the number of adults living there with HIV in 2009. White areas have a prevalence of 0.49 percent or less; dark red indicates prevalence between 17.5 and 26 percent. Data sourced from the UNAIDS Report on the Global AIDS Epidemic; image courtesy of Nate Heard, U.S. Department of State.


Interested in becoming a mapmaker, or in learning about what they do? Check out Emily Underwood’s cool piece on the “new cartographers” in Science Careers.


Dissolving electronics – medical sensors that disintegrate

When I last spoke to John Rogers from the University of Illinois, we talked about his new “electronic skin” – a patch that can be applied like a temporary tattoo, that monitors heartbeats and brain activity, and that flexes and bends without breaking. We talked about his curved camera, inspired by the human eye. We spoke about his flexible medical sensors that can mould to the contours of a beating heart or the fissures of a human brain. We chatted about the $500,000 Lemelson-MIT Prize that he had won for his inventions.

I tell you all this because I want you to understand that when John Rogers says his team’s new invention is “some of our best stuff ever”, he’s not speaking lightly.

Rogers has now created a line of “transient electronics”, which last for a specified amount of time before completely dissolving away.  Having made his name by taking rigid and brittle electronics and making them flexible and bendy, he has now flipped durability on its head too. Electronics are typically engineered to last as long as possible, but Rogers wants to create machines that will disintegrate after a given time. And his team have already shown how this disappearing tech could be used to make medical implants that are absorbed by the body after their work is done.

Medical implants are an obvious application, and the one that led them down this road in the first place. They have already been working on flexible sensors that can be implanted into the brain and heart, to monitor for signs of epilepsy or heart attacks. “The thing you bump up against is how to get these things to survive in the body for a long time without adverse effects,” says Rogers. “One way to deal with that problem is to move around it. A lot of these implants don’t need to last forever.”