
Will We Ever… Simulate the Brain?

Here’s the 15th piece from my BBC column

For years, Henry Markram has claimed that he can simulate the human brain in a computer within a decade. On 23 January 2013, the European Commission told him to prove it. His ambitious Human Brain Project (HBP) won one of two ceiling-shattering grants from the EC to the tune of a billion euros, ending a two-year contest against several other grandiose projects. Can he now deliver? Is it even possible to build a computer simulation of the most powerful computer in the world—the 1.4-kg cluster of 86 billion neurons that sits inside our skulls?

The very idea has many neuroscientists in an uproar, and the HBP’s substantial budget, awarded at a tumultuous time for research funding, is not helping. The common refrain is that the brain is just too complicated to simulate, and our understanding of it is at too primordial a stage.

Then, there’s Markram’s strategy. Neuroscientists have built computer simulations of neurons since the 1950s, but the vast majority treat these cells as single abstract points. Markram says he wants to build the cells as they are—gloriously detailed branching networks, full of active genes and electrical activity. He wants to simulate them down to their ion channels—the molecular gates that allow neurons to build up a voltage by shuttling charged particles in and out of their membrane borders. He wants to represent the genes that switch on and off inside them. He wants to simulate the 3,000 or so synapses that allow neurons to communicate with their neighbours.
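
To see how spare those point models are, here is a minimal leaky integrate-and-fire neuron in Python. It is a sketch for illustration only, not code from the HBP: everything Markram wants to add, from individual ion channels to gene switches to thousands of synapses per cell, would replace the single threshold rule at the heart of this little model.

# A "single abstract point" neuron: the leaky integrate-and-fire model.
# Illustration only. Detailed models of the kind Markram describes replace the
# one-line threshold rule below with equations for each type of ion channel.
dt = 0.1          # time step, milliseconds
tau = 10.0        # membrane time constant, ms
v_rest = -65.0    # resting potential, mV
v_thresh = -50.0  # spike threshold, mV
v_reset = -65.0   # potential right after a spike, mV

v = v_rest
spike_times = []
for step in range(5000):                             # 500 ms of simulated time
    i_input = 20.0 if 1000 <= step < 4000 else 0.0   # injected current (arbitrary units)
    v += dt * (-(v - v_rest) + i_input) / tau        # leak toward rest, plus input drive
    if v >= v_thresh:                                # all of spike generation, compressed
        spike_times.append(step * dt)                # into a single comparison
        v = v_reset

print(f"{len(spike_times)} spikes in 500 ms")

Real ion-channel models, such as the classic Hodgkin-Huxley equations for sodium and potassium currents, swap that one comparison for a set of coupled differential equations, and that is roughly the level of detail Markram wants for every cell.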

Erin McKiernan, who builds computer models of single neurons, is a fan of this bottom-up approach. “Really understanding what’s happening at a fundamental level and building up—I generally agree with that,” she says. “But I tend to disagree with the time frame. [Markram] said that in 10 years, we could have a fully simulated brain. I don’t think that’ll happen.”

Even building McKiernan’s single-neuron models is a fiendishly complicated task. “For many neurons, we don’t understand well the complement of ion channels within them, how they work together to produce electrical activity, how they change over development or injury,” she says. “At the next level, we have even less knowledge about how these cells connect, or how they’re constantly reaching out, retracting or changing their strength.” It’s ignorance all the way down.

“For sure, what we have is a tiny, tiny fraction of what we need,” Markram says. Worse still, experimentally mapping out every molecule, cell and connection is completely unfeasible in terms of cost, technical requirements and motivation. But he argues that building a unified model is the only way to unite our knowledge, and to start filling in the gaps in a focused way. By putting it all together, we can use what we know to predict what we don’t, and also refine everything on the fly as new insights come in.

The crucial piece of information, and the one Markram’s team is devoting the most time towards, is a complete inventory of which genes are active in which neurons. Neurons aren’t all the same – they come in a variety of types that perform different roles and deploy different genes. Once Markram has the full list—the so-called “single-cell transcriptome”—he is confident that he can use it to deduce the blend of different neurons in various parts of the brain, recreate the electrical behaviour of each type of cell, or even simulate how a neuron’s branches would grow from scratch. “We’re discovering biological principles that are putting the brain together,” he says.

For over two decades, his team have teased out the basic details of a rat’s neurons, and produced a virtual set of cylindrical brain slices called cortical columns. The current simulation has 100 of these columns, and each has around 10,000 neurons—less than 2 percent of a rat’s brain and just over 0.001 percent of ours. “You have to practice this first with rodents so you’re confident that the rules apply, and do spot checks to show that these rules can transfer to humans,” he says.

Eugene Izhikevich from the Brain Corporation, who helped to build a model with 100 billion neurons, is convinced that we should be able to build a network with all the anatomy and connectivity of a real brain. An expert could slice through it and not tell the difference. “It’d be like a Turing test for how close the model would be to the human brain,” he says.

But that would be a fantastic simulation of a dead brain in an empty vat. A living one pulses with electrical activity—small-scale currents that travel along neurons, and large waves that pass across entire lobes. Real brains live inside bodies and interact with environments. If we could simulate this dynamism, what would emerge? Learning? Intelligence? Consciousness?

“People think I want to build this magical model that will eventually speak or do something interesting,” says Markram. “I know I’m partially to blame for it—in a TED lecture, you have to speak in a very general way. But what it will do is secondary. We’re not trying to make a machine behave like a human. We’re trying to organise the data.”

That worries neuroscientist Chris Eliasmith from the University of Waterloo in Ontario, Canada. “The project is impressive but might leave people baffled that someone would spend a lot of time and effort building something that doesn’t do anything,” he says. Markram’s isn’t the only project to do this. Last November, IBM presented a brain simulation called SyNAPSE, which includes 530 billion neurons with 100 trillion synapses connecting them, and does… not very much.  It’s basically a big computer. It still needs to be programmed. “Markram would complain that those neurons aren’t realistic enough, but throwing a ton of neurons together and approximately wiring them according to biology isn’t going to bridge this gap,” says Eliasmith.

Eliasmith has taken a completely different approach. He is putting function first. Last November, he unveiled a model called Spaun, which simulates a paltry 2.5 million neurons but shows behaviour. It still simulates the physiology and wiring of the individual neurons, but organises them according to what we know about the brain’s architecture. It’s a top-down model, as well as a bottom-up one, and sets the benchmark for brain simulations that actually do something. It can recognise and copy lists of numbers, carry out simple arithmetic, and solve basic reasoning problems. It even makes errors in the same way we do—for example, it’s more likely to remember items at the start and end of a list.

But the point of Spaun is not to build an artificial brain either. It’s a test-bed for neuroscience—a platform that we can use to understand how the brain works. Does Region X control Function Y? Build it and see if that’s true. If you knock out Region X, will Spaun’s mental abilities suffer in a predictable way? Try it.
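
That logic is easy to mock up in miniature. The sketch below is not Spaun itself (Spaun is built with the Nengo simulation framework); it is a toy Hopfield-style memory, included only to show the shape of a lesion test: store a few patterns, silence a block of units standing in for Region X, and check whether recall, standing in for Function Y, still holds up.

# A toy lesion experiment, for illustration only. Five patterns are stored in a
# Hopfield-style attractor network; we then silence half of the units and compare
# recall accuracy before and after the "lesion".
import numpy as np

rng = np.random.default_rng(1)
n = 200
memories = rng.choice([-1, 1], size=(5, n)).astype(float)   # five stored patterns
w = memories.T @ memories / n                               # Hebbian weight matrix
np.fill_diagonal(w, 0.0)

def recall(cue, alive, steps=20):
    # Synchronous updates; units outside the 'alive' mask are held silent at zero.
    s = cue * alive
    for _ in range(steps):
        s = np.where(w @ s >= 0, 1.0, -1.0) * alive
    return s

def accuracy(state, target, alive):
    keep = alive.astype(bool)                               # score surviving units only
    return float(np.mean(state[keep] == target[keep]))

target = memories[0]
cue = target.copy()
cue[rng.choice(n, size=30, replace=False)] *= -1            # corrupt 15% of the cue

healthy = np.ones(n)
lesioned = healthy.copy()
lesioned[:100] = 0.0                                        # knock out "Region X"

print("intact recall:  ", accuracy(recall(cue, healthy), target, healthy))
print("lesioned recall:", accuracy(recall(cue, lesioned), target, lesioned))

The numbers it prints are beside the point; what matters is the workflow, in which a specific change to the model yields a specific prediction that an experiment could check.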

This kind of experiment will be hard to do with the HBP’s bottom-up architecture. Even if that simulation shows properties like intelligence, it will be difficult to understand where they came from. It won’t be a simple matter of tweaking one part of the simulation and seeing what happens. If you are trying to understand the brain and you do a really good simulation, the problem is that you end up with… the brain. And the brain is very complicated.

Besides, Izhikevich points out that technology is quickly outpacing our brains at many of the things they do well. “I can do arithmetic better on a calculator. A computer can play chess better than you,” he says. By the time a brain simulation is sophisticated enough to reproduce the brain’s full repertoire of behaviour, other technologies will be able to do the same things faster and better, and “the problem won’t be interesting anymore,” says Izhikevich.

So, simulating a brain isn’t a goal in itself. It’s a means to an end. It’s a way of organising tools, experts, and data. “Walking the path is the most important part,” says Izhikevich.

There are 10 Comments.

  1. Johnny O
    February 15, 2013

    Why are they starting at “the top”? How about dealing with “the basics” first – like eating, sleeping and sh*tting? That was how the brain started: simple neurological functions had to be established before anything complex could happen.

  2. Joonjeong Yi
    February 16, 2013

    Markram’s idea will fail, because no computer can simulate neurons that are interconnected differently in each individual’s brain connectome, as Dr. Sebastian Seung explained in his TED talk.

  3. John Kubie
    February 16, 2013

    The notion of bottom-up and top-down modeling isn’t new. People have been modeling single neurons and small networks of neurons for decades, slowly adding specific cellular features. At the same time, people have been modeling larger networks made of simpler neurons. And people have been making robotic devices using sensory systems, motor systems and computations somewhat similar to brain computations for a while. Each of these approaches is advancing, and the approaches are converging. Two quick points:
    1. Modeling simple invertebrate nervous systems has advanced rapidly, but remains greatly simplified compared to the real thing. These will ‘work’ well before a human brain is simulated. I feel we have to understand how the brain evolved before trying to build a complete one.
    2. I feel trying to build a brain in a vat is wasted effort. A mature human brain gets that way through a developmental sequence that requires extensive interaction with the environment. This can’t be short-circuited. My strong hunch is that the first close approximation to a functional brain will be a robot, not a computer.

  4. Chris
    February 17, 2013

    I think it’s disingenuous to call the brain “the most powerful computer in the world”. It functions with great incoherence, noisiness and distraction. On a macroscopic level it is not that deterministic and works nothing like the Turing Machine – the accepted mathematical definition of a computer. No human can outperform a modern computer at speed in strictly following an algorithm to its termination. One must therefore not compare computer speed/memory with the brain, but instead compare neural matter’s capabilities with simulated neural nets. I reckon we are nearly capable of simulating smaller brains on current machines, but we require better computer models more than better hardware.

  5. Roedy Green
    February 18, 2013

    We are vain. Our large brain took a twinkling to evolve compared with the snail brain. It can’t be more than half a dozen tricks. There was not time for more subtlety.

  6. Roedy Green
    February 18, 2013

    Back in the ’70s I wrote OPTOW, a computer program to design high-voltage transmission lines. Engineers came by to laugh at its efforts. Every week I steadily improved it. One week it was as good as a human, the next week it was 10% better. 50 engineers with PhDs and master’s degrees were out of a job. Psychologically, we judge any capability even a bit below our own as hopeless. We also fail to take into account that artificial intelligence constantly improves, unlike our own. I think AI will eventually take the world by surprise, just as OPTOW did, for the exact same reasons.

  7. Jeff
    February 18, 2013

    The money for this grant was meant for a high-risk, potentially high-reward project. Someone with ambition wants to try to tackle a grand challenge in a bold way. Let them. We could sit around for several more centuries and debate whether it’s going to help or not, or we can applaud those last few people in the world who actually TRY TO DO SOMETHING.

    My god, I’m so glad our generation wasn’t tasked with exploring/settling the Americas, or fighting the world wars. Nothing would have gotten done, as everyone would want to debate it for a lifetime before getting started.

  8. Stephen Minhinnick
    February 18, 2013

    “The project is impressive but might leave people baffled that someone would spend a lot of time and effort building something that doesn’t do anything,” says Chris Eliasmith.

    What does the Large Hadron Collider “do”? Sounds like Henry Markram plans to build something similar to work out the underlying details of the working brain. You could call it the “Large Neuron Collider”.

  9. Stan Sandler
    February 23, 2013

    The worker bee brain has only about a million neurons. With that it can control its body, interpret its senses, do all the tasks in a beehive (sequentially as it ages), function as a member of a complicated society (communicate…) and do a great amount of learning (spatial mapping of nearly a thousand hectares, learning mazes in only 6 times the time a rat needs…). So why start with something the size and complexity of the human brain?

  10. Roedy Green
    February 24, 2013

    Circa 1990 I met Bernard C. Till, who for his master’s thesis had simulated the nervous system of a nematode. He had mapped the 47 (149?) neurons of the beast, the connections and the “S functions” (the key to the magic) of the various connections. What utterly blew my mind is that the simulated worm exhibited EVERY known behaviour of the worm.

    If I had tried to create such a simulation directly with a computer program, it would have taken many megabytes of code.

    Along similar lines, I learned that humans are specified by something like 18,000 genes. Computer programmers would require terabytes of design documents to specify a human, and its biochemistry, embryology, etc. We have a lot to learn from nature’s terseness.
