
In Defense of Brain Imaging

Brain imaging has fared pretty well in its three decades of existence, all in all. A quick search of the PubMed database for one of the most popular methods, functional magnetic resonance imaging (fMRI), yields some 22,000 studies.  In 2010 the federal government promised $40 million for the Human Connectome Project, which aims to map all of the human brain’s connections. And brain imaging will no doubt play a big part in the president’s new, $4.5 billion BRAIN Initiative. If you bring up brain scanning at a summer BBQ party, your neighbors may think you’re weird, but they’ll be somewhat familiar with what you’re talking about. (Not so for, say, calcium imaging of zebrafish neurons…)

And yet, like any youngster, neuroimaging has suffered its share of embarrassing moments. In 2008, researchers from MIT reported that many high-profile imaging studies used statistical methods resulting in ‘voodoo correlations’: artificially inflated links between emotions or personality traits and specific patterns of brain activity. The next year, a Dartmouth team put a dead salmon in a scanner, showed it a bunch of photos of people, and then asked the salmon to determine what emotion the people in the photos were feeling. Thanks to random noise in the data, a small region in the fish’s brain appeared to “activate” when it was “thinking” about others’ emotions. Books like Brainwashed, A Skeptic’s Guide to the Mind, Neuro: The New Brain Sciences and the Management of the Mind, and the upcoming The Myth of Mirror Neurons have all added fuel to the skeptical fire.

There are many valid concerns about brain imaging — I’ve called them out, on occasion. But a new commentary in the Hastings Center Report has me wondering if the criticism itself has gone a bit overboard. In the piece, titled “Brain Images, Babies, and Bathwater: Critiquing Critiques of Functional Neuroimaging,” neuroscientist Martha Farah makes two compelling counterpoints. One is that brain imaging methods have improved a great deal since the technology’s inception. The second is that its drawbacks — statistical pitfalls, inappropriate interpretations, and the like — are not much different from those of other scientific fields.

First, the improvements. At the dawn of brain imaging, Farah notes, researchers were concerned largely with mapping which parts of the brain light up during specific tasks, such as reading words or seeing colors. This garnered criticism from many who said that imaging was just a flashy, expensive, modern phrenology. “If the mind happens in space at all, it happens somewhere north of the neck. What exactly turns on knowing how far north?” wrote philosopher Jerry Fodor in the London Review of Books.

But the purpose of those early localization experiments, according to Farah, was mostly to validate the new technology — to make sure that the areas that were preferentially activated in the scanner during reading, say, were the same regions that older methods (such as lesion studies) had identified as being important for reading. Once validated, researchers moved on to more interesting questions. “The bulk of functional neuroimaging research in the 21st century is not motivated by localization per se,” Farah writes.

Researchers have developed new ways of analyzing imaging data that don’t have anything to do with matching specific regions to specific behaviors. Last year, for example, I wrote about a method developed by Farah’s colleague Geoffrey Aguirre that allows researchers to study how a brain adapts to seeing (or hearing or smelling or whatever) the same stimulus again and again, or how the brain responds to a stimulus differently depending on what it experienced just before.

Other groups are using brain scanners to visualize not the activity of a single region, but rather the coordinated synchrony of many regions across the entire brain. This method, called ‘resting-state functional connectivity’, has revealed, among other things, that there is a network of regions that are most active when we are daydreaming, or introspecting, not engaged in anything in particular.

All that is to say: Today’s neuroimaging is more sophisticated than it used to be. But yes, it still has problems.

Its statistics, for one thing, are complicated as hell. Researchers divide brain scans into tens of thousands of ‘voxels’, or three-dimensional pixels. And each voxel gets its own statistical test to determine whether its activity really differs between two experimental conditions (reading and not-reading, say). Most statistical tests are considered legit if they reach a ‘significance level’ of .05 or less, which means that there’s a 5 percent or less chance that the activity occurred due to random chance. But if you have 50,000 voxels, then a significance level of .05 means that 2,500 of them would look significant by chance alone!

This problem, known as ‘multiple comparisons’, is what caused a dead salmon to show brain activity. “There’s no simple solution to it,” Farah writes. The salmon study, in fact, used a much more stringent significance level of .001, meaning that there was just a .1 percent chance that any given voxel’s activity was due to chance. And yet, that cut-off would still mean a brain of 50,000 voxels would have 50 spurious signals.
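The arithmetic is easy to check for yourself. Here is a minimal simulation (hypothetical voxel counts, no real fMRI data involved) of what pure noise does across 50,000 independent tests:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n_voxels = 50_000

# Under the null hypothesis -- no real signal anywhere in the "brain" --
# each voxel's p-value is uniformly distributed between 0 and 1.
p_values = rng.uniform(0, 1, n_voxels)

# Count how many voxels look "significant" at each threshold.
print((p_values < 0.05).sum())   # roughly 2,500 false positives
print((p_values < 0.001).sum())  # still roughly 50 false positives
```

Run it a few times with different seeds and the counts hover around 2,500 and 50. That is the dead-salmon effect in a nutshell: run enough tests and noise alone produces “activity.”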

Researchers can control for multiple comparisons by focusing on a smaller region of interest to begin with, or by using various statistical tricks. Some studies don’t control for it properly. But then again — and here’s Farah’s strongest point — the same could be said for lots of other fields. Case in point: a 2006 study in the Journal of Clinical Epidemiology compared the astrological signs and hospital diagnoses of all 10.7 million adult residents of Ontario, finding that “residents born under Leo had a higher probability of gastrointestinal hemorrhage, while Sagittarians had a higher probability of humerus fracture.”
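The best-known of those corrections, the Bonferroni adjustment, simply divides the significance threshold by the number of tests. A sketch, again on hypothetical noise-only data (real fMRI packages use more refined cluster-level and false-discovery-rate methods):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_voxels = 50_000
alpha = 0.05

# Noise-only p-values again: uniform under the null hypothesis.
p_values = rng.uniform(0, 1, n_voxels)

# Uncorrected threshold: thousands of spurious "activations".
print((p_values < alpha).sum())

# Bonferroni-corrected threshold: alpha divided by the number of tests,
# so the chance of even ONE false positive across all 50,000 voxels
# stays at about 5 percent.
print((p_values < alpha / n_voxels).sum())  # almost always 0
```

The trade-off is that such a strict threshold can also wipe out genuine but modest signals, which is one reason researchers prefer region-of-interest analyses or less conservative corrections.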

A different statistical snag led to the aforementioned voodoo correlations. These false associations between brain and behavior arose because researchers used the same dataset both to discover a trend and to test the newly discovered trend’s predictions. It’s obviously a problem that many headline-grabbing studies (several were published in top journals) made this mistake. Here again, though, the error is not unique to brain imaging. The same kind of double-dipping happens in epidemiology, genetics, and finance. For example, some economists will use a dataset to group assets into portfolios and then use the same dataset to test pricing models of those assets.
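A toy simulation makes the double-dipping problem concrete. Everything below is pure noise (hypothetical “activity” and “trait” scores), yet selecting the best voxel and then reporting its correlation on the same data yields an impressive-looking result:

```python
import numpy as np

rng = np.random.default_rng(seed=2)
n_subjects, n_voxels = 20, 10_000

# Random "brain activity" and a random "personality trait" score:
# by construction there is no real relationship anywhere.
activity = rng.standard_normal((n_subjects, n_voxels))
trait = rng.standard_normal(n_subjects)

# Step 1 (discovery): correlate the trait with every voxel, keep the best.
r = np.array([np.corrcoef(activity[:, v], trait)[0, 1]
              for v in range(n_voxels)])
best = int(np.argmax(np.abs(r)))

# Step 2 (the sin): report that voxel's correlation from the SAME data.
print(r[best])  # typically |r| > 0.6, despite pure noise

# The honest alternative: measure the selected voxel in an independent
# sample -- here, a fresh draw of noise.
fresh_activity = rng.standard_normal(n_subjects)
fresh_trait = rng.standard_normal(n_subjects)
print(np.corrcoef(fresh_activity, fresh_trait)[0, 1])  # hovers near zero
```

The fix, as with the economists’ portfolios, is to split the data: discover the voxel in one half and test it in the other.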

Perhaps the stickiest criticism lodged against brain imaging is the idea that it is more “seductive” to the public than other forms of scientific data. One 2008 study reported that people are more likely to find news articles about cognitive neuroscience convincing if the text appears next to brain scans, as opposed to other images or no image. “These data lend support to the notion that part of the fascination, and the credibility, of brain imaging research lies in the persuasive power of the actual brain images themselves,” the authors wrote. Farah points out, however, that four other laboratories (including hers) have tried — and failed — to replicate that study.

Anecdotally, I’ve certainly noticed that my non-scientist friends are often awe-struck by brain imaging in a way that they aren’t with, oh, optogenetics. But even if that’s the case, and brain imaging is especially attractive to the public, why would that be a valid argument against its continued use? It would be like saying that because the public is interested in genetic testing, and because genetic testing is often misinterpreted, scientists should stop studying genetics. It doesn’t make much sense.

Brain imaging isn’t a perfect scientific tool; nothing is. But there are many good reasons why it has revolutionized neuroscience over the past few decades. We — the media, the public, scientists themselves — should always be skeptical of neuroimaging data, and be quick to acknowledge shoddy statistics and hype. Just as we should for data of any kind.

19 thoughts on “In Defense of Brain Imaging”

  1. I agree with everything in this article. fMRI analysis has been improving by leaps and bounds over the past decade, and can now be used to test specific models of representation and function (not just rough localization). The field still has a ways to go in terms of statistical analysis – many studies determine statistical clustering thresholds using a random field analysis, whose assumptions I’ve never seen properly validated in fMRI data. But the dead salmon experiment is more of a (funny!) joke than an actual criticism of the real methods most respectable fMRI scientists use.

  2. The fact that other fields also have the problem of Vul’s “non-independence” error is surely a call for more stringent criteria in reviewing papers, rather than a defence of fMRI. While the author argues the corollary (that since fMRI studies are not alone in having this problem, they should not be singled out), this does not make fMRI data inherently useful unless the statistical quirks can be avoided. The point is more or less moot – the technique is not itself made useless by invalid data coming from it, but on the other hand there may be other techniques where real statistical significance is more tractable.

  3. An important point to make here is that the definition of significance levels given in the article is actually incorrect. It is a common error to state that “most statistical tests are considered legit if they reach a ‘significance level’ of .05 or less, which means that there’s a 5 percent or less chance that the activity occurred due to random chance.” But in fact, it goes the other way around: a p value of 0.05 means that, by chance alone, one would expect a difference as large as was observed in 5% of cases. Importantly, this is why the actual chance that a “significant” finding represents a real result may be very low, especially in the case of unlikely hypotheses or multiple comparisons, as occurs in fMRI. For a more thorough discussion of the matter, see http://www.nature.com/news/scientific-method-statistical-errors-1.14700

  4. The quality of fMRI studies is gradually improving. But even the mere localization studies are important, don’t you think? Just ask the patients about to undergo neurosurgery for epilepsy or brain tumors ….

  5. “Last year, for example, I wrote about a method developed by Farah’s colleague Geoffrey Aguirre that allows researchers to study how a brain adapts to seeing (or hearing or smelling or whatever) the same stimulus again and again”. Studies like this show that the brain DOES adapt to seeing (or hearing or smelling or whatever) the same stimulus again and again. But what do they tell us about HOW the brain adapts to seeing (or hearing or smelling or whatever) the same stimulus again and again? Two distinct issues.

  6. Why didn’t you report on the other articles in the Hastings Report — only the one by Farah, which was clearly intended to “balance” the negativity of the others, and only the apologetic parts of Farah’s piece? She is actually almost as critical as the others in places.

    Do you really think that Helen Mayberg, *the* top researcher in the neuroscience of depression, is off base in her Hastings Report paper (which you declined to mention) when she says neuroscience is nowhere near actual clinical application for depression, despite rampant initial optimism, and doubts whether neuroscience will *ever* be able to deal with the full range of depression?

    Even your hero Geoffrey Aguirre said in his Hastings Report piece (which you also ignored)… “While there are several practical limits on the biological information that current technologies can measure, these limits—as important as they are—are minor in comparison to the fundamental logical restraints on the conclusions that can be drawn from brain imaging studies.”

    Sorry, but I have to file your article under “apologetics”, just when we need to be taking a really hard look at neuroscience, lest the already large bubble become so huge that, when it bursts it does a lot of widespread damage to good people and ideas. I realize how hard it is for a journalist to break from the pack, but someone needs to have the moxie to do it.

    1. Thanks for reading, David. I’ve actually written a good deal about the limits of neuroscience, the BRAIN project, etc., and so have many others, as I spent a good chunk of this post describing.

  7. Yes, I saw your discussion of the limits of neuroscience — good for you!

    But why did you describe the Hastings Report as going “a bit overboard” and ignore five of its six articles as if they did not exist, reporting *only* the explicitly apologetic one (and only the apologetic parts, at that)?

    You are not alone. I’ve been following reaction to the Hastings Report since it came out, and all the stories but one have been almost equally apologetic and selective in their reporting, or more so. And that includes several blogs whose charter is actually critique of neuroscience.

    But perhaps the most salient — and disturbing — fact about the followup is how little of it there has been, despite the incredible scientific luminosity of the authors. In fact, maybe that’s the problem — it’s too forceful and compelling. One can only conclude that the neuroscience community is scared you-know-what-less about this report and its implications. Perhaps that is *the* story here — and not only did you and most everybody else miss it, you are part of the problem!

  8. There hasn’t been a large reaction to the Hastings Report because, in general, it is simply restating things that are well known within the neuroscience and brain imaging community.

    I can’t pretend that the “magic” of fMRI isn’t oversold – I did fMRI research for years, and I’ve seen more than my share of circular reasoning and reverse inference – but beneath the hype there is a huge amount of good research.
    There is also quite a bit of discussion about the limits of fMRI and best practices for reporting fMRI results.

    I am familiar with Mayberg’s work because I also used fMRI to study depression. And I agree with her- I think we have a long way to go before fMRI (or any sort of brain imaging) can be considered a useful part of psychiatric diagnosis or treatment. But not all the reasons for this have to do with fMRI. Mayberg herself has written about the difficulties inherent in defining major depression, and has postulated that we are actually grouping several conditions with different biological causes together simply because they share overlapping symptoms (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3014414/). This confusion makes it very difficult to use fMRI as a diagnostic tool currently. However, with additional research, I can see neuroscience and psychiatry converging.

    1. Did you read Mayberg’s Hastings paper? (http://onlinelibrary.wiley.com/doi/10.1002/hast.296/full) Yes, it may have been said before, but not as forcefully, and not by a top researcher in the field whose own hopes for breakthroughs have been repeatedly dashed. And it was not sponsored by a world-class bioethics center (Hastings). And it was not accompanied by five other articles by illustrious scientists with similar messages (and yes, despite this blog post, Farah’s paper acknowledges the “bathwater” that needs to be thrown out as well as the “baby” to be saved).

      Also, please read Geoffrey Aguirre’s Hastings paper, including my above citation from the abstract, starting especially at the section on the theoretical limits of neuroscience inference — http://onlinelibrary.wiley.com/doi/10.1002/hast.294/full#hast294-sec-0050.

      The optimism you (Borghi) express — “However, with additional research, I can see neuroscience and psychiatry converging.” — is exactly what was said when fMRI first came along. In other words, this *is* the additional research, and with it things have only gotten worse. Instead of neuroscience and psychiatry (or more accurately, clinical psychology) converging, we have found that they actually reinforce each other’s complexity and uncertainty — we now have the multiplication of their individual complexities (which are bad enough for psychology alone), or maybe the exponential.

      Frankly, if this were a horse race, I’d say that the horse was highly touted a while ago, but stumbled out of the gate in its prior race and has run out of the money in every race so far. Why are you betting on a win in the next race, not to mention taking the Triple Crown?!

  9. I’ve read the Mayberg paper and the Aguirre paper, and both of them outline points I’ve seen discussed over and over again in the neuroimaging literature. Aguirre is certainly not the only voice in discussions about the logical constraints of fMRI analysis; Brad Postle and colleagues write about it extensively (goo.gl/P1wvON), and Nikos Logothetis, who is probably as preeminent in fMRI circles as they come, has written about it (goo.gl/1MBD1S). And these discussions aren’t just recent; they go back to the earliest days of event-related fMRI analysis (goo.gl/BtvjXA). Of course exaggerations and overhype about fMRI persist, most likely because of how easily fMRI captures the popular imagination (goo.gl/ogNRYR), but it’s not correct to assume there hasn’t been an enormous amount of discussion and criticism within the field before now.

    I’m a bit uncertain how the quote from Aguirre supports your point. Of course there are logical constraints to fMRI. There are logical constraints to the conclusions that can be drawn from every method and technique in the whole of science. At the conclusion of the section you asked me to (re-)read is the following, “Despite the many cautions and limitations, I will close by recognizing the astonishing power of neuroimaging techniques. Contrary to the claims of some critics, neuroimaging is not modern phrenology. As the field has been embracing new, powerful analytic techniques, there has been a shift in goals; instead of focusing on explaining how mental states arise, neuroscientists are now also trying to predict mental states—at least the mental states of those individuals who have agreed to cooperate with the study.” That isn’t a condemnation of the field, that is a statement about its continuing potential.

    Similarly, I am not sure the Mayberg piece is as negative as you believe it to be. Yes it is principally about tempering enthusiasm for fMRI’s utility as a psychiatric tool. But, in my reading, those statements are meant to apply only to the present. Mayberg even titles the last section of her piece “Cautious Optimism about Future Clinical Utility.”

    As for my own optimism, perhaps I was too general when I said I thought neuroscience and psychiatry would eventually converge. Mayberg mentions the utility of fMRI for understanding the “different types, degrees, and etiologies of depression.” The major difficulty in using neuroscience to diagnose psychiatric conditions is that our definitions of those conditions are based on symptoms. There is no reason to think that the cluster of symptoms we currently call depression all arise from the same biological signature. Because neuroscientists generally have to follow DSM criteria (because there aren’t other definitions), study populations are immensely heterogeneous. The only way to remedy that is to do more research. And as we’re seeing with NIMH’s shift away from the DSM, neuroscience research will likely cause a shift in psychiatry. With more research (a lot more, that will take a lot of time) neuroscience may not only be a helpful tool for diagnosis and treatment but may transform how we think about mental illness.

    1. OK, wanna make some bets? I think that in 10 years we will judge ourselves farther than ever from all the goals you set out. The shift to neuroscience in place of the DSM will be judged a failure. We will be no closer to neuroscience helping diagnose and treat mental disorders than we are today (which is nowhere, according to Mayberg). It will not transform our understanding of mental illness. There will be talk of reducing funding for psychological neuroscience. And the BRAIN Initiative will be judged pretty much a waste of money, at least in terms of any breakthroughs or concrete benefits.

    2. Is my bet unfair because 10 years is too short? How long do you need – 20 years, 50, 100? 1000? Do you really think you can predict the accomplishments of any developing field, let alone neuroscience, out past 5 years, if that? Heck, the iPad is only 4 years old – do you think anybody in 2009 could have predicted where we would be in that realm today, let alone in 2004 or 1994?

      Admitting that 10 years is not enough is tantamount to saying you have no idea what will happen. Maybe we’ll run into the equivalent of Heisenberg’s uncertainty principle or the speed of light. In fact, I think what we will run into – or come to realize is a barrier – will be complexity!

      In fact that’s just what has already happened in depression neuro-research. As Mayberg points out, optimism was quite high 20 years ago that with fMRI we would soon nail not only forward but also reverse brain correlates of many if not most mental disorders and replace the DSM approach. That clearly didn’t work out very well, and I think complexity is arguably the reason. Mayberg documents it for one issue – depression – and as far as I can tell, it’s only getting worse. I’ve probably read a couple dozen cognitive neuroscience papers in the last year or so in a variety of subfields and I think every one has had more gaps and limitations than actual results, and raised many more significant questions than it answered.

      I think scientists (and, without knowing it, the general public) are implicitly assuming there will be breakthroughs, like Maxwell’s equations in the mid-1800s, that cut through the complexity and unify a lot of disparate observations and theories. So scientists grasp at things like so-called mirror neurons as such breakthroughs. But as they dig in they find that not only is it a lot trickier than they initially imagined, but in fact there is no real chance that complex mental phenomena like empathy or emulation can be characterized in terms of individual neuron behavior.
      So rather than neuroscience cutting through the uncertainty and complexity of psychology (DSM being a symptom of that), it’s actually adding more complexity on top, so that the complexity of the two (probably nonlinearly chaotic) systems together is much worse even than either alone, neither of which we really understand.

      Just to clarify my “bet”: BRAIN and other neuroscience research will continue to tell us more and more about the *brain* per se — no question about that. With respect to the *mind*, however, neuroscience will continue to tell us pretty much nothing – at least nothing we didn’t already know by non-neuro means. In fact, as it has already been doing, research will probably reveal ever more complexity, uncertainty and ambiguity about the connection to brain. So rather than any significant simplifying breakthroughs, we will in fact see the opposite. The complexity will grow and grow until it overwhelms any attempts to grasp it. In effect, we will have a model which is just as complex (if not more so) than the phenomena it purports to explain.

      Thus, we may be technically closer to the holy grail of understanding mind via brain, but we will in fact *feel* farther than ever from that goal – or come to realize that we were not anywhere close in the first place, despite our optimism. We are quite likely climbing Mt. Everest to get to the moon, so camp 2 is closer than camp 1, but we may be simply on the wrong path. There may not be a reductionist path to understanding mind via science.

      Your observation about depression is a good case in point. You said, “There is no reason to think that the cluster of symptoms we currently call depression all arise from the same biological signature.” That’s actually 20-20 hindsight. You say it’s obviously wrong today, but it wasn’t so obviously wrong 20 years ago. In fact, it’s exactly what Mayberg and other neuroscientists did hope and expect in the early days of fMRI, and for a while thought they had confirmed: “We thought we had found an illness biomarker—an objective way to diagnose depression.”

      But it just turned out they hadn’t done enough research, or put another way, had jumped to conclusions on the basis of those hopes and initial findings. “Unfortunately, that hypothesis quickly required revision… By the late 1990s, it was increasingly clear that despite the seemingly comparable diagnostic criteria used to enroll patients across studies, the functional brain profiles of depressed patients could be quite varied. …we were forced to conclude that depression could not be reliably diagnosed in an individual patient using functional neuroimaging scans.” And, as you yourself noted, because neuroscientists generally have to follow DSM criteria (because there aren’t other definitions), study populations are immensely heterogeneous.

      In other words, depression is not one thing – there are many types and subtypes, and it’s a symptom of many underlying things. It’s like a cough – there are many kinds, and each is potentially a symptom of many things. Except in the medical case, we can identify most of the underlying causes and make a reasonable diagnosis – most of the time. In psychology/neuroscience, we have nowhere near such certainty, so the complexity and uncertainty of the underlying issues, whether psychological or organic (neurobiological), if we can even speak of them at this point (or ever), and of the symptom (depression) are all multiplied together rather than resolving into some tractable schema.

      So my 10-year prediction is a negative one – that at the current pace in high-order neuroscience (excluding motor and sensory), in the next 10 years we will discover a lot about the brain per se, but make no more progress in understanding mind and its disorders via brain than we have in the past 20, which is pretty close to zero. We will still be “calibrating” neuroscience by measuring it against what we know about mind by researching behavior, mental disorders and their treatments directly. RDoC and BRAIN will be judged failures, at least in the sense of still having no foreseeable impact on diagnosis or treatment of mental disorders or even basic understanding.

      In other words, we will be where we are now, with 10 more hard-driving years and continuing optimism behind us. To hope otherwise is magical thinking — it’s been fruitless until now, but dark the dawn when day is nigh. As optimism collapses, the gap between expectations and reality will be too wide to paper over any longer with articles in the popular press. Despair will be in the air with respect to *ever* making real progress in understanding mind via brain.

      Negative outlook? Yep! Wanna make that bet?

  10. I don’t think that is a fair bet.

    We unquestionably know a lot more about how psychiatric conditions like depression affect the brain than we did ten years ago (here is a nice review specific to depression- goo.gl/H92uks). Say what you want about RDoC, but it demonstrates that neuroscience is already transforming our understanding of mental illness.

    I think it is important to consider how we should judge the success or failure of applying neuroscience to mental illness. The ultimate goal is obviously to apply fMRI and other tools to perform more objective diagnoses and optimize treatment, and obviously we’re not there yet. But this doesn’t mean all the research conducted on the neural basis of mental illness has been a failure.

    I’d say that we’re still in the basic research phase of applying neuroscience to psychiatry. We currently don’t know enough about the relationship between psychiatric conditions and the brain to apply fMRI (or any other neuroscience technique) to diagnosis or treatment. But that doesn’t mean we’ll never get there. We’re closer now than we were ten years ago, and I have no doubt that we’ll be even closer ten years from now. As our understanding of the neural basis of these conditions continues to improve, so will our ability to diagnose and treat.

    Mayberg does mention how neuroscience may never be applicable to all forms of depression, a point I have to acknowledge as probably correct. But depression is hugely heterogeneous. Just because we probably won’t be able to apply neuroscience to the treatment of every single case of depression does not mean we won’t be able to apply it to any case. If you read the Mayberg paper, most of it is about how, though it can’t currently be used to treat depression, fMRI has been an enormously powerful tool for exploring how the condition (or conditions) affect the brain. This is not a failure. This is a necessary first step.

    As for your predictions, the Human Brain Project (the European counterpart to the BRAIN Initiative) is already making exciting progress in generating maps of the brain. The projects being started under the auspices of the BRAIN Initiative seem equally likely to lead to significant insights. It would be naive to assume neuroscience will transform psychiatry completely in the next decade, but I think it’s equally cynical to assume that we’ll make no progress.

    The use of DSM criteria to characterize mental illness is clearly flawed.

    The result of all this research has been rapidly developing pictures of how these conditions affect the brain. Though fMRI research hasn’t yet directly affected diagnosis and treatment, it has contributed significantly to the basic understanding of these diseases necessary for the tre

    developing a much clearer picture of how these conditions affect the brain. We’re

  11. I hit enter before editing that last post, it was supposed to end before the sentence about the DSM. Anyway, I stand by my point, just because we’re not currently able to use fMRI as a diagnostic or treatment tool doesn’t mean the research has failed.

  12. We have recently published an (open-access) article on deficient approaches to human neuroimaging which may be interesting for the discussion here. Broadly speaking, we claim that the common methodological frameworks used for brain mapping (statistical parametric mapping) lead to a distorted picture of human brain function.



  13. Thank you for this important article, from authors with major scientific credentials. (Not to minimize the others, I note that three of them are from branches of the Max Planck Institute.)

    My first thought upon reading it is: how could these methodological deficiencies and — more important — likely widespread incorrect, even meaningless results have eluded observation over some 20 years of thousands of fMRI studies? The answer is that the world of functional brain localization is almost entirely a self-contained universe, with no connections to other realities that enable confirmation of results or, more to the point, applications that can be tested for validity and efficacy in human terms.

    To say that we are building only theoretical models does not absolve neuroscience. General Relativity (the theory of gravity) was not considered firmly established until some of its empirical predictions, such as gravitational lensing, were confirmed. The standard model of particle physics, despite gaps and issues, has made many confirmed predictions, often to three decimal places. Today’s neuroscience totally lacks anything comparable in terms of testable predictions — it is playing in a sandbox of its own devising. And, if this article is correct, even if neuroscience could be correlated to a testable reality it would almost certainly fail miserably.

    But again, the reality to which neuroscience supposedly refers is the human mind, psychology. It is worth quoting in detail what this article says about that…

    “Terminology of Psychology … it cannot be emphasized strongly enough how important an objective terminology is when it comes to adequate scientific reasoning. Consider for instance terms from neuroscience such as synapse, neuron, action potential, cortical column, gyrus and sulcus (Turner, 2012). It is easy to define such terms objectively, thus allowing qualified scientists to identify and investigate the object of research. This is, however, not always the case for terms used in cognitive neuroscience: consider for instance terms such as perception, consciousness, attention and altruism. These terms are often vaguely defined, if at all. Thus, the employed terminology is often beyond objective scientific definability (Turner, 2012). Curiously enough, there are even cases where the terminology of brain-mapping studies closely resembles Gall’s phrenology (Poldrack, 2010). Finally, the terminology of cognitive neuroscience may depend on the cultural background and the current Zeitgeist. It thus remains unclear whether a consensus ontology is achievable at all.”

    In my words, combining the uncertain methodology of neuroscience and the inherent complexity of the brain with the lack of scientific definition and complexity of psychology leads to a combination of uncertainties and complexities that makes the accuracy and meaning of the results highly suspect.

  14. Does controlling for multiple comparisons really qualify as a “statistical trick”? My understanding is that it changes the criteria for significance to account for the number of comparisons, making it more difficult to get a ‘false positive’. If any brilliant person reading this can clarify for me, that would be really appreciated.

    I also wonder if hypothesis testing is the main statistical tool used in imaging studies. I’ve seen a lot of factor analysis and many different kinds of tools used to eke out relationships in studies using fMRI. Am I way off base?

    Finally, I think that the value of theory in these disciplines is that it can help avoid stating that random or accidental patterns are true indicators of what’s happening. Maybe as the theory develops our ability to block out the “static” will improve.
