National Geographic

The Case for Junk DNA

Genomes are like books of life. But until recently, their covers were locked. Now, at last, we can open the books and page through them. Yet we have only a modest understanding of what we’re actually seeing. We are still not sure how much of our genome encodes information that is important to our survival, and how much is just garbled padding.

Today is a good day to dip into the debate over what the genome is made of, thanks to the publication of an interesting commentary from Alex Palazzo and Ryan Gregory in PLOS Genetics. It’s called “The Case for Junk DNA.”

The debate over the genome can get dizzying. I find the best antidote to the vertigo is a little history. This history starts in the early 1900s.

At the time, geneticists knew that we carry genes–factors passed down from parents to offspring that influence our bodies–but they didn’t know what genes were made of.

That changed starting in the 1950s. Scientists recognized that genes were made of DNA, and then figured out how the genes shape our biology.

Our DNA is a string of units called bases. Our cells read the bases in a stretch of DNA–a gene–and build a molecule called RNA with a corresponding sequence. The cells then use the RNA as a guide to build a protein. Our bodies contain many different proteins, which give them structure and carry out jobs like digesting food.
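
To make that flow concrete, here is a toy sketch of the DNA–RNA–protein pipeline in Python. The five-codon table is a tiny, illustrative subset of the real 64-codon genetic code, and the fifteen-base “gene” is invented for the example.

```python
# Toy sketch of the DNA -> RNA -> protein flow described above.
# CODON_TABLE is a small illustrative subset of the real genetic code.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "AAA": "Lys", "UAA": "STOP",
}

def transcribe(dna: str) -> str:
    """Build an RNA copy of a DNA coding strand (T becomes U)."""
    return dna.replace("T", "U")

def translate(rna: str) -> list:
    """Read the RNA three bases at a time, mapping codons to amino acids."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        amino = CODON_TABLE.get(rna[i:i + 3], "?")
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

rna = transcribe("ATGTTTGGCAAATAA")  # a made-up 15-base "gene"
print(translate(rna))                # ['Met', 'Phe', 'Gly', 'Lys']
```

Real cells do all of this with molecular machinery rather than dictionaries, of course, but the logic of the lookup is the same.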

But in the 1950s, scientists also began to discover bits of DNA outside the protein-coding regions that were important too. These so-called regulatory elements acted as switches for protein-coding genes. A protein latching onto one of those switches could prompt a cell to make lots of proteins from a given gene. Or it could shut down the gene completely.

Meanwhile, scientists were also finding pieces of DNA in the genome that appeared to be neither protein-coding genes nor regulatory elements. In the 1960s, for example, Roy Britten and David Kohne found hundreds of thousands of repeating segments of DNA, each of which turned out to be just a few hundred bases long. Many of these repeating sequences were the product of virus-like stretches of DNA. These pieces of “selfish DNA” made copies of themselves that were inserted back into the genome. Mutations then reduced them to inert fragments.

Other scientists found extra copies of genes that had mutations preventing them from making proteins–what came to be known as pseudogenes.

The human genome, we now know, contains about 20,000 protein-coding genes. That may sound like a lot of genetic material. But it only makes up about 2 percent of the genome. Some plants are even more extreme. While we have about 3.2 billion bases in our genomes, onions have 16 billion, mostly consisting of repeating sequences and virus-like DNA.
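
The proportions quoted above are easy to check with back-of-envelope arithmetic; the numbers below are the round figures from the text, not precise measurements.

```python
# Rough arithmetic behind the figures quoted above.
human_genome = 3_200_000_000    # ~3.2 billion bases
onion_genome = 16_000_000_000   # ~16 billion bases
coding_fraction = 0.02          # protein-coding DNA is ~2% of the human genome

coding_bases = human_genome * coding_fraction
print(f"Human protein-coding DNA: ~{coding_bases / 1e6:.0f} million bases")
# -> Human protein-coding DNA: ~64 million bases
print(f"Onion genome is {onion_genome / human_genome:.0f} times the human genome")
# -> Onion genome is 5 times the human genome
```

So all 20,000 protein-coding genes fit in a few dozen million bases, while billions of bases remain to be accounted for.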

The rest of the genome became a mysterious wilderness for geneticists. They would go on expeditions to map the non-coding regions and try to figure out what they were made of.

Some segments of DNA turned out to have functions, even if they didn’t encode proteins or serve as switches. For example, sometimes our cells make RNA molecules that don’t simply serve as templates for proteins. Instead, they have jobs of their own, such as sensing chemicals in the cell. So those stretches of DNA are considered genes, too–just not protein-coding genes.

With the exploration of the genome came a bloom of labels, some of which came to be used in confusing–and sometimes careless–ways. “Non-coding DNA” came to be a shorthand for DNA that didn’t encode proteins. But non-coding DNA could still have a function, such as switching off genes or producing useful RNA molecules.

Scientists also started referring to “junk DNA.” Different scientists used the term to refer to different things. The Japanese geneticist Susumu Ohno used the term when developing a theory for how DNA mutates. Ohno envisioned protein-coding genes being accidentally duplicated. Later, mutations would hit the new copies of those genes. In a few cases, the mutations would give the new gene copies a new function. In most, however, they just killed the gene. He referred to the extra useless copies of genes as junk DNA. Other people used the term to refer broadly to any piece of DNA that didn’t have a function.

And then–like crossing the streams in Ghostbusters–junk DNA and non-coding DNA got mixed up. Sometimes scientists discovered a stretch of non-coding DNA that had a function. They might clip out the segment from the DNA in an egg and find it couldn’t develop properly. BAM!–there was a press release declaring that non-coding DNA had long been dismissed as junk, but lo and behold, non-coding DNA can do something after all.

Given that regulatory elements were discovered in the 1950s (the discovery was recognized with Nobel Prizes), this is just illogical.

Nevertheless, a worthwhile question remained: how much of the genome had a function? How much was junk?

To Britten and Kohne, the idea that repeating DNA was useless was “repugnant.” Seemingly on aesthetic grounds, they preferred the idea that it had a function that hadn’t been discovered yet.

Others, however, argued that repeating DNA (and pseudogenes and so on) were just junk–vast vestiges of disabled genetic material that we carry down through the generations. If the genome was mostly functional, then it was hard to see why it takes five times more functional DNA to make an onion than a human–or to explain the huge range of genome sizes:

From Palazzo and Gregory, PLOS Genetics 2014. Size of genome is in millions of bases. The star marks humans.

In recent years, a consortium of scientists carried out a project called the Encyclopedia of DNA Elements (ENCODE for short) to classify all the parts of the genome. To see if non-coding DNA was functional, they checked for proteins attached to it–a possible sign of regulatory elements being switched on. They found a lot of them.

“These data enabled us to assign biochemical functions for 80% of the genome, in particular outside of the well-studied protein-coding regions,” they reported.

Science translated that conclusion into a headline, “ENCODE Project writes eulogy for junk DNA.”

A lot of defenders of junk have attacked this conclusion–or, to be more specific, how the research got translated into press releases and then into news articles. In their new review, Palazzo and Gregory present some of the main objections.

Just because proteins grab onto a piece of DNA, for example, doesn’t actually mean that there’s a gene nearby that is going to make something useful. It could just happen to have the right sequence to make the proteins stick to it.

And even if a segment of DNA does give rise to RNA, that RNA may not have a function. The cell may make RNA molecules accidentally, then simply chop them up.

If I had to guess why Britten and Kohne found junk DNA repugnant, it probably had to do with evolution. Darwin, after all, had shown how natural selection can transform a population, and how, over millions of years, it could produce adaptations. In the 1900s, geneticists turned his idea into a modern theory. Genes that boosted reproduction could become more common, while ones that didn’t could be eliminated from a population. You’d expect that natural selection would have left the genome mostly full of functional stuff.

Palazzo and Gregory, on the other hand, argue that evolution should produce junk. The reason has to do with the fact that natural selection can be quite weak in some situations. The smaller a population gets, the less effective natural selection is at favoring beneficial mutations. In small populations, a mutation can spread even if it’s not beneficial. And compared to bacteria, the population of humans is very small. (Technically speaking, it’s the “effective population size” that’s small–follow the link for an explanation of the difference.) When non-functional DNA builds up in our genome, it’s harder for natural selection to strip it out than if we were bacteria.

While junk is expected, a junk-free genome is not. Palazzo and Gregory based this claim on a concept with an awesome name: mutational meltdown.

Here’s how it works. A population of, say, frogs is reproducing. Every time they produce a new tadpole, that tadpole gains a certain number of mutations. A few of those mutations may be beneficial. The rest will be neutral or harmful. If harmful mutations emerge at a rate that’s too fast for natural selection to weed them out, they’ll start to pile up in the genome. Overall, the population will get sicker, producing fewer offspring. Eventually the mutations will drive the whole population to extinction.

Mutational meltdown puts an upper limit on how many genes an organism can have. If a frog has 10,000 genes, those are 10,000 potential targets for a harmful mutation. If the frog has 100,000 genes, it has ten times more targets.
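
The dynamic described in the last two paragraphs can be sketched in a toy simulation. Everything here is an illustrative assumption: “fitness” is just a count of accumulated mutations, every survivor leaves exactly two offspring, and selection keeps the least-mutated half of each generation.

```python
import random

def next_generation(pop, muts_per_birth):
    """Each survivor leaves two offspring, each gaining a random number of
    new mutations; truncation selection then keeps the least-mutated half."""
    offspring = [load + random.randint(0, 2 * muts_per_birth)
                 for load in pop for _ in range(2)]
    offspring.sort()
    return offspring[:len(pop)]

random.seed(1)
pop = [0] * 100                      # 100 individuals, zero mutations each
for _ in range(50):                  # 50 generations
    pop = next_generation(pop, muts_per_birth=3)

# Even though selection discards the worse half every generation,
# the average mutation load keeps climbing.
print(sum(pop) / len(pop))
```

In this cartoon model, selection cannot keep up with several new mutations per birth, so the load ratchets upward generation after generation; lower the per-birth rate and the climb slows dramatically. That, in miniature, is the pressure that caps how many mutational targets a genome can afford.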

Estimates of the human mutation rate suggest that somewhere between 70 and 150 new mutations strike the genome of every baby. Based on the risk of mutational meltdown, Palazzo and Gregory estimate that only ten percent of the human genome can be functional.* The other ninety percent must be junk DNA. If a mutation alters junk DNA, it doesn’t do any harm, because the junk isn’t doing us any good to begin with. If our genome were 80 percent functional–the figure batted around when the ENCODE project results first came out–then we should be extinct.
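
The arithmetic behind that estimate can be run directly. The idea that selection can purge only a handful of harmful hits per generation is the premise of the argument; the exact fractions below are just the figures mentioned in the text.

```python
# Per-child mutation counts landing in functional DNA, under different
# assumptions about how much of the genome is functional.
mutations_per_child = 100  # midpoint of the 70-150 estimate

for functional_fraction in (0.02, 0.10, 0.80):
    hits = mutations_per_child * functional_fraction
    print(f"{functional_fraction:.0%} functional -> ~{hits:.0f} new mutations "
          f"per child strike functional DNA")
```

If selection can weed out on the order of ten harmful mutations per generation but not eighty, a ten-percent-functional genome survives and an eighty-percent-functional one melts down.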

It may sound wishy-washy for me to say this, but the junk DNA debates will probably settle somewhere in between the two extremes. Is the entire genome functional? No. Is everything aside from protein-coding genes junk? No–we’ve already known that non-coding DNA can be functional for over 50 years. Even if “only” ten percent of the genome turns out to be functional, that’s a huge collection of DNA. It’s six times bigger than the DNA found in all our protein-coding genes. There could be thousands of RNA molecules scientists have yet to understand.

Even if ninety percent of the genome does prove to be junk, that doesn’t mean the junk hasn’t played a role in our evolution. As I wrote last week in the New York Times, it’s from these non-coding regions that many new protein-coding genes evolve. What’s more, much of our genome is made up of viruses, and every now and then evolution has, in effect, harnessed those viral genes to carry out a job for our own bodies. The junk is a part of us, and it, too, helps to make us what we are.

*I mean functional in terms of its sequence. The DNA might still do something important structurally–helping the molecule bend in a particular way, for example.

[Update: Fixed caption. Tweaked the last paragraph to clarify that it's not a case of teleology.]

Comments

  1. Abe
    May 9, 2014

    Nice summary!

    But I believe that your caption for the genome sizes from the Palazzo and Gregory article is incorrect. Firstly, reading the data as “Size of genome is in thousands of bases.” would make the human genome size a thousand times smaller than the 3.2 billion bases you cite. Secondly, the PLOS article references two databases for their graph, both of which use C-values (weight of DNA in a haploid cell, measured in picograms), rather than count of bases to measure the genome size (I would guess that the two measures are highly correlated). Can you look into this discrepancy?

    [CZ: Yes, that was a mistake. Thanks for the note. I've fixed it.]

  2. gina rex
    May 9, 2014

    Nature is a hoarder. It doesn’t start over, but keeps adding and replacing by small changes. Regardless of the pronouns “I” and “we” and claims of supreme evolutionary status, humans, like all species, are bags of both tried and true and repurposed “junk.”

  3. David Bump
    May 10, 2014

    Neither Darwin nor anyone else has ever shown how “over millions of years, [natural selection] could produce adaptations” other than those we’ve observed over thousands of years of hunting, fishing, farming, and breeding cattle and pets. Many of these have happened very rapidly, but we’ve never observed anything other than changes in size, proportion, color, and slight variations in integuments of a similar nature – longer, shorter, wavier, more or less.

    Please cite a study or two in which, from non-coding regions, a new protein-coding gene was observed to evolve. Was it truly new, or was it nearly identical to a gene in other organisms? Have we just happened to occasionally harness genes from viruses, or could it be that some viruses are part of a system of sharing useful genes? There could be many such wonderful discoveries if we looked outside our boxes of expectations, assumptions, and the dogmatic thinking that plagues us all, even scientists.

  4. Michael Leichter
    May 11, 2014

    @David Bump
    Ever heard of Pubmed? Play with it, find and educate yourself!
    examples for a start of reading: (took me 2 minutes to find and there is much more)

    Okamura et al Genomics 2006: “Frequent appearance of novel protein-coding sequences by frameshift translation”

    Chen et al. Science 2010: “New Genes in Drosophila Quickly Become Essential”

    Guerzoni and McLysaght PLOS Genetics 2011: “De Novo Origins of Human Genes”

    • David Bump
      May 12, 2014

      @ Michael Leichter: Excellent suggestion! I have been using that and other resources for many years. However, I find actually reading at least the abstracts of articles gives a deeper education than a few minutes of web searching. Please keep in mind my exact wording and both questions, “Please cite a study or two in which, from non-coding regions, a new protein-coding gene was observed to evolve. Was it truly new, or was it nearly identical to a gene in other organisms?” Were new genes actually observed to evolve in these studies? If they were, how sure can we be that they aren’t variations within a limited set of relatively-easily obtained modifications, perhaps found in “junk DNA” where duplication and mutations can re-create them? Let’s look at your suggestions…

      Okamura et al Genomics 2006: “Frequent appearance of novel protein-coding sequences by frameshift translation”
      http://biologiaevolutiva.org/anavarro/bioevo/wp-content/uploads/2010/11/1_2006.pdf
      “…frameshift analysis of [existing - DB] human protein-coding genes. … We employed a strategy based on BLAST analysis [11] using simulated protein sequences translated in silico …”

      So they did not observe the evolution of genes, they inferred them using a computer simulation. Next…

      http://www.sciencemag.org/content/330/6011/1682
      Science 17 December 2010:
      Vol. 330 no. 6011 pp. 1682-1685
      DOI: 10.1126/science.1196380
      REPORT
      New Genes in Drosophila Quickly Become Essential
      Sidi Chen, Yong E. Zhang, Manyuan Long*
      “…we identified and phenotyped 195 young protein-coding genes, which originated 3 to 35 million years ago…”

      Again, not observation, but inference of events that supposedly took place millions of years ago. Next…

      http://www.plosgenetics.org/article/info%3Adoi%2F10.1371%2Fjournal.pgen.1002381
      De Novo Origins of Human Genes
      Daniele Guerzoni, Aoife McLysaght mail
      Published: November 10, 2011. DOI: 10.1371/journal.pgen.1002381
      “In this issue of PLoS Genetics, Wu et al. [15] report 60 putative de novo human-specific genes. … the genes are simple, and their evolution de novo seems plausible. …Some of these cases may be human-specific extensions of pre-existing genes, rather than entirely de novo genes”

      I’m sure someone with your education knows the significance of “putative” and the uncertainty implied in “seems plausible.” Finally, let’s take a look at another case from the “much more” out there file…

      http://www.genetics.org/content/176/2/1131
      Evidence for de Novo Evolution of Testis-Expressed Genes in the Drosophila yakuba/Drosophila erecta Clade
      David J. Begun, Heather A. Lindfors, Andrew D. Kern and Corbin D. Jones
      “Here we use testis-derived expressed sequence tags (ESTs) from Drosophila yakuba to identify genes that have likely arisen either in D. yakuba or in the D. yakuba/D. erecta ancestor. … three of these genes have very short open reading frames, which suggests the possibility that a significant number of testis-biased de novo genes in the D. yakuba/D. erecta clade may be noncoding RNA genes. These data, along with previously published data from D. melanogaster, support the idea that many de novo Drosophila genes function in male reproduction and that a small region of the X chromosome in the melanogaster subgroup may be a hotspot for the evolution of novel testis-biased genes.”

      Again, we see analysis of existing genes is the basis for claims that sometime in the past “new” genes “have likely arisen…” Note, too, that the area of expression and the functioning of these new genes are clustered and they may arise from a “hotspot” on a chromosome. This suggests again the possibility that what we’re observing could be the currently active variants within a genetic toolbox that allows for long-term adaptations, storage of unused variants, and possibly borrowing of useful sequences imported from other organisms by viruses.

  5. Donald Forsdyke
    May 11, 2014

    The non-genic DNA gets preferentially transcribed into RNA when cells are “stressed,” such as when invaded by a virus. It is then beneficial to have sequences that match virus transcripts. Then double-stranded RNA can be formed and this further activates cell alarms. Non-conservation of this “junk” DNA keeps predatory viruses guessing as to what “antibody RNAs” they will encounter in their next host. For more see: http://post.queensu.ca/~forsdyke/book05.htm, and also consider alternative strategies. One way of coping with a flat tyre is to carry a puncture repair kit. Another is to carry a hundred spare tyres. Depending on the life style of the organism, one strategy may be better than the other.

  6. Marc Robinson-Rechavi
    May 13, 2014

    I wouldn’t say that settling at 10% functional, or even 20%, would be a “somewhere in between the two extremes”. It would be exactly what defenders of junk DNA have always said. I don’t know of any “extreme” position defending 98% junk in the human genome.

    See for example the accounting (pre-ENCODE) of Larry Moran at Sandwalk:
    http://sandwalk.blogspot.ch/2008/02/theme-genomes-junk-dna.html

    which is actually very consistent with the numbers given by Ewan Birney (ENCODE head) on his blog:
    http://genomeinformatician.blogspot.co.uk/2012/09/encode-my-own-thoughts.html

    [CZ: I'm not referring to figures that particular people like Moran or Birney are putting forward. The extremes I'm referring to are the conceptual ones--at one end, the only functional elements being protein-coding genes, regulatory elements, and a small number of RNA genes; at the other, a fully "alive" genome. It's certainly true that Moran has estimated 10-20 percent of the genome to be functional. It's also important for readers to appreciate that this would mean that the non-coding portion of the functional genome would be 5 to 10 times bigger than the coding portion.]

  7. Michael Leichter
    May 13, 2014

    @ David Bump
    Just to clarify:
    1st: I read the abstracts of all my suggestions and 2nd: I actually read the Guerzoni and McLysaght paper.

    From your comments I take it that you don’t accept inference/evidence from phylogenetic data and computer simulations (too bad for you).

    I suppose that you won’t accept anything of the evidence being produced so far.

    As you wrote: “… that supposedly took place millions of years ago”
    I’d like to ask you: what do you believe is the age of the earth?

    I have in addition some further problems with your interpretation of the abstracts, as you say (an example): “This suggests again the possibility that what we’re observing could be the currently active variants within a genetic toolbox that allows for long-term adaptations, storage of unused variants, and possibly borrowing of useful sequences imported from other organisms by viruses.”
    First of all I don’t understand your “This suggests again…”, I simply don’t see any connection between the abstract and your claims.
    As I understand it, you propose (based on what evidence?) that there exists some kind of “ready-made, dormant” information (toolbox?!) in the diverse genomes waiting there to be used.
    In this context I’d like to know from you who put this information there in the first place, if not the well-known natural processes messing with the genome(s)? The problem with that is (besides that you would have to explain where the “toolbox” comes from) that one principle in genetics says: use it or lose it. “Dormant” sequences acquire (neutral) mutations over time, the information is rendered useless, and your “toolbox” would be gone.

    I guess (correct me if I’m wrong) that you don’t accept evidence from the test tube either? There is an interesting report (before you ask: I read it a while ago) by Keefe and Szostak, from 2001 in Nature: “Functional proteins from a random-sequence library”, who present evidence that there is a lot of functional “potential” in random sequences given a little mutation and selection. (YES: I know it’s in vitro).

    Tell me: What evidence would you accept?!

    I guess that you expect that someone in a controlled environment (I hope that you accept controlled environments, as they tend to be not 100% natural) keeps a population of organisms, waits for some random sequences to appear in their genomes (de novo). Then you want to see that some of these sequences are expressed and code for a functional protein. (I hope you would allow for some time for drift and selection to kick in.)
    To sum up, I guess you expect the “ultra Lenski”. Or?

    Again: What evidence would you accept?!

    • David Bump
      May 14, 2014

      @ Michael Leichter: Thanks for the clarification, as I’ve had people tell me to read things they hadn’t read themselves before. Yes, I do expect something like an “ultra Lenski” study. I had something of an epiphany when I studied some key historical writings, especially several of Francis Bacon’s and Charles Darwin’s _Origin_: All the trouble in science comes from getting away from its original and proper domain — repeated observation and experimental verification of ongoing processes. All of the really solid and useful science comes through this method, all the ongoing conflict with religious beliefs, plus other problems, come from areas where philosophical assumptions play key roles, where data is extrapolated to the extreme, where inductive reasoning becomes jumping to conclusions, etc.
      Living things have been showing more complexity, specificity, and levels of interactivity than researchers suspected for as long as they’ve been guessing at it. Huxley and Haeckel thought that some goo from the ocean floor might be a sort of living fossil of chemicals on the threshold of life (Bathybius haeckelii). Pasteur had a hard time convincing some researchers that the tiniest forms of life couldn’t simply form from nutritious broth. Metabolism was at first thought to be roughly equivalent to combustion.
      The gradual nature of the discovery of the intricacies of life has had the effect illustrated by boiling a frog by gradually heating its water, or the White Queen effect of believing the impossible through practice. Go back in time and stand beside Wilberforce and tell everyone about the system of storing, retrieving, and acting on information within each cell and tiny bacteria, putting Babbage’s Difference Engine to shame (Huxley might interrupt with, “I’d like to know from you who put this information there in the first place”), tell them of the many molecules that fit together like locks and keys, try to tell them how chlorophyll acts to gather light like an antenna and convert the energy through a level of physics that hasn’t been invented yet into a useful form, at a scale below the atomic. Tell them of the marvelous rotary mechanism that provides the cell’s energy, and the leglike motors that march along rails with loads of chemicals. I wonder if even Huxley would believe that such things could form from natural selection acting on random variations, despite the philosophical necessity of naturalism.
      Be that as it may, over 100 years of science based on the doctrine that it must have happened somehow may leave you thinking it’s easy to believe, there’s plenty of evidence for it… well, fine, but if you can believe that, why not believe there’s still more to the story?
      I base my hunch that the new genes are actually part of a system for controlled, limited adaptation on the observation that they are similar to old/other genes, and that there are similar concepts in biology: references to “the Hox regulatory toolkit” (http://www.plosbiology.org/article/info:doi/10.1371/journal.pbio.1001774 SYNOPSIS: A Footnote to the Evolution of Digits, Mary Hoff, Published: January 21, 2014. DOI: 10.1371/journal.pbio.1001774)
      Molecular pathways as a “toolkit”: “‘We have found that the cell biology toolkit was pretty sophisticated before the dawn of animals,’ says Sean Carroll of the University of Wisconsin-Madison’s Howard Hughes Medical Institute.” (Ancient ancestor’s legacy of life
      By Dr David Whitehouse, 2003, http://news.bbc.co.uk/go/pr/fr/-/1/hi/sci/tech/3086681.stm)
      Genetic cassettes in lampreys: (“Another manifestation of GOD” — review article by Martin F. Flajnik (_Nature_ v. 430, 8 July 2004 p. 157))
      There’s also a “cassette” system in a virus: “A is for adaptation” by Jef D. Boeke (Nature, v. 431, 23 Sept. 04, p. 408)
      The frameshifting also reminded me of how insects use alternate splicing to obtain flexibility to respond to various threats: “Insects may have complex immunity: Thanks to alternate splicing of Dscam, they could possess up to 18,000 immune receptors,” a Daily News item by Charles Q. Choi in _The Scientist_, http://www.the-scientist.com/news/20050819/01
      Then there’s the growing exploration of epigenetics (e.g. NATURE NEWS, “Epigenetics posited as important for evolutionary success” Sujata Gupta, 09 January 2013)
      There was a study indicating that epigenetics could be responsible for recreating a gene lost in the previous generation, but I can’t locate it now.
      This report relates to your question about preservation against genetic drift: “‘Junk’ DNA reveals vital role: Inscrutable genetic sequences seem indispensable,” 7 May 2004, by Helen Pearson (http://www.nature.com/nsu/040503/040503-9.html)

      So, even as an evolutionist I would encourage researchers not to camp on the easy, most desirable answers, but look to see if there is more involved.

  8. amphiox
    May 14, 2014

    David Bump, to refuse to accept an inference of past events based on solid analysis of observation of present features, and to demand, apparently, a real-time direct observation, of events which the theory itself explicitly states are rare, random, and unpredictable, is neither a reasonable, nor fair, standard of evidence.

    This is akin to refusing to accept that a fossil represents a living organism because you did not directly observe that organism being born, or to refuse to accept that a corpse with a stab wound through the heart, a bloody knife, and fingerprints on the knife belonging to a third party is evidence for a homicide because you did not directly observe the killing blow being struck.

    And to propose that viruses are “part of a system for sharing genes” presupposes, without any evidence, that there actually is such a thing as a “system for sharing genes”. It is therefore non-parsimonious compared to the competing explanation that lateral gene transfer mediated by viral activity is a stochastic, random event.

    Even if there were equally zero direct evidence for both of these suppositions, the most parsimonious proposal is preferred until such time as direct evidence is accrued to rule it out in favor of the less parsimonious one.

  9. amphiox
    May 14, 2014

    More for David Bump

    http://www.plosgenetics.org/article/info%3Adoi%2F10.1371%2Fjournal.pgen.1002379

    The conclusions of this paper depend on the presupposition that humans and chimpanzees shared a common ancestor with a single genome. In other words this paper assumes that the vast existing evidence that points to this shared ancestry is correct. (Keep in mind that every scientific paper must accept that foundational research critical to its premises done in the past is broadly correct. One cannot recapitulate the entire history of science in the background section of every scientific paper).

    Given that single assumption, there is no reasonable explanation for the direct observation of 60 places in shared genome of the human and chimpanzee where in the human there is a protein coding open reading frame (ie a gene), and in the chimpanzee there is no protein coding open reading frame (ie non-coding DNA), but with the sequences closely conserved, except for the easily identifiable mutations that produced the open reading frame, other than the evolution of a functioning protein-coding gene from previously non-coding DNA, as a result of a mutation that creates an open reading frame where previously one did not exist.

    • David Bump
      May 14, 2014

      Re: more from amphiox: Love the bite-sized chunks. The need to accept (assume correct) the body of work preceding one’s own is both the basis for the growth of science, and the weakness by which it can go astray IF each stage is not carefully confirmed and established in the most rigorous fashion. The branch of biology concerned with evolution began with Charles Darwin decorating a bare framework of natural filtration of variations with mistaken ideas of inheritance and the major cause of variation, supported by simple observations extended by free extrapolations and speculations, followed by proposing tests and then proposing reasons why the tests might never be passed. Given that beginning, and a sad history following, I am suspicious of anything in the field that goes beyond the observation of the effects of natural conditions on variations.

      To claim something like, “there is no reasonable explanation… other than…” is not the work of science (though a key tool in Darwin’s approach), but the sort of philosophical argument that science was intended to avoid. There may indeed have been a new gene formed in humans, but how significant does it really have to be, and exactly what is its significance? Was it from a gene from an ancestor shared by chimpanzees? Have we found out if Neanderthals had it? Did humans always have it? Did the human population at one time include both variants? Could it become non-coding again, and then later become coding again? Has it gone back and forth in the past?

  10. David Bump
    May 14, 2014

    @ amphiox: I have no problem with the inferences that are based on what we can observe and demonstrate today, but I believe science is best done in a realm where there is no question of a “fair standard of evidence.” It is certainly reasonable to ask that something considered to be a “fact” or an established part of “science” (or “knowledge”) should be something that is actually established to be true, and known, in a way that everyone can rely on.
    Experiments have been done replicating the processes of decay and preservation, people have observed murders, the making of fingerprints, etc. When we have observed exactly what goes on in the formation of new genes, then we will know what is possible.

    Proposing something doesn’t presuppose anything except a possibility; otherwise every scientific investigation would start as a matter of begging the question. As I pointed out in my reply to Mr. Leichter, life has been turning out to be “less parsimonious” than thought since the beginning, and I do not believe we have reached the end of its surprises yet. At any rate, it’s hardly in the spirit of scientific investigation to sit on the easiest answer as soon as it comes along and declare the search is over.

  11. Michael Leichter
    May 14, 2014

    @ David Bump:
    You misunderstand the meaning of parsimonious. Science is not looking for the easiest answers; it looks for the most parsimonious answers that are in agreement with measurable facts. You wrote:
    “At any rate, it’s hardly in the spirit of scientific investigation to sit on the easiest answer as soon as it comes along and declare the search is over.”

    Further, you seem not to understand that scientific knowledge is always provisional; I am willing to abandon evolutionary theory if a better model turns up. But keep in mind that at present modern evolutionary theory gives by far the best answers: it leads to the best predictions, and it is the most parsimonious of all existing models (none of which can explain anything).

    By the way, I am still interested in hearing:
    What evidence would you accept?! (For answering the question you initially posed on “new” protein-coding genes.)
    A comment on the Szostak study would be nice as well (see my previous post, which refers to my last-but-one post).

    • David Bump
      May 14, 2014

      @ Michael Leichter (or by now should it be, “To Mike”?)
      I understand what “parsimonious” means and how it is supposed to be used, but human nature being what it is, I think Occam’s razor is often applied subjectively. “Easiest” is shorthand for a number of possibilities: “currently favored,” “philosophically necessary,” etc.

      Likewise, “provisional” seems to be applied to varying degrees, sometimes for good reason, such as the differences between hypotheses and laws, and sometimes it seems to be thrown in when it is pointed out that quite uncertain things are being talked about as if they were facts. “Oh, well, of course everything in science is provisional…” I’ve seen things, from the age of the universe to the significance of newly discovered fossils to the latest research on stem cells or nutrients, presented to the public as if it were information they could take to the bank; and later it turns out surprisingly (not to say embarrassingly) different. Sure, the scientists didn’t know all the facts at the time, and in the official papers in the journals they used some (or a lot of) qualifying phrases, but the headlines (and the soundbites, or even popular-level articles by the scientists) didn’t convey that provisional nature, and in some cases, it seems, the scientists were more wrong than they really expected they could be.

      Now pretty much all creationists believe in evolution, according to several popular definitions. From what I’ve seen, several of these cases of new genes might be true without indicating that they could produce all living things from protozoa over any given amount of time. However, as long as it is one of the (generally unwritten) laws of science that everything can and must be explained by natural forces alone, then some form of gradual evolution is the only plausible, parsimonious scientific explanation for the existence of all living things. So I have my doubts about how provisional belief in evolution can be. In cosmology it is now recognized that nature as we know it did not always exist, and in astronomy/astrophysics there are black holes which go off the charts at their event horizons. However, both of those are reached by simple extrapolations, and they either (for the Big Bang) put the unknown as far back as possible or (with black holes) treat it as exceptional anomalies that don’t cause any problems for the belief that everything on this side has always been perfectly natural. With evolution, it’s been believed from the start without anybody actually demonstrating a mechanism powerful enough to account for the ultimate effect it supposedly explains, so all the evidence against it, or disproving it, is simply consigned to the pile of things we’re supposedly going to figure out one day.
      We “have” to, because (as the general reaction to Intelligent Design shows) if evolution isn’t the perfectly natural explanation for all of life, then the Unknown/supernatural operated on this side of the time/space continuum. While some people seem happy to think that science makes the god hypothesis a useless vestige of the past, a lot of VIPs get very upset at the idea that science might run into a wall like this, one suggesting that a highly intelligent agent had to intervene from outside the natural limitations of this world to produce the full panoply of organisms, and that their variability was due to (and limited to) designed adaptability and degeneration.

      I did say that I think we need to keep investigating along the lines of Lenski’s approach. However, the study by Cai et al. does seem to make a strong case, as far as it goes. I admit I’m getting out of my depth in knowledge and time I can devote to all this as an amateur. Of course, there is the provisional nature of science, and as always in a good report, they note that further study is needed. I’m particularly interested in the possibility that the “new” gene is actually a retained gene that was always in that species of yeast, but downgraded, lost or never existed in the organisms farther down the tree. I don’t know how to judge or balance the observations such as “… overall identity across those four Saccharomyces species is 35.71%…” “…the length of this intergenic region does not change much across all those species …” with the statement that “… we can make an estimate that the origin of the BSC4 ancestral sequence can be dated back at least to the last common ancestor of A. gossypii and S. cerevisiae, i.e., >100 million years ago..” especially when all this depends to some extent on the assumption that “…an inverted gene duplication event formed the syntenic orthologs flanking the ancestor of BSC4.”

      You are right that I have my doubts about the in vitro approach used by Anthony D. Keefe & Jack W. Szostak. Also, while they used a large library of proteins, the total possibility space for proteins is vastly larger; they seem to have stuck to the shallow end of the pool (80 amino acids), and it seems to me that judging functionality by the ability to bind to ATP is setting the bar low.

      I didn’t mean to be cherry picking about phylogenetic data, rather I know I’m only throwing out an amateur hypothesis and saying we don’t really know if something similar might turn out to be something we have to consider. It goes back to that provisional nature of scientific knowledge, and why I think science needs to stick close to actually observing things repeatedly, and checking for other possibilities even when one has proven possible. Catching the viruses in the act. There may be ways to make such observations practical.

      Remember, the Tree of Life was considered to be a simple, branching pattern for some 100 years or more, and then HGT got thrown in, and then fusion and the endosymbiotic incorporation of whole organisms. The degree of hybridization that seems to have occurred is another surprise (Why Darwin was wrong about the tree of life, 21 January 2009 by Graham Lawton, New Scientist 2692, http://www.newscientist.com/article/mg20126921.600-why-darwin-was-wrong-about-the-tree-of-life.html?full=true ), and there are surprising cases of “parallel” and “convergent evolution.”

  12. Michael Leichter
    May 14, 2014

    @David Bump
    another reference that would fit your question: you just would have to accept phylogenetic data!
    Cai et al., Genetics 2008: “De Novo Origination of a New Protein-Coding Gene in Saccharomyces cerevisiae”
    (admitted: I just skimmed it, but I put it on my reading list)

    By the way, you seem (if it suits your case) to selectively accept phylogenetic data, as for example you take for granted that “possibly borrowing of useful sequences imported from other organisms by viruses.” makes sense. How could you ever know that sequences looking like they “come” from other organisms indeed come from other organisms, if you didn’t catch the virus in question “in flagrante” in the process of transferring the genetic material?

  13. SocraticGadfly
    May 16, 2014

    Michael Leichter is politely trying to say that it seems that David Bump is not looking for real evolutionary evidence, but rather for which talking points most need to be shot down.

    That said, David Bump appears to be this person. I won’t post the second link, but a person living in the same city in Michigan has a Facebook page saying he’s graduated from Bob Jones University.

    So, politely, Carl, I’m going to suggest David is some sort of troll. http://www.creationconversations.com/profile/DavidBump

    • David Bump
      May 16, 2014

      http://en.wikipedia.org/wiki/Troll_(Internet)
      In Internet slang, a troll (/ˈtroʊl/, /ˈtrɒl/) is a person who sows discord on the Internet by starting arguments or upsetting people,[1] by posting inflammatory,[2] extraneous, or off-topic messages in an online community (such as a forum, chat room, or blog) with the deliberate intent of provoking readers into an emotional response[3] or of otherwise disrupting normal on-topic discussion.

      We’ve been having a friendly, intellectually stimulating exchange about the topic of the column. It includes my acceptance that certain evidence for speciation (and therefore evolution) is valid.

      By looking up and posting personal information (based on my using my actual and rare name, unlike most trolls), with the sole intent of turning people against me on extraneous grounds (not a polite thing to do at all), it would seem that YOU, “SocraticGadfly,” are being the most troll-like.

  14. amphiox
    May 17, 2014

    David Bump, please provide a link to observable, empirical, and testable evidence to support your assertion that Occam’s Razor is applied subjectively in the actual practice of science, and, in particular, the development of scientific consensus.

    If you do not have such evidence, please withdraw that frankly slanderous assertion, which essentially accuses hard working scientists of being unscientific.

    • David Bump
      May 17, 2014

      @ amphiox My observation about one difference between the ideal of science and the actual practice isn’t a scientific one, and if accusing scientists of sometimes showing that they are still human and not some exalted beings who can always be perfectly objective is “slander,” so be it. It’s true that I don’t know of any studies off-hand examining the practice of applying Occam’s razor, but I’m sure you’re aware of reports on the failings of the peer-review system, incidents of fraud, plagiarism, etc. So why should you be surprised or shocked if I claim there’s a more subtle error that occurs from time to time?

      @Michael Leichter Thank you. I assure you, however, I’m not at all on the fence. It is simply that creationists (many of us, anyway) do accept evolution according to several definitions. When it comes to explaining how all of life got here, however, we find it falls short. In the first place, no natural processes have ever been shown to be capable of producing that result. In the second place, the fossil record does not show it happening. There are different organisms appearing first in different layers, but the biggest differences show up suddenly and often continue with only minor variations through many layers, and there are living creatures almost identical to fossils in layers supposedly tens or even hundreds of millions of years old. What else is there to make me believe this story? I have read books by Sagan, Dawkins, and Dennett, not to mention Darwin’s _Origin_, and I had a special subscription to Nature for a few years and studied the original reports there relating to evolution. I don’t believe it’s a matter of convincing evidence, but of having nothing else to believe in if one assumes that everything has always been purely natural. I believe this is a remnant of thinking that arose in the 17th century and peaked in the Victorian era or perhaps the early 20th century. It’s now generally believed that nature as we know it has not always existed, and there are (black) holes in it. We also know that nature does not behave at the very smallest scales like billiard balls or anything else on the scale of our normal experiences. We accept that life cannot arise from combinations of nutrients and heat under normal conditions, or any of many other conditions. It is currently reckoned that most of nature is made out of forms of energy and matter we know very little about. Given all this and more, I find it archaic to expect that we can reconstruct the past for billions of years with anything like scientific confidence.

      @SocraticGadfly, I don’t mind your looking me up; what I object to is your attempt to wave the information around and thereby “poison the well.” I suppose if your requirements for having a civil or useful conversation include the likelihood that someone will be convinced by your arguments to change his mind, you might have a point. On the other hand, if I were to take that as a requirement — well, I hardly think amphiox or Michael Leichter are sitting on the fence in their thinking. I do know of people who were strong evolutionists who became creationists. You can Google the book _Persuaded by the Evidence_, which contains several such stories.

  15. Michael Leichter
    May 17, 2014

    @SocraticGadfly

    I think you are wrong. I don’t think of David Bump as a troll, and I think you should apologize to him.

    I believe that David is not immune to evidence. Have you ever heard a creationist (tentatively) concede a point? He is even aware of his limitations as an amateur (I’m sort of an amateur in the field of evolutionary biology too, though I’m a trained biologist/biochemist).
    He states: “However, the study by Cai et al. does seem to make a strong case, as far as it goes. I admit I’m getting out of my depth in knowledge and time I can devote to all this as an amateur.”

    I think that if David really tried to dive into a book like the “Futuyma” and work his way through it, he would end up an evolutionist (though a theistic one). I think he simply lacks an understanding of how massive the independent lines of evidence for evolution actually are, an understanding that is difficult to acquire by reading blog posts alone… If that understanding were there, he would surely accept individual pieces of evidence more easily, even ones that by themselves may not seem that impressive.

    You also have to understand where he most likely comes from: an environment where questioning evolutionary biology is the norm (in principle this is a good thing; you just have to be willing to educate yourself).
    I myself started getting acquainted with evolutionary biology when, by accident (and now don’t laugh, this happened when I was already a “fully trained” biologist), I watched a debate by (again, don’t laugh) Kent Hovind. Everything Hovind stated was bullshit, but I realized that at that moment, in a debate setting, I would not know how to respond. I would have gone “down”: the Gish gallop is a stupid but powerful weapon in this setting.

    So please take David seriously; I think he is sitting on the fence.
    A little push and he is on our side.

  16. SocraticGadfly
    May 17, 2014

    I’ll bow out of the conversation. And, even if “troll” is too harsh, I’ll stand by the rest of my observation about David. Michael, I don’t know if you’re just being too charitable or actually being naive.

    David, it’s called teh Google. If you don’t like that I was able to find that out about you, that’s your problem, not mine. The fact that you graduated from Bob Jones isn’t extraneous to this conversation at all, either. Not in my opinion.

  17. Barbara
    May 18, 2014

    David Bump, it is not a law of science (written or unwritten) that only natural causes can be studied. True, scientists today avoid supernatural explanations, but that’s because we’ve been there, tried that, and it didn’t work. The early geologists and biologists of the West were working within a Christian framework. That theological framework didn’t help, though. It failed in two ways. First, observations from geology, biology, and astronomy didn’t fit with the Biblical stories that had seemed to be history. More importantly, the idea of God didn’t result in useful, accurate predictions about the nature of the earth, the universe, or living beings.

    I mean, maybe God created the earth (at its most basic, that’s a hypothesis we can’t provide evidence for or against) but if so, what should we expect? What predictions can we make? Would we predict the earth is 6000 years old or 100,000 years old or 4.5 billion years old? How old should the universe be? What processes would we predict would be involved? What observations should we be able to make as a result of God’s involvement? People spent a lot of time and effort trying to fit theology and natural history together, and it didn’t really work. For that reason (and not because of pre-existing laws of science), scientists stop using God as an explanation.

    The other side of that, of course, is that natural explanations do give us predictions we can test. Often they are wrong, but we keep finding explanations that do work. If we can test supernatural explanations in a consistent way, form predictions that we can test, and that work better than natural ones, we’ll go with them. Hasn’t happened yet.

    Note that the fact that God isn’t a useful idea in science doesn’t necessarily mean that he doesn’t exist (though it may), but does place boundaries on what he/she/it can be like.

    • David Bump
      May 19, 2014

      Barbara, thanks for jumping in with the history lesson and philosophical musings. However, you seem to have (and perhaps assume that I do, too) a mindset in which a theological framework for science means trying to study the supernatural scientifically (an oxymoronic and quixotic endeavor), or that theology (particularly the Bible) has things to say about geology, biology, and astronomy which don’t fit. Well, the latter is true, but only IF one assumes that our efforts to understand geology, biology, and astronomy via extrapolation of natural forces and processes we observe are more trustworthy than the Biblical framework — but that would actually be using the framework of Naturalism instead. Which is exactly what happened — in certain areas of those fields — after the theological framework had successfully inspired people who believed that a systematic study of nature would be fruitful because it was created by and governed by an intelligent, rational Being. Kepler saw it as “thinking God’s thoughts after Him,” and Newton saw the natural workings of gravity and other forces as a system that must have had Someone to ordain its values and patterns. You will find that those who claimed Biblical things “didn’t fit” were matching up ideas of how nature must work, and must have worked without supernatural interruption, intervention, or origination (or set up straw man versions of them to knock down).

      The thing is, many of the natural explanations have never been shown (and often cannot be shown) to work, or to have worked. The more we learn about life, the more it looks like something designed with intelligence far beyond our own, and our attempts to find some way to overcome the universal tendency for complex systems to break down, and to get something with dynamic, organized complexity to come into existence, continue to look like attempts to build perpetual motion machines. However, suggest that life arose by Intelligent Design, and see how many people “go with” that — and how many try to bite your head off (metaphorically speaking, of course).

      What people can’t accept (even many Young Earth Creationists) is a true Biblical framework, which, although it wouldn’t impact any practical or useful area of science (except to encourage it), it would “place boundaries on” what science can tell us. The age of the Earth? I don’t know. The Biblical framework as I understand it does strongly suggest a few thousand years, but it doesn’t specify and there might be something we’re missing. I think the human fossil and archaeological record is inconsistent with the story “science” tells us now. As Einstein and others have pointed out, too, time is relative, so while God may have created everything a few thousand years ago, the farther reaches of space may have undergone a relativistic event which smoothly increased its age (as viewed from Earth) to billions of years. Personally, I think when the Bible says “God created light” first, it means (possibly) He created the light of a universe as if it had been there for as long as necessary. Including new ones coming from suns that didn’t exist yet, such as our own. Why should God be matter-centric? Bugs some of my fellow creationists who think God would never “lie” like that, what with a virtual history of things that weren’t there and all, but I say it’s not a lie, since He told us about the special event.

      You see, IF we truly use a Biblical framework for science, there’s no question of things not fitting, just as in the naturalistic/evolutionary framework there’s no question of things not fitting — the fossil record doesn’t look like a gradually spreading tree? Just draw in dotted lines and imagine the dots were connected — they HAD to be, somehow. Mutations don’t show any variations that could lead to increased organized dynamic complexity? Hey, change is change, right? The required mutations MUST have come along at just the right time and joined together to provide increased fitness when the opportunity was ripe. How else could it have happened? And look at all those organisms with this or that or a whole bunch of similarities that turn out not to be due to common ancestry — wow, MUST have evolved in parallel, or even converged. But of course all the OTHER similarities must be due to common ancestry…

      So with a Biblical framework, all the normal, practical work of science is the same, but there’s a different set of head-scratchers and a different set of “must have happened… somehow… maybe this way”s, off in the world of the past and things that happened once upon a time.

      The real difference is, as you say, if you want to choose to limit the extent and inviolability of nature (and thus the power of science to tell us about the past), or to limit God. Other than that, everything in science is the same.

  18. R.A. Chisholm-Davin
    May 30, 2014

    Michael Leichter and SocraticGadfly seemed surprised that Mr. Bump would expect repeated observational and experimental verification of the theory of evolution. Experiment is the gold standard of science. Certainly theory and speculation come first, but if that is all one has, it is merely metaphysics, not science. Great scientists in the past tried to spur their fellow evolutionary colleagues to verify the theory through experimentation.

    The Nobel Prize-winning, brilliant evolutionary biologist and geneticist Thomas Hunt Morgan (1866–1945) is famous for his early experiments with Drosophila, the fruit fly, and for establishing the chromosome theory of inheritance. He desired to demonstrate, and even hoped to control, the process of evolution through experimental science. Morgan said that “It is the prerogative of science, in contrast with the speculative procedures of philosophy and metaphysics, to cherish those [hypotheses] that can be given an experimental verification and to disregard the rest, not because they are wrong, but because they are useless” (cited in Moore, 1984, p. 471).

    In the Presidential address at the dinner of the American Society of Naturalists in Boston in 1909, Morgan discussed at length the various theoretical models of a possible evolutionary mechanism which were being considered in his day. He stated that he was “convinced that evolution has taken place” primarily because of the paleontological repository of fossils. However, Morgan admitted that historical evidence (i.e. fossils) is an interpretive endeavor – paleontologists have only a speculative connection between their various creations of fossil phylogenies, not hard evidence. And although Morgan believed evolution happens, he acknowledged that, in his day, no one had seen it happen and no one knew the cause, or “process”. He concluded by saying that evolutionists must, and will, stop speculating and start experimenting:

    “But I frankly confess that I feel, as no doubt every one does who tries to keep in touch with modern work, that the time is past when it will be any longer possible to speculate light-heartedly about the possibilities of evolution, for an army of able and acute investigators is carefully weighing by experimental tests the evidence on which all theories of evolution and adaptation must rest.”

    The outspoken Mendelian popularizer and influential experimental English evolutionary scientist William Bateson (1861–1926) was the first to suggest using the term “genetics” to describe the scientific study of variation and inheritance. In 1921, he delivered an address before the American Association for the Advancement of Science at Convocation Hall at the University of Toronto, entitled Evolutionary Faith and Modern Doubts, the transcript of which was published in the esteemed journal Science. He used the angiosperms (flowering plants) as an example to show that there are no intermediates and that Darwin’s idea of gradual transformations was therefore doubtful. His purpose in the address was to spur scientists to stop using speculation and conjecture and to conduct experimental research to discover how evolution actually works, since there is neither material evidence for their phylogenies nor for a mechanism. He upbraided evolutionary scientists of the late 1800s and early 1900s, saying, “What glorious assumptions went without rebuke. Regardless of the obvious consideration that ‘modification by descent’ must be a chemical process, and that of the principles governing that chemistry, science had neither hint, nor surmise, nor even an empirical observation of its working, professed men of science offered very confidently positive opinions on these nebulous topics which would now scarcely pass muster in a newspaper or sermon.” (Bateson, 1922, p. 57) These scientists spoke and wrote to spur fellow believers in their respective fields of specialization to action – to undertake scientific experimentation to find evidence for a plausible cause for evolution.

    If the theory of evolution cannot be verified by experiment or observation, then it is, as Morgan put it, “useless”, and outside of true science.

    -Bateson, William. (1922). Evolutionary Faith and Modern Doubts. Science, New Series, 55(1412), 55-61.
    -Moore, John. (1984). Science as a Way of Knowing: Evolutionary Biology. American Zoologist, 24(2), 467-534.
    -Morgan, T.H. (1910). Chance or Purpose in the Origin and Evolution of Adaptations. Science, New Series, 31(789), 201-210.

  20. Clive Delmonte
    July 4, 2014

    There is one aspect of the “junk” DNA discussion I seem to have missed. If we ask ourselves how certain events take place, e.g., the pairing of chromosomes prior to cell division, we arrive at situations yet to be explained which seem to have an inheritable basis. How can we know whether any part of “junk” DNA does or does not contribute to solving such “problems”?
