The first to present his case seems right until another comes forward and questions him. (Proverbs 18:17)

In the debate between fiat creation and evolution, there are several positions people can take, and have taken. But we have to be aware that there are two ideas in the debate that we should distinguish: the age of the earth/universe and the development of life.

The first idea is the age of the earth/universe, which is either young – for the purpose of this article, young-Earth proponents generally assert about 10,000 years – or old – about 13.8 billion years for the universe, with the Earth being about 4.5 billion.

The second idea is that life could have been created by fiat or have developed over time. If you take fiat creation, the universe could be either young or old. If you take the latter, an old universe is required, even if you take the Goldschmidt-Gould-Eldredge “punctuated equilibrium” route rather than the standard neo-Darwinian “uniformitarian” road [1].

So it makes sense to conclude that if we determine the age of the universe, we could come closer to finding the answer about creation versus evolution. However, even if we determine a young age for the Earth, there is the possibility of panspermia – the seeding of life on Earth from space. It would have to be fairly advanced life unless, again, punctuated equilibrium is the method of evolution. From what I’ve read, punctuated equilibrium only got a hearing because of the lack of transitional forms, and because of the apparent failure of neo-Darwinism to describe how life as we know it, in all its diversity and complexity, could have arisen, given how unlikely it is that genetic changes produce beneficial mutations – even granting the 4.5 billion years of the Earth’s estimated age [2]. So we’d have to determine a young age for the Universe – unless we’re only a bubble of a greater universe, from which life could have come. Oh, so many possibilities.

My training is in the social sciences, not the natural ones, although I’m fully aware of the scientific philosophy and method, and I’m not knowledgeable enough in any of those subjects to even call myself a dilettante. I can spot logical errors but I usually don’t have enough general knowledge of the field I’m reading to be able to validate what’s being said. I couldn’t tell you about the gradual or the catastrophic effect of water on the development of the geologic column. Don’t know about the nature of the earth’s magnetic field or the probability of the alpha and beta chains of the haemoglobin protein forming by chance. Then again, I don’t know why I prefer brunettes to blondes.

So I’ll read a book or article that asserts one position about a fact in one field, and then another book or article contradicts it, and then there’s a rebuttal to the contradiction, and so it goes. How much information is enough before you can say, “I know enough to be convinced that this worldview is true”? Just one fact more, I suppose. But what kind of fact? It has to be one that reduces the probability of any other explanation to zero.

But this sort of fact is comparatively rare, and we have to be careful that what we take to be facts are not, in fact, interpretations of facts. Moreover, most facts depend on assumptions. Some assumptions are reasonable, but some depend on our worldview; and both might be wrong. But if you don’t know much about a field, how can you tell?

The only thing we can do is to learn as much as we can in a field, and come up with a hypothesis that best deals with the facts. But when we learn new facts, we have to review our theory in the light of those facts. This means that as long as there’s still more to learn about any given subject, our theory is only true to the best of our current knowledge.

One problem is that if we don’t want to believe where the facts lead, we’ll be willing to believe theories that are, on the face of it, ridiculous but are still not logically impossible. Such theories can’t be tested and so can’t be disproved. That doesn’t mean the theories are necessarily wrong, just that they can’t be tested by the scientific method.

The scientific method isn’t the only way to find truth but it is the most reliable way, as long as it is correctly applied. There are facts that science can prove directly, or at least provide powerful indirect evidence for. A hypothesis can be proved by direct scientific proof if it’s chemical or to do with physics [3]. Mathematics is the epitome of logical fields; however, as far as science is concerned, it must be applicable to the real world. Mathematical evidence might point to answers about the universe, but unless there is empirical data, it can’t be considered scientific proof.

According to The Skeptic’s Dictionary, the James Randi Educational Foundation (JREF) is offering one million dollars for empirical proof of supernatural abilities [4]. There are several other organisations similarly offering rewards for such proof. As far as I know no one has won the challenge.

The terms of the JREF challenge are eminently reasonable: they stipulate “mutually agreed protocols” (2.1) and allow for an ability that, even if it appears supernatural now, may turn out later to be explicable by natural processes (2.2) – for example, an apparently impossible feat that is later found to have a natural mechanism. The JREF challenge also provides guidelines for the kinds of claims that are empirically untestable (2.3, 2.5) [5].

Being scrupulously fair, the JREF challenge allows for inference to the most probable cause (2.3; the phrase isn’t used but the example illustrates it), which means that if we can’t eliminate all possible causes, we are justified in believing that the most probable cause is the correct one. This is the kind of proof that most of the sciences rely on: those that study events that happened in the past, or whose cause cannot be directly perceived. If it were not so, scientific proof, in the strictest sense, would be next to impossible to obtain. This is because the most irrefutable scientific proof requires us to observe the event we are testing. (This criterion of observation allows for the use of equipment, such as telescopes and sonar, that aids our senses.)

By definition, an extra-sensory cause cannot be perceived by our senses, so it is impossible to get scientific proof, in the strictest sense, of ESP. It’s like a blind person saying, “If I can’t hear, smell, taste or touch colour, I won’t believe that it exists.” By nature, colour can only be sensed by sight. Extrasensory perception might nonetheless be part of a natural universe. However, if the most probable cause is a supernatural one, we can say the challenge has been won – unless we have an overriding commitment to the belief that there is no existence beyond the material universe. Unless we’re willing to change this assumption, we’ll have to find a natural reason for supernatural events. We’ll believe there’s a trick or technique the person uses that we didn’t catch onto, or a natural explanation that we don’t yet know.

It’s similar to the theory of dark matter: the physics requires about five times more matter in the universe than we can observe, but it cannot be found. In order to resolve the conundrum, astrophysicists hypothesise the existence of dark matter, which we cannot perceive but which interacts gravitationally with the matter that we can perceive. Dark matter hasn’t been scientifically proven to exist (though as it interacts with matter, we should eventually be able to detect it); all we see is that the observed motion doesn’t conform to the mathematics, so the hypothesis of dark matter was formed. If all other hypotheses to explain the physics fail, dark matter can be said to be a working scientific theory, even though it can’t be perceived by our senses.

Science seeks to understand how life works by seeking a perceptible cause-and-effect relationship. Although neither dark matter nor a supernatural realm can be empirically proven, the results of empirical tests enable us to develop hypotheses about them. Why should we presume the non-existence of a realm, entity or force beyond our senses? Surely, if that’s where the evidence leads us, that’s where we should go. Eddie would.

What was I talking about? Going back to the start…. Oh yes, the plenitude of labyrinthine evidence, some of which points one way and some the other, and what facts could point us irrefutably to the truth.

From what I’ve seen, there are several arguments that discount our existence arising by chance. Not the existence of just anything, but our existence, with all its complexity and diversity. Those arguments concern: biological information, irreducible complexity and minimal function.

Biological information refers to how DNA, RNA and protein use a code to reproduce. This code is based on the 64 possible combinations of the four bases (adenine, cytosine, guanine and thymine) that form the rungs of the DNA ladder. A combination of any three of these bases, each with its accompanying sugar and phosphate, is a code that refers to a particular amino acid. The combinations have no inherent connection to the amino acid: the code is arbitrary.
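The arithmetic of the codon scheme is easy to check for yourself. This toy Python sketch simply enumerates the 4³ = 64 possible three-base combinations and lists a few entries of the standard codon table; the point is that nothing in the chemistry of the letters forces a given triplet to stand for a given amino acid – the pairing is a convention, like a dictionary entry.

```python
from itertools import product

bases = "ACGT"  # adenine, cytosine, guanine, thymine

# Every ordered triplet of the four bases: 4 ** 3 = 64 codons
codons = ["".join(triplet) for triplet in product(bases, repeat=3)]
print(len(codons))  # 64

# A few entries of the standard genetic code. The triplet-to-amino-acid
# pairing is conventional, not chemically necessary.
codon_table = {
    "ATG": "Met (start)",
    "TTT": "Phe",
    "GGC": "Gly",
    "TAA": "stop",
}
for codon, amino_acid in codon_table.items():
    print(codon, "->", amino_acid)
```

The table shown is just a four-entry sample of the 64-row standard code, enough to make the counting concrete.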

It’s the same with any language. For instance, the combination of the three letters D, O, and G doesn’t necessarily refer to a canine mammal. The 26 written letters of the English alphabet are arbitrary symbols that refer to various sounds. We combine these letters, verbally or in writing, to form words. We use these words to refer to various objects, not because the object and the word inherently belong together and cannot exist without one another, but because it is agreed that this word will refer to that object.

So we have the social convention that the combination of the letters D, O and G, in that order, refers to a canine mammal. That a word doesn’t have a necessary meaning can be seen in two ways: (1) one word can have several meanings, depending on context; and (2) different languages might have the same word, which has different referents.

(1) Look up the many meanings of the word “cat”. My dictionary gives about eleven.

(2) Take the words “rein” and “mist”. To an English speaker the first means a strip of material that controls a horse, and the second, condensed water vapour. To a German speaker, the first means “pure”, the second means “manure”.

So genetic coding is arbitrary. How is that important? It is important precisely because it is arbitrary. To convey a message, we need several things: a sender; a recipient; the knowledge that is conveyed [6]; the means through which the knowledge is conveyed (speech; writing, hard or soft copy; smoke; light; electric pulses); and a mutually recognised code by which the knowledge is conveyed (the language: English, Farsi, Morse, C++).

This mutually recognised code is arbitrary, and both the sender and receiver must know what the parts of the code refer to. This is the whole purpose of cryptography: creating messages that only people in the know can understand. The point is: the development of a code requires intelligence. As far as we know, codes don’t arise through chance; however, this can’t be proved directly. Nevertheless, using the axiom of inference to the most probable cause, we can say that information points to intelligence behind the existence of biological life.
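The sender-and-receiver point can be made concrete with the simplest kind of cryptography, a substitution cipher. In this Python sketch the key (a shuffled alphabet, chosen arbitrarily for the example) is the “mutually recognised code”: anyone holding it can recover the message, and anyone without it sees only noise.

```python
import string

plain = string.ascii_lowercase
key   = "qwertyuiopasdfghjklzxcvbnm"  # arbitrary agreed mapping, shared by both parties

encode = str.maketrans(plain, key)  # sender's view of the convention
decode = str.maketrans(key, plain)  # receiver's view of the same convention

message = "dog"
ciphertext = message.translate(encode)
print(ciphertext)                       # "rgu" - meaningless without the key
print(ciphertext.translate(decode))     # "dog" - recovered by the key-holder
```

There is nothing about the letters r, g, u that points to a canine mammal; the connection exists only in the shared key, which is exactly the sense in which the mapping is arbitrary.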

Irreducible complexity and minimal function became known through the work of Michael Behe, a biochemistry professor at Lehigh University in the USA. His book Darwin’s Black Box looked at the complexity of life. The concept of a black box is well known in psychology, referring to an impenetrable process. In Darwin’s day, the cell was not known anywhere near as well as it is currently; it was thought that the cell, the basis of life, would be a simple, primordial jelly. The more science discovered about how immensely complex the fundamental building block of life is, the less probable it seemed that it could have happened by accident.

Irreducible complexity is the concept that describes how certain functions of an object cannot happen unless all its parts are in place. The common example given is that of a mousetrap. Without all the parts – base, spring, hammer, catch and trip – a mousetrap cannot work. Professor Behe shows how the same concept applies to many parts of biological life, such as the flagellum of E. coli (which for all practical purposes functions like a boat’s motor) and the process of creating a cellular garbage disposal. If one part doesn’t work, the entire process fails; and Prof. Behe uses this, via inference to the most probable cause, as the basis of an argument supporting intelligent design of at least some of life, because it could not have happened by small, slow changes over time. (This is part of the reason why some professors have supported the hypothesis of punctuated equilibrium; not because there is evidence for it.)

Connecting back to the idea of information, many writers have noted that even small changes in the molecular information in the genome will result in nonsense messages, which in turn lead to deformities, illnesses, even death. Barney Maddox, who worked on the human genome project, calculated that even three changes in the 3.3 billion nucleotides of the human genome would lead to death. (This 3.3 billion counts nucleotide pairs – full rungs on the DNA ladder. Counting single nucleotides, it would be 6.6 billion [7].)
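The fragility of a message under random changes can be felt with a toy analogy – this is an illustration of the language comparison above, not biology. The Python sketch below makes a handful of random single-character substitutions in an English sentence, the textual analogue of point mutations; the sentence and the number of changes are arbitrary choices for the demonstration.

```python
import random

random.seed(1)  # fixed seed so the demonstration is repeatable

message = "THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG"
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(text, n_changes):
    """Substitute n_changes characters at random positions,
    the textual analogue of random point mutations."""
    chars = list(text)
    for pos in random.sample(range(len(chars)), n_changes):
        chars[pos] = random.choice(alphabet)
    return "".join(chars)

print(mutate(message, 3))
```

Even a few substitutions usually produce non-words; whether a random change ever improves a message, and how often, is of course the very point under dispute – the sketch only shows what a random substitution looks like.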

Along with irreducible complexity, Michael Behe, who has since written another book titled The Edge of Evolution, considers the idea of minimal function. The example he gives is an outboard motor that works but only turns at one revolution an hour. While all the pieces are in place and working, the motor isn’t useful. Likewise, a mousetrap whose spring comes from a truck, or whose catch is as flimsy as tissue paper, will not work.

There are other arguments from other fields (history, logic and philosophy particularly) that I might add, but this article is already rather long and I’m getting bored with it. (With the article, not the topic.)


[1] I’m speaking of “uniformitarian” in the biological sense rather than the geological one. I believe that in the geological sense, the corresponding theory is now termed actualism, which indicates the uniformity of cause but rejects uniformity of intensity: this distinction allows for the influence of catastrophism.

[2] According to Barney Maddox, who worked on the Human Genome Project, a change in even three nucleotides in the 3.3 billion that make up the human genome is almost always fatal. (I think this might be a nucleotide pair, rather than a single nucleotide, as if we are talking single nucleotides, the human genome has 6.6 billion.)

[3] I chose this inelegant wording rather than the simpler “physical”.

[4] The relevant webpage, titled “Randi $1,000,000 paranormal challenge”, was updated on 13 May 2012.

[5] Although in 2007 the JREF began to require two preliminary qualifications: a media profile, however small, and testimony from an academic in a relevant field. Presumably this is the JREF’s way of saying “No time wasters please.”

[6] In information theory, knowledge is different to information: knowledge is what is conveyed; information is how knowledge is conveyed.

[7] Some genetic mutations include the one that enhances resistance to malaria: one copy of the mutated gene confers the resistance, but two copies cause sickle-cell anaemia, which has a 25% mortality rate. Another mutation is the case of the poorly named “superbugs” which resist treatment by antibiotics. In fact, they are genetic cripples that are only resistant because the antibiotics can’t latch onto them. It’s like a parking inspector being unable to clamp your car because it doesn’t have wheels, or a police officer unable to handcuff you because you don’t have hands. It’s an advantage in the present situation, but in the wider population not having wheels or hands is a severe limitation.