Atheists are now faced with two problems: where does information come from in the
first place, and how could it increase over time? Dawkins proposes one of the
usual models, whereby change is introduced randomly into the DNA and natural
selection then weeds out unwanted results.
Any similarity, whether at the morphology or cellular level, could be used to
support the evolutionist theory. Why limit oneself to hemoglobin? Phillip
Johnson, a law professor at the University of California, Berkeley, and a leading
figure in the Intelligent Design community, points out that there are over 40
different kinds of eyes which,
because of their fundamentally differing structure, must have ‘evolved’
separately[37] since the alternative of a common ancestor would be unpalatable to
evolutionists. Notice the double standard Dawkins and others use. When data
appears consistent with one model, it is used as evidence, and when not, a new
story or name like ‘convergence’ is invented. One must ask what the driving force
is which produces such miracles repeatedly and independently.
Living things have by far the most compact information storage/retrieval system
known. This stands to reason if a microscopic cell stores as much information as
several sets of Encyclopædia Britannica. To illustrate further, the amount of
information that could be stored in a pinhead’s volume of DNA is staggering. It
is the equivalent information content of a pile of paperback books 500 times as
tall as the distance from Earth to the moon, each with a different, yet specific,
content.
However, Dawkins apparently has a low opinion of DNA and how it works. This makes
it easy to gloss over the issue of how all the necessary information arose. Let’s
look a little more closely at this genetic apparatus.
Many genes tend to be involved in one function and must work together ab initio.
Dr. Lucy Shapiro of Stanford, writing about the flagellum, the filament in
bacterial cells that is driven by a rotary motor and used for propulsion,[41]
observes:
‘To carry out the feat of co-ordinating the ordered expression of about 50 genes,
delivering the protein products for these genes to the construction site, and
moving the correct parts to the upper floors while adhering to the design
specification with a high degree of accuracy, the cell requires impressive
organisational skills.’ [42]
The author, Andrée Tétry, a leading French biologist and anti-Darwinian
evolutionist, deliberately sought a naturalistic explanation for the existence of
life. But she concluded:
‘But how could these organic inventions, these small tools, appear? It seems most
improbable that a single mutation could have given rise simultaneously to the
various elements which compose, say, a press-stud or hooking device. Several
mutations must therefore be assumed, but this implies the further assumption of
close co-ordination between different and distinct mutations. Such indispensable
co-ordination is a major stumbling block, for no known mutations occur in this
way.’ [43]
Here we find examples of what Behe calls ‘Irreducible Complexity’: systems
composed of individual parts which only make sense when all components are
present, and for which developing each part individually is inconceivable.[44]
Behe’s most convincing examples involve functional systems composed of individual
members which are single molecules. Examples include aspects of blood clotting,
closed circular DNA, electron transport, the bacterial flagellum, telomeres,
photosynthesis and transcription regulation. It is absurd to argue that the
individual parts arose sequentially (or in parallel) in an uncoordinated fashion.
Can multi-part systems, which are themselves only a component of a living
organism, arise by chance? Professor Siegfried Scherer, a creationist
microbiologist, published a paper in the Journal of Theoretical Biology on the
energy-producing mechanism of bacterial photosynthesis.[45] He estimated that the
move from ‘fermentative bacteria, perhaps similar to Clostridium’ to fully
photosynthetic bacteria would involve no fewer than five new proteins. His
calculations show that ‘the range of probabilities estimated is between 10⁻⁴⁰ and
10⁻¹⁰⁴.’ [46] (Note: the total number of particles in the universe is estimated
at around 10⁸⁰.) And this is a trivial
change compared to producing organs such as a brain or heart.
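As a rough check of these magnitudes, the gap between the estimates can be compared on a log scale. The sketch below uses only the figures quoted above; nothing else is assumed.

```python
# Comparing the quoted magnitudes on a log scale.
import math

p_high, p_low = 1e-40, 1e-104   # Scherer's estimated probability range
particles = 1e80                # rough count of particles in the universe

# Orders of magnitude between the lower estimate and one chance
# per particle in the observable universe (10^-80):
gap = math.log10(1 / particles) - math.log10(p_low)
print(round(gap))   # 24: the lower estimate sits 24 orders of magnitude below 10^-80
```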
To Scherer’s astronomical number, one must add the consideration of what can go
wrong when photons interact with ‘chromophores’, the portions of molecules
able to absorb light in photosynthesis. If not properly designed, ‘free radicals’
can be generated which would wreak havoc on the cell. Surprisingly, Dawkins may
have suspected difficulties such as those mentioned above because he candidly
tells us:
‘The great evolutionary biologist George C Williams has pointed out that animals
with complicated life cycles need to code for the development of all stages in the
life cycle, but they only have one genome with which to do so. A butterfly’s
genome has to hold the complete information needed for building a caterpillar as
well as a butterfly. A sheep liver fluke has six distinct stages in its life
cycle, each specialized for a different way of life.’ [20]
We must now wonder just what it is Dawkins is trying to communicate. The
information coded must also include that which is necessary to guide every step of
the individual stages along and to provide for contingencies such as disease or
temperature changes.
Origin vs. Transmission of Information
Transmission of information appears sometimes to be confused with its origin.
Consider two simple systems which ‘carry’ information.
i. a car battery
ii. a computer algorithm
To create such systems requires a deep understanding of natural phenomena to meet
a goal and for the solution to be optimized. Once the intellectual work has been
carried out, the knowledge hidden behind each could be stolen and duplicated
without a need to understand how or why a system works. The information itself is
duplicated and retained on a physical medium. But such information is not
intrinsic to the matter itself and cannot be understood by knowing its properties.
Also, for matter organized as for the examples (i) and (ii) above to perform an
intended goal, additional physical components are necessary, such as computer
hardware components or an engine. These are anticipated and understood by the
creator of the information system. In this sense I argue that a full accounting
of information content often requires a broader view than merely examining the
carefully arranged matter itself.
Dr. Kofahl provides an interesting example that leads one to question whether
Shannon’s notion of information, transmitted as a message, captures the essential
nature of biological information:
‘One mystery is how one virus has DNA which codes for more proteins than it has
space to store the necessary coded information.
'The mystery arose when scientists counted the number of three-letter codons in
the DNA of the virus ΦX174. They found that the proteins produced by the virus
required many more code words than the DNA in the chromosome contains. How could
this be? Careful research revealed the amazing answer. A portion of a chain of
code letters in the gene, say -A-C-T-G-T-C-C-A-G-, could contain three three-
letter genetic words as follows: -A-C-T*G-T-C*C-A-G-. But if the reading frame is
shifted to the right one or two letters, two other genetic words are found in the
middle of this portion, as follows: -A*C-T-G*T-C-C*A-G- and -A-C*T-G-T*C-C-A*G-.
And this is just what the virus does. A string of 390 code letters in its DNA is
read in two different reading frames to get two different proteins from the same
portion of DNA. [69] Could this have happened by chance? Try to compose an
English sentence of 390 letters from which you can get another good sentence by
shifting the framing of the words one letter to the right. It simply can’t be
done. The probability of getting sense is effectively zero.’ [35]
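The double reading described in the quote can be sketched in a few lines of Python. The sequence is the nine-letter example from the text; the codon table is a hand-picked subset of the standard genetic code, covering only the codons that occur here.

```python
# Sketch: reading the same DNA string in shifted frames yields
# different amino-acid sequences, as described for phiX174.
# Only the codons needed for this toy sequence are included.
CODON_TABLE = {
    "ACT": "Thr", "GTC": "Val", "CAG": "Gln",   # frame 0
    "CTG": "Leu", "TCC": "Ser",                 # frame +1
    "TGT": "Cys", "CCA": "Pro",                 # frame +2
}

def translate(dna: str, frame: int) -> list[str]:
    """Split dna into codons starting at offset `frame` and look each up."""
    codons = [dna[i:i + 3] for i in range(frame, len(dna) - 2, 3)]
    return [CODON_TABLE[c] for c in codons]

seq = "ACTGTCCAG"               # the nine-letter example from the text
print(translate(seq, 0))        # ['Thr', 'Val', 'Gln']
print(translate(seq, 1))        # ['Leu', 'Ser']
print(translate(seq, 2))        # ['Cys', 'Pro']
```

Shifting the frame by one letter yields an entirely different protein fragment from the same stretch of DNA, which is the phenomenon Kofahl describes.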
Dawkins is aware of this, but provides no materialistic explanation for its
origin.[70] The total information prepared in the above genome by the sender
(God) presupposes co-ordination with the receiver as to how to process the
message.
Two schemes of identical message length could allow either one or two proteins
to be generated from the same DNA sequence. In each gene there is no redundancy,
yet one provides twice as much information about the protein(s) to be generated
as the other does.
Dembski has argued in a mathematically rigorous way that what he calls Complex
Specified Information (CSI) cannot arise by natural causes:
‘Natural causes are in-principle incapable of explaining the origin of CSI. To be
sure, natural causes can explain the flow of CSI, being ideally suited for
transmitting already existing CSI. What natural causes cannot do, however, is
originate CSI. This strong proscriptive claim, that natural causes can only
transmit CSI but never originate it, I call the Law of Conservation of
Information. It is this law that gives definite scientific content to the claim
that CSI is intelligently caused.’ [24]
Why does change ever occur in the sense of microevolution? Random fluctuation,
leading to small variations among existing genes, is fully compatible with our
view that God created unique and fully functional plant and animal categories
which are to ‘reproduce after their kind’. Before Darwin’s time, natural
selection was viewed as a method of culling members of a population which were no
longer as well adjusted to the environment as the norm, and it is an information
removing process.
Conclusion. The question as to the origin of information necessary to develop
greater complexity and to guide an organism’s development has not been answered by
Prof. Dawkins. A discussion of Shannon’s notions is not the same as providing an
example as requested. Does Dawkins offer us any suggestions as to how information
content might increase over time in living organisms? We read:
‘Mutation is not an increase in true information content, rather the reverse, for
mutation, in the Shannon analogy, contributes to increasing the prior uncertainty.
But now we come to natural selection, which reduces the ‘prior uncertainty’ and
therefore, in Shannon’s sense, contributes information to the gene pool. In every
generation, natural selection removes the less successful genes from the gene
pool, so the remaining gene pool is a narrower subset.’
‘Of course the total range of variation is topped up again in every generation by
new mutations...’
‘According to this analogy, natural selection is by definition a process whereby
information is fed into the gene pool of the next generation.
If natural selection feeds information into gene pools, what is the information
about? It is about how to survive.’ [47]
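Dawkins’ Shannon analogy can be sketched numerically. The allele labels and frequencies below are purely illustrative; the point is only that removing alternatives lowers the Shannon entropy, his ‘prior uncertainty’.

```python
# Sketch of the Shannon analogy: selection narrows the gene pool,
# lowering the entropy ("prior uncertainty") of allele frequencies.
# The frequencies here are hypothetical, chosen for illustration.
import math

def entropy(freqs):
    """Shannon entropy, in bits, of a probability distribution."""
    return -sum(p * math.log2(p) for p in freqs if p > 0)

before = [0.25, 0.25, 0.25, 0.25]   # four equally common alleles
after  = [0.5, 0.5, 0.0, 0.0]       # selection removes two of them

print(entropy(before))   # 2.0 bits
print(entropy(after))    # 1.0 bit
```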
Apparently, mutations provide change, and selection makes sure the good changes
are favored, and this is defined by Dawkins as an increase in information. Since
the amount of total change available after duplication of genes is greater, and
Dawkins states that mutations decrease the true information content, it is not
clear why a larger number of initially identical genes, each now undergoing random
mutations, is going to help his argument out. He now begins with a larger ‘prior
uncertainty’. The following parts of this essay will examine this question in
more detail.
It seems all these additional genes are going to add to the confusion produced by
DNA duplicating errors. The total number of chances for failure increases,
meaning more proteins with the wrong structures will be produced.
In addition, it would appear that specialization by selection should tend to
decrease the genetic information. Darwin found wingless beetles stranded on the
island of Madeira. Perhaps the beetles that could fly were all blown out to the
ocean by the wind, so drowned before they could propagate their genes. But should
conditions change, those beetles can no longer regain a valuable function, flying.
Selection inevitably removes information from the gene pool.[48]
We should also consider regulatory genes that switch other genes ‘on’ or ‘off’.
That is, they control whether or not the information in a gene will be decoded, so
the trait will be expressed in the creature. This would enable very rapid and
‘jumpy’ changes, which are still changes involving already created information,
not generation of new information, even if latent (hidden) information was turned
on. For example, horses probably have genetic information coding for extra toes,
but it is switched off in most modern horses. Sometimes a horse is born today
where the genes are switched on, and certainly many fossil horses also had the
genes switched on. This phenomenon explains the fossil record of the horse,
showing that it is variation within a kind, not evolution. It also explains why
there are no transitional forms showing gradually smaller toe size.[49]
Virtually all mutations are harmful or at best neutral to the organism and prevent
the messages encoded on DNA from being passed on as intended. A greater number of
redundant genes compounds the problem. Consider what the long-term effect of
mutations is according to Parker:
‘The more time that goes by, the greater the genetic burden or genetic corruption.
Natural selection can’t save us from this genetic decay, since most mutations are
recessive and can sneak through a population hidden in carriers, only rarely
showing up as the double recessive which can be “attacked” by natural selection.
As time goes by, accumulating genetic decay threatens the very survival of plant,
animal, and the human populations’. [50]
The late Professor Pierre-Paul Grassé, widely regarded as one of the most
distinguished of French zoologists, although not a creationist, denied
emphatically that mutations and selection can create new complex organs, assigning
to DNA duplication errors the role of mere fluctuation.[51]
Dr. Demick, a practising pathologist, likens the activity of mutations to ‘A Blind
Gunman’.[52] He points out:
‘First, that the human mutation problem is bad and getting worse. Second, that it
is unbalanced by any detectable positive mutations. To summarize, recent research
has revealed literally tens of thousands of different mutations affecting the
human genome, with a likelihood of many more yet to be characterized. These have
been associated with thousands of diseases affecting every organ and tissue type
in the body. In all this research, not one mutation that increased the efficiency
of a genetically coded human protein has been found. Each generation has a
slightly more disordered genetic constitution than the preceding one.’ [52]
Dr. Jonathan Wells, a cell biologist currently at the University of California,
Berkeley, states specifically with reference to Dawkins’ article:
‘But there is no evidence that DNA mutations can provide the sorts of variations
needed for evolution ... The sorts of variations which can contribute to Darwinian
evolution, however, involve things like bone structure or body plan. There is no
evidence for beneficial mutations at the level of macroevolution, but there is
also no evidence at the level of what is commonly regarded as microevolution.’
He continues:
‘The claim that mutations explain differences among genes, which in turn explain
differences among organisms, is the Neo-Darwinian equivalent of alchemy. Compare:
1. We know that mutations happen, and that they alter DNA sequences; organisms
differ in their DNA sequences, so the differences between organisms must be due
(ultimately) to mutations.
2. We know that we can change the characteristics of metals by chemical means;
lead and gold have different characteristics; therefore it must be possible to
change lead into gold by chemical means.
In both cases, the mechanisms invoked to explain the phenomena are incapable of
doing so. Darwinists (like alchemists) have misconceived the nature of reality,
and thus hitched their wagon to an imaginary horse.’ [53]
Israeli MIT-trained biophysicist Dr. Lee Spetner inspired the original question as
to where the information arose in living beings through his book Not By
Chance.[54] He made the following observations about Dawkins’ essay:
1. Let me coin the word ‘biocosm’ to denote the union of all living organisms
at any particular time. Then we can say that the information in the biocosm of
today is vastly greater than that in the putative primitive organism.
2. If Neo-Darwinian theory (NDT) is to account for the evolution of all life,
as it claims to, it must account for this vast increase of biocosmic information
[which would be needed to transform bacteria into humans].
3. Since NDT is based on a long series of small steps then, on the average,
each step must have added some information.
4. According to NDT, a step consists of the appearance of random genetic
variation acted upon by natural selection. (The randomness is important to NDT to
avoid having to invoke some mechanism for the organism’s ‘need’ to induce
mutations that are adaptive to it.)
5. Because the steps in evolution are very small, and because there is supposed
to have been a vast amount of evolutionary change, there must have been a very
large number of such steps. Likewise, a very large number of steps should have
added information to the biocosm.
6. Mutations provide the raw material from which natural selection chooses. If
a single step of mutation followed by natural selection adds information, then the
mutation that gets selected must provide an increase in genetic information.
7. Considering the great sweep of evolution for which NDT claims to account,
and considering the huge number of steps that are supposed to have led to that
evolution, there must have been a huge number of random mutations that added at
least a little information to the biocosm.
8. Therefore, with all the mutations that have been studied on the molecular
level, we should find some that add information.
9. The fact is that none have been found, and that is why Dawkins cannot give
an example. [55]
Dawkins, and others who postulate that inanimate material can produce life unaided
with a necessary constant increase in information, are going to have to face up to
the fact that a lot of very smart people are taking an increasingly dim view of
what is being presented as ‘fact’ in many textbooks.[56]
It seems fair to point out that evolutionists have yet to provide even a single
concrete example of a mutation leading to an increase of information as requested.
Let’s reconsider P(E|F)/P(E). How much more likely is it on average that an
organism with one additional protein, generated ab initio, with or without more
duplicated genes, will survive than a sister exclusively because of that one
protein? In the best case, this single protein would become functional concurrent
with a drastic change in the environment for which the protein could be of some
immediate use. This would offer some measurable advantage. But it becomes ‘just-
so’ story-telling to invent such environmental catastrophes so often.
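For concreteness, the bookkeeping behind the ratio can be sketched with hypothetical numbers, reading E as ‘the organism survives’ and F as ‘it carries the new protein’. Every figure below is invented purely to show the arithmetic.

```python
# Toy sketch of the ratio P(E|F)/P(E) discussed above.
# All numbers are hypothetical, chosen only to show the bookkeeping.

p_F = 0.01             # fraction of the population carrying the new protein
p_E_given_F = 0.52     # survival chance with the protein (hypothetical)
p_E_given_notF = 0.50  # survival chance without it (hypothetical)

# Total probability of survival across the whole population:
p_E = p_F * p_E_given_F + (1 - p_F) * p_E_given_notF

ratio = p_E_given_F / p_E
print(round(ratio, 4))   # just above 1: a slight survival advantage
```

A ratio greater than 1 means the protein confers some survival advantage; the argument in the text is that accumulated genetic load could drag this ratio below 1 despite the new protein.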
Now, it is questionable whether any mutation can be shown to lead to some kind of
improvement without causing deleterious functioning of some processes already
encoded on the DNA (this is very different from the question whether one mutation
could allow some members to temporarily survive some drastic environmental
change). Presumably a very bad mutation leads to death, weeding out such mutated
genes from that species’ gene pool forever. Nevertheless, that member with a
single new protein, whose offspring will eventually dominate the species
population, will inevitably passively carry a large number of slightly defective
but not yet deadly genes.
In other words, when I determine that a new protein is present in one or several
organisms, I then know that many generations have passed since the protein-
building process started, and that a huge number of bad, but individually not yet
deadly, mutations have been accumulating. This time bomb may indeed mean that
P(E|F)/P(E) on average may actually be < 1 — the chances of survival for a large
number of members with one improvement but a huge number of disadvantages could
militate against enhanced survival chances!
This is an inevitable consequence of the law of increase in entropy to which all
matter is subject in the long run.[62] This genetic load will get worse with an
increasing number of generations. By invoking duplicate ‘junk’ genes, Dawkins is
merely increasing the potential for more flaws. When told that an organism has a
new protein, I know that many generations must have passed since the point where
no evidence for that protein existed, and so the current member has inherited a
lot of momentarily hidden flaws. Its temporary survival is a blessing in disguise
for the species as a whole. I therefore suspect that P(E|F)/P(E) would indeed be
< 1.
Nevertheless, survival is not the real issue, but rather an increase in
information. The penalty for generating a new protein is a degrading of many
other functions that have been damaged by all the concurrent mutations not related
to producing that protein.
Now, to obtain anything interesting, such as a new organ, far more than a single
new protein is needed. Getting two of the right ones, which will eventually lead
to a new structure, all at once or sequentially, could only occur, if at all,
after vastly more generations have passed, accompanied by a vastly greater
genetic load. In fact, time becomes the greatest enemy of evolution.
Conclusion: the source of information, even when defined as per Dawkins, remains
an intractable problem for evolutionary theory.
Professor Gitt’s Universal Laws for Information
It is impossible to set up, store, or transmit information without using a code.
It is impossible to have a code apart from a free and deliberate convention.
It is impossible to have information without a sender.
It is impossible that information can exist without having had a mental source.
It is impossible for information to exist without having been established
voluntarily by a free will.
It is impossible for information to exist without all five hierarchical levels:
statistics, syntax, semantics, pragmatics, and apobetics [the purpose for which
the information is intended, from the Greek apobeinon = result, success].
It is impossible that information can originate in statistical processes.