That's the point. Every new mutation in a population adds information. The first case was two alleles, with an information of about 0.3. The second case was three alleles, with an information of about 0.48.
This conclusion conflates allelic diversity with biologically meaningful information, and the Shannon calculation itself does not support the claim being made.
In Shannon’s framework, information increases when uncertainty increases, not when function or specification increases. Adding a third allele at equal frequency simply increases statistical unpredictability in the population—it says nothing about whether the mutation is functional, deleterious, neutral, or destructive. In fact, most new mutations degrade or disrupt existing function, even while increasing Shannon entropy. By this definition, a population accumulating random noise would be said to “gain information,” which exposes the category error: Shannon information measures distributional spread, not functional content. Biological information is constrained, highly specific, and context-dependent; functional sequences occupy an extremely small subset of possible sequence space. Therefore, while a new mutation may increase entropy in the statistical sense, it does not follow—mathematically or biologically—that it increases the information required to build or maintain complex systems.
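For concreteness, here is a minimal sketch reproducing those figures, assuming equal allele frequencies and entropy taken in log base 10 (the only convention that yields 0.3 and 0.48); note that the quantity tracks distributional spread and nothing else:

```python
import math

def shannon_entropy(freqs, base=10):
    """Shannon entropy H = -sum(p * log_base(p)) over allele frequencies."""
    return -sum(p * math.log(p, base) for p in freqs if p > 0)

print(shannon_entropy([1/2, 1/2]))        # two equal alleles:   ~0.301
print(shannon_entropy([1/3, 1/3, 1/3]))   # three equal alleles: ~0.477
```

The same calculation is equally content with alleles that are lethal, neutral, or beneficial; all it sees is the frequencies.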
It might seem wrong to you, but the same equation shows us how to transmit messages reliably over billions of kilometers of space with very low-powered transmitters.
The fact that Shannon’s equations allow us to transmit messages reliably over billions of kilometers does not resolve the issue being discussed — it actually highlights the category mistake.
Shannon entropy is extraordinarily powerful for signal transmission, where three things are already in place: a predefined alphabet (symbols are known in advance), an encoding scheme (rules for arranging symbols), and a receiver that interprets the signal according to that scheme.
In deep-space communication, Shannon entropy tells us how efficiently a designed message can be preserved against noise. It does not explain how the message, the code, or the semantic constraints came into existence in the first place.
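To make that concrete, here is a toy sketch (a simple repetition code, not an actual deep-space protocol): reliability against noise is achievable only because sender and receiver already share the alphabet and the decoding rule.

```python
import random

def encode(bits, r=3):
    """Repetition code: repeat each bit r times (a scheme agreed on in advance)."""
    return [b for b in bits for _ in range(r)]

def noisy_channel(bits, p=0.1):
    """Flip each bit with probability p to simulate channel noise."""
    return [b ^ int(random.random() < p) for b in bits]

def decode(bits, r=3):
    """Majority vote over each block of r bits (the receiver must know r)."""
    return [int(sum(bits[i:i + r]) > r // 2) for i in range(0, len(bits), r)]

message = [1, 0, 1, 1, 0, 0, 1, 0]
received = decode(noisy_channel(encode(message)))
print(message, received)  # usually identical despite the noise
```

Every piece of that machinery (the alphabet, the redundancy, the decoding rule) is specified before a single bit is sent; Shannon's theorems tell us how well it can work, not where it came from.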
That distinction is crucial.
Shannon Himself Explicitly Excluded Meaning
Claude Shannon was very clear about the limits of his theory:
“The semantic aspects of communication are irrelevant to the engineering problem.”
— Shannon, A Mathematical Theory of Communication (1948)
Shannon entropy measures uncertainty, not function, instruction, or biological meaning. It works beautifully once a communication system already exists — but it is silent about how such systems originate.
Why This Matters for Biology
Biological sequences are not merely strings that resist noise. They must:
Fold into precise 3D structures
Interact with other molecules
Participate in coordinated regulatory networks
Produce functional outcomes under cellular constraints
Two sequences can have identical Shannon entropy while one produces a working protein and the other produces nothing functional. Shannon’s metric cannot distinguish between them.
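A minimal sketch (toy strings, not real genes) makes this concrete: two sequences with the same base composition have identical per-symbol Shannon entropy no matter how the symbols are ordered, yet order is exactly what folding and function depend on.

```python
import math
from collections import Counter

def per_symbol_entropy(seq, base=2):
    """Shannon entropy (bits per symbol) of the symbol frequencies in seq."""
    n = len(seq)
    return -sum((c / n) * math.log(c / n, base) for c in Counter(seq).values())

seq_a = "ATGGCCAAGT"            # hypothetical sequence
seq_b = "".join(sorted(seq_a))  # same composition, order destroyed

print(per_symbol_entropy(seq_a))  # identical value for both strings,
print(per_symbol_entropy(seq_b))  # even though only the order carries the biology
```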
Transmission ≠ Origin
Saying “Shannon entropy works for deep-space communication” supports the claim that information can be preserved once coded. It does not demonstrate that undirected processes can generate the tightly constrained, functionally specified sequences required for living systems.
In fact, every deep-space transmission presupposes: an intelligent sender, a coding system, and a goal (successful decoding).
Those are exactly the questions at issue in origins biology — not answers to them.
The bottom line is that Shannon entropy is indispensable for engineering communication.
It is not a theory of information origin, biological function, or semantic content.
Using it to argue that “every mutation increases information” or that it solves the problem of biological complexity is to apply the right mathematics to the wrong question.
Most mutations don't do much. Some are harmful. A few are useful. Darwin's great discovery was that the harmful ones tend to be removed, and the useful ones tend to spread in a population. But it's important to remember that information is not necessarily useful for anything. This is why the creationist dodge about "information" is such a failure.
I’m not entirely sure what creationists have to do with this particular point, other than being rhetorically invoked as a contrast class. My claim wasn’t “creationism is scientific” or “evolution is religion.” It was much simpler and more general:
All humans operate with foundational assumptions—guiding axioms—that cannot themselves be proven by the methods that rest upon them. That includes scientists.
Faith Is Not the Same as Ignorance. When I said there is “faith” involved, I was not suggesting blind belief or rejection of evidence. I was pointing out something well established in the philosophy of science:
We assume the uniformity of nature. We assume the reliability of reason. We assume that past causes can be inferred from present evidence. We assume that unguided processes are sufficient explanations unless shown otherwise.
None of these assumptions are directly observed. They are preconditions for doing science at all. Calling these “faith” is not an insult. It is an acknowledgment of epistemic limits.
Invoking “Creationists” Is a Category Error. Bringing creationists into the discussion sidesteps the argument rather than answering it. This is not a debate about who is right or wrong religiously. It is a discussion about how much explanatory weight we place on unobserved historical processes and what assumptions we are comfortable making when doing so.
Whether one favors intelligent design, theistic evolution, or strict naturalism is secondary to the point that origin theories necessarily extend beyond direct observation.
That applies equally to: special creation, abiogenesis, universal common ancestry, and unguided natural selection as a complete explanation.
All of them rely on inference grounded in prior commitments.
Observation vs. Interpretation:
Yes, we observe adaptation.
Yes, we observe selection.
Yes, we observe mutation.
What we do not observe directly are: the origin of life, the emergence of the first genetic code, the integration of complex multi-part systems at their inception, speciation, etc.
When we say those things happened by unguided processes, we are making a reasoned inference—but still an inference. Confidence in that inference is not observation; it is trust in a framework. That is where “faith” enters—not as theology, but as epistemology.
The Core Point Still Stands. My point was not that evolution is irrational, nor that science is invalid. It was this:
At the level of ultimate origins, everyone places confidence in assumptions they cannot finally prove.
Some trust that unguided processes are sufficient.
Some trust that intelligence is fundamental.
Those are worldview-level commitments, not empirical measurements.
Recognizing that does not weaken science. It simply keeps us honest about what science can—and cannot—ultimately adjudicate.
That's wrong, too. For example, if we have two neutral alleles for a gene in a population, over time, there is a very good likelihood of one of them becoming fixed in the population genome. And Shannon did not assume a "predefined set of symbols and probabilities." I could show you examples with different symbols and probabilities, if you like.
Your objection still misses a crucial point that Claude Shannon himself went out of his way to clarify: his theory explicitly excludes meaning and function. This matters because biological “information” is inseparable from biochemical function.
Shannon wrote in A Mathematical Theory of Communication (1948):
“The semantic aspects of communication are irrelevant to the engineering problem.”
And again:
“The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point.”
Shannon entropy measures uncertainty in symbol transmission, not what the symbols do. He was explicit that his theory does not address meaning, purpose, or usefulness.
Neutral Fixation Still Does Not Address Function. Yes, neutral alleles can drift to fixation. But fixation reduces Shannon entropy—it removes uncertainty. More importantly, it does nothing to explain the origin of biological function.
Two neutral alleles can differ in sequence yet produce the same phenotype. Their fixation changes population statistics, not biological capabilities. Shannon entropy can go up, down, or sideways without any change in function at all.
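A short sketch (hypothetical frequencies for a two-allele locus, entropy in bits) shows the point: as one neutral allele drifts toward fixation, the population's Shannon entropy falls to zero while biological function is untouched.

```python
import math

def locus_entropy(p, base=2):
    """Shannon entropy (bits) of a two-allele locus with frequencies p and 1 - p."""
    if p in (0.0, 1.0):
        return 0.0  # fixation: no uncertainty left
    q = 1.0 - p
    return -(p * math.log(p, base) + q * math.log(q, base))

# Hypothetical trajectory of a neutral allele drifting toward fixation
for p in [0.5, 0.7, 0.9, 0.99, 1.0]:
    print(f"p = {p:<4}  H = {locus_entropy(p):.3f} bits")
```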
That is precisely why equating Shannon entropy with biological information gain is a mistake.
Shannon Entropy Requires Defined States and Probabilities
Contrary to the claim that Shannon “did not assume a predefined set of symbols and probabilities,” his mathematics explicitly requires them.
Shannon defined information as a function of: “the probabilities of the various possible messages.”
You can change the alphabet or the probabilities, but for any given calculation they must be specified. In genetics, those choices are contextual: nucleotides, codons, alleles, regulatory motifs. Change the context and the entropy changes.
That is not a flaw—it is how the theory works.
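As a small illustration (a toy sequence, not a real gene): the same stretch of DNA yields different entropy values depending on whether it is framed as nucleotides or as codons, because the symbol set and its probabilities have to be specified before H can be computed at all.

```python
import math
from collections import Counter

def entropy(symbols, base=2):
    """H = -sum(p * log(p)) over the observed symbol frequencies."""
    n = len(symbols)
    return -sum((c / n) * math.log(c / n, base) for c in Counter(symbols).values())

dna = "ATGGCTATGGCT"  # toy sequence

as_nucleotides = list(dna)                                 # alphabet {A, C, G, T}
as_codons = [dna[i:i + 3] for i in range(0, len(dna), 3)]  # alphabet of triplets

print(entropy(as_nucleotides))  # entropy per nucleotide (~1.92 bits)
print(entropy(as_codons))       # entropy per codon (1.0 bit): same molecule, different framing
```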
Meaning Is Exactly What Biology Cares About. Shannon warned against extending his theory beyond its proper domain. As he later emphasized:
“It is hardly to be expected that a single concept of information would satisfactorily account for the numerous possible applications of this general field.”
Biology is one of those applications where meaning matters. A DNA sequence is informational because it folds, binds, regulates, and functions. Shannon entropy deliberately ignores all of that.
Two sequences can carry identical Shannon information while one kills the organism and the other sustains it. Biology treats those as radically different. Shannon does not—and was never intended to.
Shannon entropy is a powerful measure of uncertainty and variability. It is not a measure of functional biological information. Shannon himself insisted on this distinction.
So when it is claimed that “every mutation increases information” based on Shannon entropy, the claim fails—not mathematically, but conceptually. It confuses uncertainty with instruction, variation with innovation, and statistics with function.
That is not a limitation of biology. It is a limitation Shannon clearly acknowledged.