Evolution conflict and division

Mercy Shown

Well-Known Member
Well, that's a testable assumption. Information theory was first worked out by Claude Shannon, who applied it to biological systems. He defined information as the uncertainty about a message before it is read. If you know exactly what's in a message before reading it, the message has an information content of 0.0.

So the formula for genetic information is:
$$H = -\sum_i x_i \log x_i$$

Where $x_i$ is the frequency of the $i$-th allele of a gene. Let's look at a simple example. If a population has two alleles, each with a frequency of 0.5, then the information for that gene is about 0.3 (the quoted values correspond to base-10 logarithms). Now suppose a mutation happens and it eventually spreads so that there are three alleles, each with a frequency of 0.333. Then the information for that gene is about 0.48.
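A minimal sketch of that arithmetic (assuming the base-10 logarithms the quoted figures imply):

```python
import math

def shannon_information(freqs, base=10):
    """H = -sum(x_i * log(x_i)) over allele frequencies x_i."""
    return -sum(x * math.log(x, base) for x in freqs if x > 0)

print(shannon_information([0.5, 0.5]))        # two alleles: ~0.301
print(shannon_information([1/3, 1/3, 1/3]))   # three alleles: ~0.477
```
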
The example presented misrepresents what Shannon information actually measures and what it tells us about biology.

Shannon Information ≠ Functional Biological Information
Shannon entropy, as your formula shows, measures uncertainty or variability in a distribution. In your example, when allele frequencies move from 0.5/0.5 to 0.333/0.333/0.333, the Shannon information of the distribution increases. That is true in the mathematical sense: more uncertainty, more entropy. But this is not the same as functional information—the information that encodes a protein, a regulatory sequence, or a metabolic pathway. Increasing variability does not guarantee a new useful function. A mutation can increase Shannon entropy while simultaneously destroying a functional gene or reducing fitness.

Mutations Are Often Neutral or Deleterious. Empirically, the majority of mutations are either neutral or harmful. Gene duplications, point mutations, or regulatory changes can introduce variation, but not all variation contributes to new, functional information. In many cases, what Shannon entropy measures as an “increase in information” is simply more uncertainty in allele frequency, not a biologically meaningful gain.

Population-Level Frequency Changes Are Not the Same as Origination of Function. The example you gave calculates information after a mutation spreads in a population. But the critical question is: did the mutation create a new functional capability or merely shuffle allele frequencies? A mutation can increase entropy without producing new regulatory networks, enzymes, or integrated systems—the kinds of changes required for evolutionary innovation at the genome or system level.

Shannon Information is Context-Dependent. Shannon’s metric assumes a predefined set of symbols and probabilities. In biology, the context matters: the sequence must fold correctly, interact with other molecules, and confer selective advantage. Simply increasing allele uncertainty does not guarantee the emergence of new biological function.

Summary
Yes, a mutation can mathematically increase Shannon entropy. But that does not equate to a gain in functional or biologically meaningful information. Saying that “every new mutation increases information” is therefore misleading—it ignores the distinction between uncertainty in a distribution and functional genomic information.

Mutations can increase Shannon entropy while reducing fitness or disrupting essential functions. That is why the claim that “every new mutation increases information” is incorrect.
 

Mercy Shown

Well-Known Member
One does not have "faith in" a natural phenomenon. One merely observes it. Nor does one have "faith in" a theory that explains it. One merely checks to see if the predictions of the theory have been validated by subsequent evidence. As I suggested, if you understood more about these issues, they wouldn't be so confusing for you.
It’s true that we can observe natural phenomena, and we can test theories against predictions. But it’s worth remembering that observation and testing always rely on underlying assumptions—and in that sense, even scientific conclusions require a form of trust or confidence that could reasonably be called “faith.”

For example:

We assume that the laws of physics and chemistry we measure today operated the same way in the distant past. We cannot directly observe billions of years ago; we infer it.

We assume that our measurements of fossils, genes, or molecular structures are complete and correctly interpreted. No observation is entirely free of model-dependent interpretation.

When we interpret patterns of descent, adaptation, or complexity, we assume that natural selection and mutation can account for the origin and diversification of life—even though the actual processes that generated the first life or complex systems are not directly observed.

In that sense, confidence in natural selection, common ancestry, or intelligent design all involve trust in frameworks that extend beyond what we can directly observe. Calling it “faith” does not mean rejecting evidence; it simply recognizes that all explanations for origins rely on assumptions that cannot be fully proven, only supported.

Science minimizes uncertainty through evidence and prediction—but it cannot escape the fact that some foundational assumptions are taken on trust. Intelligent design, evolution, and any other origin hypothesis are all operating under that reality.

Here's an edited version of your text with corrections for spelling, grammar, and clarity while preserving your original meaning:

"So far, I have been ignoring your ad hominem inferences. I think arguments without them are stronger and more persuasive than ones that include them. They make an argument appear as though it can't stand on its own legs."
 

The Barbarian

Crabby Old White Guy
It’s true that we can observe natural phenomena, and we can test theories against predictions. But it’s worth remembering that observation and testing always rely on underlying assumptions—and in that sense, even scientific conclusions require a form of trust or confidence that could reasonably be called “faith.”
No. Statistical inference is not "faith." I'm always astonished that creationists have so little confidence in faith that they try to make it an accusation. There is reality, and we can learn about it by careful study. No faith required.

We assume that the laws of physics and chemistry we measure today operated the same way in the distant past. We cannot directly observe billions of years ago; we infer it.
Evidence shows this to be true. For example, if the speed of light was significantly faster thousands of years ago, accelerated nuclear decay would have fried living things on Earth.

We assume that our measurements of fossils, genes, or molecular structures are complete and correctly interpreted.
We can test such things. Reproducible results are the point of scientific investigation.

When we interpret patterns of descent, adaptation, or complexity, we assume that natural selection and mutation can account for the origin and diversification of life
As you learned earlier, evolutionary theory is not about the origin of life. You need to keep that in mind. However, we can test the idea that mutation and natural selection produce evolutionary change. And we can measure selective forces on specific genes. Would you like to learn how the math works on that? The Hardy-Weinberg principle is used to detect selective forces. The simplest case is two alleles, A and a.

In the simplest case of a single locus with two alleles denoted A and a with frequencies f(A) = p and f(a) = q, respectively, the expected genotype frequencies under random mating are f(AA) = p² for the AA homozygotes, f(aa) = q² for the aa homozygotes, and f(Aa) = 2pq for the heterozygotes. In the absence of selection, mutation, genetic drift, or other forces, allele frequencies p and q are constant between generations, so equilibrium is reached.

If the distribution is significantly different from that predicted by this equation, that signifies selective forces acting on those alleles. Very testable and reproducible.
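A minimal sketch of such a test, as a chi-square comparison against Hardy-Weinberg expectations (the genotype counts below are hypothetical, chosen only for illustration):

```python
def hardy_weinberg_chi2(n_AA, n_Aa, n_aa):
    """Chi-square statistic of observed genotype counts against
    Hardy-Weinberg expectations (1 degree of freedom)."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)   # estimated frequency of allele A
    q = 1 - p                          # frequency of allele a
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    observed = (n_AA, n_Aa, n_aa)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical counts: a statistic above 3.84 (p < 0.05, 1 df) would
# suggest the population is not at Hardy-Weinberg equilibrium.
print(hardy_weinberg_chi2(380, 480, 140))
```
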

In that sense, confidence in natural selection, common ancestry, or intelligent design all involve trust in frameworks that extend beyond what we can directly observe.
No, that's quite testable. Undergraduates test it in many universities routinely. Would you like to learn how that works?

Here's an edited version of your text with corrections for spelling, grammar, and clarity while preserving your original meaning:
I see you deleted all the math, specific examples, citations from the literature, and your failures to answer my questions. Not hard to figure out why. This isn't a fix one can talk oneself out of. Facts are required. A knowledge of evolutionary and genetic processes is required.

For example, I asked you to show which of the four points of Darwin's theory are about the origin of life. I offered to tell you what those points are, if you did not know. You ignored the question. Maybe you could look it up if you don't want to hear it from me.
 

The Barbarian

Crabby Old White Guy
Shannon entropy, as your formula shows, measures uncertainty or variability in a distribution. In your example, when allele frequencies move from 0.5/0.5 to 0.333/0.333/0.333, the Shannon information of the distribution increases.
That's the point. Every new mutation in a population adds information. First case was two alleles, with an information of about 0.3. The second case was three alleles with an information of about 0.48.

But this is not the same as functional information
It might seem wrong to you, but the same equation shows us how to transmit messages reliably over billions of kilometers of space with very low-powered transmitters.

A mutation can increase entropy without producing new regulatory networks, enzymes, or integrated systems—the kinds of changes required for evolutionary innovation at the genome or system level.
Most mutations don't do much. Some are harmful. A few are useful. Darwin's great discovery was that the harmful ones tend to be removed, and the useful ones tend to spread in a population. But it's important to remember that information is not necessarily useful for anything. This is why the creationist dodge about "information" is such a failure.

Shannon Information is Context-Dependent. Shannon’s metric assumes a predefined set of symbols and probabilities. In biology, the context matters: the sequence must fold correctly, interact with other molecules, and confer selective advantage.
That's wrong, too. For example, if we have two neutral alleles for a gene in a population, over time, there is a very good likelihood of one of them becoming fixed in the population genome. And Shannon did not assume a "predefined set of symbols and probabilities." I could show you examples with different symbols and probabilities, if you like.
 

Mercy Shown

Well-Known Member
That's the point. Every new mutation in a population adds information. First case was two alleles, with an information of about 0.3. The second case was three alleles with an information of about 0.48.
This conclusion conflates allelic diversity with biologically meaningful information, and the Shannon calculation itself does not support the claim being made.

In Shannon’s framework, information increases when uncertainty increases, not when function or specification increases. Adding a third allele at equal frequency simply increases statistical unpredictability in the population—it says nothing about whether the mutation is functional, deleterious, neutral, or destructive. In fact, most new mutations degrade or disrupt existing function, even while increasing Shannon entropy. By this definition, a population accumulating random noise would be said to “gain information,” which exposes the category error: Shannon information measures distributional spread, not functional content. Biological information is constrained, highly specific, and context-dependent; functional sequences occupy an extremely small subset of possible sequence space. Therefore, while a new mutation may increase entropy in the statistical sense, it does not follow—mathematically or biologically—that it increases the information required to build or maintain complex systems.
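One way to make the distinction concrete: a sequence and a random shuffle of it have identical symbol frequencies, hence identical Shannon entropy, even though any function that depends on the ordering is destroyed. A minimal sketch (the sequence is a hypothetical stand-in, not a real gene):

```python
import math
import random
from collections import Counter

def shannon_entropy(seq, base=2):
    """Entropy of the symbol-frequency distribution of a sequence."""
    n = len(seq)
    return -sum((c / n) * math.log(c / n, base) for c in Counter(seq).values())

coding = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"    # hypothetical reading frame
shuffled = "".join(random.sample(coding, len(coding)))  # same letters, random order

print(shannon_entropy(coding))    # identical entropies:
print(shannon_entropy(shuffled))  # the metric ignores symbol order
```
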
It might seem wrong to you, but the same equation shows us how to transmit messages reliably over billions of kilometers of space with very low-powered transmitters.
The fact that Shannon’s equations allow us to transmit messages reliably over billions of kilometers does not resolve the issue being discussed — it actually highlights the category mistake.

Shannon entropy is extraordinarily powerful for signal transmission, where three things are already in place: a predefined alphabet (symbols are known in advance), an encoding scheme (rules for arranging symbols), and a receiver that interprets the signal according to that scheme.

In deep-space communication, Shannon entropy tells us how efficiently a designed message can be preserved against noise. It does not explain how the message, the code, or the semantic constraints came into existence in the first place.

That distinction is crucial.

Shannon Himself Explicitly Excluded Meaning

Claude Shannon was very clear about the limits of his theory:

“The semantic aspects of communication are irrelevant to the engineering problem.”
— Shannon, A Mathematical Theory of Communication (1948)


Shannon entropy measures uncertainty, not function, instruction, or biological meaning. It works beautifully once a communication system already exists — but it is silent about how such systems originate.

Why This Matters for Biology

Biological sequences are not merely strings that resist noise. They must:

Fold into precise 3D structures

Interact with other molecules

Participate in coordinated regulatory networks

Produce functional outcomes under cellular constraints

Two sequences can have identical Shannon entropy while one produces a working protein and the other produces nothing functional. Shannon’s metric cannot distinguish between them.

Transmission ≠ Origin

Saying “Shannon entropy works for deep-space communication” supports the claim that information can be preserved once coded. It does not demonstrate that undirected processes can generate the tightly constrained, functionally specified sequences required for living systems.

In fact, every deep-space transmission presupposes: an intelligent sender, a coding system, and a goal (successful decoding)

Those are exactly the questions at issue in origins biology — not answers to them.

The bottom line is that Shannon entropy is indispensable for engineering communication.
It is not a theory of information origin, biological function, or semantic content.

Using it to argue that “every mutation increases information” or that it solves the problem of biological complexity is to apply the right mathematics to the wrong question.

Most mutations don't do much. Some are harmful. A few are useful. Darwin's great discovery was that the harmful ones tend to be removed, and the useful ones tend to spread in a population. But it's important to remember that information is not necessarily useful for anything. This is why the creationist dodge about "information" is such a failure.
I’m not entirely sure what creationists have to do with this particular point, other than being rhetorically invoked as a contrast class. My claim wasn’t “creationism is scientific” or “evolution is religion.” It was much simpler and more general:

All humans operate with foundational assumptions—guiding axioms—that cannot themselves be proven by the methods that rest upon them. That includes scientists.

Faith Is Not the Same as Ignorance. When I said there is “faith” involved, I was not suggesting blind belief or rejection of evidence. I was pointing out something well established in the philosophy of science:

We assume the uniformity of nature. We assume the reliability of reason. We assume that past causes can be inferred from present evidence. We assume that unguided processes are sufficient explanations unless shown otherwise.

None of these assumptions are directly observed. They are preconditions for doing science at all. Calling these “faith” is not an insult. It is an acknowledgment of epistemic limits.

Invoking “Creationists” Is a Category Error. Bringing creationists into the discussion sidesteps the argument rather than answering it. This is not a debate about who is right or wrong religiously. It is a discussion about how much explanatory weight we place on unobserved historical processes and what assumptions we are comfortable making when doing so.

Whether one favors intelligent design, theistic evolution, or strict naturalism is secondary to the point that origin theories necessarily extend beyond direct observation.

That applies equally to: Special creation, Abiogenesis, Universal common ancestry, Unguided natural selection as a complete explanation

All of them rely on inference grounded in prior commitments.

Observation vs. Interpretation:

Yes, we observe adaptation.
Yes, we observe selection.
Yes, we observe mutation.

What we do not observe directly are: the origin of life, the emergence of the first genetic code, the integration of complex multi-part systems at their inception, speciation, etc.

When we say those things happened by unguided processes, we are making a reasoned inference—but still an inference. Confidence in that inference is not observation; it is trust in a framework. That is where “faith” enters—not as theology, but as epistemology.

The Core Point Still Stands. My point was not that evolution is irrational, nor that science is invalid. It was this:

At the level of ultimate origins, everyone places confidence in assumptions they cannot finally prove.

Some trust that unguided processes are sufficient.
Some trust that intelligence is fundamental.

Those are worldview-level commitments, not empirical measurements.

Recognizing that does not weaken science. It simply keeps us honest about what science can—and cannot—ultimately adjudicate.
That's wrong, too. For example, if we have two neutral alleles for a gene in a population, over time, there is a very good likelihood of one of them becoming fixed in the population genome. And Shannon did not assume a "predefined set of symbols and probabilities." I could show you examples with different symbols and probabilities, if you like.
Your objection still misses a crucial point that Claude Shannon himself went out of his way to clarify: his theory explicitly excludes meaning and function. This matters because biological “information” is inseparable from biochemical function.

Shannon wrote in A Mathematical Theory of Communication (1948):

“The semantic aspects of communication are irrelevant to the engineering problem.”

And again:

“The fundamental problem of communication is that of reproducing at one point either exactly or approximately a message selected at another point.”

Shannon entropy measures uncertainty in symbol transmission, not what the symbols do. He was explicit that his theory does not address meaning, purpose, or usefulness.

Neutral Fixation Still Does Not Address Function. Yes, neutral alleles can drift to fixation. But fixation reduces Shannon entropy—it removes uncertainty. More importantly, it does nothing to explain the origin of biological function.

Two neutral alleles can differ in sequence yet produce the same phenotype. Their fixation changes population statistics, not biological capabilities. Shannon entropy can go up, down, or sideways without any change in function at all.
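The fixation point is easy to check numerically; a minimal sketch (allele frequencies are hypothetical, base-10 logs as in the earlier example):

```python
import math

def shannon_information(freqs, base=10):
    """H = -sum(x_i * log(x_i)); zero-frequency alleles contribute nothing."""
    return -sum(x * math.log(x, base) for x in freqs if x > 0)

# As one neutral allele drifts toward fixation, entropy falls to zero.
for freqs in [(0.5, 0.5), (0.9, 0.1), (0.99, 0.01), (1.0, 0.0)]:
    print(freqs, round(shannon_information(freqs), 3))
```
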

That is precisely why equating Shannon entropy with biological information gain is a mistake.

Shannon Entropy Requires Defined States and Probabilities

Contrary to the claim that Shannon “did not assume a predefined set of symbols and probabilities,” his mathematics explicitly requires them.

Shannon defined information as a function of: “the probabilities of the various possible messages.”

You can change the alphabet or the probabilities, but at any given calculation they must be specified. In genetics, those choices are contextual: nucleotides, codons, alleles, regulatory motifs. Change the context and the entropy changes.

That is not a flaw—it is how the theory works.

Meaning Is Exactly What Biology Cares About. Shannon warned against extending his theory beyond its proper domain. As he later emphasized:

“It is hardly to be expected that a single concept of information would satisfactorily account for the numerous possible applications of this general field.”

Biology is one of those applications where meaning matters. A DNA sequence is informational because it folds, binds, regulates, and functions. Shannon entropy deliberately ignores all of that.

Two sequences can carry identical Shannon information while one kills the organism and the other sustains it. Biology treats those as radically different. Shannon does not—and was never intended to.

Shannon entropy is a powerful measure of uncertainty and variability. It is not a measure of functional biological information. Shannon himself insisted on this distinction.

So when it is claimed that “every mutation increases information” based on Shannon entropy, the claim fails—not mathematically, but conceptually. It confuses uncertainty with instruction, variation with innovation, and statistics with function.

That is not a limitation of biology. It is a limitation Shannon clearly acknowledged.
 

The Barbarian

Crabby Old White Guy
That's the point. Every new mutation in a population adds information. First case was two alleles, with an information of about 0.3. The second case was three alleles with an information of about 0.48.

This conclusion conflates allelic diversity with biologically meaningful information, and the Shannon calculation itself does not support the claim being made.
I just showed you how the Shannon equation finds information. Perhaps you could give us a testable definition for "biologically meaningful information." It's not what you seem to think it is.

In Shannon’s framework, information increases when uncertainty increases, not when function or specification increases. Adding a third allele at equal frequency simply increases statistical unpredictability in the population—it says nothing about whether the mutation is functional, deleterious, neutral, or destructive.
You're catching on. So "information" is really a kind of dodge used by creationists. And this is why it is a dodge:

Nature Genetics 28 January 2025

Abstract

In the past decade, our understanding of how new genes originate in diverse organisms has advanced substantially, and more than a dozen molecular mechanisms for generating initial gene structures were identified, in addition to gene duplication. These new genes have been found to integrate into and modify pre-existing gene networks primarily through mutation and selection, revealing new patterns and rules with stable origination rates across various organisms. This progress has challenged the prevailing belief that new proteins evolve from pre-existing genes, as new genes may arise de novo from noncoding DNA sequences in many organisms, with high rates observed in flowering plants. New genes have important roles in phenotypic and functional evolution across diverse biological processes and structures, with detectable fitness effects of sexual conflict genes that can shape species divergence. Such knowledge of new genes can be of translational value in agriculture and medicine.

The bottom line is that Shannon entropy is indispensable for engineering communication.
It is not a theory of information origin, biological function, or semantic content.
If you were familiar with information theory or population genetics, you'd know better:

IEEE Eng Med Biol Mag
2006 Jan-Feb;25(1):30-3

Claude Shannon: Biologist

The Founder of Information Theory Used Biology to Formulate the Channel Capacity

Claude Shannon founded information theory in the 1940s. The theory has long been known to be closely related to thermodynamics and physics through the similarity of Shannon's uncertainty measure to the entropy function. Recent work using information theory to understand molecular biology has unearthed a curious fact: Shannon's channel capacity theorem only applies to living organisms and their products, such as communications channels and molecular machines that make choices from several possibilities. Information theory is therefore a theory about biology, and Shannon was a biologist.


Information theory has more evolutionary applications than you suspect:

Applications of fundamental topics of information theory include source coding/data compression (e.g. for ZIP files) and channel coding/error detection and correction (e.g. for DSL). Its impact has been crucial to the success of the Voyager missions to deep space, the invention of the compact disc, the feasibility of mobile phones and the development of the Internet and artificial intelligence. The theory has also found applications in other areas, including statistical inference, cryptography, neurobiology, perception, signal processing, linguistics, and the evolution and function of molecular codes (bioinformatics)…

Biological Information Theory (BIT) is the application of Shannon's information theory to all of biology.
Tom Schneider is best known for inventing sequence logos, a computer graphic depicting patterns in DNA, RNA or protein that is now widely used by molecular biologists. Logos are only the beginning, however, as the information theory measure used to compute them gives results in bits. But why would a binding site have some number of bits? This led to a simple theory: the number of bits in the DNA binding site of a protein is the number needed to find the sites in the genome. Next, Tom asked how bits are related to binding energy. He solved this problem by using a version of the second law of thermodynamics to convert the bits to the energy needed to select them. Dividing the bits used to define a binding site by the bits that could have been selected for the given energy, he then discovered that the efficiency of DNA binding site selections is near 70% and he constructed a theory to explain this result. Schneider has a number of nanotechnology patents derived in part from this theory.

Nucleic Acids Research, Volume 38, Issue 18, 1 October 2010, Pages 5995–6006

70% efficiency of bistate molecular machines explained by information theory, high dimensional geometry and evolutionary convergence

Abstract: The relationship between information and energy is key to understanding biological systems. We can display the information in DNA sequences specifically bound by proteins by using sequence logos, and we can measure the corresponding binding energy. These can be compared by noting that one of the forms of the second law of thermodynamics defines the minimum energy dissipation required to gain one bit of information. Under the isothermal conditions that molecular machines function this is Emin = kB T ln 2 joules per bit (kB is Boltzmann's constant and T is the absolute temperature). Then an efficiency of binding can be computed by dividing the information in a logo by the free energy of binding after it has been converted to bits. The isothermal efficiencies of not only genetic control systems, but also visual pigments are near 70%. From information and coding theory, the theoretical efficiency limit for bistate molecular machines is ln2 = 0.6931. Evolutionary convergence to maximum efficiency is limited by the constraint that molecular states must be distinct from each other. The result indicates that natural molecular machines operate close to their information processing maximum (the channel capacity), and implies that nanotechnology can attain this goal.
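A small sketch to put numbers on both claims above (the genome size, site count, and temperature are assumed values chosen for illustration, not taken from the papers):

```python
import math

# Bits needed to find binding sites (Schneider's R_frequency):
# locating gamma sites among G genome positions takes log2(G/gamma) bits.
G = 4_600_000    # assumed genome size, roughly E. coli scale
gamma = 100      # assumed number of binding sites
print(math.log2(G / gamma))        # ~15.5 bits per site

# Minimum energy dissipated per bit gained, E_min = kB * T * ln 2:
kB = 1.380649e-23                  # Boltzmann's constant, J/K
T = 298.0                          # assumed temperature in kelvin
print(kB * T * math.log(2))        # ~2.9e-21 joules per bit

# Theoretical efficiency limit for bistate molecular machines (per the abstract):
print(math.log(2))                 # ln 2 ~ 0.6931
```
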
 

Mercy Shown

Well-Known Member
That's the point. Every new mutation in a population adds information. First case was two alleles, with an information of about 0.3. The second case was three alleles with an information of about 0.48.


I just showed you how the Shannon equation finds information. Perhaps you could give us a testable definition for "biologically meaningful information." It's not what you seem to think it is.
Why do you keep repeating the same premise without engaging what I’ve actually written? Simply restating your conclusion and reposting the same Shannon calculation does not advance the argument. It assumes the very point under dispute—namely, that a statistical increase in allele frequencies is equivalent to an increase in biologically meaningful information. Repetition is not refutation, and citing equations or papers without addressing the conceptual distinction being raised is not an argument.

Since you asked for a testable definition, here it is: biologically meaningful information is information that is functionally specified within a biological context. It is sequence-dependent information that contributes to a coherent biological outcome—such as proper folding of a protein, regulated expression, integration into existing cellular networks, and contribution to organismal fitness. Crucially, this kind of information is constrained, not maximized. Most possible sequences do nothing or actively break existing systems, even though they may increase Shannon entropy.

This distinction is testable. Two sequences can be compared experimentally: one that produces a functional protein and one that does not, despite having equal or greater Shannon entropy. Knockout studies, missense mutations, frame shifts, and regulatory disruptions routinely demonstrate that new mutations often increase statistical variability while destroying biological function. By Shannon’s metric, entropy may rise; by any biological metric, information is lost.

So the issue is not whether Shannon entropy can be calculated—it can. The issue is whether Shannon entropy measures the kind of information that biology depends on. Claude Shannon himself explicitly said it does not address meaning or function. Treating an increase in uncertainty as equivalent to an increase in biological information is a category error, not a scientific result.

Please address what I have said.
You're catching on. So "information" is really a kind of dodge used by creationists. And this is why it is a dodge:

Nature Genetics 28 January 2025

Abstract

In the past decade, our understanding of how new genes originate in diverse organisms has advanced substantially, and more than a dozen molecular mechanisms for generating initial gene structures were identified, in addition to gene duplication. These new genes have been found to integrate into and modify pre-existing gene networks primarily through mutation and selection, revealing new patterns and rules with stable origination rates across various organisms. This progress has challenged the prevailing belief that new proteins evolve from pre-existing genes, as new genes may arise de novo from noncoding DNA sequences in many organisms, with high rates observed in flowering plants. New genes have important roles in phenotypic and functional evolution across diverse biological processes and structures, with detectable fitness effects of sexual conflict genes that can shape species divergence. Such knowledge of new genes can be of translational value in agriculture and medicine.


If you were familiar with information theory or population genetics, you'd know better:

IEEE Eng Med Biol Mag
2006 Jan-Feb;25(1):30-3

Claude Shannon: Biologist

The Founder of Information Theory Used Biology to Formulate the Channel Capacity

Claude Shannon founded information theory in the 1940s. The theory has long been known to be closely related to thermodynamics and physics through the similarity of Shannon's uncertainty measure to the entropy function. Recent work using information theory to understand molecular biology has unearthed a curious fact: Shannon's channel capacity theorem only applies to living organisms and their products, such as communications channels and molecular machines that make choices from several possibilities. Information theory is therefore a theory about biology, and Shannon was a biologist.


This is not really news. When I was in graduate school in the early 70s, this was commonly understood among people studying biological systems.
Labeling “information” as a dodge used by creationists is not an argument; it’s a rhetorical shortcut that avoids engaging the actual claim. The issue on the table is not whether new genes arise, nor whether mutation and selection can modify existing systems. It is whether Shannon entropy, as a statistical measure of uncertainty, captures the kind of functional, context-sensitive constraints that biological systems require. Posting abstracts about de novo gene birth does not answer that question; it assumes it away. Demonstrating that new sequences can arise from noncoding DNA does not, by itself, demonstrate that unguided processes explain the origin of integrated regulation, coordinated expression, and functional specificity at the system level.

The Nature Genetics abstract you cite illustrates this exact point. It describes mechanisms, rates, and integration into pre-existing gene networks—all of which presuppose an already functioning cellular and regulatory environment. The existence of de novo genes does not establish that biological information, in the sense of tightly constrained functional specification, is equivalent to Shannon entropy or allele-frequency variability. In fact, most candidate de novo transcripts are short-lived, nonfunctional, or rapidly purged. The small subset that becomes functional does so under severe selective and contextual constraints, which is precisely what Shannon entropy does not measure.

As for Shannon, appealing to his authority while ignoring his explicit caveats is ironic. Shannon never claimed his theory explained biological meaning, function, or origin. He explicitly bracketed those issues out. That information theory can be applied to biological systems does not mean it captures what makes biological sequences work. Channel capacity tells us how reliably signals can be transmitted once a channel exists—it does not explain the origin of the channel, the code, or the decoding machinery. Calling Shannon a “biologist” does not change the scope limits of his mathematics.

Finally, invoking graduate school experience or implying ignorance on the part of one’s interlocutor is ad hominem, not evidence. The disagreement here is not about whether mutation, selection, or gene origination occur—they do. It is about whether increases in statistical uncertainty are equivalent to increases in functional biological information. Repeating abstracts and equations without addressing that distinction does not resolve it. If anything, it reinforces the concern that the conceptual gap is being papered over rather than examined.

If the discussion is going to move forward, it has to engage the argument being made—not redefine it, dismiss it by association, or replace it with citations that assume the conclusion.

Look, if you are not understanding something I am writing, please ask and I will be glad to explain. This would be preferable to simply ignoring it.
 

The Barbarian

Crabby Old White Guy
Why do you keep repeating the same premise without engaging what I’ve actually written?
I just showed you additional information to help you understand. As you now see, the literature has a great deal of material showing how information is useful in understanding biological evolution. Did you not read any of it?

Since you asked for a testable definition, here it is: biologically meaningful information is information that is functionally specified within a biological context. It is sequence-dependent information that contributes to a coherent biological outcome—such as proper folding of a protein, regulated expression, integration into existing cellular networks, and contribution to organismal fitness.
Two of the papers I cited for you show exactly that. Did you not read them?

This distinction is testable. Two sequences can be compared experimentally: one that produces a functional protein and one that does not, despite having equal or greater Shannon entropy. Knockout studies, missense mutations, frame shifts, and regulatory disruptions routinely demonstrate that new mutations often increase statistical variability while destroying biological function.
And now you're failing to realize how natural selection tends to preserve useful mutations and tends to remove harmful ones. Here's an example of the way it works:

The Evolved β-Galactosidase System of Escherichia coli

January 1984
file:///C:/Users/Pat%20Parson/Downloads/The_Evolved_b-Galactosidase_System_of_Escherichia_.pdf
 

Mercy Shown

Well-Known Member
I just showed you additional information to help you understand. As you now see, the literature has a great deal of material showing how information is useful in understanding biological evolution. Did you not read any of it?
Of course. How else would I have carefully responded to what you posted? And yet you ignored all of it and simply restated your initial claim. I would expect you to demonstrate where and how you think I was wrong.
Two of the papers I cited for you, show exactly that. Did you not read them?
Yes, of course; that is why I replied at length, showing you exactly how you are missing the point. In fact, one of the papers strongly supports what I have been saying. Either you do not understand what I am writing or you did not read it; otherwise you would know that I gave your post every consideration. Yet you have not even addressed what I posted nor refuted it in any real manner.
And now you're failing to realize how natural selection tends to preserve useful mutations and tends to remove harmful ones. Here's an example of the way it works:
I cannot understand how you think that ad hominem references are a legitimate argument. I do not desire to turn this discussion into personal put-downs. What you just posted is not even germane to our discussion. The issue is not whether natural selection preserves useful mutations; it has come down to whether a statistical increase in allele frequencies is equivalent to an increase in biologically meaningful information. Which it clearly is not.

The Evolved β-Galactosidase System of Escherichia coli

January 1984
file:///C:/Users/Pat%20Parson/Downloads/The_Evolved_b-Galactosidase_System_of_Escherichia_.pdf
The evolved β-galactosidase system in E. coli is a classic and well-studied example of adaptive modification of an existing system, not evidence for the origin of novel, tightly specified biological information. In fact, it reinforces the distinction I’ve been making rather than undermining it. The β-galactosidase (lac) system already exists: the enzyme, the regulatory elements, the transport machinery, and the metabolic context are all in place before any “evolution” in this experiment occurs. What is being observed is fine-tuning—changes in regulation, efficiency, or substrate use—within an already functional, information-rich framework.

Crucially, this system does not arise de novo. The experiment presupposes a working gene, a working protein fold, a working transcriptional control system, and a cellular environment capable of interpreting and acting on genetic changes. Selection can certainly optimize or repurpose such systems once they exist. That has never been in dispute. What remains unaddressed is the origin of the integrated, multi-level specification that makes such optimization possible in the first place.

This is why shifting from Shannon entropy calculations to lac operon adaptation doesn’t resolve the argument—it sidesteps it. Demonstrating that mutation and selection can modify an enzyme’s activity or regulatory behavior does not show that they generate the functional information embodied in the system as a whole. In fact, most mutations to β-galactosidase are deleterious, a point well documented in mutational scanning studies. Functional gains occur against a background of overwhelming functional constraint, which is precisely what Shannon-style measures of uncertainty do not capture.

So this example is entirely consistent with the claim that selection preserves and refines existing biological information. It does not demonstrate that random mutation and selection explain the origin of novel, tightly coordinated molecular systems. Conflating adaptation with origination is the recurring error here. Until examples are offered that show the emergence of integrated functional systems without presupposing prior ones, the core issue remains untouched.
 

The Barbarian

Crabby Old White Guy
Of course. How else would I have carefully responded to what you posted?
You addressed none of it, and simply repeated your denials. As you see, information theory is a key part of modern evolutionary theory.
And now you're failing to realize how natural selection tends to preserve useful mutations and tends to remove harmful ones. Here's an example of the way it works:

The Evolved β-Galactosidase System of Escherichia coli

January 1984
file:///C:/Users/Pat%20Parson/Downloads/The_Evolved_b-Galactosidase_System_of_Escherichia_.pdf

The evolved β-galactosidase system in E. coli is a classic and well-studied example of adaptive modification of an existing system, not evidence for the origin of novel, tightly specified biological information.
It's an example of the evolution of a new, irreducibly complex enzyme system. As you learned earlier, evolutionary novelty doesn't come from nothing; it's always a modification of something already existing. If you had bothered to read Darwin's theory or any of the modern revisions of his theory, you would realize this.
What is being observed is fine-tuning—changes in regulation, efficiency, or substrate use—within an already functional, information-rich framework.
If you consider a new, irreducibly complex enzyme system to be "fine-tuning", we've located the problem. It might seem like cheating for evolution to use pre-existing features to produce something new and different. But until you realize that's how evolution works, you're still lost.
Crucially, this system does not arise de novo.
Yes. Evolution always works by modifying things already there. That was one of Darwin's points.
I do not desire to turn this discussion into personal put-downs.
Maybe it would be good to stop obsessing about the Evil Barbarian, and focus on evolutionary theory and the examples you've been shown.
This is why shifting from Shannon entropy calculations
You do realize that population genetics and the genetic information in a population are not the same thing as natural selection. Would it be possible to calculate the new information in this bacterial culture? Sure. But it would be a lot more complex than the simple version I showed you. The lac operon evolved over a series of gradual steps, each making the new enzyme more effective, followed by the evolution of a regulator that allowed the enzyme to be produced only in the presence of the specific substrate. If you like, I could show you how it goes with a somewhat more complex set of changes. Would that help?

So this example is entirely consistent with the claim that selection preserves and refines existing biological information.
And as you have seen, produces new information. In fact, natural selection is not even necessary; any new mutation in a population increases information in that population. Do you remember how?

In fact, most mutations to β-galactosidase are deleterious, a point well documented in mutational scanning studies.
And that, as you saw in Dr Hall's study, is how natural selection works to increase fitness. There were many mutations observed, but only the useful ones increased in the population, improving the function of the new enzyme.
Functional gains occur against a background of overwhelming functional constraint
Not always, but usually. There are mathematical models that accurately measure selective pressure. I believe I showed you how the Hardy-Weinberg equation does that.

So this example is entirely consistent with the claim that selection preserves and refines existing biological information.
Again, you don't understand what biological information is. Each new mutation in the process was an increase in biological information.
Conflating adaptation with origination is the recurring error here.
You continue to conflate evolutionary change with de novo origin. If you learn nothing else here, remember that evolution only modifies existing things in producing new traits.
Until examples are offered that show the emergence of integrated functional systems without presupposing prior ones, the core issue remains untouched.

Evolutionary theory assumes that organisms exist and describes how they change over time. Even Darwin just supposed that God created the first living things. You're struggling with yourself here, fighting an imaginary theory of your own invention. If you knew more about the real one, you'd be more effective against it.

I've suggested to you that you might consider the simplest living things and show us how any step from them to vertebrate animals is impossible. You've declined to do that. I sympathize with the difficulty therein; I've tried to find such a step myself. Can't find one. So far, no one else can, either. But it might be instructive for you to try.
 

Mercy Shown

Well-Known Member
You addressed none of it, and simply repeated your denials. As you see, information theory is a key part of modern evolutionary theory.
And now you're failing to realize how natural selection tends to preserve useful mutations and tends to remove harmful ones. Here's an example of the way it works:

The Evolved β-Galactosidase System of Escherichia coli

January 1984
file:///C:/Users/Pat%20Parson/Downloads/The_Evolved_b-Galactosidase_System_of_Escherichia_.pdf


It's an example of the evolution of a new, irreducibly complex enzyme system. As you learned earlier, evolutionary novelty doesn't come from nothing; it's always a modification of something already existing. If you had bothered to read Darwin's theory or any of the modern revisions of his theory, you would realize this.
The evolved β-galactosidase system in E. coli is a classic and well-studied example of adaptive modification of an existing system, not evidence for the origin of novel, tightly specified biological information. In fact, it reinforces the distinction I’ve been making rather than undermining it. The β-galactosidase (lac) system already exists: the enzyme, the regulatory elements, the transport machinery, and the metabolic context are all in place before any “evolution” in this experiment occurs. What is being observed is fine-tuning—changes in regulation, efficiency, or substrate use—within an already functional, information-rich framework.
If you consider a new, irreducibly complex enzyme system to be "fine-tuning", we've located the problem. It might seem like cheating for evolution to use pre-existing features to produce something new and different. But until you realize that's how evolution works, you're still lost.
This is simply not true. Support your points rather than just making accusations and assuming your conclusion is true. Your response supports what I have been trying to get across to you all along. "It might seem like cheating for evolution to use pre-existing features to produce something new and different." This is called adaptation, not something "new and different".
Yes. Evolution always works by modifying things already there. That was one of Darwin's points.

Maybe it would be good to stop obsessing about the Evil Barbarian, and focus on evolutionary theory and the examples you've been shown.
Productive debate depends on engaging ideas rather than resorting to personal innuendo. I ask that we keep the discussion focused on the arguments.
You do realize that population genetics and the genetic information in a population is not the same thing as natural selection. Would it be possible to calculate the new information in this bacterial culture? Sure. But it would be a lot more complex than the simple version I showed you. The lac operon evolved over a series of gradual steps, each making the new enzyme more effective, followed by the evolution of a regulator that allowed the enzyme to be produced only in the presence of the specific substrate. If you like, I could show you how it goes with a somewhat more complex set of changes. Would that help?
I do realize that population genetics, genetic variation, and natural selection are not identical concepts—but that distinction actually reinforces my original point rather than undermining it. From the beginning, I have not disputed that populations change, that alleles shift in frequency, or that selection can refine and optimize existing biological systems. What I have consistently questioned is whether these processes explain the origin of the tightly specified, integrated arrangements those systems depend on.


Saying that the “new information” in the bacterial culture could be calculated—albeit in a more complex way—does not address the issue at hand. The question is not whether entropy, allele frequencies, or mutational pathways can be quantified. The question is what kind of information is being measured and whether that metric captures functional specification rather than statistical variability. Increasing enzymatic efficiency or regulatory fine-tuning within the lac operon presupposes the existence of the operon itself: a working enzyme scaffold, a regulatory architecture, transcriptional machinery, and metabolic context already in place.


The lac operon example illustrates adaptation, not origination. A series of gradual steps that improve an existing enzyme and later refine its regulation shows how selection preserves and optimizes function once it exists. It does not explain how such coordinated systems arise in the first place, nor does it demonstrate that increases in Shannon-style information or population-level diversity correspond to the emergence of new functional architectures.


So while I appreciate the offer to walk through a more complex series of changes, complexity alone is not the missing piece. My original point remains unchanged: selection can preserve and refine improvements, but it does not explain the origin of the tightly specified arrangements those improvements depend on. Until that distinction is addressed directly, adding more examples of adaptive refinement—even very good ones—doesn’t resolve the core question I’ve been raising all along.
 

The Barbarian

Crabby Old White Guy
The evolved β-galactosidase system in E. coli is a classic and well-studied example of adaptive modification of an existing system, not evidence for the origin of novel, tightly specified biological information.
It is an example of new traits evolved from existing things. That's what evolution is. It is also, as you learned, new information forming from new alleles. "Novel, tightly specified biological information" seems to be a creationist (actually creationist ID) buzzword. But as you also see, the evolution of useful new traits doesn't require whatever you think the buzzword means.

"It might seem like cheating for evolution to use pre-existing features to produce something new and different." This is called adaptation, not something "new and different"
You've confused evolution and adaptation. Here's a quick way to keep it straight:
1. Getting a suntan is adaptation, but not evolution.
2. A neutral mutation in a population is evolution, but not adaptation.
3. The evolution of a new enzyme system is adaptation and evolution.

Easy to remember.

Yes. Evolution always works by modifying things already there. That was one of Darwin's points.

I ask that we keep the discussion focused on the arguments.
Perhaps you could address the issue. I asked you what step from the most primitive prokaryotes to vertebrates is impossible to have evolved. Can you do that? There are a lot of them. If there was one, surely you could think of one.

I do realize that population genetics, genetic variation, and natural selection are not identical concepts—but that distinction actually reinforces my original point rather than undermining it. From the beginning, I have not disputed that populations change, that alleles shift in frequency,
So you acknowledge the fact of evolution. Good. It is, after all, a change in allele frequencies in a population. Now we're down to questioning what traits in living things do you think are impossible to have evolved by such changes? Can you answer that, now?

Saying that the “new information” in the bacterial culture could be calculated—albeit in a more complex way—does not address the issue at hand. The question is not whether entropy, allele frequencies, or mutational pathways can be quantified. The question is what kind of information is being measured and whether that metric captures functional specification rather than statistical variability. Increasing enzymatic efficiency or regulatory fine-tuning within the lac operon presupposes the existence of the operon itself: a working enzyme scaffold, a regulatory architecture, transcriptional machinery, and metabolic context already in place.
Again, as you learned, evolutionary theory shows that novel traits are never produced de novo, but only by modification of existing things. By your own admission, such changes in allele frequencies are the cause of the new enzyme system. This is consistent with observed evolution and confirmation of evolutionary theory. You seem concerned, believing that it is not "novel, tightly specified biological information", something not part of evolutionary theory, not part of observed evolution, and so far, not defined in a testable way. Perhaps that would be something you could do, after which you might explain why evolution happens without it.

The lac operon example illustrates adaptation, not origination.
More precisely, it illustrates evolution and adaptation. See above, if you find this confusing. "Origination" seems to be a rather nebulous thing also. Clearly, the new enzyme didn't exist before this evolutionary change. The regulator did not exist before the new enzyme evolved. If the evolution of a new enzyme system with a regulator is not "origination", it's very clear that evolution does not need it to function.

Rock and a hard place.

So while I appreciate the offer to walk through a more complex series of changes, complexity alone is not the missing piece. My original point remains unchanged: selection can preserve and refine improvements, but it does not explain the origin of the tightly specified arrangements those improvements depend on.
The tightly specified arrangements of the new enzyme were observed to evolve over a period of time by mutation and natural selection. No point in denying the fact.
 
  • Like
Reactions: Job 33:6
Upvote 0

Mercy Shown

Well-Known Member
Jan 18, 2019
1,168
332
65
Boonsboro
✟108,752.00
Country
United States
Gender
Male
Faith
Christian
Marital Status
Married
It is an example of new traits evolved from existing things. That's what evolution is. It is also, as you learned, new information forming from new alleles. "Novel, tightly specified biological information" seems to be a creationist (more precisely, an intelligent-design) buzzword. But as you also see, the evolution of useful new traits doesn't require whatever you think the buzzword means.
Calling “novel, tightly specified biological information” a creationist buzzword does not engage the argument—it avoids it by relabeling it. The phrase is simply descriptive. It refers to biological features that require precise coordination among multiple components to function at all, rather than incremental tuning of an already-working system. Dismissing that category by name does not make it disappear, nor does it show that standard evolutionary mechanisms fully explain it.

Yes, evolution explains how new traits arise from existing structures. That has never been disputed. What is at issue is whether the mechanisms you are pointing to—mutation, allele reshuffling, and selection—account for the origin of systems whose functionality depends on specific arrangements being present together. Examples like the lac operon show refinement and regulation of an existing enzymatic framework, not the emergence of such frameworks from scratch.

Likewise, asserting that “new alleles equal new information” depends entirely on the definition of information being used. Under Shannon entropy, random noise increases information. Under any biological definition tied to function, most new alleles reduce or destroy information. That is not a creationist claim—it is an experimentally observed fact demonstrated by mutational scans, knockouts, and loss-of-function studies across biology.
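
To see why the definition matters, here is a small sketch using the standard base-2 Shannon formula (the allele frequencies are arbitrary examples): entropy rises as the distribution spreads, and the calculation is silent on whether the new allele helps, hurts, or does nothing.

```python
import math

def shannon_entropy(freqs):
    """H = -sum(p * log2(p)) over nonzero frequencies, in bits."""
    return -sum(p * math.log2(p) for p in freqs if p > 0)

# Two alleles at 50/50, then a third allele (perhaps a
# loss-of-function variant) spreading to equal frequency.
print(shannon_entropy([0.5, 0.5]))        # 1.000 bit
print(shannon_entropy([1/3, 1/3, 1/3]))   # ~1.585 bits
# Entropy increased either way; the formula carries no
# information about function or fitness.
```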

Finally, saying that evolution “doesn’t require” this distinction does not resolve whether the distinction is real. The question is not whether useful traits can evolve—they clearly can—but whether statistical variation alone explains the emergence of highly constrained, coordinated biological systems. That question remains open regardless of how often the word “buzzword” is invoked.

If the discussion is going to progress, it needs to address the substance of that distinction rather than dismiss it by association.
You've confused evolution and adaptation. Here's a quick way to keep it straight:
1. Getting a suntan is adaptation, but not evolution.
2. A neutral mutation in a population is evolution, but not adaptation.
3. The evolution of a new enzyme system is adaptation and evolution.

Easy to remember.

Yes. Evolution always works by modifying things already there. That was one of Darwin's points.

This framing oversimplifies the issue and subtly shifts the question. I am not confusing evolution and adaptation; I am pointing out a limitation that your definition quietly assumes. Yes, population genetics defines evolution as changes in allele frequencies over time, and yes, adaptation refers to traits shaped by selection. That distinction is textbook and undisputed. But defining evolution that way does not automatically explain the origin of complex functional systems—it merely describes how variation is tracked once systems already exist.

Your examples illustrate this perfectly. A suntan is adaptation without genetic change. A neutral mutation is evolution without adaptation. Both are changes within an already functioning organism or population. Even your third example—the “evolution of a new enzyme system”—presupposes existing biochemical machinery: folded proteins, metabolic pathways, regulatory logic, and a cellular environment capable of interpreting genetic changes. Calling that “evolution” does not explain how those tightly constrained systems came into being in the first place; it describes how they are modified once present.

Saying that “evolution always works by modifying things already there” is precisely the point I have been making all along. That principle explains refinement, repurposing, and optimization. It does not, by itself, explain the origin of the highly coordinated arrangements that make such modification possible. Darwin recognized this limitation himself, which is why his theory focused on diversification and adaptation, not on the origin of life or the first biological systems.

So the disagreement is not about definitions—it’s about explanatory scope. Redefining evolution as “any change in allele frequencies” does not answer the question of how novel, functionally integrated biological systems arise. It simply moves that question outside the definition. And doing so doesn’t resolve the issue; it leaves it untouched.
Perhaps you could address the issue. I asked you which step from the most primitive prokaryotes to vertebrates would be impossible to have evolved. Can you do that? There are a lot of steps. If one were impossible, surely you could think of it.
I’ve never claimed there is a single step that can be labeled “impossible” in the logical sense, and framing the discussion that way sets up a false dichotomy. Historical sciences rarely work by proving impossibility; they work by weighing explanatory adequacy. The issue is not whether one can imagine a pathway in principle, but whether the proposed mechanisms plausibly account for the coordinated increases in complexity we see across major transitions.

For example, consider the transition from simple prokaryotes to eukaryotic cells. Endosymbiosis explains the origin of mitochondria reasonably well, but it does not explain the origin of the nucleus, the spliceosome, linear chromosomes with telomeres, or the highly regulated trafficking system that separates transcription from translation. These are not single traits but tightly integrated systems that must function together. The proposed pathways are inferential reconstructions, not observed processes, and they rely on multiple coordinated changes occurring without loss of viability.

Or take the origin of multicellularity. It is not merely cells sticking together. It requires regulated cell differentiation, apoptosis, positional signaling, developmental gene networks, and mechanisms to suppress selfish cell behavior (i.e., cancer). Each of these systems is interdependent. Partial versions often confer no advantage or are actively harmful. The question is not “could cells clump?” but how regulatory hierarchies governing development arose in a stepwise, selectable way.

Consider the emergence of nervous systems. Moving from no neurons to excitable membranes, synapses, neurotransmitters, ion channels, and centralized processing is not a single mutation problem. Each component depends on the others for function. A voltage-gated sodium channel without a signaling network is useless; a network without reliable channels does not work. Again, the literature proposes scenarios—but proposals are not demonstrations.

The same applies to the vertebrate body plan. The origin of the neural crest, the endoskeleton, segmented vertebrae, complex eyes, and coordinated embryological patterning involves layers of regulation encoded in gene regulatory networks. Small perturbations to these networks are usually catastrophic, which is why development is so conserved. That conservation itself is evidence of tight constraint, not evolutionary ease.

So no, I’m not claiming there is a magical “brick wall” where evolution stops. I’m pointing out that many transitions—from prokaryote to eukaryote, unicellular to multicellular, non-neural to neural, invertebrate to vertebrate—require the origin of coordinated systems, not just incremental trait variation. These systems are inferred to have evolved, but they are not observed forming, and the pathways remain speculative to varying degrees.

Finally, demanding that critics name a single “impossible” step misses the point. Science does not advance by declaring gaps nonexistent because none are logically impossible. It advances by honestly acknowledging where mechanisms are well-supported and where explanatory depth is still lacking. Pointing that out is not anti-science—it’s pro-clarity.
So you acknowledge the fact of evolution. Good. It is, after all, a change in allele frequencies in a population. Now we're down to the question: which traits in living things do you think could not have evolved by such changes? Can you answer that now?
I am at the point of concluding that evolution requires as much faith as belief in God. As for your question: there is no answer as to how complex biological systems could have evolved in the first place. How life came into being remains a mystery. Perhaps you are inserting God as the first cause in order to solve your conundrum. If so, you ultimately have faith in a creator, albeit one less than omnipotent.
Again, as you learned, evolutionary theory shows that novel traits are never produced de novo, but only by modification of existing things. By your own admission, such changes in allele frequencies are the cause of the new enzyme system. This is consistent with observed evolution and confirmation of evolutionary theory. You seem concerned, believing that it is not "novel, tightly specified biological information", something not part of evolutionary theory, not part of observed evolution, and so far, not defined in a testable way. Perhaps that would be something you could do, after which you might explain why evolution happens without it.
You are now assuming things I have never stated. You need to stop creating and then knocking over strawmen and address what I actually say.
More precisely, it illustrates evolution and adaptation. See above, if you find this confusing. "Origination" seems to be a rather nebulous thing also. Clearly, the new enzyme didn't exist before this evolutionary change. The regulator did not exist before the new enzyme evolved. If the evolution of a new enzyme system with a regulator is not "origination", it's very clear that evolution does not need it to function.

Rock and a hard place.
“Origination” is not nebulous here—it refers to the coming into existence of new, functionally integrated architectures, not incremental tuning of components that already exist. In the lac operon case, the enzyme scaffold, the transcriptional machinery, the regulatory logic, and the metabolic context all predate the changes being described. Saying that a particular regulatory element or enzyme variant “didn’t exist before” in a trivial sense does not address the deeper question of how the coordinated system that makes such variation viable arose in the first place.

This is why the “rock and a hard place” framing doesn’t hold. Evolution does not “need” origination if origination is defined out of scope by definition. But that’s not an empirical result—it’s a conceptual move. By defining evolution as modification of what already exists (as Darwin did), you implicitly bracket off the origin of those systems. That doesn’t mean the origin problem is solved; it means it’s being set aside.
The tightly specified arrangements of the new enzyme were observed to evolve over a period of time by mutation and natural selection. No point in denying the fact.
Stating something as a fact does not make it one; what matters is what was actually observed versus how those observations are interpreted. What was observed in the enzyme example were changes in sequence, activity, and regulation within an already existing biochemical and cellular framework. That is an observation of adaptive modification, not a direct observation of the origin of a tightly specified biological system in the broader sense being discussed.

No one is denying that mutations occurred or that selection acted on them. The disagreement is over what those observations demonstrate. The “tight specification” of the enzyme you refer to is constrained by pre-existing protein folds, catalytic motifs, regulatory machinery, and metabolic pathways. The experiment does not show such systems arising from nonfunctional precursors; it shows them being refined once present. That distinction is not rhetorical—it is empirical and conceptual.

So the issue is not denial of facts, but disagreement over explanatory scope. Observing that selection can improve an enzyme does not establish that mutation and selection alone explain the emergence of integrated molecular architectures. Concluding otherwise requires an additional inference—one that has not been directly observed and should not be asserted as settled simply by repetition.
 
Last edited:
Upvote 0

Job 33:6

Well-Known Member
Jun 15, 2017
9,913
3,394
Hartford, Connecticut
✟387,481.00
Country
United States
Gender
Male
Faith
Christian
Marital Status
Married
Politics
US-Republican
Calling “novel, tightly specified biological information” a creationist buzzword does not engage the argument—it avoids it by relabeling it. The phrase is simply descriptive. It refers to biological features that require precise coordination among multiple components to function at all, rather than incremental tuning of an already-working system. Dismissing that category by name does not make it disappear, nor does it show that standard evolutionary mechanisms fully explain it.

Yes, evolution explains how new traits arise from existing structures. That has never been disputed. What is at issue is whether the mechanisms you are pointing to—mutation, allele reshuffling, and selection—account for the origin of systems whose functionality depends on specific arrangements being present together. Examples like the lac operon show refinement and regulation of an existing enzymatic framework, not the emergence of such frameworks from scratch.

Likewise, asserting that “new alleles equal new information” depends entirely on the definition of information being used. Under Shannon entropy, random noise increases information. Under any biological definition tied to function, most new alleles reduce or destroy information. That is not a creationist claim—it is an experimentally observed fact demonstrated by mutational scans, knockouts, and loss-of-function studies across biology.

Finally, saying that evolution “doesn’t require” this distinction does not resolve whether the distinction is real. The question is not whether useful traits can evolve—they clearly can—but whether statistical variation alone explains the emergence of highly constrained, coordinated biological systems. That question remains open regardless of how often the word “buzzword” is invoked.

If the discussion is going to progress, it needs to address the substance of that distinction rather than dismiss it by association.

This framing oversimplifies the issue and subtly shifts the question. I am not confusing evolution and adaptation; I am pointing out a limitation that your definition quietly assumes. Yes, population genetics defines evolution as changes in allele frequencies over time, and yes, adaptation refers to traits shaped by selection. That distinction is textbook and undisputed. But defining evolution that way does not automatically explain the origin of complex functional systems—it merely describes how variation is tracked once systems already exist.

Your examples illustrate this perfectly. A suntan is adaptation without genetic change. A neutral mutation is evolution without adaptation. Both are changes within an already functioning organism or population. Even your third example—the “evolution of a new enzyme system”—presupposes existing biochemical machinery: folded proteins, metabolic pathways, regulatory logic, and a cellular environment capable of interpreting genetic changes. Calling that “evolution” does not explain how those tightly constrained systems came into being in the first place; it describes how they are modified once present.

Saying that “evolution always works by modifying things already there” is precisely the point I have been making all along. That principle explains refinement, repurposing, and optimization. It does not, by itself, explain the origin of the highly coordinated arrangements that make such modification possible. Darwin recognized this limitation himself, which is why his theory focused on diversification and adaptation, not on the origin of life or the first biological systems.

So the disagreement is not about definitions—it’s about explanatory scope. Redefining evolution as “any change in allele frequencies” does not answer the question of how novel, functionally integrated biological systems arise. It simply moves that question outside the definition. And doing so doesn’t resolve the issue; it leaves it untouched.

I’ve never claimed there is a single step that can be labeled “impossible” in the logical sense, and framing the discussion that way sets up a false dichotomy. Historical sciences rarely work by proving impossibility; they work by weighing explanatory adequacy. The issue is not whether one can imagine a pathway in principle, but whether the proposed mechanisms plausibly account for the coordinated increases in complexity we see across major transitions.

For example, consider the transition from simple prokaryotes to eukaryotic cells. Endosymbiosis explains the origin of mitochondria reasonably well, but it does not explain the origin of the nucleus, the spliceosome, linear chromosomes with telomeres, or the highly regulated trafficking system that separates transcription from translation. These are not single traits but tightly integrated systems that must function together. The proposed pathways are inferential reconstructions, not observed processes, and they rely on multiple coordinated changes occurring without loss of viability.

Or take the origin of multicellularity. It is not merely cells sticking together. It requires regulated cell differentiation, apoptosis, positional signaling, developmental gene networks, and mechanisms to suppress selfish cell behavior (i.e., cancer). Each of these systems is interdependent. Partial versions often confer no advantage or are actively harmful. The question is not “could cells clump?” but how regulatory hierarchies governing development arose in a stepwise, selectable way.

Consider the emergence of nervous systems. Moving from no neurons to excitable membranes, synapses, neurotransmitters, ion channels, and centralized processing is not a single mutation problem. Each component depends on the others for function. A voltage-gated sodium channel without a signaling network is useless; a network without reliable channels does not work. Again, the literature proposes scenarios—but proposals are not demonstrations.

The same applies to the vertebrate body plan. The origin of the neural crest, the endoskeleton, segmented vertebrae, complex eyes, and coordinated embryological patterning involves layers of regulation encoded in gene regulatory networks. Small perturbations to these networks are usually catastrophic, which is why development is so conserved. That conservation itself is evidence of tight constraint, not evolutionary ease.

So no, I’m not claiming there is a magical “brick wall” where evolution stops. I’m pointing out that many transitions—from prokaryote to eukaryote, unicellular to multicellular, non-neural to neural, invertebrate to vertebrate—require the origin of coordinated systems, not just incremental trait variation. These systems are inferred to have evolved, but they are not observed forming, and the pathways remain speculative to varying degrees.

Finally, demanding that critics name a single “impossible” step misses the point. Science does not advance by declaring gaps nonexistent because none are logically impossible. It advances by honestly acknowledging where mechanisms are well-supported and where explanatory depth is still lacking. Pointing that out is not anti-science—it’s pro-clarity.

I am at the point of concluding that evolution requires as much faith as belief in God. As for your question: there is no answer as to how complex biological systems could have evolved in the first place. How life came into being remains a mystery. Perhaps you are inserting God as the first cause in order to solve your conundrum. If so, you ultimately have faith in a creator, albeit one less than omnipotent.

You are now assuming things I have never stated. You need to stop creating and then knocking over strawmen and address what I actually say.

“Origination” is not nebulous here—it refers to the coming into existence of new, functionally integrated architectures, not incremental tuning of components that already exist. In the lac operon case, the enzyme scaffold, the transcriptional machinery, the regulatory logic, and the metabolic context all predate the changes being described. Saying that a particular regulatory element or enzyme variant “didn’t exist before” in a trivial sense does not address the deeper question of how the coordinated system that makes such variation viable arose in the first place.

This is why the “rock and a hard place” framing doesn’t hold. Evolution does not “need” origination if origination is defined out of scope by definition. But that’s not an empirical result—it’s a conceptual move. By defining evolution as modification of what already exists (as Darwin did), you implicitly bracket off the origin of those systems. That doesn’t mean the origin problem is solved; it means it’s being set aside.

Stating something as a fact does not make it one; what matters is what was actually observed versus how those observations are interpreted. What was observed in the enzyme example were changes in sequence, activity, and regulation within an already existing biochemical and cellular framework. That is an observation of adaptive modification, not a direct observation of the origin of a tightly specified biological system in the broader sense being discussed.

No one is denying that mutations occurred or that selection acted on them. The disagreement is over what those observations demonstrate. The “tight specification” of the enzyme you refer to is constrained by pre-existing protein folds, catalytic motifs, regulatory machinery, and metabolic pathways. The experiment does not show such systems arising from nonfunctional precursors; it shows them being refined once present. That distinction is not rhetorical—it is empirical and conceptual.

So the issue is not denial of facts, but disagreement over explanatory scope. Observing that selection can improve an enzyme does not establish that mutation and selection alone explain the emergence of integrated molecular architectures. Concluding otherwise requires an additional inference—one that has not been directly observed and should not be asserted as settled simply by repetition.

Your argument rests on a mistaken assumption about how biological complexity must arise. By treating “novel, tightly specified biological information” as a distinct empirical category, you implicitly assume that coordinated systems must appear fully formed to be functional. That assumption is not supported by biology. Complex integration routinely emerges through gradual, selectable processes such as gene duplication, co-option, regulatory evolution, and exaptation, where intermediate stages perform different functions before becoming tightly integrated. Examples like the lac operon are not offered as explanations for the origin of life, but as demonstrations that functional coordination and regulatory logic can evolve incrementally. Dismissing these cases because they presuppose existing biology imposes an unrealistic explanatory standard that no historical science could satisfy.
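
To illustrate what "gradual, selectable processes" means in the abstract, here is the classic Dawkins-style cumulative-selection toy (the target string, mutation rate, and brood size are arbitrary; this is a cartoon of stepwise selection, not a model of real biochemistry):

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(s):
    """Count positions matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.04):
    """Copy s, changing each character with probability `rate`."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

random.seed(0)
current = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while score(current) < len(TARGET):
    generation += 1
    # Selection: keep the best of 100 mutant offspring.
    current = max((mutate(current) for _ in range(100)), key=score)
print(generation, current)
```

Random search over the same space would take astronomically long; cumulative selection finds the target in dozens of generations. Note the toy's built-in assumption, which is the very point under dispute in this thread: every partial match is rewarded, i.e., intermediates are always selectable.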

You also conflate unresolved detail with explanatory failure. Pointing out that some major evolutionary transitions remain under active investigation does not mean that evolution “has no answer” for them. Claims about information loss depend on redefining “information” as present-day function, while ignoring mechanisms, especially gene duplication and divergence, that increase genetic material and enable novelty over time.

Also, I'm pretty sure that Barbarian has already said this to you, but your argument mistakenly collapses two distinct explanatory domains into one. Evolutionary theory does not attempt to explain the origin of life or the first biochemical machinery; it explains how complexity arises once self-replicating systems already exist. When you say that evolutionary explanations “presuppose” folded proteins, regulatory logic, and cellular context, that is not a hidden weakness; it is the explicit scope of the theory. The origin of those first systems belongs to abiogenesis research, not evolutionary biology. Conflating these questions makes it seem as though evolution is avoiding an explanatory burden, when in fact it is being criticized for not answering a question it never claimed to address.

Within its proper domain, evolution does directly address the emergence of complex, coordinated systems. New biochemical pathways, regulatory networks, and integrated functions arise through well documented mechanisms such as gene duplication and divergence, co-option of existing components, changes in gene regulation, and exaptation, where intermediate forms have different but selectable functions. These processes do not merely “refine what already exists”; they generate new architectures over time. Pointing out that such systems evolved from earlier ones is not an evasion but the core explanatory claim, supported by comparative genomics, developmental biology, and experimental evolution. Treating the presence of precursors as a failure to explain origin mischaracterizes how historical sciences account for novelty and complexity.
 
Last edited:
Upvote 0

Mercy Shown

Well-Known Member
Jan 18, 2019
1,168
332
65
Boonsboro
✟108,752.00
Country
United States
Gender
Male
Faith
Christian
Marital Status
Married
Your argument rests on a mistaken assumption about how biological complexity must arise. By treating “novel, tightly specified biological information” as a distinct empirical category, you implicitly assume that coordinated systems must appear fully formed to be functional. That assumption is not supported by biology. Complex integration routinely emerges through gradual, selectable processes such as gene duplication, co-option, regulatory evolution, and exaptation, where intermediate stages perform different functions before becoming tightly integrated. Examples like the lac operon are not offered as explanations for the origin of life, but as demonstrations that functional coordination and regulatory logic can evolve incrementally. Dismissing these cases because they presuppose existing biology imposes an unrealistic explanatory standard that no historical science could satisfy.
Your response rests on a misunderstanding of what is being claimed. The argument does not assume that coordinated systems must appear “fully formed” or instantaneously functional. That is a common strawman. The claim is narrower and more precise: incremental evolutionary pathways must preserve function at each step, and the space of viable intermediates is far more constrained than is usually acknowledged. Pointing to mechanisms like duplication, co-option, and exaptation names possible routes, but it does not demonstrate that those routes plausibly traverse the functional gaps involved in major transitions.

Saying that intermediate stages “performed different functions” is not an explanation by itself. For a pathway to be evolutionarily viable, each intermediate must be selectable in the actual historical and cellular context, not merely imaginable in hindsight. Many systems require multiple components to be present simultaneously before any selectable function emerges. In such cases, partial systems are neutral or deleterious, not stepping stones. The question is not whether gradualism is logically possible, but whether it is empirically supported in cases where integration and coordination are tightly coupled.

Examples like the lac operon demonstrate regulatory refinement within an already integrated metabolic framework. They do not show the origin of regulatory logic itself, nor the emergence of coding, decoding, and coordinated expression machinery. Saying that such examples “presuppose existing biology” is not imposing an unrealistic standard—it is identifying the boundary of what the example actually explains. Historical sciences routinely distinguish between explaining variation within a system and explaining the origin of the system. That distinction is standard, not exceptional.

Finally, treating “biologically meaningful information” as a real category is not an ad hoc invention; it reflects an empirical reality recognized throughout molecular biology. Functional sequences occupy a tiny fraction of possible sequence space, and biological systems are defined by constraint, not by entropy maximization. Demonstrating that complexity can be modified once it exists does not show how the initial functional specification arises. Until examples are provided that directly address that origin question—rather than redescribing adaptive elaboration—the original concern remains unaddressed.
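
For scale, here is back-of-the-envelope arithmetic on sequence space (the protein length is an arbitrary example, and the functional fraction is a purely hypothetical placeholder; published estimates of that fraction vary by many orders of magnitude and are themselves disputed):

```python
import math

L = 150                  # hypothetical protein length, in residues
space = 20 ** L          # possible amino-acid sequences of that length
print(f"sequences of length {L}: ~10^{math.log10(space):.0f}")

# Purely hypothetical functional fraction -- a placeholder, not a measurement.
functional_fraction = 1e-40
functional = space * functional_fraction
print(f"functional sequences under that assumption: ~10^{math.log10(functional):.0f}")
```

The arithmetic only dramatizes the question; the disputed quantities are the actual size of that fraction and how connected the functional regions of sequence space are, which no back-of-the-envelope calculation can settle.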

If the discussion is going to progress, it needs to engage this distinction rather than dismiss it as an unreasonable demand.
You also conflate unresolved detail with explanatory failure. Pointing out that some major evolutionary transitions remain under active investigation does not mean that evolution “has no answer” for them. Claims about information loss depend on redefining “information” as present-day function, while ignoring mechanisms, especially gene duplication and divergence, that increase genetic material and enable novelty over time.
This response blurs an important distinction between open research questions and explanatory gaps, and then treats the latter as if they were merely the former. Acknowledging unresolved details is not the same as declaring explanatory failure—but neither does ongoing investigation automatically constitute an explanation. When proposed mechanisms remain inferential, historically unobserved, or dependent on post-hoc reconstruction, it is legitimate to ask whether they adequately account for the phenomena in question. Pointing this out is not a confusion; it is precisely how explanatory sufficiency is assessed in historical sciences.

The claim that arguments about information loss “redefine information as present-day function” mischaracterizes the position. The issue is not equating information with current utility, but recognizing that biological information is functionally constrained. Sequences must do specific things—fold, bind, regulate, coordinate—to be selectable at all. Shannon entropy and raw sequence length measure variability and capacity, not the emergence of functional specification. A mutation can increase sequence diversity or gene count while degrading or eliminating function, which is why loss-of-function mutations are both common and well documented.
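
The distinction drawn here can be made concrete with a toy model (the "function" test, matching a fixed motif, is an invented stand-in for real biochemical constraints):

```python
import math
import random
from collections import Counter

MOTIF = "GATTACA"  # invented stand-in for a functional requirement

def functional(seq):
    """Toy criterion: the sequence 'works' only if it contains the motif."""
    return MOTIF in seq

def entropy(seqs):
    """Shannon entropy (bits) of the distribution of sequence variants."""
    counts = Counter(seqs)
    n = len(seqs)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def mutate(seq):
    """One random point substitution."""
    i = random.randrange(len(seq))
    return seq[:i] + random.choice("ACGT") + seq[i + 1:]

random.seed(2)
population = ["CC" + MOTIF + "CC"] * 100   # uniform, fully functional
print(entropy(population), sum(map(functional, population)))  # 0.0 bits, 100

population = [mutate(s) for s in population]
print(entropy(population), sum(map(functional, population)))
```

After one round of mutation, variant diversity (and hence Shannon entropy) goes up while the number of sequences passing the functional test goes down: more "information" in Shannon's sense, less in the functional sense.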

Gene duplication and divergence are frequently cited as solutions, but duplication alone does not explain the origin of new function—it merely copies existing information. For divergence to produce novelty, a duplicated gene must traverse a narrow path where accumulating mutations neither destroy function nor remain selectively neutral indefinitely. Empirically, most duplicated genes degrade into pseudogenes. A small minority acquire new roles, but this outcome presupposes regulatory integration, expression control, and functional compatibility with existing systems—exactly the coordination under discussion.
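
A similar sketch for the duplication argument (the per-step outcome probabilities below are invented purely for illustration; the real rates of pseudogenization versus neofunctionalization are the empirical question at issue):

```python
import random

random.seed(3)
N_DUPLICATES = 10_000
P_DISABLING = 0.30      # invented: a mutation step knocks the redundant copy out
P_NEW_FUNCTION = 0.001  # invented: a mutation step confers a new selectable role
STEPS = 50              # mutation-accumulation steps while the copy is redundant

pseudogenes = new_function = unchanged = 0
for _ in range(N_DUPLICATES):
    for _ in range(STEPS):
        r = random.random()
        if r < P_DISABLING:
            pseudogenes += 1
            break
        if r < P_DISABLING + P_NEW_FUNCTION:
            new_function += 1
            break
    else:
        unchanged += 1

print(f"pseudogenes: {pseudogenes}  new function: {new_function}  unchanged: {unchanged}")
```

Under these made-up numbers, the qualitative pattern described above falls out directly: most copies degrade and a small minority acquire new roles. Whether real genomes make the successful path frequent enough is what the two sides are actually disputing.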

So the concern is not that evolution has “no answer,” but that its standard answers often shift the burden of explanation forward without resolving the core issue of how tightly specified, integrated biological functions arise in the first place. Calling attention to that is not conflating categories; it is asking whether the proposed mechanisms truly explain what they are claimed to explain.
Also, I'm pretty sure that Barbarian has already said this to you, but your argument mistakenly collapses two distinct explanatory domains into one. Evolutionary theory does not attempt to explain the origin of life or the first biochemical machinery; it explains how complexity arises once self-replicating systems already exist. When you say that evolutionary explanations “presuppose” folded proteins, regulatory logic, and cellular context, that is not a hidden weakness; it is the explicit scope of the theory. The origin of those first systems belongs to abiogenesis research, not evolutionary biology. Conflating these questions makes it seem as though evolution is avoiding an explanatory burden, when in fact it is being criticized for not answering a question it never claimed to address.
I am not collapsing explanatory domains; I am questioning whether the boundaries between them are being used consistently or conveniently. It is true that evolutionary theory, strictly defined, does not attempt to explain the origin of life. But the discussion here has not been limited to abiogenesis versus Darwinian evolution—it has focused on major transitions within life, where new levels of coordination, regulation, and functional integration arise. Those transitions occur well after self-replication exists and squarely within the scope of evolutionary explanation.

When I point out that evolutionary accounts presuppose folded proteins, regulatory logic, and cellular context, I am not accusing the theory of hiding a weakness—I am asking whether the mechanisms invoked are sufficient to explain the emergence of new integrated systems rather than merely their modification. The issue is not the very first cell, but how systems that require multiple coordinated components become established when intermediate states lack selectable function.

Appealing to abiogenesis does not resolve this concern, because the same explanatory pattern appears repeatedly: mechanisms are cited that explain refinement and diversification, while the origin of coordination is deferred to earlier stages or adjacent fields. At some point, that deferral becomes circular rather than clarifying. Saying “that’s outside the scope” is legitimate only if the remaining explanation is complete within its own domain.

So the critique is not that evolution fails to answer a question it never claimed to address. It is that, for certain transitions, the proposed evolutionary mechanisms may not yet adequately explain the rise of tightly constrained, interdependent systems—even after self-replication, metabolism, and selection are already in place. Distinguishing domains does not eliminate that question; it simply clarifies where it must be answered.
Within its proper domain, evolution does directly address the emergence of complex, coordinated systems. New biochemical pathways, regulatory networks, and integrated functions arise through well documented mechanisms such as gene duplication and divergence, co-option of existing components, changes in gene regulation, and exaptation, where intermediate forms have different but selectable functions. These processes do not merely “refine what already exists”; they generate new architectures over time. Pointing out that such systems evolved from earlier ones is not an evasion but the core explanatory claim, supported by comparative genomics, developmental biology, and experimental evolution. Treating the presence of precursors as a failure to explain origin mischaracterizes how historical sciences account for novelty and complexity.
This response correctly describes the categories of mechanisms invoked by evolutionary biology, but it overstates what they actually demonstrate. Listing gene duplication, divergence, co-option, regulatory change, and exaptation names plausible routes of modification; it does not, by itself, show that those routes successfully generate new, tightly coordinated architectures in a stepwise, selectable manner. The dispute is not over whether such mechanisms exist, but whether they are sufficient, in practice, to traverse the functional constraints involved in complex integration.

Saying that intermediate forms have “different but selectable functions” is an assertion that requires case-specific support. For many systems—especially those involving multi-component regulation, feedback control, and precise molecular interaction—partial systems do not perform alternative functions at all. They are neutral or deleterious. In such cases, the existence of precursors does not automatically constitute an explanatory pathway. Demonstrating that a system evolved from earlier ones is not the same as demonstrating that each transitional step was selectable in its historical context rather than reconstructed retrospectively.

Moreover, the claim that these processes “generate new architectures” often rests on cumulative descriptions rather than direct demonstrations of origin. Comparative genomics and developmental biology are powerful tools for identifying homology and modification, but they infer pathways from end states. That is a legitimate historical method, yet it does not eliminate the question of how rare functional configurations are reached in sequence space, nor does it quantify how often proposed mechanisms actually succeed versus fail (as in the high rate of pseudogenization following duplication).

Finally, pointing out that systems evolved from precursors is not being treated as a failure per se; it is being treated as an incomplete explanation when the origin of coordination is assumed rather than shown. Historical sciences explain novelty not merely by tracing ancestry, but by accounting for the causal adequacy of the mechanisms invoked. Asking whether those mechanisms plausibly generate new integrated function is not mischaracterizing evolution—it is testing the strength of its explanatory claims.

In short, adaptation and elaboration are well supported. The open question remains whether the same processes adequately explain the emergence of tightly specified, interdependent biological systems, rather than presupposing them at each stage.
 
Upvote 0

The Barbarian

Crabby Old White Guy
Apr 3, 2003
30,997
13,976
78
✟465,941.00
Country
United States
Gender
Male
Faith
Catholic
Marital Status
Married
Politics
US-Libertarian
Your argument rests on a mistaken assumption about how biological complexity must arise. By treating “novel, tightly specified biological information” as a distinct empirical category, you implicitly assume that coordinated systems must appear fully formed to be functional. That assumption is not supported by biology. Complex integration routinely emerges through gradual, selectable processes such as gene duplication, co-option, regulatory evolution, and exaptation, where intermediate stages perform different functions before becoming tightly integrated. Examples like the lac operon are not offered as explanations for the origin of life, but as demonstrations that functional coordination and regulatory logic can evolve incrementally. Dismissing these cases because they presuppose existing biology imposes an unrealistic explanatory standard that no historical science could satisfy.
Today's winner.
 
Upvote 0

Job 33:6

Well-Known Member
Jun 15, 2017
9,913
3,394
Hartford, Connecticut
✟387,481.00
Country
United States
Gender
Male
Faith
Christian
Marital Status
Married
Politics
US-Republican
Your response rests on a misunderstanding of what is being claimed. The argument does not assume that coordinated systems must appear “fully formed” or instantaneously functional. That is a common strawman. The claim is narrower and more precise: incremental evolutionary pathways must preserve function at each step, and the space of viable intermediates is far more constrained than is usually acknowledged. Pointing to mechanisms like duplication, co-option, and exaptation names possible routes, but it does not demonstrate that those routes plausibly traverse the functional gaps involved in major transitions.

Saying that intermediate stages “performed different functions” is not an explanation by itself. For a pathway to be evolutionarily viable, each intermediate must be selectable in the actual historical and cellular context, not merely imaginable in hindsight. Many systems require multiple components to be present simultaneously before any selectable function emerges. In such cases, partial systems are neutral or deleterious, not stepping stones. The question is not whether gradualism is logically possible, but whether it is empirically supported in cases where integration and coordination are tightly coupled.

Examples like the lac operon demonstrate regulatory refinement within an already integrated metabolic framework. They do not show the origin of regulatory logic itself, nor the emergence of coding, decoding, and coordinated expression machinery. Saying that such examples “presuppose existing biology” is not imposing an unrealistic standard—it is identifying the boundary of what the example actually explains. Historical sciences routinely distinguish between explaining variation within a system and explaining the origin of the system. That distinction is standard, not exceptional.

Finally, treating “biologically meaningful information” as a real category is not an ad hoc invention; it reflects an empirical reality recognized throughout molecular biology. Functional sequences occupy a tiny fraction of possible sequence space, and biological systems are defined by constraint, not by entropy maximization. Demonstrating that complexity can be modified once it exists does not show how the initial functional specification arises. Until examples are provided that directly address that origin question—rather than redescribing adaptive elaboration—the original concern remains unaddressed.

If the discussion is going to progress, it needs to engage this distinction rather than dismiss it as an unreasonable demand.

This response blurs an important distinction between open research questions and explanatory gaps, and then treats the latter as if they were merely the former. Acknowledging unresolved details is not the same as declaring explanatory failure—but neither does ongoing investigation automatically constitute an explanation. When proposed mechanisms remain inferential, historically unobserved, or dependent on post-hoc reconstruction, it is legitimate to ask whether they adequately account for the phenomena in question. Pointing this out is not a confusion; it is precisely how explanatory sufficiency is assessed in historical sciences.

The claim that arguments about information loss “redefine information as present-day function” mischaracterizes the position. The issue is not equating information with current utility, but recognizing that biological information is functionally constrained. Sequences must do specific things—fold, bind, regulate, coordinate—to be selectable at all. Shannon entropy and raw sequence length measure variability and capacity, not the emergence of functional specification. A mutation can increase sequence diversity or gene count while degrading or eliminating function, which is why loss-of-function mutations are both common and well documented.

Gene duplication and divergence are frequently cited as solutions, but duplication alone does not explain the origin of new function—it merely copies existing information. For divergence to produce novelty, a duplicated gene must traverse a narrow path where accumulating mutations neither destroy function nor remain selectively neutral indefinitely. Empirically, most duplicated genes degrade into pseudogenes. A small minority acquire new roles, but this outcome presupposes regulatory integration, expression control, and functional compatibility with existing systems—exactly the coordination under discussion.

So the concern is not that evolution has “no answer,” but that its standard answers often shift the burden of explanation forward without resolving the core issue of how tightly specified, integrated biological functions arise in the first place. Calling attention to that is not conflating categories; it is asking whether the proposed mechanisms truly explain what they are claimed to explain.

I am not collapsing explanatory domains; I am questioning whether the boundaries between them are being used consistently or conveniently. It is true that evolutionary theory, strictly defined, does not attempt to explain the origin of life. But the discussion here has not been limited to abiogenesis versus Darwinian evolution—it has focused on major transitions within life, where new levels of coordination, regulation, and functional integration arise. Those transitions occur well after self-replication exists and squarely within the scope of evolutionary explanation.

When I point out that evolutionary accounts presuppose folded proteins, regulatory logic, and cellular context, I am not accusing the theory of hiding a weakness—I am asking whether the mechanisms invoked are sufficient to explain the emergence of new integrated systems rather than merely their modification. The issue is not the very first cell, but how systems that require multiple coordinated components become established when intermediate states lack selectable function.

Appealing to abiogenesis does not resolve this concern, because the same explanatory pattern appears repeatedly: mechanisms are cited that explain refinement and diversification, while the origin of coordination is deferred to earlier stages or adjacent fields. At some point, that deferral becomes circular rather than clarifying. Saying “that’s outside the scope” is legitimate only if the remaining explanation is complete within its own domain.

So the critique is not that evolution fails to answer a question it never claimed to address. It is that, for certain transitions, the proposed evolutionary mechanisms may not yet adequately explain the rise of tightly constrained, interdependent systems—even after self-replication, metabolism, and selection are already in place. Distinguishing domains does not eliminate that question; it simply clarifies where it must be answered.

This response correctly describes the categories of mechanisms invoked by evolutionary biology, but it overstates what they actually demonstrate. Listing gene duplication, divergence, co-option, regulatory change, and exaptation names plausible routes of modification; it does not, by itself, show that those routes successfully generate new, tightly coordinated architectures in a stepwise, selectable manner. The dispute is not over whether such mechanisms exist, but whether they are sufficient, in practice, to traverse the functional constraints involved in complex integration.

Saying that intermediate forms have “different but selectable functions” is an assertion that requires case-specific support. For many systems—especially those involving multi-component regulation, feedback control, and precise molecular interaction—partial systems do not perform alternative functions at all. They are neutral or deleterious. In such cases, the existence of precursors does not automatically constitute an explanatory pathway. Demonstrating that a system evolved from earlier ones is not the same as demonstrating that each transitional step was selectable in its historical context rather than reconstructed retrospectively.

Moreover, the claim that these processes “generate new architectures” often rests on cumulative descriptions rather than direct demonstrations of origin. Comparative genomics and developmental biology are powerful tools for identifying homology and modification, but they infer pathways from end states. That is a legitimate historical method, yet it does not eliminate the question of how rare functional configurations are reached in sequence space, nor does it quantify how often proposed mechanisms actually succeed versus fail (as in the high rate of pseudogenization following duplication).

Finally, pointing out that systems evolved from precursors is not being treated as a failure per se; it is being treated as an incomplete explanation when the origin of coordination is assumed rather than shown. Historical sciences explain novelty not merely by tracing ancestry, but by accounting for the causal adequacy of the mechanisms invoked. Asking whether those mechanisms plausibly generate new integrated function is not mischaracterizing evolution—it is testing the strength of its explanatory claims.

In short, adaptation and elaboration are well supported. The open question remains whether the same processes adequately explain the emergence of tightly specified, interdependent biological systems, rather than presupposing them at each stage.

You write that the proposed mechanisms “remain inferential, historically unobserved, or dependent on post-hoc reconstruction.” This is precisely how historical sciences work: we infer past processes from the evidence available today.

You acknowledge that mechanisms like gene duplication, co-option, exaptation, and regulatory change exist and are accepted components of evolutionary theory, yet you simultaneously claim they may be insufficient to explain the emergence of tightly integrated systems.

This is a contradiction: you accept the mechanisms as real and operative, but treat the inferential evidence that they plausibly generate coordinated complexity as inadequate. In other words, you agree that evolution modifies and repurposes existing systems (exactly what these mechanisms do), yet argue that evolution “fails” because a fully documented, stepwise, historically observed sequence for every intermediate is not available.

Evolutionary biology does not require direct observation of every intermediate to demonstrate explanatory adequacy. Instead, it relies on comparative genomics, developmental biology, and experimental evolution to show that these mechanisms can and do produce complex, integrated systems over time. Accepting the mechanisms but rejecting the inferential evidence for their cumulative effects is not a neutral critique; it is a demand for an impossible standard of proof, and it contradicts your acknowledgment that the mechanisms themselves are real and functional.
 
  • Winner
Reactions: The Barbarian
Upvote 0

The Barbarian

Crabby Old White Guy
Apr 3, 2003
30,997
13,976
78
✟465,941.00
Country
United States
Gender
Male
Faith
Catholic
Marital Status
Married
Politics
US-Libertarian
Your response rests on a misunderstanding of what is being claimed. The argument does not assume that coordinated systems must appear “fully formed” or instantaneously functional. That is a common strawman. The claim is narrower and more precise: incremental evolutionary pathways must preserve function at each step, and the space of viable intermediates is far more constrained than is usually acknowledged. Pointing to mechanisms like duplication, co-option, and exaptation names possible routes, but it does not demonstrate that those routes plausibly traverse the functional gaps involved in major transitions.
The observed evolution of a new enzyme system demonstrates exactly such a process. Reality beats anyone's guesses.
 
  • Like
Reactions: Job 33:6
Upvote 0

Job 33:6

Well-Known Member
Jun 15, 2017
9,913
3,394
Hartford, Connecticut
✟387,481.00
Country
United States
Gender
Male
Faith
Christian
Marital Status
Married
Politics
US-Republican
Your response rests on a misunderstanding of what is being claimed. The argument does not assume that coordinated systems must appear “fully formed” or instantaneously functional. That is a common strawman. The claim is narrower and more precise: incremental evolutionary pathways must preserve function at each step, and the space of viable intermediates is far more constrained than is usually acknowledged. Pointing to mechanisms like duplication, co-option, and exaptation names possible routes, but it does not demonstrate that those routes plausibly traverse the functional gaps involved in major transitions.

Saying that intermediate stages “performed different functions” is not an explanation by itself. For a pathway to be evolutionarily viable, each intermediate must be selectable in the actual historical and cellular context, not merely imaginable in hindsight. Many systems require multiple components to be present simultaneously before any selectable function emerges. In such cases, partial systems are neutral or deleterious, not stepping stones. The question is not whether gradualism is logically possible, but whether it is empirically supported in cases where integration and coordination are tightly coupled.

Examples like the lac operon demonstrate regulatory refinement within an already integrated metabolic framework. They do not show the origin of regulatory logic itself, nor the emergence of coding, decoding, and coordinated expression machinery. Saying that such examples “presuppose existing biology” is not imposing an unrealistic standard—it is identifying the boundary of what the example actually explains. Historical sciences routinely distinguish between explaining variation within a system and explaining the origin of the system. That distinction is standard, not exceptional.

Finally, treating “biologically meaningful information” as a real category is not an ad hoc invention; it reflects an empirical reality recognized throughout molecular biology. Functional sequences occupy a tiny fraction of possible sequence space, and biological systems are defined by constraint, not by entropy maximization. Demonstrating that complexity can be modified once it exists does not show how the initial functional specification arises. Until examples are provided that directly address that origin question—rather than redescribing adaptive elaboration—the original concern remains unaddressed.

If the discussion is going to progress, it needs to engage this distinction rather than dismiss it as an unreasonable demand.

This response blurs an important distinction between open research questions and explanatory gaps, and then treats the latter as if they were merely the former. Acknowledging unresolved details is not the same as declaring explanatory failure—but neither does ongoing investigation automatically constitute an explanation. When proposed mechanisms remain inferential, historically unobserved, or dependent on post-hoc reconstruction, it is legitimate to ask whether they adequately account for the phenomena in question. Pointing this out is not a confusion; it is precisely how explanatory sufficiency is assessed in historical sciences.

The claim that arguments about information loss “redefine information as present-day function” mischaracterizes the position. The issue is not equating information with current utility, but recognizing that biological information is functionally constrained. Sequences must do specific things—fold, bind, regulate, coordinate—to be selectable at all. Shannon entropy and raw sequence length measure variability and capacity, not the emergence of functional specification. A mutation can increase sequence diversity or gene count while degrading or eliminating function, which is why loss-of-function mutations are both common and well documented.

Gene duplication and divergence are frequently cited as solutions, but duplication alone does not explain the origin of new function—it merely copies existing information. For divergence to produce novelty, a duplicated gene must traverse a narrow path where accumulating mutations neither destroy function nor remain selectively neutral indefinitely. Empirically, most duplicated genes degrade into pseudogenes. A small minority acquire new roles, but this outcome presupposes regulatory integration, expression control, and functional compatibility with existing systems—exactly the coordination under discussion.

So the concern is not that evolution has “no answer,” but that its standard answers often shift the burden of explanation forward without resolving the core issue of how tightly specified, integrated biological functions arise in the first place. Calling attention to that is not conflating categories; it is asking whether the proposed mechanisms truly explain what they are claimed to explain.

I am not collapsing explanatory domains; I am questioning whether the boundaries between them are being used consistently or conveniently. It is true that evolutionary theory, strictly defined, does not attempt to explain the origin of life. But the discussion here has not been limited to abiogenesis versus Darwinian evolution—it has focused on major transitions within life, where new levels of coordination, regulation, and functional integration arise. Those transitions occur well after self-replication exists, and they fall squarely within the scope of evolutionary explanation.

When I point out that evolutionary accounts presuppose folded proteins, regulatory logic, and cellular context, I am not accusing the theory of hiding a weakness—I am asking whether the mechanisms invoked are sufficient to explain the emergence of new integrated systems rather than merely their modification. The issue is not the very first cell, but how systems that require multiple coordinated components become established when intermediate states lack selectable function.

Appealing to abiogenesis does not resolve this concern, because the same explanatory pattern appears repeatedly: mechanisms are cited that explain refinement and diversification, while the origin of coordination is deferred to earlier stages or adjacent fields. At some point, that deferral becomes circular rather than clarifying. Saying “that’s outside the scope” is legitimate only if the remaining explanation is complete within its own domain.

So the critique is not that evolution fails to answer a question it never claimed to address. It is that, for certain transitions, the proposed evolutionary mechanisms may not yet adequately explain the rise of tightly constrained, interdependent systems—even after self-replication, metabolism, and selection are already in place. Distinguishing domains does not eliminate that question; it simply clarifies where it must be answered.

This response correctly describes the categories of mechanisms invoked by evolutionary biology, but it overstates what they actually demonstrate. Listing gene duplication, divergence, co-option, regulatory change, and exaptation names plausible routes of modification; it does not, by itself, show that those routes successfully generate new, tightly coordinated architectures in a stepwise, selectable manner. The dispute is not over whether such mechanisms exist, but whether they are sufficient, in practice, to traverse the functional constraints involved in complex integration.

Saying that intermediate forms have “different but selectable functions” is an assertion that requires case-specific support. For many systems—especially those involving multi-component regulation, feedback control, and precise molecular interaction—partial systems do not perform alternative functions at all. They are neutral or deleterious. In such cases, the existence of precursors does not automatically constitute an explanatory pathway. Demonstrating that a system evolved from earlier ones is not the same as demonstrating that each transitional step was selectable in its historical context rather than reconstructed retrospectively.

Moreover, the claim that these processes “generate new architectures” often rests on cumulative descriptions rather than direct demonstrations of origin. Comparative genomics and developmental biology are powerful tools for identifying homology and modification, but they infer pathways from end states. That is a legitimate historical method, yet it does not eliminate the question of how rare functional configurations are reached in sequence space, nor does it quantify how often proposed mechanisms actually succeed versus fail (as in the high rate of pseudogenization following duplication).

Finally, pointing out that systems evolved from precursors is not being treated as a failure per se; it is being treated as an incomplete explanation when the origin of coordination is assumed rather than shown. Historical sciences explain novelty not merely by tracing ancestry, but by accounting for the causal adequacy of the mechanisms invoked. Asking whether those mechanisms plausibly generate new integrated function is not mischaracterizing evolution—it is testing the strength of its explanatory claims.

In short, adaptation and elaboration are well supported. The open question remains whether the same processes adequately explain the emergence of tightly specified, interdependent biological systems, rather than presupposing them at each stage.
Additionally, this argument treats gene duplication, divergence, co-option, and regulatory evolution as if they were hypothetical or insufficient mechanisms for producing integrated biological systems, but this misrepresents what is actually known. These mechanisms have been directly observed to generate new functions and integrate into existing pathways. Comparative genomics, for example, shows how duplicated genes in vertebrates led to the expansion of Hox gene clusters, which coordinate complex body plans. Co-option of existing genes and regulatory modules is widely documented, as are cases in which small changes in gene regulation produce major innovations in development without disrupting existing functions. These are not speculative mechanisms; they are empirically supported processes that can and do produce new functional systems and arrangements of genes, proteins, and regulatory elements that work together in coordinated ways.

Experimental evolution further confirms the explanatory power of these mechanisms. In laboratory experiments, duplicated genes or enzymes have evolved entirely new catalytic or regulatory roles, demonstrating that novelty can emerge stepwise and integrate with existing cellular systems. Observing these processes in real time validates the plausibility of evolutionary pathways inferred from historical data. The mechanisms themselves are real, operative, and demonstrably capable of producing integrated complexity.
 

Mercy Shown

It seems that you are hearing something different from what I am saying: your response overcorrects by treating legitimate questions about sufficiency and scope as if they were denials of observation. No one here is claiming that gene duplication, divergence, co-option, or regulatory evolution are hypothetical or unreal. They are real, observed processes. The issue is not their existence, but what they have actually been shown to explain, and at what scale.

Take Hox gene duplication as an example. Comparative genomics clearly shows expansion and diversification of Hox clusters in vertebrates. What this demonstrates is elaboration and modulation of an already deeply constrained regulatory framework, not the origin of body-plan coordination itself. Hox genes only function within a pre-existing developmental architecture involving positional information, chromatin regulation, transcriptional control, and cell signaling networks. Duplication expands degrees of freedom within that framework; it does not explain how the framework originated, or how partially assembled versions of it could have avoided being catastrophically deleterious. That distinction matters.

Similarly, co-option and regulatory change are powerful concepts, but they often function as retrospective descriptions rather than forward-looking causal demonstrations. Saying that a gene or module was co-opted tells us that it now plays a new role, not how the intermediate stages avoided loss of fitness while transitioning between roles. In many systems, small regulatory changes are not benign—they are harmful. The fact that some regulatory tweaks produce innovation does not establish that such outcomes are common or that they adequately explain the rise of highly interdependent systems.

Experimental evolution is frequently cited here, but its limits are rarely acknowledged. Laboratory experiments overwhelmingly demonstrate optimization, loss, simplification, or repurposing of existing functions, usually over short evolutionary distances and within tightly controlled environments. Cases where enzymes acquire genuinely new catalytic roles almost always begin with promiscuous activity already present and proceed through incremental refinement. This is impressive—but it still presupposes a rich functional starting point. It does not show how systems requiring multiple coordinated novelties arise without guidance or prior structure.

So the disagreement is not about whether evolutionary mechanisms operate, nor whether they can produce novelty in some sense. It is about whether the observed instances of novelty scale to explain the emergence of tightly integrated biological systems, where function depends on multiple components being present together, correctly regulated, and mutually compatible. Demonstrating that mechanisms can tweak, expand, or repurpose existing systems is not the same as demonstrating that they generate such systems from less integrated precursors.

In short, these mechanisms are empirically real—but the question remains whether their demonstrated capacities are causally adequate for the explanatory work they are often asked to perform. Asking that question is not misrepresentation; it is how explanatory claims are properly evaluated in any historical science.
 