
The PRATT Thread

Willtor

Not just any Willtor... The Mighty Willtor
Apr 23, 2005
9,713
1,429
44
Cambridge
Visit site
✟39,787.00
Faith
Presbyterian
Marital Status
Married
Politics
US-Others
The PRATT Thread

This thread is an attempt at organizing responses to the most popular PRATTs (PRATT = "Point Refuted A Thousand Times"). I'll do my best to keep this first post updated with links to posts (in this thread or others) that respond to a single PRATT issue. I hope that this will be useful for quickly locating succinct, well thought-out responses (with graphics, where required) in other threads where they are needed.

If you have recommendations for topics in this post, please suggest them.

If you know of a candidate for a response, please post a link. If the post in question responds to other issues, copy out the relevant portions and cite the original. Then I'll link to your post from here.

Note: Any other users who want to create indices of their own, make a post and I'll link to them from here.

PRATT Topics

Theological Topics

  • In order to take Genesis seriously, it must be read literally. Non-literal reading of, e.g., the creation account, is akin to denying its truth.
  • The Church has always taken the creation account in Genesis literally. It is a dramatic divergence from tradition to interpret it otherwise.
  • Death, of any kind, could not have existed before the Fall, as per Romans 5:12. [link]

Scientific Topics

  • A scientific theory is an idea that is not well-supported by evidence.
  • There are no documented instances of macro evolution being observed in the lab or in nature.
  • All missing links between humans and the alleged common ancestor with chimpanzees have been shown to be fraudulent.

Philosophy of Science Topics

  • Perhaps creation took place 6000 years ago, but has embedded age. That is, the world looks old, but it is actually quite young.
  • Since science cannot prove a thing, it is not a good basis for thinking that a thing is so.
 
Last edited:

Willtor

Not just any Willtor... The Mighty Willtor
Apr 23, 2005
9,713
1,429
44
Cambridge
Visit site
✟39,787.00
Faith
Presbyterian
Marital Status
Married
Politics
US-Others
Death Before the Fall

Contention

Death, of any kind, could not have existed before the Fall, as per Romans 5:12:

"Therefore, just as sin came into the world through one man, and death came through sin, and so death spread to all because all have sinned..."

Response

This interpretation of Romans 5:12 is problematic both for the Creation/Eden account, and even internally, for itself.

Creation/Eden: Death Before the Fall

Plant death is explicitly permitted by God in Genesis 1:29-30, and Genesis 2:16, before the Fall. Is animal death different? Although God does not permit animals to use one another for food in Genesis 1:29-30, He also does not forbid it. A stretch, you say? Note that animals are not given permission to (nor are they cursed to) eat one another after the Fall, either. But they do it. Further, note that (elsewhere in the Bible) God relishes the fact that animals eat one another: Job 38:39-41, Job 39:29-30.

Possibly animal death was permitted. But certainly not human death. Actually, it was there in design, at least. A literal reading of the account of the Fall indicates that the thing that caused Adam and Eve to die was withholding the fruit of the Tree of Life (Genesis 3:22-23). Adam and Eve were designed to die. They were supernaturally kept alive by the Tree of Life. But, naturally-speaking, death was waiting in the wings for them.

Now, it may be that if Adam and Eve had remained in Eden, no natural death would have ever come to them. Perhaps this is what Paul meant. But be careful that zeal against "death before the Fall" does not lead to counter-biblical ideas.

Romans 5:12: A Deeper Look

The rest of the chapter uses Adam as a figure for Christ. Adam was the death-bringer, just as Christ is the life-bringer. Given that people still physically die, even after accepting the grace of Christ, it is difficult to read verse 12 as talking about physical death. But even allowing for that, he is at least talking about spiritual death and life, i.e., separation from and reconciliation with God. In what way does this apply to animals or plants? Does your cat/dog/guinea-pig require justification?

Conclusion

The "all" in the passage is talking about all people (and from verse 12, people who are capable of sinning, since it says that this is how death spreads to all). It may be talking about physical death, or it may not be doing so. Paul is certainly talking about spiritual death. And there was physical death built into the design of the world. There are even instances of physical death in the world before the Fall (albeit, not of people). The Bible says so.
 
Upvote 0

Papias

Listening to TW4
Dec 22, 2005
3,967
988
59
✟64,806.00
Faith
Catholic
Marital Status
Married
Great Idea!

However, my " reinventing the wheel" alarm is going off.

Plus, how findable would this be as a reference?

So, first I'd say to go here: An Index to Creationist Claims, where most of the PRATTs (especially the science-based ones) are listed and organized.

It looks like they've got somewhere around 600 of them listed and refuted.

I tried to copy the list, but it's WAY over the size limit, so you'll have to check it out yourself. The full list is here: An Index to Creationist Claims


Papias
 
Upvote 0

Willtor

Not just any Willtor... The Mighty Willtor
Apr 23, 2005
9,713
1,429
44
Cambridge
Visit site
✟39,787.00
Faith
Presbyterian
Marital Status
Married
Politics
US-Others
Ah! That's a really good resource.

However, I have noticed in the past that some creationists won't read TalkOrigins pages. They might, however, be more receptive to posts written and arranged by CF members. I.e., if I can say, "Actually, Papias wrote a good response to that in another thread," along with a link, it might be received better than, "Actually, TalkOrigins has a good response to that," along with the TalkOrigins link.

That said, links to TalkOrigins as references in posts might be handy (especially given the broad range of topics in the page you cited). But if the crux of the issue is addressed on CF, that would be really handy.

For keeping track of the index, I just have it bookmarked. If people find this thread useful (especially if people start making their own indices, here), I'll ask a mod to sticky it.
 
Upvote 0

chilehed

Veteran
Jul 31, 2003
4,735
1,399
64
Michigan
✟250,727.00
Faith
Catholic
Marital Status
Married
Great Idea!

However, my " reinventing the wheel" alarm is going off.

Plus, how findable would this be as a reference?

So, first I'd say to go here: An Index to Creationist Claims, where most of the PRATTs (especially the science-based ones) are listed and organized.

It looks like they've got somewhere around 600 of them listed and refuted....

Papias
That site's discussion of the First Cause argument is horribly flawed, and doesn't refute the argument at all. It's quite apparent that they've never read St. Thomas Aquinas.

And the discussion of the 2nd Law of Thermodynamics seems pretty clunky to me.
 
Upvote 0

Willtor

Not just any Willtor... The Mighty Willtor
Apr 23, 2005
9,713
1,429
44
Cambridge
Visit site
✟39,787.00
Faith
Presbyterian
Marital Status
Married
Politics
US-Others
That site's discussion of the First Cause argument is horribly flawed, and doesn't refute the argument at all. It's quite apparent that they've never read St. Thomas Aquinas.

And the discussion of the 2nd Law of Thermodynamics seems pretty clunky to me.

You're right about the First Cause article. On the other hand, that is the understanding of the argument presented by every creationist with whom I have discussed the matter. St. Thomas Aquinas has a much better-reasoned discourse on the matter, but since his views are rarely (if ever) supplied by creationists, it is the creationist version of the argument that gets refuted when "First Cause" comes up.

Re: 2nd Law of Thermodynamics article: What's the problem with it?
 
Upvote 0

Papias

Listening to TW4
Dec 22, 2005
3,967
988
59
✟64,806.00
Faith
Catholic
Marital Status
Married
Sorry for the delay - I just noticed your reply.

Willtor wrote:
links to TalkOrigins as references in posts might be handy (especially given the broad range of topics in the page you cited). But if the crux of the issue is addressed on CF, that would be really handy.

Yes. Gluadys just posted the link below on another thread, and it seems to be exactly what you are talking about. We indeed have had many excellent posts on CF!

http://www.christianforums.com/t2580923/

How about that thread?

-Papias
 
Upvote 0

chilehed

Veteran
Jul 31, 2003
4,735
1,399
64
Michigan
✟250,727.00
Faith
Catholic
Marital Status
Married
You're right about the First Cause article. On the other hand, that is the understanding of the argument presented by every creationist with whom I have discussed the matter. St. Thomas Aquinas has a much better-reasoned discourse on the matter, but since his views are rarely (if ever) supplied by creationists, it is the creationist version of the argument that gets refuted when "First Cause" comes up.
Choosing to rebut an opponent's weakest argument is poor argumentation.

Maybe they do it because they can't rebut the real argument.

Re: 2nd Law of Thermodynamics article: What's the problem with it?
Some of what it says is good observation, for example the fact that spatially ordered systems arise naturally during thermodynamic processes. But for the most part it's just bare assertion, and at times it's misleading at best, like when it says that "entropy is not the same as disorder. Sometimes the two correspond, but sometimes order increases as entropy increases." It's sort of like there's a shift in the definition of terms in the middle of that statement.

And the response to claim CF005 ("1. While statistical information theory has a quantity called "entropy", it does not have anything equivalent to the second law of thermodynamics") is flat-out wrong. Thermodynamic entropy is EXACTLY the information entropy of the macrostate multiplied by Boltzmann's constant.

CF011.1 would have been an ideal place to explain that a phrase written in a language has lower information entropy than a string of random characters of the same language, but it's not discussed at all. It's related to the YEC's objection that evolution results in an increase in information. The claim actually harms the creationist's case, because in fact all real thermodynamic processes result in an increase in information.

But you can't demonstrate that without explaining what thermodynamic entropy actually is.
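
If it helps to make "what thermodynamic entropy actually is" concrete, here's a toy Python sketch (my own illustration, not anything from the article; the two-microstate example is arbitrary): take the Shannon entropy of the distribution over microstates and multiply by Boltzmann's constant.

Code:
import math

K_B = 1.380649e-23  # Boltzmann's constant, J/K

def shannon_entropy_nats(probs):
    # Shannon entropy (in nats) of a discrete probability distribution
    return -sum(p * math.log(p) for p in probs if p > 0)

# A macrostate with two equally likely microstates (say, one particle that
# may sit in either half of a box): H = ln 2 nats.
H = shannon_entropy_nats([0.5, 0.5])
print(H)        # ~0.693, i.e. ln 2
print(K_B * H)  # ~9.57e-24 J/K, i.e. k_B * ln 2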
 
Last edited:
Upvote 0

Willtor

Not just any Willtor... The Mighty Willtor
Apr 23, 2005
9,713
1,429
44
Cambridge
Visit site
✟39,787.00
Faith
Presbyterian
Marital Status
Married
Politics
US-Others
Choosing to rebut an opponent's weakest argument is poor argumentation.

Maybe they do it because they can't rebut the real argument.

They may not know the real argument. But my guess is it wouldn't matter to them since they are only addressing arguments that oppose science (and a few other tangential matters on the side, but only to a tiny degree).

My guess is that they address it because it is made, not because it is weak or strong.

Some of what it says is good observation, for example the fact that spatially ordered systems arise naturally during thermodynamic processes. But for the most part it's just bare assertion, and at times it's misleading at best, like when it says that "entropy is not the same as disorder. Sometimes the two correspond, but sometimes order increases as entropy increases." It's sort of like there's a shift in the definition of terms in the middle of that statement.

And the response to claim CF005 ("1. While statistical information theory has a quantity called "entropy", it does not have anything equivalent to the second law of thermodynamics") is flat-out wrong. Thermodynamic entropy is EXACTLY the information entropy of the macrostate multiplied by Boltzmann's constant.

CF011.1 would have been an ideal place to explain that a phrase written in a language has lower information entropy than a string of random characters of the same language, but it's not discussed at all. It's related to the YEC's objection that evolution results in an increase in information. The claim actually harms the creationist's case, because in fact all real thermodynamic processes result in an increase in information.

But you can't demonstrate that without explaining what thermodynamic entropy actually is.

Email them.
 
Upvote 0

shernren

you are not reading this.
Feb 17, 2005
8,463
515
38
Shah Alam, Selangor
Visit site
✟33,881.00
Faith
Protestant
Marital Status
In Relationship
Time to butt heads over thermodynamics again!

And the response to claim CF005 ("1. While statistical information theory has a quantity called "entropy", it does not have anything equivalent to the second law of thermodynamics") is flat-out wrong. Thermodynamic entropy is EXACTLY the information entropy of the macrostate multiplied by Boltzmann's constant.

CF011.1 would have been an ideal place to explain that a phrase written in a language has lower information entropy than a string of random characters of the same language, but it's not discussed at all. It's related to the YEC's objection that evolution results in an increase in information. The claim actually harms the creationist's case, because in fact all real thermodynamic processes result in an increase in information.

But you can't demonstrate that without explaining what thermodynamic entropy actually is.

The whole problem is that while information-theoretic entropy is connected to disorder, information-theoretic disorder is not connected to real-world disorder in the way that creationists think. I wrote a post about this some time back which I thought was great, but I have no idea where it went, so here goes.

Let's take the classic example of gas molecules in a partitioned box. Suppose we have twenty molecules in a box in four separate quadrants labeled A, G, T, and C. In the beginning, all the molecules are kept in the A quadrant by partitions; when we raise the partitions, random diffusion ensures that the molecules are now free to visit the G, T and C quadrants.

Suppose we wanted to use a twenty-letter string to represent where each molecule is (technical note: they are distinguishable). In the beginning the string is just twenty As. This string has low information (there's only one possible box per particle) and thus the physical state has low entropy. But after the partition is raised, the string might become AGTCCCTAA ... it now has more information (because now each particle could be in any one of four boxes) and thus more entropy. A little while later, the string might instead become GTACTCAAG ... but it has the same information as before: all we know is that each particle has a one in four chance of being in each of the boxes, and the latter string is just as likely as the former.

Simple? Now suppose we have a nucleic acid string whose sequence is initially just twenty As. After going through a whole bunch of point mutations, the sequence is now AGTCCCTAA ... And after going through another bunch of point mutations, the sequence is now GTACTCAAG ... By analogy with the earlier example, the first bunch of mutations increases the (information) entropy of the nucleic acid string to a theoretical maximum (two bits of information per base of sequence), and the second bunch of mutations conserves the entropy of the nucleic acid string (still two bits of information per base of sequence).
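
If you want rough numbers for that, here's a quick Python sketch (my own toy illustration, nothing more): the per-position entropy when every molecule is stuck in quadrant A, versus when each molecule is equally likely to be in any of the four quadrants.

Code:
import math

def entropy_bits(probs):
    # Shannon entropy (in bits) of a discrete probability distribution
    return -sum(p * math.log2(p) for p in probs if p > 0)

before = [1.0, 0.0, 0.0, 0.0]     # every molecule confined to quadrant A
after = [0.25, 0.25, 0.25, 0.25]  # each molecule equally likely to be anywhere

print(entropy_bits(before))  # 0.0 bits per position
print(entropy_bits(after))   # 2.0 bits per position, the theoretical maximum

# For the twenty-molecule string, that's 0 bits total before the partitions
# are raised and 20 * 2 = 40 bits after, and any particular string
# (AGTCCCTAA..., GTACTCAAG...) is exactly as likely as any other.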

Does this raise a problem for creationists? In a sense, because it's obvious that any string of uniform characters (having low entropy, and thus low information) doesn't code for a working protein while a string of random characters could. But that's only contingent on our choice of transcription codes. One could imagine a genetic code where an isolated base (of any kind) acts as a start codon, a pair of bases (of any kind) acts as a stop codon, and the nth amino acid is coded for by a sequence of n+2 successive bases (again, of any kind). In such a code, a "low entropy string" of uniform characters would code for a protein as much as a "high entropy string" of random characters. So it's not just that creating entropy creates genetic information: one can imagine a system in which any number of arbitrarily low-entropy strings can code for proteins.

No, the problem is more fundamentally that the character-wise entropy of a genetic sequence bears zero correlation to the selective advantage its phenotype confers. Not that the character-wise entropy is correlated to carried information (as shown above, it need not do so).

So take the gene for normal hemoglobin. It can be represented as a long string of A, G, T, and C; this long string can be in turn represented as a long series of flipped coins. It would take two coins to represent one base (for example two heads for A, first head and then tail for G, first tail and then head for T, and two tails for C). Because A, G, T and C are roughly evenly distributed in the hemoglobin gene, there's roughly the same number of heads as tails, and you can't tell just by looking at the previous flips whether the next one will be head or tails: the coin is fair.

Now let's take one of the coins and re-flip it. Oops! The hemoglobin gene now has a point mutation, and the poor sufferer now has sickle-cell trait. But notice something: there's (say) only one tail more and one head less among the hundreds of flips. So the coin flips are still fair, half-and-half heads and tails.

It's true that if we took a fair coin and flipped it hundreds of times, we'd get about half of them tails and half of them heads. That's what the mutations do, make the coin flips fair so that the sequence has about the same number of A, G, T and Cs. And the mutations increase entropy in the sense that if we had an initial sequence with all heads, and randomly reflipped all the coins, it would be much more likely for us to get a sequence with half heads and half tails than for us to get another sequence with almost all heads.

But what would the sequence exactly be? One fair sequence (half heads and half tails) might code for normal hemoglobin. Another fair sequence might code for sickle-cell hemoglobin. Yet another one might code for insulin. A fourth might not be able to code for anything at all. A fifth might code for bigger brains, and a sixth for predisposition to accepting evolution - and they would all have about the same number of heads as tails, and therefore be valid results of random mutation (to say nothing of the subsequent effects of natural selection).

The point isn't that more entropy = more information (although by many intuitive definitions of information that's true). The point is that the statistical entropy of a genetic sequence simply doesn't correspond to the biological sort of information that creationists believe can't be generated, the kind of information that codes for wings instead of arms and brains instead of feet.
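
To put a toy number on that, here's a small Python sketch (the "gene" is just a made-up random string, not real hemoglobin): a single point mutation leaves the base frequencies, and hence the character-wise entropy, essentially untouched, and the entropy figure cannot tell you which of the two sequences is the one that works.

Code:
import math
import random
from collections import Counter

def empirical_entropy_bits(seq):
    # per-character Shannon entropy (bits) from observed base frequencies
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

random.seed(0)
gene = "".join(random.choice("AGTC") for _ in range(600))  # stand-in "gene"

# one point mutation: change a single base to a different one
i = random.randrange(len(gene))
mutated = gene[:i] + random.choice([b for b in "AGTC" if b != gene[i]]) + gene[i + 1:]

print(round(empirical_entropy_bits(gene), 4))     # ~2.0 bits per base
print(round(empirical_entropy_bits(mutated), 4))  # ~2.0 bits per base, nearly identical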

(BTW, "a phrase written in a language has lower information entropy than a string of random characters of the same language" applies to most human languages, notably English, but not to DNA. The information entropy of an arbitrary coding sequence of DNA will be about the same as the information entropy of an arbitrary non-coding sequence of DNA; in fact, as the sequence gets longer, the entropy difference will decrease, because less characters relative to the length of the whole sequence need to be fixed (only the start and the stop codon).

This is contrary to human languages where the longer a string of random characters is, the higher the probability that it will be nonsense - therefore, the entropy difference between readable and nonsense strings of characters increases, nearly exponentially I suspect, as the length of an arbitrary string increases.)
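
A crude way to see the human-language half of that (single-character frequencies only, ignoring all the context effects I just mentioned; the example strings are arbitrary):

Code:
import math
import random
import string
from collections import Counter

def unigram_entropy_bits(text):
    # per-character Shannon entropy (bits) from observed character frequencies
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

phrase = "a phrase written in a language has lower information entropy"
random.seed(1)
alphabet = string.ascii_lowercase + " "
gibberish = "".join(random.choice(alphabet) for _ in range(len(phrase)))

print(round(unigram_entropy_bits(phrase), 2))     # roughly 3.9 bits per character
print(round(unigram_entropy_bits(gibberish), 2))  # noticeably higher, heading toward
                                                  # log2(27) ~ 4.75 for longer strings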
 
  • Like
Reactions: chilehed
Upvote 0

Notedstrangeperson

Well-Known Member
Jul 3, 2008
3,430
110
36
✟19,524.00
Gender
Female
Faith
Anglican
Marital Status
In Relationship
Interestingly, Answers in Genesis (of all places!) has a list of arguments Creationists should not use, including:
  • The 2nd Law of Thermodynamics disproves evolution.
  • There are no beneficial mutations.
  • If we evolved from apes then why are apes around today?
  • No new species have ever been produced (although AIG seems confused about the definition of "species").
  • Evolution is just a theory.
  • Microevolution has been proved but macroevolution has not.
  • There are no transitional forms (again, AIG seems confused about the definition of "transitional").
 
Upvote 0

Assyrian

Basically pulling an Obama (Thanks Calminian!)
Mar 31, 2006
14,868
991
Wales
✟42,286.00
Faith
Christian
Marital Status
Married
Time to butt heads over thermodynamics again!

The whole problem is that while information-theoretic entropy is connected to disorder, information-theoretic disorder is not connected to real-world disorder in the way that creationists think. I wrote a post about this some time back which I thought was great, but I have no idea where it went, so here goes.

Let's take the classic example of gas molecules in a partitioned box. Suppose we have twenty molecules in a box in four separate quadrants labeled A, G, T, and C. In the beginning, all the molecules are kept in the A quadrant by partitions; when we raise the partitions, random diffusion ensures that the molecules are now free to visit the G, T and C quadrants.

Suppose we wanted to use a twenty-letter string to represent where each molecule is (technical note: they are distinguishable). In the beginning the string is just twenty As. This string has low information (there's only one possible box per particle) and thus the physical state has low entropy. But after the partition is raised, the string might become AGTCCCTAA ... it now has more information (because now each particle could be in any one of four boxes) and thus more entropy. A little while later, the string might instead become GTACTCAAG ... but it has the same information as before: all we know is that each particle has a one in four chance of being in each of the boxes, and the latter string is just as likely as the former.

Simple? Now suppose we have a nucleic acid string whose sequence is initially just twenty As. After going through a whole bunch of point mutations, the sequence is now AGTCCCTAA ... And after going through another bunch of point mutations, the sequence is now GTACTCAAG ... By analogy with the earlier example, the first bunch of mutations increases the (information) entropy of the nucleic acid string to a theoretical maximum (two bits of information per base of sequence), and the second bunch of mutations conserves the entropy of the nucleic acid string (still two bits of information per base of sequence).

Does this raise a problem for creationists? In a sense, because it's obvious that any string of uniform characters (having low entropy, and thus low information) doesn't code for a working protein while a string of random characters could. But that's only contingent on our choice of transcription codes. One could imagine a genetic code where an isolated base (of any kind) acts as a start codon, a pair of bases (of any kind) acts as a stop codon, and the nth amino acid is coded for by a sequence of n+2 successive bases (again, of any kind). In such a code, a "low entropy string" of uniform characters would code for a protein as much as a "high entropy string" of random characters. So it's not just that creating entropy creates genetic information: one can imagine a system in which any number of arbitrarily low-entropy strings can code for proteins.

No, the problem is more fundamentally that the character-wise entropy of a genetic sequence bears zero correlation to the selective advantage its phenotype confers. Not that the character-wise entropy is correlated to carried information (as shown above, it need not do so).

So take the gene for normal hemoglobin. It can be represented as a long string of A, G, T, and C; this long string can be in turn represented as a long series of flipped coins. It would take two coins to represent one base (for example two heads for A, first head and then tail for G, first tail and then head for T, and two tails for C). Because A, G, T and C are roughly evenly distributed in the hemoglobin gene, there's roughly the same number of heads as tails, and you can't tell just by looking at the previous flips whether the next one will be head or tails: the coin is fair.

Now let's take one of the coins and re-flip it. Oops! The hemoglobin gene now has a point mutation, and the poor sufferer now has sickle-cell trait. But notice something: there's (say) only one tail more and one head less among the hundreds of flips. So the coin flips are still fair, half-and-half heads and tails.

It's true that if we took a fair coin and flipped it hundreds of times, we'd get about half of them tails and half of them heads. That's what the mutations do, make the coin flips fair so that the sequence has about the same number of A, G, T and Cs. And the mutations increase entropy in the sense that if we had an initial sequence with all heads, and randomly reflipped all the coins, it would be much more likely for us to get a sequence with half heads and half tails than for us to get another sequence with almost all heads.

But what would the sequence exactly be? One fair sequence (half heads and half tails) might code for normal hemoglobin. Another fair sequence might code for sickle-cell hemoglobin. Yet another one might code for insulin. A fourth might not be able to code for anything at all. A fifth might code for bigger brains, and a sixth for predisposition to accepting evolution - and they would all have about the same number of heads as tails, and therefore be valid results of random mutation (to say nothing of the subsequent effects of natural selection).

The point isn't that more entropy = more information (although by many intuitive definitions of information that's true). The point is that the statistical entropy of a genetic sequence simply doesn't correspond to the biological sort of information that creationists believe can't be generated, the kind of information that codes for wings instead of arms and brains instead of feet.

(BTW, "a phrase written in a language has lower information entropy than a string of random characters of the same language" applies to most human languages, notably English, but not to DNA. The information entropy of an arbitrary coding sequence of DNA will be about the same as the information entropy of an arbitrary non-coding sequence of DNA; in fact, as the sequence gets longer, the entropy difference will decrease, because less characters relative to the length of the whole sequence need to be fixed (only the start and the stop codon).

This is contrary to human languages where the longer a string of random characters is, the higher the probability that it will be nonsense - therefore, the entropy difference between readable and nonsense strings of characters increases, nearly exponentially I suspect, as the length of an arbitrary string increases.)
How does that relate to information and Shannon entropy?
 
Upvote 0

chilehed

Veteran
Jul 31, 2003
4,735
1,399
64
Michigan
✟250,727.00
Faith
Catholic
Marital Status
Married
Time to butt heads over thermodynamics again!
I'm not aware that you and I have ever really butted heads. We merely see the same things from different angles and express them in different ways, and I find that our conversations always serve to sharpen me up.

But I have developed a bit of a headache thinking about this. It does feel like I've been banging my skull against something. :D

The whole problem is that while information-theoretic entropy is connected to disorder, information-theoretic disorder is not connected to real-world disorder in the way that creationists think...
Which is why I think it's important to explain what entropy really is.

...No, the problem is more fundamentally that the character-wise entropy of a genetic sequence bears zero correlation to the selective advantage its phenotype confers. Not that the character-wise entropy is correlated to carried information (as shown above, it need not do so).
EXCELLENT! That's EXACTLY what I've been trying to figure out how to say! Thank you!

Thermodynamic entropy is based on the macrovariables composition, mass, temperature, and pressure ONLY. There might be some variance in the entropy of the macrovariable “selective advantage”, but that’s completely irrelevant to thermodynamic entropy.

It’s just like the colors of the marbles. If all of the white ones are at one temperature and all of the black ones are at another, then the entropy, both before and after thermal stabilization, is the same whether or not the marbles are mixed by color.

That’s part of why I object to the comment in talkorigins about how “entropy is not the same as disorder. Sometimes the two correspond, but sometimes order increases as entropy increases". The value of entropy is dependent on the macrovariables used to define the state, and by definition order in the associated microstates is inversely proportional to the entropy of the macrostate. If you have a case in which order increases as entropy increases, then you’ve redefined the macrovariables somewhere and invalidated your results.

The point isn't that more entropy = more information (although by many intuitive definitions of information that's true)...
Hmmm... I'm not liking this one. Maybe it's not the point you're making, but it is true that more entropy = more information because that's how the math works. A random string is less compressible than a non-random string of the same length, and so inherently requires more bits to code at the limit of compression.
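
A rough way to see the compressibility point (a toy check in Python, with zlib standing in for an ideal compressor; the inputs are arbitrary, and zlib isn't ideal, but the gap makes the point: the random input needs nearly its full length in bits, the ordered one does not):

Code:
import random
import zlib

random.seed(2)
n = 10_000
random_bytes = bytes(random.getrandbits(8) for _ in range(n))  # incompressible noise
ordered_bytes = b"abcd" * (n // 4)                             # highly ordered, same length

print(len(zlib.compress(random_bytes, 9)))   # about 10,000 bytes: no real compression
print(len(zlib.compress(ordered_bytes, 9)))  # a few dozen bytes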

(BTW, "a phrase written in a language has lower information entropy than a string of random characters of the same language" applies to most human languages, notably English, but not to DNA. The information entropy of an arbitrary coding sequence of DNA will be about the same as the information entropy of an arbitrary non-coding sequence of DNA; in fact, as the sequence gets longer, the entropy difference will decrease, because less characters relative to the length of the whole sequence need to be fixed (only the start and the stop codon).

This is contrary to human languages where the longer a string of random characters is, the higher the probability that it will be nonsense - therefore, the entropy difference between readable and nonsense strings of characters increases, nearly exponentially I suspect, as the length of an arbitrary string increases.)
I’m not getting this, but I’m pretty weak in the whole DNA sequencing thing. I’ll have to mull it over, but it sounds like you might be confounding information with meaning.

Sure, statistical methods require an adequate sample size. And with a string of text in any language there are clues about what the next character most probably will be; that’s why such text has less information content and less entropy than a string of random text.

I like your example of the coding of a long DNA sequence. With an arbitrarily large number of trials the average entropy of the sequence approaches a constant. But each trial involves the thermodynamic event of flipping a base, and each of those events results in an increase in thermodynamic entropy. So over the course of the experiment total thermodynamic entropy increases continually while the average entropy of the sequence is constant.
 
Upvote 0

mark kennedy

Natura non facit saltum
Site Supporter
Mar 16, 2004
22,030
7,265
62
Indianapolis, IN
✟594,630.00
Gender
Male
Faith
Calvinist
Marital Status
Single
Politics
US-Democrat
Let's take the classic example of gas molecules in a partitioned box. Suppose we have twenty molecules in a box in four separate quadrants labeled A, G, T, and C. In the beginning, all the molecules are kept in the A quadrant by partitions; when we raise the partitions, random diffusion ensures that the molecules are now free to visit the G, T and C quadrants.

Suppose we wanted to use a twenty-letter string to represent where each molecule is (technical note: they are distinguishable). In the beginning the string is just twenty As. This string has low information (there's only one possible box per particle) and thus the physical state has low entropy. But after the partition is raised, the string might become AGTCCCTAA ... it now has more information (because now each particle could be in any one of four boxes) and thus more entropy. A little while later, the string might instead become GTACTCAAG ... but it has the same information as before: all we know is that each particle has a one in four chance of being in each of the boxes, and the latter string is just as likely as the former.

I was wondering: is this ever going to be useful as a model of DNA in living systems, or are you just talking about molecular properties?

Simple? Now suppose we have a nucleic acid string whose sequence is initially just twenty As. After going through a whole bunch of point mutations, the sequence is now AGTCCCTAA ... And after going through another bunch of point mutations, the sequence is now GTACTCAAG ... By analogy with the earlier example, the first bunch of mutations increases the (information) entropy of the nucleic acid string to a theoretical maximum (two bits of information per base of sequence), and the second bunch of mutations conserves the entropy of the nucleic acid string (still two bits of information per base of sequence).

Point mutations that 'conserve' entropy of the nucleic acid string? That's a stretch.

Does this raise a problem for creationists? In a sense, because it's obvious that any string of uniform characters (having low entropy, and thus low information) doesn't code for a working protein while a string of random characters could. But that's only contingent on our choice of transcription codes. One could imagine a genetic code where an isolated base (of any kind) acts as a start codon, a pair of bases (of any kind) acts as a stop codon, and the nth amino acid is coded for by a sequence of n+2 successive bases (again, of any kind). In such a code, a "low entropy string" of uniform characters would code for a protein as much as a "high entropy string" of random characters. So it's not just that creating entropy creates genetic information: one can imagine a system in which any number of arbitrarily low-entropy strings can code for proteins

You can imagine anything you like but you can only understand the truth.

No, the problem is more fundamentally that the character-wise entropy of a genetic sequence bears zero correlation to the selective advantage its phenotype confers. Not that the character-wise entropy is correlated to carried information (as shown above, it need not do so).

Then you make a giant leap of logic with these point mutations becoming some 'arbitrarily low-entropy string' that can code for proteins in an improved way. Then you make another giant leap of logic and a selective advantage results. Even if you get that far, the odds are that it will revert back. Got to wonder, what are the odds?

So take the gene for normal hemoglobin. It can be represented as a long string of A, G, T, and C; this long string can be in turn represented as a long series of flipped coins. It would take two coins to represent one base (for example two heads for A, first head and then tail for G, first tail and then head for T, and two tails for C). Because A, G, T and C are roughly evenly distributed in the hemoglobin gene, there's roughly the same number of heads as tails, and you can't tell just by looking at the previous flips whether the next one will be head or tails: the coin is fair.

One nucleotide out of its proper sequence and it's a train wreck; you know that, right?

Now let's take one of the coins and re-flip it. Oops! The hemoglobin gene now has a point mutation, and the poor sufferer now has sickle-cell trait. But notice something: there's (say) only one tail more and one head less among the hundreds of flips. So the coin flips are still fair, half-and-half heads and tails.

It's true that if we took a fair coin and flipped it hundreds of times, we'd get about half of them tails and half of them heads. That's what the mutations do, make the coin flips fair so that the sequence has about the same number of A, G, T and Cs. And the mutations increase entropy in the sense that if we had an initial sequence with all heads, and randomly reflipped all the coins, it would be much more likely for us to get a sequence with half heads and half tails than for us to get another sequence with almost all heads.

If you based every nucleotide on a coin flip you would get disease and disorder. The sequences have to code in a certain order or the protein will not fold, or will otherwise be too garbled to be useful.

But what would the sequence exactly be? One fair sequence (half heads and half tails) might code for normal hemoglobin. Another fair sequence might code for sickle-cell hemoglobin. Yet another one might code for insulin. A fourth might not be able to code for anything at all. A fifth might code for bigger brains, and a sixth for predisposition to accepting evolution - and they would all have about the same number of heads as tails, and therefore be valid results of random mutation (to say nothing of the subsequent effects of natural selection).

What you would have is a clock that has been taken apart and can't be put back together randomly and still work. That's ID in a nutshell.

The point isn't that more entropy = more information (although by many intuitive definitions of information that's true). The point is that the statistical entropy of a genetic sequence simply doesn't correspond to the biological sort of information that creationists believe can't be generated, the kind of information that codes for wings instead of arms and brains instead of feet.

The creationist has the same problem with evolutionary biology everyone else does: how do adaptations on an evolutionary scale happen? Instead of mutations, what if there are mechanisms in the DNA that produce systems that pass on inheritable traits?


This is contrary to human languages where the longer a string of random characters is, the higher the probability that it will be nonsense - therefore, the entropy difference between readable and nonsense strings of characters increases, nearly exponentially I suspect, as the length of an arbitrary string increases.)

Random mutations may play a part; they just don't explain major leaps in evolutionary history. The original bacterial common ancestor did not grow a nucleus at random; there had to be a dramatic increase in the DNA information. While these few and rare beneficial effects may sometimes be vital, the more likely explanation for the cause is that God created living systems fully formed.

Just couldn't resist, now back to your regularly scheduled discussion. BTW, that was actually very interesting. Sorry to butt in but just couldn't resist.

Grace and peace,
Mark
 
Last edited:
Upvote 0

Astridhere

Well-Known Member
Jul 30, 2011
1,240
43
I live in rural NSW, Australia
✟1,616.00
Faith
Christian
Marital Status
Married
The PRATT Thread

This thread is an attempt at organizing responses to the most popular PRATTs (PRATT = "Point Refuted A Thousand Times"). I'll do my best to keep this first post updated with links to posts (in this thread or others) that respond to a single PRATT issue. I hope that this will be useful for quickly locating succinct, well thought-out responses (with graphics, where required) in other threads where they are needed.

If you have recommendations for topics in this post, please suggest them.

If you know of a candidate for a response, please post a link. If the post in question responds to other issues, copy out the relevant portions and cite the original. Then I'll link to your post from here.

Note: Any other users who want to create indices of their own, make a post and I'll link to them from here.

PRATT Topics

Theological Topics

  • In order to take Genesis seriously, it must be read literally. Non-literal reading of, e.g., the creation account, is akin to denying its truth.
  • The Church has always taken the creation account in Genesis literally. It is a dramatic divergence from tradition to interpret it otherwise.
  • Death, of any kind, could not have existed before the Fall, as per Romans 5:12. [link]
Scientific Topics

  • A scientific theory is an idea that is not well-supported by evidence.
  • There are no documented instances of macro evolution being observed in the lab or in nature.
  • All missing links between humans and the alleged common ancestor with chimpanzees have been shown to be fraudulent.
Philosophy of Science Topics

  • Perhaps creation took place 6000 years ago, but has embedded age. That is, the world looks old, but it is actually quite young.
  • Since science cannot prove a thing, it is not a good basis for thinking that a thing is so.


Well, well... a PRATT thread specifically for pratts to prattle on and on about a science riddled with contradictions and 150 years of instability. Yet still more than happy to ridicule creationists.

I'll show you something about the science you defend.

Ardi, below, reconstructed from shards and pieces....

[image: the Ardi reconstruction] made from [images: the recovered fragments]
Then you present Turkana Boy, your most complete fossil, found in non-colocated pieces.



Look at the side view of both skulls; they are similar. Tilt Turkana Boy's head back to sit like Ardi's. They are very similar. The nasal cavity of both is just like an ape's. Turkana Boy has an extra vertebra like an ape, a small neural cavity like an ape, long arms, and what look to be the bones of long curved fingers almost down to his knee. Turkana Boy's upper thigh bones also resemble the upper thigh bones of Ardi, the ape with ape feet found intact.

Turkana Boy, Ardi and Rudolfensis (the reconstruction done after Leakey's was proven erroneous) all look much the same.

The reason creationists keep telling the PRATT society that evolution is not well supported by the evidence is that it is true. Human evolution is just one example of the nonsense. Much the same evidence is used to support mankind evolving from something like a chimp as well as from some creature nothing like a chimp. Well done!

Unfortunately for evolutionists, the fact that there are no intermediates between mankind and apes means that a creationist prediction (from a particular creationist camp... mine) that no intermediates, let alone common ancestors, would be found has been verified. You cannot produce chimp ancestors because they are likely represented as human relatives. The evidence supports creation, not evolution.

It is a great idea to have a special thread for PRATTs ......

You lot have fun self-gratifying each other's fantasies... but seriously, they are wrong, wrong, wrong.

I also could not resist......;)
 
Last edited:
Upvote 0

shernren

you are not reading this.
Feb 17, 2005
8,463
515
38
Shah Alam, Selangor
Visit site
✟33,881.00
Faith
Protestant
Marital Status
In Relationship
You know, over the years, I have seen many Malaysian friends go to one foreign university or another, fall in love, and get married to someone they studied with.

The odds of that happening by chance are next to zero.

Yeah, sure, they keep lying to me about why they chose to study in the US or Canada or Australia. They tell me that this university offered this degree, that university had great lecturers, this country lets people carry concealed handguns in public, et cetera. And they claim that, when they chose the university they went to, they had no idea that they would find the love of their lives there.

Humbug. What is the chance of one person, choosing between ten universities, picking the one which has the love of their life - assuming they didn't know? One out of ten. Not too bad, right? But if two of my friends can simultaneously pick the universities where the love of their lives will study, without having known beforehand, the chances drop to one out of a hundred. Three, and it becomes one in a thousand.

And what are the chances of a hundred of my friends all having happened to choose the exact university where the loves of their lives studies, without having known beforehand? That's one in ten to the power of a hundred. That's even less likely than me marking an electron red, mixing it up with all the other subatomic particles in the universe, closing my eyes, and picking out the red electron on the first try.

So don't be fooled by the evil educationists telling you that your choice of university was completely random with respect to your future life partner's choice of university. It's not. The university you choose is actually determined by ... Intelligent Dating.

=========

That is exactly the answer to the question "How do random mutations preserve the low entropy of gene pools?" I won't spell it out (I'll be on holidays in Singapore for three days) but I'll give the basics.

We need to define our terms very carefully. Firstly, it is patently not true that evolution violates some kind of thermodynamic law. For more, see this post: http://www.christianforums.com/t2558054-19/#post55697423

But random mutations do actually increase the entropy of the gene pool, properly defined. Think of it this way: Suppose I found a group of a hundred humans, and label the set of all their genomes (the gene pool) as H0. Since they are all humans, there is a very limited set of possible three-billion-or-so base pair sequences that could go into H0. H0 is therefore a low-entropy set of sequences: it has very few possible members.

Now the humans proliferate, and we take the gene pool of all their children as H1. Given random mutation, some of the genomes in H1 will have been genomes that could never have been in H0. The entropy of H1 is thus larger than H0. (If this is counterintuitive, go back to the molecules in a box. When all the molecules are trapped in the left side, the only possible value for a molecule's position is "left"; when the partition is removed, molecules could be either "left" or "right", eventually with equal probability. The more possible "answers" there are for the question, the higher the entropy of the macrostate.)
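
Here's a toy simulation of that, if it helps (short made-up "genomes", mutation only, no selection; all the numbers are purely illustrative): the per-site entropy of the pool starts at zero and climbs as random point mutations accumulate.

Code:
import math
import random
from collections import Counter

random.seed(3)
BASES = "AGTC"
POP, LENGTH, MU = 100, 30, 0.01   # pool size, genome length, per-base mutation rate

def mutate(genome):
    # each base independently has a chance MU of being redrawn at random
    # (the redraw may land on the same base; good enough for illustration)
    return "".join(random.choice(BASES) if random.random() < MU else b for b in genome)

def mean_site_entropy_bits(pool):
    # average per-site Shannon entropy (bits) across the gene pool
    total = 0.0
    for site in range(LENGTH):
        counts = Counter(g[site] for g in pool)
        total += -sum((c / len(pool)) * math.log2(c / len(pool)) for c in counts.values())
    return total / LENGTH

h0 = ["A" * LENGTH] * POP             # H0: a pool of identical genomes
pool = list(h0)
for generation in range(200):         # mutation only, no selection
    pool = [mutate(g) for g in pool]  # the pool a couple of hundred generations on

print(mean_site_entropy_bits(h0))              # 0.0 bits per site
print(round(mean_site_entropy_bits(pool), 3))  # well above zero, drifting toward 2 bits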

Now there is a very narrow range of possible three-billion-or-so base pair sequences that could possibly code for a human - in other words, the human gene pool is a low-entropy gene pool. And yet random mutations tend to increase the entropy of gene pools as they accumulate. Therefore, how can random mutations alone create a low-entropy gene pool, or maintain it?

The answer lies in the parable above.
 
Upvote 0

AV1611VET

SCIENCE CAN TAKE A HIKE
Site Supporter
Jun 18, 2006
3,856,187
52,654
Guam
✟5,151,331.00
Country
United States
Gender
Male
Faith
Baptist
Marital Status
Married
Politics
US-Republican
Perhaps creation took place 6000 years ago, but has embedded age.
Could you provide a link where this has been refuted a thousand times, please?
That is, the world looks old, but it is actually quite young.
How can something only look old, when it has age embedded into it? I would think, if it had age embedded into it, it would be old as well.

If I had glass embedded into my shoe, would it only look like I had glass embedded into my shoe, or could you actually pick glass out of my shoe?
 
Upvote 0