Derail from http://www.christianforums.com/t3315965-teach-me-why-i-should-believe-in-evolution.html&page=8
How does one compute information entropy? It's really simple:
H(x) = -sum for all i [ p(i) log2 p(i) ]
http://en.wikipedia.org/wiki/Information_entropy
H is entropy.
x is the message.
i ranges over all possible "events". For example, DNA has four such events: A, T, C, and G.
p(i) is the "probability" of an event, in this case the number of occurrences of a particular event over the total number of events. For example, in the message "aabc", p("a") = 0.5 and p("b") = p("c") = 0.25.
log2 is the logarithm to base two.
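To make the recipe concrete, here is a minimal Python sketch of the same formula (entropy() is just a helper name I am using for this post, not anything standard):

from collections import Counter
from math import log2

def entropy(msg):
    # H(x) = -sum over all events i of p(i) * log2 p(i),
    # where p(i) = (occurrences of i) / (total length of the message)
    n = len(msg)
    return sum(-(count / n) * log2(count / n) for count in Counter(msg).values())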
Example 1:
aaaa -> aaab
For "aaaa", there is only one possible event. Therefore
p("a") = 1, log2 p("a") = 0, and p("a") log2 p("a") = 0. A nonsense message consisting of a single letter has zero entropy!
For "aaab":
p("a") = 3/4; log2 p("a") = -0.415; p ("a") log2 p("a") = -0.3113 (hereafter abbreviated as plp("a"))
Similarly, plp("b") = -.5
plp("a") + plp("b") = -0.915; H(x) = 0.915.
Example 2:
aabb -> aaab
For "aabb":
p("a") = 1/2 -> plp("a") = -.5
Similarly, plp("b") = -.5
plp("a") + plp("b") = -1; H(x) = 1.
For "aaab":
H("aaab") = 0.915, by Ex. 1
Notice a negative entropy change of -0.085. Random mutations can cause information entropy to decrease!
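The same sketch gives the Example 2 figures and the (negative) change:

print(entropy("aabb"))                    # 1.0
print(entropy("aaab") - entropy("aabb"))  # about -0.189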
Example 3:
kissed -> kisses
For "kissed":
p("s") = 1/3
plp("s") = -0.5283
p("k") = 1/6
plp("k") = -0.4308
similarly, for "i", "d" and "e", plp = -0.4308
Sum of all plp = -2.252; H("kissed") = 2.252
For "kisses":
p("s") = 1/2 -> plp("s") = -.5
p("k") = 1/6
plp("k") = -0.4308
similarly, for "i" and "e", plp = -0.4308
Sum of all plp = -1.792; H("kisses") = 1.792
Again, a negative entropy change brought about by a mutation which could very well be random. I think "kissed -> kissee", "kissed -> kidded", and "kissed -> iissee" all yield negative entropy changes as well. The entropy change seems to be irrelevant to the meaning of the message, doesn't it?
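For what it's worth, the entropy() sketch above bears those guesses out; all three hypothetical mutants come in below H("kissed") = 2.252:

for word in ["kissed", "kisses", "kissee", "kidded", "iissee"]:
    print(word, round(entropy(word), 3))
# kissed 2.252, kisses 1.792, kissee 1.918, kidded 1.792, iissee 1.585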
Example 4:
"argument" - > "rgument" -> "gument"
I will not prove these results, which are perfectly repeatable, for the sake of brevity of working. (If Fermat could do it, so can I.)
H("argument") = log2(8) = 3
H("rgument") = log2(7) = 2.807
H("gument") = log2(6) = 2.585
Now even deletions decrease entropy! Not only that, these nonsense strings have less entropy than the sense string they came from.
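This one is quick to verify with the entropy() sketch above, since every letter in each of these strings is distinct and H collapses to log2 of the string length:

for word in ["argument", "rgument", "gument"]:
    print(word, round(entropy(word), 3))
# argument 3.0, rgument 2.807, gument 2.585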
Example 5: (DNA)
ATCAGC -> ATCATC
H("ATCAGC") = 1.918
H("ATCATC") = log2(3) = 1.585
Even when dealing with DNA, random mutations can decrease entropy.
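The DNA case checks out the same way with the entropy() sketch above:

print(round(entropy("ATCAGC"), 3))  # 1.918
print(round(entropy("ATCATC"), 3))  # 1.585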
This should put to rest any notion of a Second Law of Thermodynamics for information content.