The newest YEC gambit (i.e., fallback position) is that micro-evolution produces "no new information," and so macro-evolution cannot occur. Here is a good discussion of the matter (in which I was not a participant). Below that are some links to cases in which the YEC position has been proven false, since new information and even new features were added:
> I'm trying to get a handle on where creationists and "intelligent
> design-ists" are trying to go with this "information theory and design"
> ****.
>
> Are there any good online resources that would give me a handle on what
> information theory is about (not too technical but not too dumbed-down,
> either!)?
They're actually talking about two distinct notions of information theory: one is Shannon's information theory, and the other is Kolmogorov-Chaitin information theory.
For a nice description of Kolmogorov-Chaitin, I suggest The Limits of Mathematics by Greg Chaitin.
For Shannon's info theory, take a look at http://oak.cats.ohiou.edu/~sk260695/skinfo.html, which has a lot of links.
The creationist trick is to mix the two theories in a nonsensical way to create a false conclusion.
Shannon's theory deals with communication. He was working for Bell Labs, and his fundamental interest was in communication over noisy links. (The foundational paper that established the field was Shannon's 1948 "A Mathematical Theory of Communication.") In Shannon theory, entropy is randomness introduced by noise: communication over a noisy channel always adds entropy, but one can never add information - because the point is to correctly transmit information from a source to a destination. Noise along the channel cannot add to the information content - because by definition, the only information was provided by the transmitter, and anything that occurs during transmission can, at best, not harm the information content of the transmission.
Shannon theory is thus the root of the creationist claim that "randomness cannot add information to a system".
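To make the Shannon side concrete, here is a minimal sketch (in Python; the function name and sample strings are mine, chosen for illustration) of Shannon's entropy formula, H = -sum(p_i * log2(p_i)), estimated from symbol frequencies:

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Per-symbol entropy in bits, estimated from symbol frequencies."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A repetitive message carries little information per symbol;
# a varied one carries more.
print(shannon_entropy("aaaaaaaaab"))  # ~0.47 bits/symbol
print(shannon_entropy("abcdefghij"))  # ~3.32 bits/symbol
```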
Kolmogorov-Chaitin information theory is a totally different topic (and one that I know better than Shannon's). K-C is a version of information theory derived from computer science. It studies what it calls the information content of a string. In K-C information theory, one defines the information content of a string in terms of the randomness of the string: a string with lots of redundancy has low information content; the more random a string is, the less redundancy it has, and thus the more information each bit of it contains. K-C information theory is interesting in that it considers the size of the "decoding machine" used to interpret a string to be a part of the measure of information content of that string. K-C also defines an entropy: there, entropy measures the randomness of a string, and thus the information content of that string.
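Since Kolmogorov complexity is uncomputable in general, a common trick is to use the compressed size of a string as a rough upper bound on its K-C information content. A small illustrative sketch (Python, with zlib as the compressor; the helper name is mine):

```python
import os
import zlib

def approx_kc(data: bytes) -> int:
    """Compressed length: a crude upper bound on K-C information content."""
    return len(zlib.compress(data, 9))

redundant = b"abc" * 1000      # highly redundant: compresses to almost nothing
random_ish = os.urandom(3000)  # effectively incompressible

print(approx_kc(redundant))    # tiny: low information content
print(approx_kc(random_ish))   # near 3000 bytes: high information content
```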
K-C information theory is absolutely fascinating, and has been used fairly widely in a lot of interesting ways. Greg Chaitin has been using it as a tool to study some very deep properties of mathematics; it's been used by theoretical computer scientists to analyze the intrinsic algorithmic complexity of computable problems; and it has been used to discuss the information content of DNA (because with DNA, the information content is not determined solely by the gene sequence, but by the machinery that processes it).
The creationist trick is to say that the term "entropy" means the same thing in both Shannon and K-C information theories. If that's true, then you can take a measure of the information content of DNA, using K-C terms, and then argue that on the basis of Shannon theory, the information content of the DNA can never increase.
The flaw here is actually pretty subtle. K-C says nothing about how information content can change. It simply talks about how to measure information content, and what, in fact, information content means in a mathematical/computational sense. But Shannon is working in a very limited field where there is a specific, predetermined upper bound on information content. K-C, by definition, has no such upper bound.
Adding randomness to a system adds noise to the system. By Shannon theory, that means that the information content of the system decreases. But by K-C theory, the information content will likely increase with the addition of randomness. K-C allows noise to increase information content; Shannon doesn't. Mix the two and you get something nonsensical, but you can produce deep-looking stuff that dazzles people who aren't trained in either form of information theory.
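You can watch the two notions pull apart in a toy experiment. Below is an illustrative sketch (Python; the helper names are mine, and compressed size again stands in as a crude proxy for K-C content): flipping random bits in a redundant message is pure corruption from Shannon's point of view, yet it raises the measured K-C information content, because it destroys redundancy.

```python
import random
import zlib

def flip_random_bits(data: bytes, n_flips: int, seed: int = 0) -> bytes:
    """Corrupt data by flipping n_flips randomly chosen bits."""
    rng = random.Random(seed)
    buf = bytearray(data)
    for _ in range(n_flips):
        i = rng.randrange(len(buf))
        buf[i] ^= 1 << rng.randrange(8)
    return bytes(buf)

message = b"the quick brown fox " * 200
noisy = flip_random_bits(message, 400)

# Shannon's view: the noisy copy is a degraded transmission of `message`.
# K-C's view: the noisy string is less redundant, so it takes more bits
# to describe -- its measured information content went *up*.
print(len(zlib.compress(message, 9)))  # small: highly redundant
print(len(zlib.compress(noisy, 9)))    # larger: noise destroyed redundancy
```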
Now for the links:
http://www.talkorigins.org/faqs/information/apolipoprotein.html
Which contains the following conclusion:
AiG claims that the Apo-AIM mutation, which produces a reduction in risk from heart attack and stroke, results in a loss of specificity. However, these claims are incorrect. Instead, Apo-AIM 1) has a more complex tertiary structure, 2) is more stable, and 3) activates cholesterol efflux more effectively than Apo-AI. Furthermore, Apo-AIM has an antioxidant activity not present in Apo-AI that is sequence and substrate specific. Thus, far from a loss of specificity, Apo-AIM represents a gain of specificity and "information" by AiG's own measures. Contrary to AiG's suggestion, all current evidence indicates that the Apo-AIM mutation is beneficial for its carriers, whether heterozygous or homozygous.
And here:
http://www.talkorigins.org/indexcc/CB/CB101_2.html
Which contains this list of mutations that actually increase information and useful features:
- The ability of a bacterium to digest nylon [Negoro et al. 1994; Thwaites 1985; Thomas n.d.].
- Adaptation in yeast to a low-phosphate environment [Francis and Hansche 1972; 1973; Hansche 1975].
- The ability of E. coli to hydrolyze galactosylarabinose [Hall and Zuzel 1980; Hall 1981].
- Evolution of multicellularity in a unicellular green alga [Boraas 1983; Boraas et al. 1998].
- Modification of E. coli's fucose pathway to metabolize propanediol [Lin and Wu 1984].
- Evolution in Klebsiella bacteria of a new metabolic pathway for metabolizing 5-carbon sugars [Hartley 1984].