Again, Dawkins is making an analogy here. Personally, I think Dawkins overstates the comparison, which is probably where some of the confusion lies. I've read articles by biologists who dislike the way such analogies are used, because they muddy the waters when people take them too literally.
The equivocation is happening over words like "code" and "information". Yes, DNA can be considered a code. Yes, DNA can be considered to contain information.
^ So we agree. We also agree computers are usually made of plastic and DNA isn't; that has no bearing on the similarities between the two as information systems.
Just a brief example of how interchangeable the media are in terms of digital information-processing capability, touching on parity-bit error-checking capacity; there is much more you can research on this if you are interested:
DNA computing is an emerging research field that uses DNA molecules instead of traditional silicon-based microchips. The first researcher to demonstrate the computing capability of DNA was Leonard Adleman, who in 1994 developed a method of using DNA to solve an instance of the directed Hamiltonian path problem [4]. In 1997, Ogihara and Ray demonstrated that DNA computers can simulate Boolean AND and OR gates [5]. The advantage of DNA computers is that they are smaller and faster than traditional silicon computers, and they can easily be used for parallel processing. DNA has also been used as a tool for cryptography and cryptanalysis, using molecular techniques for its manipulation [3]. Bogard et al. describe how multiple sequence alignment can be used for error reduction in DNA computing [6].
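To make the generate-and-filter idea behind Adleman's experiment concrete, here is a minimal Python sketch of the same logic on a small hypothetical graph (not Adleman's actual instance). In the test tube the "generate" step happens massively in parallel, with candidate paths self-assembling out of DNA strands; in silicon it's just a loop:

```python
from itertools import permutations

def directed_hamiltonian_path(vertices, edges, start, end):
    """Return a path visiting every vertex exactly once along directed
    edges from `start` to `end`, or None if no such path exists."""
    edge_set = set(edges)
    middle = [v for v in vertices if v not in (start, end)]
    # "Generate": every ordering of the interior vertices is a candidate path
    # (in the lab, all candidates form at once as DNA strands).
    for ordering in permutations(middle):
        path = [start, *ordering, end]
        # "Filter": keep only paths whose consecutive vertices are real edges.
        if all((a, b) in edge_set for a, b in zip(path, path[1:])):
            return path
    return None

# Hypothetical 5-vertex instance, purely for illustration.
vertices = [0, 1, 2, 3, 4]
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 2), (1, 3)]
print(directed_hamiltonian_path(vertices, edges, 0, 4))  # [0, 1, 2, 3, 4]
```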
****
Of the 16 nucleotide bases that could pair up to make DNA, why do only A, T, G, and C make up the genomic alphabet? Researchers have long put it down to the composition of the primordial soup in which the first life arose. But Dónall Mac Dónaill of Trinity College Dublin says the choice incorporates a tactic for minimizing errors similar to that used by error-coding systems incorporated into credit card numbers, bank accounts, and airline tickets.
In the error-coding theory first developed in 1950 by Bell Telephone Laboratories researcher Richard Hamming, a so-called parity bit is added to the end of a binary number to make its digits add up to an even number. For example, when transmitting the number 100110, you would add an extra 1 onto the end (100110,1); the number 100001 would have a zero added (100001,0). Because the most likely transmission error (switching a single digit from 1 to 0 or vice versa) causes the sum of the digits to be odd, the recipient of a number whose digits sum to an odd total can assume that an error occurred.
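As a quick illustration of that scheme, here is a minimal Python sketch (my own, not from the article) that appends an even-parity bit and flags any received word whose digits sum to an odd number:

```python
def add_parity(bits: str) -> str:
    """Append a parity bit so the count of 1s in the word is even."""
    return bits + str(bits.count("1") % 2)

def looks_valid(word: str) -> bool:
    """A single flipped bit leaves an odd count of 1s, signalling an error."""
    return word.count("1") % 2 == 0

print(add_parity("100110"))    # '1001101' (digit sum was odd, so a 1 is appended)
print(add_parity("100001"))    # '1000010' (digit sum was even, so a 0 is appended)
print(looks_valid("1001101"))  # True: parity checks out
print(looks_valid("1011101"))  # False: one bit flipped in transit, error detected
```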
Mac Dónaill asserts, in a forthcoming issue of Chemical Communications, that a similar process was at work in the choice of bases in the genetic alphabet. To demonstrate this, he represented each nucleotide as a four-digit binary number. The first three digits represent the three bonding sites that each nucleotide presents to its partner. Each site is either a hydrogen donor or an acceptor; a nucleotide offering donor-acceptor-acceptor sites would be represented as 100 and would only bond with an acceptor-donor-donor nucleotide, or 011. The fourth digit is 1 if the nucleotide is a single-ringed pyrimidine type and 0 if it is a double-ringed purine type. Nucleotides readily bond with members of the other type.
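Translating that encoding into code: the patterns for C (100,1) and G (011,0) are given in the article, while the T and A entries below are the complements implied by the article's later example of idealized amino-adenine (101,0), so treat them as my assumption rather than the paper's exact table:

```python
# Bit layout: three donor(1)/acceptor(0) bonding sites, then ring type
# (1 = single-ringed pyrimidine, 0 = double-ringed purine).
BASES = {
    "C": "1001",  # donor-acceptor-acceptor, pyrimidine (given in the article)
    "G": "0110",  # acceptor-donor-donor, purine (given in the article)
    "T": "0101",  # acceptor-donor-acceptor, pyrimidine (assumed complement of A)
    "A": "1010",  # the article's "idealized amino-adenine" (101,0)
}

def complement(code: str) -> str:
    """A base bonds to the nucleotide with the opposite donor/acceptor sites
    and the opposite ring type, i.e. the bitwise complement of its code."""
    return "".join("1" if bit == "0" else "0" for bit in code)

for base, code in BASES.items():
    print(base, code, "pairs with", complement(code))
# C 1001 pairs with 0110 (G), G 0110 pairs with 1001 (C), and so on.
```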
Mac Dónaill noticed that the final digit acts as a parity bit: The four digits of A, T, G, and C all add up to an even number. Banishing all odd-parity nucleotides from the DNA alphabet reduces errors, Mac Dónaill says. For example, nucleotide C (100,1) binds naturally to nucleotide G (011,0), but it might accidentally bind to the odd-parity nucleotide X (010,0), because there is just one mismatch. Such a bond would be weak compared to C-G but not impossible. However, C is highly unlikely to bond to any other even-parity nucleotides, such as the idealized amino-adenine (101,0), because there are two mismatches.