Sorry that is a hand waving test.
It is not hand waving, as it is based on science. The determination of ID rests on the scientific principle of observation and inference. The measurement of information in DNA is widely recognised science, based on the information and probability theory used in many other fields. It determines the level of information in certain situations that can infer an intelligent design/agent, and excludes events that don't meet these requirements, such as random natural processes.
When a sequence has low probability rather than high probability, the information it carries can be said to have high complexity.
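In Shannon's framework this link between probability and information is the self-information formula, I(x) = -log2 P(x). A minimal sketch (the probabilities below are invented purely for illustration):

```python
import math

def self_information_bits(p: float) -> float:
    """Shannon self-information: the lower the probability of an
    outcome, the more bits of information it carries."""
    return -math.log2(p)

print(self_information_bits(0.5))        # a fair coin flip: 1.0 bit
print(self_information_bits(1 / 4**10))  # one specific 10-base DNA string
                                         # under a uniform model: 20.0 bits
```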
But certain information (specified complexity) can only be created by intelligence, because it is chosen and specified in a certain direction and creates function. Specified complexity is a well-known and accepted scientific measure for detecting intelligence in fields such as cryptography, forensics, archaeology and astronomy.
The fact is that in biology it is widely recognised that there is specified complexity, in the form of language-like codes and molecular machinery, and when measured by these scientific criteria it points to an intelligence and not a natural chance process.
You need something more concrete. You need to be able to be specific about the "criteria". You need to be far more specific in what you mean by design. You do not have a proper test. By definition you do not have evidence.
I gave the basic requirements for determining intelligent design, which, though simplistic, meet the requirements for determining specified complexity. The description covers the basics, and any further detail only verifies this.
Intelligently designed information not only has Shannon information (a measure of complexity) but also specified information, which produces function: it is specified towards a particular meaning or instruction that yields a functional outcome. This can only be produced by intelligence, as it requires a high level of choice to produce that function and outcome.
Though simple, the following example explains things well. The greater the complexity and specification in a sequence, the greater the probability that it was produced by intelligence and not chance, i.e.:
First-
nehya53nslbyw1`jejns7eopslanm46/J
This is complex and defies reduction to a simple rule, but it is not specified complexity, because it does not specify any meaning, instruction or function.
Second-
ABABABABABABABABABABAB
This is not complex but simple and highly ordered, and it is not specified either, as it does not produce any meaning or function.
Third-
TIME AND TIDE WAIT FOR NO MAN
This is complex (it defies reduction to a simple rule, as above) and it is specified, because it has meaning and performs a communication that tells humans something. It has function, as it relays a message about life, and it can only be produced by humans. Just like DNA.
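One informal way to see the "complex vs. ordered" axis of these three examples (though not the specification axis, which a compressor cannot detect) is compressibility: a string that defies reduction to a simple rule resists compression, while a highly ordered string compresses well. A rough sketch using zlib as a stand-in for that notion:

```python
import zlib

examples = {
    "random":     b"nehya53nslbyw1`jejns7eopslanm46/J",
    "repetitive": b"ABABABABABABABABABABAB",
    "sentence":   b"TIME AND TIDE WAIT FOR NO MAN",
}
for name, s in examples.items():
    # bytes out per byte in: well under 1.0 means the string reduces to
    # a simple rule; near or above 1.0 means it resists reduction
    ratio = len(zlib.compress(s, 9)) / len(s)
    print(f"{name:10s} {ratio:.2f}")
```

Note that compression says nothing about meaning or function: the sentence and the random string compress about equally badly, which is why specification has to be judged against an independent pattern rather than read off the string itself.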
DNA is both complex and specified, because its nucleotide sequences form a code or language which gives instructions and communicates a function: to build the proteins that build phenotypes. We know that many scientists have compared the cell and DNA to computer languages and machines designed by humans, and thus it follows that these natural machines also have the quality of being designed by an intelligence.
But an analogy of similarities is not the only reason that supports ID in DNA. It also constitutes an inference to the best explanation. Such arguments don't just compare degrees of similarity between different effects, but instead compare the explanatory power of competing causes with respect to a single kind of effect. So an inference to DNA being intelligently designed is the best explanation compared to a chance natural cause.
I am not going to go into further detail, as it would take pages to show how specified complexity in biological codes and machinery meets the requirements seen in human-made machinery and language, and thus can be inferred to be intelligently designed. The fact is it is supported by the science, and if you want more detail of that science then these papers will help.
These papers show how to determine specified complexity and how DNA meets those requirements.
On the improbability of algorithmic specified complexity
An event with low probability is unlikely to happen, but events with low probability happen all of the time. This is because many distinct low probability events can have a large combined probability. However, some low probability events can be seen to follow an independent pattern. Algorithmic specified complexity (ASC) measures the degree to which an event is improbable and follows a pattern. We show a bound on the probability of obtaining a particular value of algorithmic specified complexity. Consequently we can say that high ASC objects are improbable.
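The abstract's definition can be sketched numerically. ASC is usually written ASC(x) = -log2 P(x) - K(x), where K is Kolmogorov complexity. K is uncomputable, so the sketch below substitutes zlib-compressed size as a crude upper bound, and assumes a uniform probability model over byte strings; both substitutions are simplifying assumptions, not the paper's actual construction:

```python
import zlib

def asc_estimate(x: bytes, bits_per_symbol: float = 8.0) -> float:
    """Rough ASC sketch: improbability minus describability.
    complexity_bits approximates -log2 P(x) under a uniform byte model;
    the zlib-compressed size stands in for Kolmogorov complexity K(x)."""
    complexity_bits = bits_per_symbol * len(x)
    k_proxy_bits = 8 * len(zlib.compress(x, 9))
    return complexity_bits - k_proxy_bits

# A patterned string is improbable *and* compressible, so it scores high;
# a random-looking string is improbable but incompressible, so it scores low.
print(asc_estimate(b"ABABABABABABABABABABAB"))
print(asc_estimate(b"nehya53nslbyw1`jejns7eopslanm46/J"))
```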
Structural Complexity of DNA Sequence
In modern bioinformatics, finding an efficient way to allocate sequence fragments with biological functions is an important issue. This paper presents a structural approach based on context-free grammars extracted from original DNA or protein sequences. This approach is radically different from all those statistical methods. Furthermore, this approach is compared with a topological entropy-based method for consistency and difference of the complexity results.
In this paper, we give a method for computing complexity of DNA sequences. The traditional method focused on the statistical data or simply explored the structural complexity without value. In our method, we transform the DNA sequence to DNA tree with tree representations at first.
Then we transform the tree to context-free grammar format, so that it can be classified. Finally, we use redefined generating function and find the complexity values. We give a not only statistical but also structural complexity for DNA sequences, and this technique can be used in many important applications.
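The topological entropy comparison the abstract mentions can be illustrated with a simple distinct-n-mer version of the idea (this is a generic form, not necessarily the exact definition the paper uses):

```python
from math import log

def topological_entropy(seq: str, n: int) -> float:
    """Counts the distinct length-n subwords of a DNA string and
    normalises by the maximum possible (4**n over the DNA alphabet),
    so a maximally diverse sequence scores 1.0."""
    kmers = {seq[i:i + n] for i in range(len(seq) - n + 1)}
    return log(len(kmers), 4) / n

print(topological_entropy("ATATATATATAT", 2))  # only {'AT', 'TA'}: 0.25
print(topological_entropy("ACGTACGTACGT", 2))  # {'AC','CG','GT','TA'}: 0.5
```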
This next paper takes a different approach, but it still uses specified complexity to distinguish an intelligent cause, as with the fine-tuned universe argument, which it then applies to biological machines and codes that are said to also be finely tuned.
Just as the probability is against the universe producing by chance the just-right conditions for intelligent life, the probability that DNA and the astronomically rare functional proteins that produce life arose from natural chance causes is against all odds, and a better explanation is that they were caused by an intelligence.
Using Statistical Methods to Model the Fine-tuning of Molecular Machines and Systems
However, in this paper we argue that biological systems present fine-tuning at different levels, e.g. functional proteins, complex biochemical machines in living cells, and cellular networks. This paper describes molecular fine-tuning, how it can be used in biology, and how it challenges conventional Darwinian thinking. We also discuss the statistical methods underpinning fine-tuning and present a framework for such analysis.
We define fine-tuning as an object with two properties: it must a) be unlikely to have occurred by chance, under the relevant probability distribution (i.e. complex), and b) conform to an independent or detached specification (i.e. specific).
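That two-part definition can be written down as a predicate. The rejection threshold below is an arbitrary illustrative number, and the "independent specification" test is reduced to a boolean flag, since formalising detachment is the hard part the paper actually addresses:

```python
def looks_fine_tuned(prob_under_chance: float,
                     meets_independent_spec: bool,
                     threshold: float = 1e-40) -> bool:
    """Condition (a): unlikely under the relevant chance distribution.
    Condition (b): conforms to an independent, detached specification.
    Both must hold for the object to count as fine-tuned."""
    return prob_under_chance < threshold and meets_independent_spec

print(looks_fine_tuned(1e-60, True))   # True: rare and specified
print(looks_fine_tuned(1e-60, False))  # False: rare but unspecified
print(looks_fine_tuned(0.3, True))     # False: specified but not rare
```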
Using statistical methods to model the fine-tuning of molecular machines and systems - ScienceDirect