I used this example because it's a case where design is not immediately obvious. It's possible to find natural stones that have the appearance of stone tools but, on closer examination, turn out not to be.
In contrast, obvious examples of design (e.g. a stone bust) only require pre-existing knowledge that humans make sculptures coupled with our intrinsic pattern recognition. The latter is especially adept at recognizing faces.
No fancy calculations or close scrutiny needed.
Yes, but we can extract a lot of information out of human-made artifacts that can help us in measuring design as opposed to chance events. Even with the simple stone tool, a closer examination is needed to find those signs of design and distinguish it from a naturally occurring stone. It is determining that the work was made by an intelligent mind that can use a tool to make those specific marks and shapes, rather than their happening by random chance, which builds the case for specified info, complex or not. Determining that a natural event could not do this makes a case for specified complexity and ID.
Pattern recognition. We don't need to do any complex calculations to recognize a human face or other objects where we have preexisting knowledge of their creation.
But to determine that a random chance event could cause this to happen, we need to do the calculations (see the sketch below). These will show the odds against it happening by chance and therefore build support for specified info. The more complex it is, the stronger the case that can be made. It is a bit like the fine-tuning argument, in that a number of physical parameters are each set within a tiny range among many possible values.
As some scientists have said, this goes beyond a chance event or a coincidence where chance happened to make them all fall into place. It is because the odds are against it being chance that we can make a case for it being specified and complex. That specificity can then make a case for an intelligent agent. As Fred Hoyle said, “A common sense interpretation of the facts suggests that a super intellect has monkeyed with physics”.
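To be clear about what I mean by "doing the calculations", here is a toy sketch. The parameter names and tolerance fractions are made up purely for illustration; they are not the measured tolerances of any real physical constant, and the calculation assumes the parameters are independent and that each has a well-defined range of possible values.

```python
# Toy version of the odds calculation: if each of several independent
# parameters had to land in a narrow window of its possible range by chance,
# the joint probability is the product of the individual fractions.
# The values below are illustrative placeholders only.
tolerances = {
    "parameter_A": 1e-5,   # fraction of the allowed range that "works"
    "parameter_B": 1e-8,
    "parameter_C": 1e-3,
}

joint_probability = 1.0
for name, fraction in tolerances.items():
    joint_probability *= fraction

print(f"Joint probability by chance: {joint_probability:.1e}")  # 1.0e-16
```

The smaller that joint probability comes out, the stronger the claim that chance alone is an inadequate explanation; the argument then turns on whether the individual fractions really are as small as claimed.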
Yeah, and this paper seems a better one for understanding how specified complexity is applied when measuring meaning in images, as opposed to images that contain random info. It gives a more practical way to understand specified complexity compared to random chance.
It depends on what we're specifically talking about.
In the case of biology, like GM organisms, current methods require existing knowledge of the genetic sequences and/or biological products in question. And methods to detect GM organisms are being developed based on knowledge of how GM organisms are created and on biological characteristics common to GM organisms because of how they are created.
In the case of completely unknown designers and design we have neither of these to rely on. So how can we possibly detect design in those circumstances?
I would have thought there were some fundamental principles of design that can apply across the board. Going back to the stone tool and the bust, we can investigate the way certain shapes and meanings are incorporated in these items. The shapes and lines have been chosen to represent a meaning, and this is built upon. A bit like reverse engineering, I guess. Scientists use reverse engineering to understand how an insect flies, for example, to improve human flight or invent new forms of aerodynamics.
Meanwhile we also have a known process (evolution) by which populations of biological organisms modify themselves over time. We even utilize this process in the design of biological organisms and by-products (directed evolution).
How would one distinguish a genetic sequence deliberately programmed as such versus a sequence arising from directed evolution versus a sequence arising from unguided evolution?
Well, aren't random mutations a chance thing? If we're using sequences, don't people say these can be likened to language? So a functional sequence has language that makes sense, and a random mutation can alter this, changing it into incoherent language. In that sense it shows that existing sequences that build proteins are made up of rare and specific language, and any random mutation will have a negative effect.
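Just to illustrate the analogy (and it is only an analogy, not a model of real protein sequences), here is a small sketch of random point mutations degrading a meaningful sentence:

```python
import random

# Toy illustration of the "sequence as language" analogy: random point
# mutations quickly turn a meaningful sentence into incoherent text.
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def mutate(text: str, n_mutations: int, seed: int = 0) -> str:
    rng = random.Random(seed)
    chars = list(text)
    for _ in range(n_mutations):
        pos = rng.randrange(len(chars))
        chars[pos] = rng.choice(ALPHABET)
    return "".join(chars)

sentence = "the quick brown fox jumps over the lazy dog"
for n in (1, 5, 20):
    print(n, mutate(sentence, n, seed=n))
```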
But also, if you look at processes contained in the EES, you can see a fundamental difference from the way the standard theory relies on adaptations, which primarily subject life to chance, as opposed to more directed, organised and structured changes that produce certain outcomes that are well suited and integrated. For example, the standard theory sees convergent evolution as extraordinary coincidences, with similar features being produced through similar environments. But there are also contradictory situations where different outcomes happen that SET cannot account for. The EES sees similar features as a result of development, where certain features are the only outcomes and will be the same for most living things regardless of environment.
If you believe you have posted papers that show how to detect design in relation to biology, then please quote the relevant papers and the sections thereof that support this. The paper you cited presents a hypothetical model. But where is the empirical testing and verification of its validity?
The paper above does not specifically talk about biological info. But it and the other paper give a method for showing how random chance events cannot produce the level of info we see in what is usually regarded as designed, or in things like a living cell. Therefore, this points to the content being specified complexity. The odds of that info being produced by chance are beyond chance, and this implies something more specified. This model can be applied to anything, including biology.
You're talking about the infamous Douglas Axe paper (2004) which was cited by the OP. The paper does not support the conclusion it is being used to support (e.g. the rarity of viable protein folds among all of biology).
I believe we already discussed this earlier in this thread.
Maybe so, but the findings can be applied to a large range of proteins, and considering that these are needed as the building blocks of life, it seems to be comprehensive. Once again it is a bit like the fine-tuning argument, but applied to the rarity of functional proteins. Like the parameters of each physical constant having to be within a very small range: if changed even slightly, they will break down.
It is the same for proteins, in that there are very rare functional proteins that fall within very narrow forms. Any random mutation that comes in will undermine this and it will break down. This also supports specified complexity, in that the info in proteins is complex and the functional folds are specific. They can be distinguished from all the possible folds that occupy a massive non-functional space that could be the result of random chance.
So the odds of a specific mutational change that needs to be made to change a function and stay viable are beyond chance. As scientists have quipped about the fine-tuning of the universe, this can make a case for some involvement of intelligence.
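The rough arithmetic behind the rarity claim looks like this. The 150-residue length is just a typical domain size used for illustration, and the 1-in-10^77 figure is the number commonly quoted from Axe's 2004 paper for the particular fold he studied (how far it generalises is part of what is being disputed here):

```python
import math

# Size of sequence space for a protein domain of a given length:
# 20 possible amino acids at each position.
length = 150                    # residues; an illustrative domain size
sequence_space = 20 ** length
print(f"sequence space ~ 10^{math.log10(sequence_space):.0f}")    # ~10^195

# Chance of hitting one specific sequence in a single random draw:
print(f"one specific sequence ~ 10^-{math.log10(sequence_space):.0f}")

# Axe (2004) is commonly quoted as estimating that roughly 1 in 10^77
# sequences of this length fold into the functional domain he studied.
functional_fraction = 1e-77
print(f"quoted functional fraction: {functional_fraction:.0e}")
```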
The problem is that "specified complexity" is entirely too nebulous a term. Per Dembski's original writings, he was formulating a mathematical probability test. As stated, I'm not aware of his method ever being empirically verified with respect to biology.
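For reference, the test Dembski eventually settled on in his 2005 "Specification" paper looks roughly like the sketch below (my paraphrase of his formula, with placeholder inputs, not a verified implementation). The hard part in practice is the last argument: estimating P(T|H), the probability of the structure under the relevant chance hypothesis.

```python
import math

# Sketch of Dembski's (2005) context-independent "specified complexity":
#   chi = -log2( 10**120 * phi_S(T) * P(T|H) )
# where 10**120 is his bound on available probabilistic resources,
# phi_S(T) counts patterns at least as simple to describe as T, and
# P(T|H) is the probability of T under the chance hypothesis H.
# Design is inferred when chi > 1. The inputs below are placeholders,
# not measured biological values.

def specified_complexity(phi_s: float, p_t_given_h: float) -> float:
    return -math.log2(1e120 * phi_s * p_t_given_h)

print(specified_complexity(phi_s=1e5, p_t_given_h=1e-150))  # ~83, above 1
print(specified_complexity(phi_s=1e5, p_t_given_h=1e-10))   # far below 1
```

The whole inference hinges on being able to justify a value for P(T|H) for the biological structure in question, which is where the question of empirical verification comes in.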
There are a number of papers out there that use similar methods. It is not about verifying specified complexity itself; it is about showing that random chance cannot account for certain things, and that the odds of chance producing them go beyond chance. This then implies specified complexity. That is what those papers I posted were about. People get it wrong when they think that we need to verify specified complexity; it is more about showing how random chance cannot account for what we see.