
A simple calculation shows why evolution is impossible

stevevw
That's not how it works. As I already explained, the status of a mutation (beneficial/neutral/deleterious) is contingent, i.e. can vary with circumstances; consider sickle-cell trait.
Then why does the paper say that what appears to be a neutral mutation is actually a slightly deleterious mutation being tolerated, and that such mutations can reach a threshold where their combined effect has a cost on fitness?
 

pitabread
I can't imagine a species evolving from an ape to a man, especially in just 5,000 years. God created man in His own image. I can't see God as an ape.

It didn't happen in 5000 years. Modern humans are at least 300,000 years old.
 

FrumiousBandersnatch
Then why does the paper say that what appears to be a neutral mutation is actually a slightly deleterious mutation being tolerated, and that such mutations can reach a threshold where their combined effect has a cost on fitness?
A mutation that is neutral in some circumstances can become slightly deleterious in other circumstances. Similarly, a mutation that is slightly deleterious in some circumstances can become neutral or advantageous in other circumstances; and so on.

How the paper chooses to present some particular set of circumstances is up to the authors. They may have felt that the overall contribution from those particular mutations was slightly disadvantageous.
 

stevevw
And I didn't see a single thing in there that showed how a mutation that was harmful would NOT be selected against.
It stands to reason, as the papers show, that if slightly deleterious mutations are tolerated, they are not going to be identified by natural selection precisely because they are being tolerated. It is not until they accumulate enough to cause a fitness loss that selection acts. So in the meantime selection is blind to those slightly deleterious mutations, and therefore slightly harmful mutations are still hanging around.

Without listing another lot of papers, here is one from Lynch on how multicelled creatures accumulate harmful mutations which are not always purged by natural selection, which thus makes them more susceptible to extinction.

Multicellular species experience reduced population sizes, reduced recombination rates, and increased deleterious mutation rates, all of which diminish the efficiency of selection (13). It may be no coincidence that such species also have substantially higher extinction rates than do unicellular taxa (47, 48).
The frailty of adaptive hypotheses for the origins of organismal complexity
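As a rough illustration of the threshold being argued over here, below is a minimal Wright-Fisher sketch in Python. It is not the model from any of the cited papers; the population size, selection coefficients and trial count are arbitrary illustrative values. It only shows that a mutation whose fitness cost is much smaller than 1/N drifts almost like a neutral one, while a larger cost is purged efficiently.

Code (Python):
import numpy as np

rng = np.random.default_rng(42)

def fixation_probability(N=1000, s=-0.0001, trials=5000):
    """Estimate the chance that a single new mutant copy eventually fixes in a
    haploid Wright-Fisher population of size N; s < 0 means the mutant is
    (slightly) deleterious."""
    fixed = 0
    for _ in range(trials):
        count = 1
        while 0 < count < N:
            p = count / N
            # selection re-weights the mutant slightly, then drift acts through
            # binomial sampling of the next generation
            p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
            count = rng.binomial(N, p_sel)
        fixed += (count == N)
    return fixed / trials

# |s| well below 1/N: drift dominates and the estimate sits near the neutral
# expectation of 1/N = 0.001, i.e. selection barely "sees" the cost.
print(fixation_probability(N=1000, s=-0.0001))
# |s| well above 1/N: selection purges the mutant almost every time.
print(fixation_probability(N=1000, s=-0.01))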






 

stevevw
A mutation that is neutral in some circumstances can become slightly deleterious in other circumstances. Similarly, a mutation that is slightly deleterious in some circumstances can become neutral or advantageous in other circumstances; and so on.

How the paper chooses to present some particular set of circumstances is up to the authors. They may have felt that the overall contribution from those particular mutations was slightly disadvantageous.
You make it sound as though beneficial mutations have just as much chance, when they are in fact very, very rare. The majority of mutations are said to be neutral, with a large number of deleterious ones as well. Though as mentioned, regardless of the situation, from what I have read many of those apparently neutral ones are really well-tolerated, slightly harmful mutations that merely appear neutral.
 

stevevw
It didn't happen in 5000 years. Modern humans are at least 300,000 years old.
The time period for evolving an ape into a human, which is said to be around 6 million years, has been shown to be massively inadequate for blind, random evolution. Research shows that even getting the equivalent of the simplest word, a specific sequence of two or more nucleotides, would take tens of millions of years, longer than the 6 million years over which humans evolved, let alone the larger number of longer sequences needed. For a sequence of around 10 nucleotides, the equivalent of a simple sentence and requiring multiple connected mutations, it would take around 100 billion years, much longer than the age of the universe.
As the paper says, it would require
millions of specific beneficial mutations, and a large number of specific beneficial sets of mutations, selectively fixed in this very short period of time.
The waiting time problem in a model hominin population
 

pitabread
The time period for evolving an ape into a human, which is said to be around 6 million years, has been shown to be massively inadequate for blind, random evolution. Research shows that even getting the equivalent of the simplest word, a specific sequence of two or more nucleotides, would take tens of millions of years, longer than the 6 million years over which humans evolved, let alone the larger number of longer sequences needed. For a sequence of around 10 nucleotides, the equivalent of a simple sentence and requiring multiple connected mutations, it would take around 100 billion years, much longer than the age of the universe.
As the paper says, it would require
millions of specific beneficial mutations, and a large number of specific beneficial sets of mutations, selectively fixed in this very short period of time.
The waiting time problem in a model hominin population

Have you read that paper? Do you think the model it presents reflects an accurate picture of how biological evolution would function in evolving populations over millions of years?

Just looking at the methods section, I can already see several glaring issues.

But I'll let you have first crack at explaining the methodology and why you think it's an accurate representation of evolving populations over time.
 

FrumiousBandersnatch
The time period for evolving an ape into a human, which is said to be around 6 million years, has been shown to be massively inadequate for blind, random evolution.
Evolution isn't random.

Research shows that even getting the equivalent of the simplest word, a specific sequence of two or more nucleotides, would take tens of millions of years, longer than the 6 million years over which humans evolved, let alone the larger number of longer sequences needed. For a sequence of around 10 nucleotides, the equivalent of a simple sentence and requiring multiple connected mutations, it would take around 100 billion years, much longer than the age of the universe.
As the paper says, it would require
millions of specific beneficial mutations, and a large number of specific beneficial sets of mutations, selectively fixed in this very short period of time.
The waiting time problem in a model hominin population
Specifying specific sequences in advance is always going to be problematic because it's not modelling evolution. This is retrospective probability calculation (although kudos to them for admitting to five different ways their calculation could be out).
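To make that concrete, here is a toy sketch, not a reconstruction of the paper's model: the target length and the assumption that roughly 1% of sequences would be 'acceptable' are made up purely for illustration. It shows how the expected wait for one prespecified sequence compares with the wait for any member of a larger acceptable set.

Code (Python):
import random

BASES = "ACGT"
rng = random.Random(1)

def draws_until_hit(targets, k):
    """Draw random k-mers until one lands in the target set; return the count."""
    targets = set(targets)
    n = 0
    while True:
        n += 1
        if "".join(rng.choice(BASES) for _ in range(k)) in targets:
            return n

k = 8
prespecified = {"ACGTACGT"}  # a single sequence fixed in advance
# Hypothetical alternative: suppose ~1% of all 4**8 = 65,536 k-mers would have
# been functionally acceptable (the 1% figure is an assumption, not data).
acceptable = {"".join(rng.choices(BASES, k=k)) for _ in range(655)}

print(draws_until_hit(prespecified, k))  # typically on the order of 65,000 draws
print(draws_until_hit(acceptable, k))    # typically on the order of 100 draws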

I'm inclined to be a little suspicious of papers suggesting problems for evolutionary genetics when the authors are a horticulturalist advocate of ID & creationism (who denies common descent), his colleague at 'Feed My Sheep' Inc., a creationist fluid physicist, and a global flood creationist geophysicist...

Here's a critique of Sanford's book, 'Genetic Entropy', which is background for this 'modelling exercise' paper (the main text follows a less dismissive review of Behe's 'Edge of Evolution', in which Behe makes the same error I pointed out for the paper you quoted):

"Sanford’s Genetic Entropy, on the other hand, is simply wrong from beginning to end. It misrepresents everything it touches: beneficial and deleterious mutations, gene duplication, natural selection, and synergistic epistasis. In all these areas, Sanford avoids engaging the large body of work which directly refutes his viewpoint, and instead cherry-picks a few references that seem to point his way, usually misinterpreting them in the process."

The critique is worth reading, as it addresses a few points you have made here, presumably misled by the likes of Sanford.

'Nuff said, I think.
 

pitabread
Then how is design detected in, say, human-made items and information?

The specific methodology depends on what we are talking about. From what I've seen, though, it comes down to a combination of understanding how things are designed and pattern recognition.

Take paleolithic tools as an example (e.g. knapped flints). Distinguishing between a stone that has been deliberately shaped versus one that is occurring naturally involves understanding the respective processes of how such stones are formed.

In the case of a deliberately shaped stone the method used is typically percussive striking. Deliberately striking a stone to break off pieces results in specific patterns showing the point of fracture. This in turn indicates that it was percussive striking that created such fractures.

Conversely it's possible for stones to naturally fracture on their own. As the process is typically different (for example, thermal expansion), the pattern of the fractures themselves will be different.

By understanding the respective processes and the resulting outputs one can distinguish between a deliberately shaped stone tool versus one that occurs naturally.

Actually, it was Paul Davies who came up with the concept when talking about abiogenesis; Dembski just extended it to other areas.

I was talking about it within the context of ID. AFAIK, it was Dembski who first used that terminology with respect to applying it to design detection.

Other scientists have expanded and developed his ideas.

<snip>

But there are also other articles that don't mention specified complexity but have research outcomes that produce similar findings.

<snip>

Again, I was talking about the use of complexity, specificity, etc, within the context of an empirically tested and verified methodology for design detection.

It's certainly possible to measure things like complexity, information and so on including within the context of biology. But it's a whole different thing to develop a method whereby such measurements are used to detect purposeful design of the same.

IMHO, what ID proponents should be striving for is design detection of GM organisms. If they can't even detect examples where biological design is already known, why would anyone think they could do the same with the unknown?

Yet it stands to reason that real examples of ID would have to contain some specified info as opposed to random info. Whenever I have debated ID with anyone, "information" seems to be a key word used by both sides; it is usually based around what sort of information. Complexity seems to be a logical hallmark of ID: the more complex something is, the harder it is to create randomly.

I disagree that complexity is a logical hallmark of ID. I'll give you two examples as to why.

The first involves compressibility of information as a measure of complexity. In short, the more complex a sequence is the less it can be compressed.

I did an experiment with this where I took two 4-second audio recordings. The first was a deliberate, designed sequence of musical notes. The second was random noise taken from a NASA recording.

Utilizing variable bit-rate MP3 compression with identical target quality settings, the sequence of musical notes took up approximately half the digital space of the recording of random noise. This suggests the random noise is of greater complexity, since it is less compressible than the sequence of musical notes.
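A quick, repeatable version of the same idea can be done with lossless compression rather than MP3. This is only a sketch with zlib, not the audio experiment described above:

Code (Python):
import os
import zlib

structured = b"do-re-mi-fa-sol-la-ti-" * 186   # a repeated musical motif, ~4 KB
noise = os.urandom(len(structured))             # random bytes of the same length

print(len(structured), len(zlib.compress(structured, 9)))  # shrinks to a tiny fraction
print(len(noise), len(zlib.compress(noise, 9)))            # barely compresses at all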

The second example involves digital security and passwords.

Humans typically select passwords based on common patterns, for example an English-language word or phrase followed by a numerical sequence (e.g. "Password123"). Such patterns are quite common, often chosen to meet the password criteria of whatever systems people are using.

However, this also makes these passwords easy to crack, and password-cracking algorithms take advantage of these predictable human patterns. This is also where password length becomes less relevant with respect to password strength: by narrowing the search to an existing series of dictionary words and/or combinations of words, it greatly reduces the search space from the total possible character combinations.

Conversely a random sequence of characters will be much harder to crack since it doesn't conform to the aforementioned patterns. Entropy itself is now used as an indicator for relative password strength.
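Those search-space sizes are easy to put rough numbers on. A back-of-the-envelope sketch; the 50,000-word dictionary and the 94-character alphabet are assumed figures for illustration:

Code (Python):
import math

def pattern_entropy_bits(dictionary_size, digits):
    """Effective entropy if the attacker knows the password is 'common word + n digits'."""
    return math.log2(dictionary_size) + digits * math.log2(10)

def random_entropy_bits(alphabet_size, length):
    """Entropy of a string chosen uniformly at random over the given alphabet."""
    return length * math.log2(alphabet_size)

# Something like "Password123": one of ~50,000 common words plus 3 digits.
print(round(pattern_entropy_bits(50_000, 3), 1))   # ~25.6 bits
# An 11-character string drawn uniformly from 94 printable ASCII characters.
print(round(random_entropy_bits(94, 11), 1))       # ~72.1 bits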

So you're saying radio signals that require intelligent design may be the result of natural causes. Yet if scientists discovered those radio signals in outer space, the headlines would be hailing how we have found intelligent life in the universe. Seems they want to hedge their bets.

You appear to be creating a strawman.

Then why would complexity be an important factor if it was established as being from an intelligent source?

Complexity would be a factor if we were specifically trying to determine if a signal contained some sort of message. Complexity by itself would not be an indicator of an intelligent source.

Yet scientists use these types of analogies all the time when explaining things like the cell and DNA, with terms such as sequences, language, codes, systems and patterns.

Sure, people use analogies to explain concepts. There is a difference between using an analogy to explain a concept versus using one to make an argument.
 

Kylie
It stands to reason, as the papers show, that if slightly deleterious mutations are tolerated, they are not going to be identified by natural selection precisely because they are being tolerated. It is not until they accumulate enough to cause a fitness loss that selection acts. So in the meantime selection is blind to those slightly deleterious mutations, and therefore slightly harmful mutations are still hanging around.

Without listing another lot of papers, here is one from Lynch on how multicelled creatures accumulate harmful mutations which are not always purged by natural selection, which thus makes them more susceptible to extinction.

Multicellular species experience reduced population sizes, reduced recombination rates, and increased deleterious mutation rates, all of which diminish the efficiency of selection (13). It may be no coincidence that such species also have substantially higher extinction rates than do unicellular taxa (47, 48).
The frailty of adaptive hypotheses for the origins of organismal complexity

You don't seem to understand.

If there is any deleterious effect, then natural selection WILL select against it. It may do so slowly, but the selection against will be there right from the start.

If natural selection is blind to it, then there can't possibly be any deleterious effect.

Natural selection isn't sitting there going, "Yeah, I can see it's deleterious, but I'm not gonna do anything just yet. I might wait for a bit, see if the mutation gets any worse. THEN I might do something about it."

And saying that some creatures accumulate harmful mutations which are not purged by natural selection, and these make them more susceptible to extinction makes no sense. Being more susceptible to extinction is exactly how natural selection purges them!
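That 'slowly, but from the start' point can be seen in the simplest possible selection model. The sketch below is deterministic (infinite haploid population, no drift, no recurrent mutation) and the numbers are made up for illustration:

Code (Python):
def trajectory(p0=0.05, s=0.001, generations=10_000, report_every=2_000):
    """Frequency over time of a haploid allele with relative fitness 1 - s."""
    p, out = p0, []
    for g in range(generations + 1):
        if g % report_every == 0:
            out.append((g, round(p, 5)))
        p = p * (1 - s) / (1 - s * p)   # standard one-locus selection recursion
    return out

# Even a 0.1% fitness cost nudges the frequency down every single generation;
# in a finite population drift can mask this only when |s| is comparable to
# 1/N, which is the nearly-neutral regime discussed elsewhere in the thread.
print(trajectory())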
 

stevevw
You don't seem to understand.

If there is any deleterious effect, then natural selection WILL select against it. It may do so slowly, but the selection against will be there right from the start.

If natural selection is blind to it, then there can't possibly be any deleterious effect.

Natural selection isn't sitting there going, "Yeah, I can see it's deleterious, but I'm not gonna do anything just yet. I might wait for a bit, see if the mutation gets any worse. THEN I might do something about it."

And saying that some creatures accumulate harmful mutations which are not purged by natural selection, and these make them more susceptible to extinction makes no sense. Being more susceptible to extinction is exactly how natural selection purges them!
Then why do humans and other creatures carry thousands of genetic diseases, and why are these accumulating all the time? Also, sometimes a deleterious mutation may give a benefit somewhere else, so natural selection will not purge it out; sickle cell is one example.

Michael Lynch is an expert on population genetics and I would trust his knowledge on this. As he states, multicellular species experience reduced population sizes, reduced recombination rates, and increased deleterious mutation rates, all of which diminish the efficiency of selection. So natural selection's ability to purge deleterious mutations is reduced, and it will not be able to remove all of the harm. Small populations are known to be more susceptible to extinction.

But I am not arguing that natural selection is not a factor. I am saying that it is not the all-powerful force that some make it out to be. It is one influence among several, and it can be bypassed or play a minimal role because other factors are more prominent.
 

Kylie
Then why do humans and other creatures carry thousands of genetic diseases, and why are these accumulating all the time? Also, sometimes a deleterious mutation may give a benefit somewhere else, so natural selection will not purge it out; sickle cell is one example.

Michael Lynch is an expert on population genetics and I would trust his knowledge on this. As he states, multicellular species experience reduced population sizes, reduced recombination rates, and increased deleterious mutation rates, all of which diminish the efficiency of selection. So natural selection's ability to purge deleterious mutations is reduced, and it will not be able to remove all of the harm. Small populations are known to be more susceptible to extinction.

But I am not arguing that natural selection is not a factor. I am saying that it is not the all-powerful force that some make it out to be. It is one influence among several, and it can be bypassed or play a minimal role because other factors are more prominent.

You think that natural selection MUST produce something that is 100% effective? No.

It is a constant cost to benefit thing. Natural selection may be able to increase resistance to a disease, but if it is going to require a great deal of resources to accomplish only a small increase, that extra cost may counteract any increase for a net neutral effect, or even a net disadvantage.

And it was very interesting that you cited the example of sickle cell anaemia. The gene for SCA is recessive, which means that if you get the SCA gene from one parent but not the other, the non-SCA version of the gene is dominant and you have no problems. It's only when you get the SCA gene from both parents that you are in trouble.

BUT...

Having a SCA version of the gene from one parent but not both conveys an advantage, giving protection against malaria. People who have a single copy of the SCA allele are less likely to die from Malaria. Sickle Cell Trait and the Risk of Plasmodium falciparum Malaria and Other Childhood Diseases

So it would actually be a DISADVANTAGE for natural selection to eradicate it entirely.
 

stevevw
You think that natural selection MUST produce something that is 100% effective? No.

It is a constant cost to benefit thing. Natural selection may be able to increase resistance to a disease, but if it is going to require a great deal of resources to accomplish only a small increase, that extra cost may counteract any increase for a net neutral effect, or even a net disadvantage.

And it was very interesting that you cited the example of sickle cell anaemia. The gene for SCA is recessive, which means that if you get the SCA gene from one parent but not the other, the non-SCA version of the gene is dominant and you have no problems. It's only when you get the SCA gene from both parents that you are in trouble.

BUT...

Having a SCA version of the gene from one parent but not both conveys an advantage, giving protection against malaria. People who have a single copy of the SCA allele are less likely to die from Malaria. Sickle Cell Trait and the Risk of Plasmodium falciparum Malaria and Other Childhood Diseases

So it would actually be a DISADVANTAGE for natural selection to eradicate it entirely.
At the same time, in areas where malaria is more prominent, carriers of the sickle-cell mutation increase, which raises the risk of two carriers mating, which in turn increases the number in the population who have the disease. If this continues to increase, we have a situation with a benefit on one hand but an increasing disease on the other. Overall the change is not really an improvement on the existing genetics but a dysfunction: red blood cells should have smooth disc shapes, but the red blood cells of those carrying the sickle-cell mutation are abnormally shaped. The more the malaria resistance spreads, the more of those abnormal blood cells get into the population. So it comes at a cost.

My point was that, one way or another, whether because natural selection cannot get rid of a disease that also has a benefit, because the harmful effect is too small, or because there is not enough time to eradicate the mutations, there are many diseases entering the populations of humans and other animals that natural selection is not able to purge, and this is slowly causing an accumulation of harmful mutations which is having a negative effect.
 

stevevw
The specific methodology depends on what we are talking about. From what I've seen, though, it comes down to a combination of understanding how things are designed and pattern recognition.

Take paleolithic tools as an example (e.g. knapped flints). Distinguishing between a stone that has been deliberately shaped versus one that is occurring naturally involves understanding the respective processes of how such stones are formed.

In the case of a deliberately shaped stone the method used is typically percussive striking. Deliberately striking a stone to break off pieces results in specific patterns showing the point of fracture. This in turn indicates that it was percussive striking that created such fractures.

Conversely it's possible for stones to naturally fracture on their own. As the process is typically different (for example, thermal expansion), the pattern of the fractures themselves will be different.

By understanding the respective processes and the resulting outputs one can distinguish between a deliberately shaped stone tool versus one that occurs naturally.
OK, this seems like only one example and does not really represent other ways humans design. I would say the simple design involved in making a stone flint is different from, say, designing a stone bust of a famous person. There are many more obvious ways of detecting that design. But you hinted at some common distinctions, such as the stone being deliberately shaped with deliberate strikes as opposed to a natural break. This is no different from specified info: the deliberate actions are being directed towards a certain outcome by an intelligent agent.

Complexity alone is harder to attribute to ID, and I agree that complexity alone will not mean ID. You can have complex random sequences with Shannon info. It is the combination of specification and complexity that indicates ID. I mean, how would someone describe a more complex and detailed human design such as a stone bust or a high-tech device? One of the papers I linked talks about detecting specified complexity. This is done by determining whether an image contains randomness or patterns and meaning. That meaning has certain characteristics that intelligent agents display, and this can be measured in the images.
Measuring meaningful information in images: algorithmic specified complexity
https://digital-library.theiet.org/content/journals/10.1049/iet-cvi.2014.0141
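For what it's worth, that measure can be caricatured as 'improbability under a chance hypothesis minus how briefly the pattern can be described'. The sketch below is only a crude stand-in for that idea, using compressed length in place of the paper's description-length term and a uniform chance hypothesis; it is not the paper's algorithm:

Code (Python):
import os
import zlib

def specified_complexity_estimate_bits(data: bytes) -> int:
    """Crude score: bits of improbability under a uniform chance hypothesis
    minus a compression-based description length."""
    chance_bits = 8 * len(data)                       # -log2 P(x) for uniform random bytes
    description_bits = 8 * len(zlib.compress(data, 9))
    return chance_bits - description_bits

patterned = b"0123456789" * 100    # strongly patterned input
unpatterned = os.urandom(1000)     # random input of the same length

print(specified_complexity_estimate_bits(patterned))    # large and positive
print(specified_complexity_estimate_bits(unpatterned))  # near zero or negative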
I was talking about it within the context of ID. AFAIK, it was Dembski who first used that terminology with respect to applying it to design detection.
Actually, Paul Davies was using this idea before Dembski, in detecting design in the origins of life, so he was using it in the context of ID.

Again, I was talking about the use of complexity, specificity, etc, within the context of an empirically tested and verified methodology for design detection.

It's certainly possible to measure things like complexity, information and so on including within the context of biology. But it's a whole different thing to develop a method whereby such measurements are used to detect purposeful design of the same.
Once again, surely we have developed methods to detect human design. What would be the difference in detecting ID? The papers I posted earlier show methods of detecting design which is complex and specified, so why can't this be a method?
A Unified Model of Complex Specified Information
https://www.researchgate.net/publication/329700536_A_Unified_Model_of_Complex_Specified_Information

IMHO, what ID proponents should be striving for is design detection of GM organisms. If they can't even detect examples where biological design is already known, why would anyone think they could do the same with the unknown?
I thought this was already the case. Or at least I think there are papers which show that in biology, such as in protein folding, the info is so rare, complex and directed towards specific folds, as opposed to any possible folds, that it meets the requirements for at least not being the result of random chance.

I disagree that complexity is a logical hallmark of ID. I'll give you two examples as to why.

The first involves compressibility of information as a measure of complexity. In short, the more complex a sequence is the less it can be compressed.

I did an experiment with this where I took two 4-second audio recordings. The first was a deliberate, designed sequence of musical notes. The second was random noise taken from a NASA recording.

Utilizing variable bit-rate MP3 compression with identical target quality settings, the sequence of musical notes took up approximately half the digital space of the recording of random noise. This suggests the random noise is of greater complexity, since it is less compressible than the sequence of musical notes.

The second example involves digital security and passwords.

Humans typically select passwords based on common patterns, for example an English-language word or phrase followed by a numerical sequence (e.g. "Password123"). Such patterns are quite common, often chosen to meet the password criteria of whatever systems people are using.

However, this also makes these passwords easy to crack, and password-cracking algorithms take advantage of these predictable human patterns. This is also where password length becomes less relevant with respect to password strength: by narrowing the search to an existing series of dictionary words and/or combinations of words, it greatly reduces the search space from the total possible character combinations.

Conversely a random sequence of characters will be much harder to crack since it doesn't conform to the aforementioned patterns. Entropy itself is now used as an indicator for relative password strength.
Yes, I agree. As mentioned above, it is not just complexity but also specified complexity. You can have complex Shannon info that is jumbled and random but is not specified, and you can have simple or complex info that is specified. I would have thought that info which is specified, especially towards the type of info that intelligent minds create, would be an indication of ID, as opposed to the many possibilities that could be random and show no choice or direction from an intelligent mind. Think of the brush strokes of a painting made by a human, directed towards creating a landscape, as opposed to someone spilling paint on the ground or throwing paint at a wall blindfolded.

Complexity would be a factor if we were specifically trying to determine if a signal contained some sort of message. Complexity by itself would not be an indicator of an intelligent source.
So is this not specified complexity, in that, as you say, we would be specifically trying to determine whether a signal contains some sort of message, presumably a message from an intelligent source as opposed to random signals?

Sure, people use analogies to explain concepts. There is a difference between using an analogy to explain a concept versus using one to make an argument.
I thought the argument was already inherent in choosing those particular words. Why use those words if a person meant something else?
 

Bungle_Bear
OK, this seems like only one example and does not really represent other ways humans design. I would say the simple design involved in making a stone flint is different from, say, designing a stone bust of a famous person. There are many more obvious ways of detecting that design. But you hinted at some common distinctions, such as the stone being deliberately shaped with deliberate strikes as opposed to a natural break. This is no different from specified info: the deliberate actions are being directed towards a certain outcome by an intelligent agent.

Complexity alone is harder to attribute to ID, and I agree that complexity alone will not mean ID. You can have complex random sequences with Shannon info. It is the combination of specification and complexity that indicates ID. I mean, how would someone describe a more complex and detailed human design such as a stone bust or a high-tech device? One of the papers I linked talks about detecting specified complexity. This is done by determining whether an image contains randomness or patterns and meaning. That meaning has certain characteristics that intelligent agents display, and this can be measured in the images.
Measuring meaningful information in images: algorithmic specified complexity
https://digital-library.theiet.org/content/journals/10.1049/iet-cvi.2014.0141
Actually, Paul Davies was using this idea before Dembski, in detecting design in the origins of life, so he was using it in the context of ID.

Once again, surely we have developed methods to detect human design. What would be the difference in detecting ID? The papers I posted earlier show methods of detecting design which is complex and specified, so why can't this be a method?
A Unified Model of Complex Specified Information
https://www.researchgate.net/publication/329700536_A_Unified_Model_of_Complex_Specified_Information

I thought this was already the case. Or at least I think there are papers which show that in biology, such as in protein folding, the info is so rare, complex and directed towards specific folds, as opposed to any possible folds, that it meets the requirements for at least not being the result of random chance.

Yes, I agree. As mentioned above, it is not just complexity but also specified complexity. You can have complex Shannon info that is jumbled and random but is not specified, and you can have simple or complex info that is specified. I would have thought that info which is specified, especially towards the type of info that intelligent minds create, would be an indication of ID, as opposed to the many possibilities that could be random and show no choice or direction from an intelligent mind. Think of the brush strokes of a painting made by a human, directed towards creating a landscape, as opposed to someone spilling paint on the ground or throwing paint at a wall blindfolded.

So is this not specified complexity, in that, as you say, we would be specifically trying to determine whether a signal contains some sort of message, presumably a message from an intelligent source as opposed to random signals?

I thought the argument was already inherent in choosing those particular words. Why use those words if a person meant something else?
Your whole post appears to rely on the assumption that there is such a thing as specified complexity. Dembski and the few others who claim it is measurable have failed to demonstrate that it is a viable concept. All attempts so far have been shown to be inadequate.
 

FrumiousBandersnatch
At the same time, in areas where malaria is more prominent, carriers of the sickle-cell mutation increase, which raises the risk of two carriers mating, which in turn increases the number in the population who have the disease. If this continues to increase, we have a situation with a benefit on one hand but an increasing disease on the other. Overall the change is not really an improvement on the existing genetics but a dysfunction: red blood cells should have smooth disc shapes, but the red blood cells of those carrying the sickle-cell mutation are abnormally shaped. The more the malaria resistance spreads, the more of those abnormal blood cells get into the population. So it comes at a cost.

My point was that, one way or another, whether because natural selection cannot get rid of a disease that also has a benefit, because the harmful effect is too small, or because there is not enough time to eradicate the mutations, there are many diseases entering the populations of humans and other animals that natural selection is not able to purge, and this is slowly causing an accumulation of harmful mutations which is having a negative effect.
Nobody is claiming that evolution through natural selection is utilitarian - it isn't. Nor does it ensure species survival. In terms of efficiency, you can view it either as being ruthlessly efficient in removing the unfit, or as being extremely wasteful of life.

Populations with high levels of SCA mutations will have greater numbers of sickly offspring with the full-blown disease, but these are at a selective disadvantage to the rest of the population, so will be less likely to pass on their SCA genes. SCA genes will persist in the population in the single-allele form, seriously affecting some who inherit both, but providing some advantage to many more. The net result is clearly favourable to population survival, otherwise we would not see populations surviving in areas with high malaria risk. Malaria is a highly effective selective agent, killing millions; this is what gives single-allele SCA its selective advantage.

Incidentally, this dominant-recessive gene behaviour is one reason why attempting to rid a population of genetic 'weakness' through eugenics by eliminating or sterilising the sufferers is a losing strategy.
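The textbook way to see why the single-allele form persists is the overdominance equilibrium. A small sketch; the fitness costs used (10% for non-carriers where malaria is endemic, 80% for sickle-cell homozygotes) are made-up illustrative numbers, not measured values:

Code (Python):
def next_gen(q, s_wt, s_sc):
    """One generation of selection with random mating: heterozygote fitness 1,
    non-carrier homozygote 1 - s_wt (malaria risk), sickle homozygote 1 - s_sc."""
    p = 1 - q
    w_bar = p * p * (1 - s_wt) + 2 * p * q + q * q * (1 - s_sc)
    return (p * q + q * q * (1 - s_sc)) / w_bar

q = 0.01                      # start the sickle allele at 1%
for _ in range(300):
    q = next_gen(q, 0.10, 0.80)

print(round(q, 3))                      # settles near the predicted equilibrium
print(round(0.10 / (0.10 + 0.80), 3))   # analytic equilibrium: s_wt / (s_wt + s_sc) ≈ 0.111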
 

FrumiousBandersnatch
... I thought this was already the case. Or at least I think there are papers which show that in biology, such as in protein folding, the info is so rare, complex and directed towards specific folds, as opposed to any possible folds, that it meets the requirements for at least not being the result of random chance.
The genomes of all organisms consist of genes that code for proteins that all fold in specific ways. A GM organism may have been given a whole gene from another organism or may have had an existing gene modified or duplicated, etc.

Given the genome of an unidentified organism, how do you propose to determine that it is GM?
 

pitabread
OK, this seems like only one example and does not really represent other ways humans design. I would say the simple design involved in making a stone flint is different from, say, designing a stone bust of a famous person.

The reason I used this example is because it's a case where design is not immediately obvious. It's possible to find natural stones that have the appearance of stone tools, but on closer examination are not.

In contrast, obvious examples of design (e.g. a stone bust) only require pre-existing knowledge that humans make sculptures coupled with our intrinsic pattern recognition. The latter is especially adept at recognizing faces.

No fancy calculations or close scrutiny needed.

Complexity alone is harder to attribute to ID, and I agree that complexity alone will not mean ID. You can have complex random sequences with Shannon info. It is the combination of specification and complexity that indicates ID. I mean, how would someone describe a more complex and detailed human design such as a stone bust or a high-tech device?

Pattern recognition. We don't need to do any complex calculations to recognize a human face or other objects where we have preexisting knowledge of their creation.

One of the papers I linked talks about detecting specified complexity. This is done by determining whether an image contains randomness or patterns and meaning. That meaning has certain characteristics that intelligent agents display, and this can be measured in the images.
Measuring meaningful information in images: algorithmic specified complexity
https://digital-library.theiet.org/content/journals/10.1049/iet-cvi.2014.0141

Did you read it?

Actually, Paul Davies was using this idea before Dembski, in detecting design in the origins of life, so he was using it in the context of ID.

Fair enough. I won't belabor the point.

Once again, surely we have developed methods to detect human design. What would be the difference in detecting ID?

It depends on what we're specifically talking about.

In the case of biology like GM organisms, current methods require existing knowledge of the genetic sequences and/or biological products in question. And methods to detect GM organisms are being developed based on the knowledge of how GM organisms are created and specific biological characteristics common to GM organisms based on their creation.

In the case of completely unknown designers and design we have neither of these to rely on. So how can we possibly detect design in those circumstances?

Meanwhile we also have a known process (evolution) by which populations of biological organisms modify themselves over time. We even utilize this process in the design of biological organisms and by-products (directed evolution).

How would one distinguish a genetic sequence deliberately programmed as such versus a sequence arising from directed evolution versus a sequence arising from unguided evolution?

The papers I posted earlier show methods of detecting design which is complex and specified, so why can't this be a method?
A Unified Model of Complex Specified Information
https://www.researchgate.net/publication/329700536_A_Unified_Model_of_Complex_Specified_Information

If you believe you have posted papers that show how to detect design in relation to biology, then please quote the relevant papers and the sections thereof that support this. The paper you cited presents a hypothetical model. But where is the empirical testing and verification of its validity?

According to that ResearchGate link that paper hasn't even been cited by anyone and has barely any reads.

I thought this was already the case. Or at least I think there are papers which show that in biology, such as in protein folding, the info is so rare, complex and directed towards specific folds, as opposed to any possible folds, that it meets the requirements for at least not being the result of random chance.

You're talking about the infamous Douglas Axe paper (2004) which was cited by the OP. The paper does not support the conclusion it is being used to support (e.g. the rarity of viable protein folds among all of biology).

I believe we already discussed this earlier in this thread.

Yes, I agree. As mentioned above, it is not just complexity but also specified complexity.

The problem is that "specified complexity" is entirely too nebulous a term. Per Dembski's original writings, he was formulating a mathematical probability test. As stated, I'm not aware of his method ever being empirically verified with respect to biology.

In contrast, it's been heavily criticized. Wikipedia has a list of some of the criticisms: Specified complexity - Wikipedia

So is this not specified complexity, in that, as you say, we would be specifically trying to determine whether a signal contains some sort of message, presumably a message from an intelligent source as opposed to random signals?

Again, "specified complexity" is entirely nebulous in this context. This is the problem of colloquial usage of this terminology.

I thought the argument was already inherent in choosing those particular words. Why use those words if a person meant something else?

Explaining a concept is not the same thing as making an argument. An analogy is not an argument.
 