VirOptimus
A nihilist who cares.
I can't imagine a species evolving from an ape to a man, especially in just 5,000 years. God created man in His own image. I can't see God as an ape.
Humans are apes.
Then why does the paper say that what can be seen as a neutral mutation is actually a slightly deleterious mutation being tolerated, and that such mutations can reach a threshold where their combined effect has a cost on fitness?

That's not how it works. As I already explained, the status of a mutation (beneficial/neutral/deleterious) is contingent, i.e. it can vary with circumstances; consider sickle-cell trait.
A mutation that is neutral in some circumstances can become slightly deleterious in other circumstances. Similarly, a mutation that is slightly deleterious in some circumstances can become neutral or advantageous in other circumstances; and so on.

Then why does the paper say that what can be seen as a neutral mutation is actually a slightly deleterious mutation being tolerated, and that such mutations can reach a threshold where their combined effect has a cost on fitness?
It stands to reason, as the papers show, that if slightly deleterious mutations are tolerated, then they are not going to be identified by natural selection, precisely because they are being tolerated. It is not until there is an accumulation that causes a fitness loss that selection will act. So in the meantime selection is blind to those slightly deleterious mutations until they accumulate, and therefore there are slightly harmful mutations still hanging around.

And I didn't see a single thing in there that showed how a mutation that was harmful would NOT be selected against.
You make it sound like beneficial mutations have just as much chance when they are very, very rare. The majority are said to be neutral, with a large number of deleterious ones as well. Though, as mentioned, regardless of the situation, from what I have read many of those neutral ones can be well-tolerated slightly harmful ones that merely appear neutral.

A mutation that is neutral in some circumstances can become slightly deleterious in other circumstances. Similarly, a mutation that is slightly deleterious in some circumstances can become neutral or advantageous in other circumstances; and so on.
How the paper chooses to present some particular set of circumstances is up to the authors. They may have felt that the overall contribution from those particular mutations was slightly disadvantageous.
The time period for evolving an ape into a human, said to be around 6 million years, has been shown to be massively inadequate for this to have happened by blind and random evolution. Research shows that even getting the simplest word equivalent of two or more sequenced nucleotides would take tens of millions of years, longer than the 6 million years over which humans evolved, let alone the greater number of longer sequences that would be needed. For a sequence of around 10 nucleotides, equivalent to a simple sentence and requiring multiple connected mutations, it would take 100 billion years, much longer than the age of the universe.

It didn't happen in 5,000 years. Modern humans are at least 300,000 years old.
The time period for evolving an ape into a human, said to be around 6 million years, has been shown to be massively inadequate for this to have happened by blind and random evolution. Research shows that even getting the simplest word equivalent of two or more sequenced nucleotides would take tens of millions of years, longer than the 6 million years over which humans evolved, let alone the greater number of longer sequences that would be needed. For a sequence of around 10 nucleotides, equivalent to a simple sentence and requiring multiple connected mutations, it would take 100 billion years, much longer than the age of the universe.
As the paper says, it would require
millions of specific beneficial mutations, and a large number of specific beneficial sets of mutations, selectively fixed in this very short period of time.
The waiting time problem in a model hominin population
You make it sound like beneficial mutations have just as much chance when they are very, very rare.

You're reading things into what I said that aren't there. Try not to do that.

The time period for evolving an ape into a human, said to be around 6 million years, has been shown to be massively inadequate for this to have happened by blind and random evolution.

Evolution isn't random.

Research shows that even getting the simplest word equivalent of two or more sequenced nucleotides would take tens of millions of years, longer than the 6 million years over which humans evolved, let alone the greater number of longer sequences that would be needed. For a sequence of around 10 nucleotides, equivalent to a simple sentence and requiring multiple connected mutations, it would take 100 billion years, much longer than the age of the universe.

Specifying specific sequences in advance is always going to be problematic because it's not modelling evolution. This is retrospective probability calculation (although kudos to them for admitting to five different ways their calculation could be out).
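To make the "retrospective probability" point concrete, here is a toy Python sketch (the sequence length, sample sizes, and the size of the "acceptable" set are all made-up numbers, not the paper's model). It compares how long random sampling takes to hit one prespecified sequence versus any member of a larger set of acceptable sequences:

# Toy sketch of the waiting-time contrast (made-up parameters, not the
# paper's model): generations of random sampling until we hit
# (a) one prespecified 8-letter sequence vs (b) any of ~5,000 acceptable ones.
import random

ALPHABET = "ACGT"
L = 8          # sequence length; 4**8 = 65,536 possibilities
POP = 500      # random sequences sampled per "generation"

def generations_until_hit(targets, trials=20, seed=1):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        gen = 0
        hit = False
        while not hit:
            gen += 1
            hit = any("".join(rng.choices(ALPHABET, k=L)) in targets
                      for _ in range(POP))
        total += gen
    return total / trials

one_target = {"ACGTACGT"}
many_targets = {"".join(random.Random(i).choices(ALPHABET, k=L))
                for i in range(5000)}   # stand-in "functional" set

print("one prespecified target  :", generations_until_hit(one_target), "generations on average")
print("any of ~5,000 acceptable :", generations_until_hit(many_targets), "generations on average")

With these toy numbers, the single prespecified target typically takes over a hundred generations to appear, while the larger acceptable set is usually hit in the very first generation. Demanding one exact target in advance is what makes the waiting time blow up.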
Then how is design detected in, say, human-made items and information?
Actually, it was Paul Davies who came up with the concept when talking about abiogenesis; Dembski just extended it to other areas.
Other scientists have expanded and developed his ideas.
<snip>
But there are also other articles that don't mention specified complexity yet have research outcomes that produce similar findings.
<snip>
It stands to reason that real examples of ID would have to contain specified info, as opposed to random info. Whenever I have debated ID with anyone, "information" seems to be a key word used by both sides. It is usually based around what sort of information is involved. Complexity seems to be a logical hallmark of ID: the more complex something is, the harder it is to create randomly.
So you're saying radio signals that require intelligent design may be the result of natural causes. Yet if scientists discovered those radio signals in outer space, the headlines would hail that we have found intelligent life in the universe. It seems they want to hedge their bets.
Then why would complexity be an important factor if it was established as being from an intelligent source?
Yet scientists use these types of analogies all the time when explaining things like the cell and DNA: sequences, language, codes, systems and patterns.
It stands to reason, as the papers show, that if slightly deleterious mutations are tolerated, then they are not going to be identified by natural selection, precisely because they are being tolerated. It is not until there is an accumulation that causes a fitness loss that selection will act. So in the meantime selection is blind to those slightly deleterious mutations until they accumulate, and therefore there are slightly harmful mutations still hanging around.
Without listing another lot of papers, here is one from Lynch on how multicelled creatures accumulate harmful mutations which are not always purged by natural selection, and which thus make them more susceptible to extinction.
Multicellular species experience reduced population sizes, reduced recombination rates, and increased deleterious mutation rates, all of which diminish the efficiency of selection (13). It may be no coincidence that such species also have substantially higher extinction rates than do unicellular taxa (47, 48).
The frailty of adaptive hypotheses for the origins of organismal complexity
Then why do humans and other creatures carry thousands of diseases, which are accumulating all the time? Also, sometimes a deleterious mutation may give a benefit somewhere else, so natural selection will not purge it out. Sickle cell is one example.

You don't seem to understand.
If there is any deleterious effect, then natural selection WILL select against it. It may do so slowly, but the selection against it will be there right from the start.
If natural selection is blind to it, then there can't possibly be any deleterious effect.
Natural selection isn't sitting there going, "Yeah, I can see it's deleterious, but I'm not gonna do anything just yet. I might wait for a bit, see if the mutation gets any worse. THEN I might do something about it."
And saying that some creatures accumulate harmful mutations which are not purged by natural selection, and that these make them more susceptible to extinction, makes no sense. Being more susceptible to extinction is exactly how natural selection purges them!
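To put a number on "it may do so slowly": here is a minimal one-locus selection sketch in Python (the parameters are illustrative only). Any nonzero fitness cost s pushes the allele frequency down every generation; a tiny s just makes the decline very slow:

# Minimal one-locus selection sketch (illustrative numbers): a deleterious
# allele at frequency q with relative fitness 1 - s declines every
# generation under the standard haploid recursion q' = q(1 - s) / (1 - s*q),
# however small s is - it just gets very slow.
def frequency_after(generations, q0=0.5, s=0.001):
    q = q0
    for _ in range(generations):
        q = q * (1 - s) / (1 - s * q)
    return q

for s in (0.1, 0.01, 0.001, 0.0001):
    print(f"s={s:<7} q after 1,000 generations: {frequency_after(1000, s=s):.5f}")

With s = 0.1 the allele is effectively gone within 1,000 generations; with s = 0.0001 it has barely moved, yet it is still declining.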
Then why do humans and other creatures carry thousands of diseases, which are accumulating all the time? Also, sometimes a deleterious mutation may give a benefit somewhere else, so natural selection will not purge it out. Sickle cell is one example.
Michael Lynch is an expert on population genetics and I would trust his knowledge on this. As he states, reduced population sizes, reduced recombination rates, and increased deleterious mutation rates all diminish the efficiency of selection. So natural selection's ability to purge deleterious mutations is reduced, and it will therefore not be able to remove all harm. Small populations are known to be more susceptible to extinction.
But I am not arguing that natural selection is not a factor. I am saying that it is not the all-powerful force that some make it out to be. It is one influence among several, and it can be bypassed or play a minimal role when other factors are more prominent.
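The "diminished efficiency of selection" point is often summarized by the population-genetics rule of thumb that selection dominates drift only when roughly 2Ns is much greater than 1. A hedged Wright-Fisher sketch (all numbers are illustrative; this is not Lynch's model) of a slightly deleterious allele in populations of different sizes:

# Wright-Fisher sketch of drift versus selection (illustrative numbers;
# not Lynch's model). A slightly deleterious allele (s = 0.005) is
# reliably purged when 2*N*s >> 1 but can drift all the way to fixation
# in a small population where 2*N*s is small.
import numpy as np

def fixation_rate(N, s, q0=0.1, runs=500, seed=0):
    rng = np.random.default_rng(seed)
    fixed = 0
    for _ in range(runs):
        q = q0
        while 0.0 < q < 1.0:
            q_sel = q * (1 - s) / (1 - s * q)          # selection pushes q down
            q = rng.binomial(2 * N, q_sel) / (2 * N)   # drift: resample 2N copies
        fixed += (q == 1.0)
    return fixed / runs

for N in (25, 250, 2500):
    print(f"N={N:4d}  2Ns={2 * N * 0.005:5.1f}  fixed in {fixation_rate(N, s=0.005):.1%} of runs")

In the smallest population the deleterious allele occasionally drifts to fixation despite its cost; in the largest it is essentially always purged. That is the sense in which small population size blunts selection.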
At the same time, in areas where malaria is more prevalent, the sickle-cell mutation increases in frequency, which raises the risk that couples who both carry the mutation will mate, which in turn increases the number in the population who have the disease. If this continues to increase, then we have a situation with a benefit on one hand but an increasing disease on the other. Overall the change is not really an improvement on the existing genetics but a dysfunction. Red blood cells should have smooth disc shapes, but the red blood cells of those carrying sickle-cell anemia are abnormally shaped. The more malaria resistance, the more those abnormal blood cells get into the population. So it comes at a cost.

You think that natural selection MUST produce something that is 100% effective? No.
It is a constant cost-to-benefit trade-off. Natural selection may be able to increase resistance to a disease, but if it is going to require a great deal of resources to accomplish only a small increase, that extra cost may counteract any increase, for a net neutral effect or even a net disadvantage.
And it was very interesting that you cited the example of sickle cell anaemia. The gene for SCA is recessive, which means that if you get the SCA gene from one parent but not the other, the non-SCA version of the gene is dominant and you have no problems. It's only when you get the SCA gene from both parents that you are in trouble.
BUT...
Having an SCA version of the gene from one parent but not both conveys an advantage, giving protection against malaria. People who have a single copy of the SCA allele are less likely to die from malaria. Sickle Cell Trait and the Risk of Plasmodium falciparum Malaria and Other Childhood Diseases
So it would actually be a DISADVANTAGE for natural selection to eradicate it entirely.
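This is the textbook balancing-selection result. With illustrative (not measured) fitness values for the three genotypes, a short Python sketch shows selection settling at an internal equilibrium q* = t/(s + t) rather than eliminating the allele:

# Textbook balancing-selection sketch (fitness values are illustrative,
# not measured): genotypes AA = 1 - t (at risk from malaria), AS = 1,
# SS = 1 - s (sickle-cell disease). Selection settles at the internal
# equilibrium q* = t / (s + t) instead of eliminating the S allele.
def next_q(q, s, t):
    p = 1.0 - q
    w_AA, w_AS, w_SS = 1.0 - t, 1.0, 1.0 - s
    w_bar = p*p*w_AA + 2*p*q*w_AS + q*q*w_SS      # mean fitness
    return (p*q*w_AS + q*q*w_SS) / w_bar          # S-allele frequency next generation

s, t = 0.8, 0.15    # assumed: SS severely affected, AA vulnerable to malaria
q = 0.01
for _ in range(500):
    q = next_q(q, s, t)

print(f"simulated equilibrium:  q = {q:.4f}")
print(f"analytic q* = t/(s+t) = {t / (s + t):.4f}")

Even starting from a rare allele, selection drives q up to the equilibrium and holds it there: eradicating the allele would lower mean fitness, which is exactly the point above.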
OK, this seems like only one example and does not really represent other ways humans design. I would say a simple design like making a stone flint is different from, say, designing a stone bust of a famous person. There are many more obvious ways of detecting that design. But you hinted at some common distinctions, such as the stone being deliberately shaped and deliberate strikes as opposed to a natural break. This is no different from specified info: the deliberate actions are being directed towards a certain outcome by an intelligent agent.

The specific methodology depends on what we are talking about. From what I've seen, though, it comes down to a combination of understanding the methodology of how things are designed and pattern recognition.
Take paleolithic tools as an example (e.g. knapped flints). Distinguishing between a stone that has been deliberately shaped versus one that is occurring naturally involves understanding the respective processes of how such stones are formed.
In the case of a deliberately shaped stone the method used is typically percussive striking. Deliberately striking a stone to break off pieces results in specific patterns showing the point of fracture. This in turn indicates that it was percussive striking that created such fractures.
Conversely it's possible for stones to naturally fracture on their own. As the process is typically different (for example, thermal expansion), the pattern of the fractures themselves will be different.
By understanding the respective processes and the resulting outputs one can distinguish between a deliberately shaped stone tool versus one that occurs naturally.
Actually, Paul Davies was using this idea before Dembski, in detecting design in the origins of life. So he was using it in the context of ID.

I was talking about within the context of ID. AFAIK, it was Dembski who first used that terminology with respect to applying it to design detection.
Once again, surely we have developed methods to detect human design. What would be the difference in detecting ID? The papers I posted earlier show methods of how to detect design which is complex and specified, so why can't this be a method?

Again, I was talking about the use of complexity, specificity, etc., within the context of an empirically tested and verified methodology for design detection.
It's certainly possible to measure things like complexity, information and so on including within the context of biology. But it's a whole different thing to develop a method whereby such measurements are used to detect purposeful design of the same.
I thought this was already the case. Or at least I think there are papers which show that in biology, such as protein folding, the info is so rare, so complex, and so directed towards specific folds as opposed to any possible folds that this meets the requirements for at least not being the result of random chance.

IMHO, what ID proponents should be striving for is design detection of GM organisms. If they can't even detect examples where biological design is already known, why would anyone think they could do the same with the unknown?
Yes, I agree. As mentioned above, it is not just complexity but also specified complexity. You can have complex Shannon info that is jumbled and random but is not specified. Then you can have simple or complex info that is specified. I would have thought info that is specified, especially towards the type of info that intelligent minds create, would be an indication of ID, as opposed to the many possibilities that could be random and show no choice or direction from an intelligent mind. Think of the brush strokes of a painting made by a human, directed towards creating a landscape, as opposed to someone spilling paint on the ground or throwing paint at a wall blindfolded.

I disagree that complexity is a logical hallmark of ID. I'll give you two examples as to why.
The first involves compressibility of information as a measure of complexity. In short, the more complex a sequence is, the less it can be compressed.
I did an experiment with this where I took two 4-second audio recordings. The first was a deliberate, designed sequence of musical notes. The second was random noise taken from a NASA recording.
Utilizing variable bit-rate MP3 compression with identical target quality settings, the sequence of musical notes took up approximately half the digital space of the recording of random noise. This suggests the random noise is of greater complexity, since it is less compressible than the sequence of musical notes.
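The same comparison can be reproduced with a general-purpose compressor instead of MP3. A minimal Python sketch using zlib, with synthetic byte strings standing in for the two recordings (the repeating "melody" below is just an assumed stand-in, not the actual audio):

# Re-creating the comparison with a general-purpose compressor (zlib)
# instead of MP3. The byte strings are synthetic stand-ins for the
# two recordings.
import os
import zlib

SIZE = 100_000
structured = (b"C4E4G4C5G4E4" * (SIZE // 12 + 1))[:SIZE]   # repeating "melody"
noise = os.urandom(SIZE)                                   # random bytes

for name, data in (("structured", structured), ("random", noise)):
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{name:10s} compresses to {ratio:.1%} of its original size")

The patterned data compresses to a small fraction of its size while the random bytes barely compress at all, matching the MP3 result: the designed sequence is the LESS complex one by this measure.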
The second example involves digital security and passwords.
Humans typically select passwords based on common patterns. For example, an English-language word or phrase followed by a numerical sequence (e.g. "Password123"). Such password selection methods are quite common, often to meet password selection criteria for whatever systems people are using.
However, this also makes these passwords easy to crack, and password-cracking algorithms take advantage of these predictable human patterns. This is also where password length becomes less relevant with respect to password strength: by narrowing the search to an existing series of dictionary words and/or combinations of words, the attacker greatly reduces the search space from the total possible character combinations.
Conversely a random sequence of characters will be much harder to crack since it doesn't conform to the aforementioned patterns. Entropy itself is now used as an indicator for relative password strength.
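As a back-of-envelope illustration (the dictionary size and the three-digit suffix below are assumptions for illustration), here is a comparison of the effective search space of a "word + digits" password against a same-length random string:

# Back-of-envelope search-space comparison (the dictionary size and the
# three-digit suffix are assumptions for illustration).
import math

dictionary_words = 100_000            # assumed attacker dictionary
digit_suffixes = 10 ** 3              # "Password123"-style 3-digit endings
pattern_space = dictionary_words * digit_suffixes

length = 11                           # same length as "Password123"
charset = 94                          # printable ASCII characters
random_space = charset ** length

print(f"word + digits pattern : ~{math.log2(pattern_space):.0f} bits")
print(f"random {length} characters  : ~{math.log2(random_space):.0f} bits")

That is roughly 27 bits versus roughly 72 bits for the same length: the predictable pattern, not the character count, sets the strength.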
So is this not specified complexity, in that, as you say, we were specifically trying to determine whether a signal contains some sort of message, presumably a message from an intelligent source, as opposed to random signals?

Complexity would be a factor if we were specifically trying to determine if a signal contained some sort of message. Complexity by itself would not be an indicator of an intelligent source.
I thought the argument was already inherent in choosing those particular words. Why use those words if a person meant something else?

Sure, people use analogies to explain concepts. There is a difference between using an analogy to explain a concept versus using one to make an argument.
Your whole post appears to rely on the assumption that there is such a thing as specified complexity. Dembski and the few others who claim it is measurable have failed to demonstrate that it is a viable concept. All attempts so far have been shown to be inadequate.

OK, this seems like only one example and does not really represent other ways humans design. I would say a simple design like making a stone flint is different from, say, designing a stone bust of a famous person. There are many more obvious ways of detecting that design. But you hinted at some common distinctions, such as the stone being deliberately shaped and deliberate strikes as opposed to a natural break. This is no different from specified info: the deliberate actions are being directed towards a certain outcome by an intelligent agent.
Complexity alone is harder to detect as ID, and I agree complexity alone will not mean ID. You can have complex random sequences with Shannon info. It is the combination of specification and complexity that indicates ID. I mean, how would someone describe a more complex and detailed design by humans, such as a stone bust or a high-tech device? One of the papers I linked talks about detecting specified complexity. This is done by determining whether an image contains randomness or patterns and meaning. That meaning has certain characteristics that intelligent agents display, and this can be measured in the images.
Measuring meaningful information in images: algorithmic specified complexity
https://digital-library.theiet.org/content/journals/10.1049/iet-cvi.2014.0141
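For a rough sense of what "algorithmic specified complexity" computations look like in practice, here is a hedged sketch in the general spirit of compression-based ASC (this illustrates the idea, not the linked paper's image algorithm). ASC is estimated as the improbability of the data under a chance hypothesis minus a description-length bound, with zlib standing in for Kolmogorov complexity and uniform random bytes as the assumed chance hypothesis:

# Rough sketch of a compression-based algorithmic specified complexity
# (ASC) estimate:  ASC(x) ~ -log2 P(x) - K(x)
# where P is a chance hypothesis (uniform random bytes assumed here) and
# K(x) is bounded from above by zlib's compressed length. Illustration of
# the general idea only, not the linked paper's algorithm.
import os
import zlib

def asc_bits(data: bytes) -> float:
    chance_bits = 8 * len(data)                          # -log2 P(x) for uniform bytes
    description_bits = 8 * len(zlib.compress(data, 9))   # crude upper bound on K(x)
    return chance_bits - description_bits

patterned = b"the quick brown fox jumps over the lazy dog " * 50
random_bytes = os.urandom(len(patterned))

print(f"patterned text : ASC ~ {asc_bits(patterned):,.0f} bits")
print(f"random bytes   : ASC ~ {asc_bits(random_bytes):,.0f} bits")

Patterned data scores high (improbable under the chance hypothesis yet simply describable); random data scores near zero or below. Whether a high score licenses an inference to design is exactly what is in dispute in this thread.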
Actually, Paul Davies was using this idea before Dembski, in detecting design in the origins of life. So he was using it in the context of ID.
Once again, surely we have developed methods to detect human design. What would be the difference in detecting ID? The papers I posted earlier show methods of how to detect design which is complex and specified, so why can't this be a method?
A Unified Model of Complex Specified Information
https://www.researchgate.net/publication/329700536_A_Unified_Model_of_Complex_Specified_Information
I thought this was already the case. Or at least I think there are papers which show that in biology, such as protein folding, the info is so rare, so complex, and so directed towards specific folds as opposed to any possible folds that this meets the requirements for at least not being the result of random chance.
Yes, I agree. As mentioned above, it is not just complexity but also specified complexity. You can have complex Shannon info that is jumbled and random but is not specified. Then you can have simple or complex info that is specified. I would have thought info that is specified, especially towards the type of info that intelligent minds create, would be an indication of ID, as opposed to the many possibilities that could be random and show no choice or direction from an intelligent mind. Think of the brush strokes of a painting made by a human, directed towards creating a landscape, as opposed to someone spilling paint on the ground or throwing paint at a wall blindfolded.
So is this not specified complexity, in that, as you say, we were specifically trying to determine whether a signal contains some sort of message, presumably a message from an intelligent source, as opposed to random signals?
I thought the argument was already inherent in choosing those particular words. Why use those words if a person meant something else?
Nobody is claiming that evolution through natural selection is utilitarian - it isn't. Nor does it ensure species survival. In terms of efficiency, you can view it either as being ruthlessly efficient in removing the unfit, or as being extremely wasteful of life.

At the same time, in areas where malaria is more prevalent, the sickle-cell mutation increases in frequency, which raises the risk that couples who both carry the mutation will mate, which in turn increases the number in the population who have the disease. If this continues to increase, then we have a situation with a benefit on one hand but an increasing disease on the other. Overall the change is not really an improvement on the existing genetics but a dysfunction. Red blood cells should have smooth disc shapes, but the red blood cells of those carrying sickle-cell anemia are abnormally shaped. The more malaria resistance, the more those abnormal blood cells get into the population. So it comes at a cost.
My point was that, one way or another - whether because natural selection cannot get rid of a disease that also carries a benefit, because the harmful mutation's effect is too small, or because there is not enough time to eradicate the mutations - many diseases are entering the populations of humans and other animals that natural selection is not able to purge, and this is slowly causing an accumulation of harmful mutations which is having a negative effect.
The genomes of all organisms consist of genes that code for proteins that all fold in specific ways. A GM organism may have been given a whole gene from another organism, or may have had an existing gene modified or duplicated, etc....

I thought this was already the case. Or at least I think there are papers which show that in biology, such as protein folding, the info is so rare, so complex, and so directed towards specific folds as opposed to any possible folds that this meets the requirements for at least not being the result of random chance.