
What is Science?

Loudmouth

Contributor
Aug 26, 2003
51,417
6,142
Visit site
✟98,015.00
Faith
Agnostic
How can you know? You have referenced zero studies that support the idea that vaccines are effective.

There is zero evidence that you will accept.

There is no certain way to know, but Koch's Postulates provide a reasonable starting point.

Then you are saying that the scientific method is reasonable. However, you will quickly change your tune as soon as the evidence starts leading to conclusions you don't like. At that point, out come the claims of logical fallacies this and logical fallacies that, and you refuse to look at the evidence. This is what has happened in every scientific discussion you have been a part of.
 
Last edited:
Upvote 0

Zosimus

Non-Christian non-evolution believer
Oct 3, 2013
1,656
33
Lima, Peru
✟24,500.00
Faith
Agnostic
Marital Status
Married
Then you are saying that the scientific method is reasonable. However, you will quickly change your tune as soon as the evidence starts leading to conclusions you don't like. At that point, out come the claims of logical fallacies this and logical fallacies that, and you refuse to look at the evidence. This is what has happened in every scientific discussion you have been a part of.
I don't have any problem with the scientific method. It's right up there with praying for revelation and examining chicken entrails. If these methods work for you, then more power to you!

What I don't want to hear is people claiming that the scientific method has proved something any more than I want to hear people claiming that their personal prayers have proved that God exists. They have done nothing of the kind.

Koch's Postulates are a serious attempt to falsify the claim that microorganism X causes disease Y. For example, poliomyelitis does not meet Koch's Postulates, as there are a number of people who carry the microorganism but not the disease. Nevertheless, the scientific consensus places the blame for polio on the virus. Cholera and typhoid fever are two other diseases that do not meet Koch's Postulates, yet the scientific consensus has it that the causes of these diseases are known. HIV/AIDS does not meet Koch's Postulates. Again, the scientific consensus is that HIV causes AIDS (although I remain skeptical).

Accordingly, even a microorganism that meets Koch's Postulates cannot be said with certainty to cause the disease, and many diseases whose proposed causes don't meet Koch's Postulates are nevertheless accepted. Science is a crap shoot. What do you want?
 
Upvote 0

Loudmouth

Contributor
Aug 26, 2003
51,417
6,142
Visit site
✟98,015.00
Faith
Agnostic
I don't have any problem with the scientific method. It's right up there with praying for revelation and examining chicken entrails.

And there we go. Now we have Koch's Postulates being the same as praying for revelation and auguring. How predictable.
 
Upvote 0

freezerman2000

Living and dying in 3/4 time
Feb 24, 2011
9,525
1,221
South Carolina
✟46,630.00
Country
United States
Gender
Male
Faith
Christian
Marital Status
Married
Politics
US-Others
I don't have any problem with the scientific method. It's right up there with praying for revelation and examining chicken entrails. If these methods work for you, then more power to you!

What I don't want to hear is people claiming that the scientific method has proved something any more than I want to hear people claiming that their personal prayers have proved that God exists. They have done nothing of the kind.

Koch's Postulates are a serious attempt to falsify the claim that microorganism X causes disease Y. For example, poliomyelitis does not meet Koch's Postulates, as there are a number of people who carry the microorganism but not the disease. Nevertheless, the scientific consensus places the blame for polio on the virus. Cholera and typhoid fever are two other diseases that do not meet Koch's Postulates, yet the scientific consensus has it that the causes of these diseases are known. HIV/AIDS does not meet Koch's Postulates. Again, the scientific consensus is that HIV causes AIDS (although I remain skeptical).

Accordingly, even a microorganism that meets Koch's Postulates cannot be said with certainty to cause the disease, and many diseases whose proposed causes don't meet Koch's Postulates are nevertheless accepted. Science is a crap shoot. What do you want?

Some people can carry those microorganisms all of their lives and not be afflicted with the disease.
Case in point: http://www.history.com/news/10-things-you-may-not-know-about-typhoid-mary
 
Upvote 0

Zosimus

Non-Christian non-evolution believer
Oct 3, 2013
1,656
33
Lima, Peru
✟24,500.00
Faith
Agnostic
Marital Status
Married
You think looking at chicken guts is as effective a tool for modeling reality as Koch's postulates? Really?
Most published scientific findings are wrong. Looking at chicken guts is also wrong most of the time. However, chicken guts apologists assure me that chicken-gut-looking is self-correcting and will eventually arrive at the truth.
 
Upvote 0

Loudmouth

Contributor
Aug 26, 2003
51,417
6,142
Visit site
✟98,015.00
Faith
Agnostic
Most published scientific findings are wrong. Looking at chicken guts is also wrong most of the time. However, chicken guts apologists assure me that chicken-gut-looking is self-correcting and will eventually arrive at the truth.

I asked a simple question. Can't you answer it?

Do you think looking at chicken guts is as effective a tool for modeling reality as Koch's postulates?
 
Upvote 0

Zosimus

Non-Christian non-evolution believer
Oct 3, 2013
1,656
33
Lima, Peru
✟24,500.00
Faith
Agnostic
Marital Status
Married
I asked a simple question. Can't you answer it?

Do you think looking at chicken guts is as effective a tool for modeling reality as Koch's postulates?
Koch's Postulates are not a tool for modeling reality. Accordingly, I would have to say that looking at chicken guts must be more effective than are Koch's Postulates.
 
Upvote 0

Loudmouth

Contributor
Aug 26, 2003
51,417
6,142
Visit site
✟98,015.00
Faith
Agnostic
Koch's Postulates are not a tool for modeling reality. Accordingly, I would have to say that looking at chicken guts must be more effective than are Koch's Postulates.

Thank you.

Everyone take a good gander at the conclusions you come to when you reject the scientific method like Zosimus does.
 
Upvote 0

Zosimus

Non-Christian non-evolution believer
Oct 3, 2013
1,656
33
Lima, Peru
✟24,500.00
Faith
Agnostic
Marital Status
Married
Thank you.

Everyone take a good gander at the conclusions you come to when you reject the scientific method like Zosimus does.
Incorrect. Should be:

Everyone take a good gander at the conclusions [one] comes to when [he or she] rejects the scientific method [as] Zosimus does.

Anyway, I don't reject the scientific method. I think it's a great method for producing known-wrong published research findings.
 
Upvote 0

Zosimus

Non-Christian non-evolution believer
Oct 3, 2013
1,656
33
Lima, Peru
✟24,500.00
Faith
Agnostic
Marital Status
Married
How so? Please show us the methodology used in these studies.
It can be proved that most published research findings are false. From John Ioannidis's paper, Why Most Published Research Findings Are False:

Published research findings are sometimes refuted by subsequent evidence, with ensuing confusion and disappointment. Refutation and controversy is seen across the range of research designs, from clinical trials and traditional epidemiological studies [1–3] to the most modern molecular research [4,5]. There is increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims [6–8]. However, this should not be surprising. It can be proven that most claimed research findings are false.

Several methodologists have pointed out [9–11] that the high rate of nonreplication (lack of confirmation) of research discoveries is a consequence of the convenient, yet ill-founded strategy of claiming conclusive research findings solely on the basis of a single study assessed by formal statistical significance, typically for a p-value less than 0.05. Research is not most appropriately represented and summarized by p-values, but, unfortunately, there is a widespread notion that medical research articles should be interpreted based only on p-values. Research findings are defined here as any relationship reaching formal statistical significance, e.g., effective interventions, informative predictors, risk factors, or associations. “Negative” research is also very useful. “Negative” is actually a misnomer, and the misinterpretation is widespread. However, here we will target relationships that investigators claim exist, rather than null findings.

As has been shown previously, the probability that a research finding is indeed true depends on the prior probability of it being true (before doing the study), the statistical power of the study, and the level of statistical significance [10,11]. Consider a 2 × 2 table in which research findings are compared against the gold standard of true relationships in a scientific field. In a research field both true and false hypotheses can be made about the presence of relationships. Let R be the ratio of the number of “true relationships” to “no relationships” among those tested in the field. R is characteristic of the field and can vary a lot depending on whether the field targets highly likely relationships or searches for only one or a few true relationships among thousands and millions of hypotheses that may be postulated. Let us also consider, for computational simplicity, circumscribed fields where either there is only one true relationship (among many that can be hypothesized) or the power is similar to find any of the several existing true relationships. The pre-study probability of a relationship being true is R/(R + 1). The probability of a study finding a true relationship reflects the power 1 − β (one minus the Type II error rate). The probability of claiming a relationship when none truly exists reflects the Type I error rate, α. Assuming that c relationships are being probed in the field, the expected values of the 2 × 2 table are given in Table 1. After a research finding has been claimed based on achieving formal statistical significance, the post-study probability that it is true is the positive predictive value, PPV. The PPV is also the complementary probability of what Wacholder et al. have called the false positive report probability [10]. According to the 2 × 2 table, one gets PPV = (1 − β)R/(R − βR + α). A research finding is thus more likely true than false if (1 − β)R > α. Since usually the vast majority of investigators depend on α = 0.05, this means that a research finding is more likely true than false if (1 − β)R > 0.05.
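That last formula is simple enough to check in a few lines of code. A minimal Python sketch; the function name and the example values of R are illustrative, with α and β as defined above:

[code]
# PPV of a claimed finding, per the formula above:
#   PPV = (1 - beta) * R / (R - beta * R + alpha)
# alpha = Type I error rate, beta = Type II error rate,
# R = pre-study odds of a true relationship in the field.

def ppv(R, alpha=0.05, beta=0.20):
    """Post-study probability that a claimed finding is true."""
    return (1 - beta) * R / (R - beta * R + alpha)

# A finding is more likely true than false iff (1 - beta) * R > alpha:
for R in (2.0, 1.0, 0.25, 0.01):
    print(f"R = {R}: PPV = {ppv(R):.3f}")   # 0.970, 0.941, 0.800, 0.138
[/code]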

What is less well appreciated is that bias and the extent of repeated independent testing by different teams of investigators around the globe may further distort this picture and may lead to even smaller probabilities of the research findings being indeed true. We will try to model these two factors in the context of similar 2 × 2 tables.

In the presence of bias (Table 2), one gets PPV = ([1 − β]R + uβR)/(R + α − βR + u − uα + uβR), and PPV decreases with increasing u, unless 1 − β ≤ α, i.e., 1 − β ≤ 0.05 for most situations. Thus, with increasing bias, the chances that a research finding is true diminish considerably.

The probability that at least one study, among several done on the same question, claims a statistically significant research finding is easy to estimate. For n independent studies of equal power, the 2 × 2 table is shown in Table 3: PPV = R(1 − β^n)/(R + 1 − [1 − α]^n − Rβ^n) (not considering bias). With increasing number of independent studies, PPV tends to decrease, unless 1 − β < α, i.e., typically 1 − β < 0.05. This is shown for different levels of power and for different pre-study odds in Figure 2. For n studies of different power, the term β^n is replaced by the product of the terms β_i for i = 1 to n, but inferences are similar.

In the described framework, a PPV exceeding 50% is quite difficult to get. Table 4 provides the results of simulations using the formulas developed for the influence of power, ratio of true to non-true relationships, and bias, for various types of situations that may be characteristic of specific study designs and settings. A finding from a well-conducted, adequately powered randomized controlled trial starting with a 50% pre-study chance that the intervention is effective is eventually true about 85% of the time. A fairly similar performance is expected of a confirmatory meta-analysis of good-quality randomized trials: potential bias probably increases, but power and pre-test chances are higher compared to a single randomized trial. Conversely, a meta-analytic finding from inconclusive studies where pooling is used to “correct” the low power of single studies is probably false if R ≤ 1:3. Research findings from underpowered, early-phase clinical trials would be true about one in four times, or even less frequently if bias is present. Epidemiological studies of an exploratory nature perform even worse, especially when underpowered, but even well-powered epidemiological studies may have only a one in five chance of being true, if R = 1:10. Finally, in discovery-oriented research with massive testing, where tested relationships exceed true ones 1,000-fold (e.g., 30,000 genes tested, of which 30 may be the true culprits) [30,31], PPV for each claimed relationship is extremely low, even with considerable standardization of laboratory and statistical methods, outcomes, and reporting thereof to minimize bias.
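The bias and multiple-team adjustments above are just as easy to check. A minimal Python sketch; the parameter values used for the randomized-trial check (power 0.80, pre-study odds R = 1:1, bias u = 0.10) are assumptions chosen to match the "about 85% of the time" figure in the excerpt:

[code]
def ppv_bias(R, u, alpha=0.05, beta=0.20):
    """PPV in the presence of bias u (the Table 2 formula above)."""
    return ((1 - beta) * R + u * beta * R) / (
        R + alpha - beta * R + u - u * alpha + u * beta * R)

def ppv_n(R, n, alpha=0.05, beta=0.20):
    """PPV when n independent equal-power studies probe the same
    question (the Table 3 formula above), ignoring bias."""
    return R * (1 - beta**n) / (R + 1 - (1 - alpha)**n - R * beta**n)

# Well-powered randomized trial, 1:1 pre-study odds, modest bias:
print(f"{ppv_bias(R=1.0, u=0.10):.2f}")    # -> 0.85

# PPV erodes as more independent teams test the same question:
for n in (1, 5, 10):
    print(n, f"{ppv_n(R=1.0, n=n):.2f}")   # 0.94, 0.82, 0.71
[/code]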
 
Upvote 0

Kylie

Defeater of Illogic
Nov 23, 2013
15,069
5,309
✟327,545.00
Country
Australia
Gender
Female
Faith
Atheist
Marital Status
Married
All right–let's start at the beginning. Let's assume that we have an evolutionary biologist who is studying some species of animal to determine why some of those animals with specific traits are surviving better than are those animals with other traits. We will also assume that he is going to approach this subject by examining some 20,000 genes to determine which, if any, of these genes are contributing to the improved success of the animal. We will also imagine that there are true relationships to be found. Let us say that 20 of those genes have a real, measurable impact on the survival of the species.

Since our evolutionary biologist is using a test that is 95 percent accurate (p=0.05), what will his results be? Well, of the 19,980 genes that have no effect whatsoever, he will still come up with 999 false positives. Of the 20 genes that have a real effect, he will come up with 19 true positives. This means that 98.13 percent of his findings are false positives. This is simple math of the type that anyone can do. Now, please explain to me why you think that this math scenario does not apply to biology studies?
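That arithmetic is easy to verify. A quick Python check, with the numbers taken from the scenario above (power is read as 0.95 to match the "95 percent accurate" wording):

[code]
genes, real = 20_000, 20        # genes screened; genes with a real effect
alpha, power = 0.05, 0.95

false_pos = (genes - real) * alpha   # 19,980 * 0.05 = 999
true_pos = real * power              # 20 * 0.95 = 19

print(f"{false_pos:.0f} false positives, {true_pos:.0f} true positives")
print(f"{false_pos / (false_pos + true_pos):.2%}")   # -> 98.13% of positives are false
[/code]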

Do you think he is only going to do the tests once? Of course not. He is going to do it many times. Let's say he does it 100 times.

Doesn't it follow that out of those 100 tests, he will see that the 20 genes that actually have an effect will have positive results 95 times, and the 19,980 genes that don't have an effect will have positive results only 5 times?

Or do you really think that scientists have not yet developed ways to compensate for these sorts of things?

No, my math comes directly from Why Most Published Research Findings Are False wherein John Ioannidis does some simple math and concludes that "Most research findings are false for most research designs and for most fields." So don't attack the messenger.

Seems to me that the math is being used improperly, as you are applying it to only single tests. There are, as I explained, ways to compensate.

No, I specifically linked you to the source for the claim. You then claimed that I said that biological fields have 10 times the retraction rate of other fields. This is not what I said. I said that the total number of science retractions is 10 times higher than it used to be and that most examples in the article come from biologically-related fields. Don't put words in my mouth.

Again, call me crazy, but in the article you linked to, the word "ten" is not used at all.

It's not a question of being perfect all the time. Most published research findings are false. In fact, non-randomized studies are wrong some 80 percent of the time. So don't post some [bless and do not curse] non-randomized study and expect me to get all excited about it.

Citation required.

No, my arguments do not stem from small sample sizes. It has to do entirely with a priori odds and the strength of the study. It also has to do with the potential for bias and procedures for eliminating said bias. Additionally, it is not correct that decades-long studies must necessarily have large sample sizes. We could easily select two people with different diets and study them for decades to try to determine whether their diet affects their chance of having a stroke. Two people is a small sample size regardless of the number of years studied.

Yes, they do.

The example you posted (which I replied to earlier in this post) used a sample of ONE test. Show me any scientists who would conduct a single experiment and then claim that this single experiment yielded accurate results.

No, honey, you claimed that people cannot have an opinion outside of their field. Since the topic under discussion is statistics and Lenski isn't a mathematician, according to your own standards he shouldn't have an opinion.

First of all, don't you dare call me "honey." I am not your honey and you do not have the right to refer to me by some smarmy pet name like you have some relationship with me.

Secondly, unless you can show that Lenski has not studied, and cannot properly use, the maths required in his field of study, you are wrong.
 
Upvote 0

Kylie

Defeater of Illogic
Nov 23, 2013
15,069
5,309
✟327,545.00
Country
Australia
Gender
Female
Faith
Atheist
Marital Status
Married
Are you implying that evolutionary biologists are omniscient and never have limited information?

It seems you are lacking in imagination if you insist on taking everything so literally.

If I come home and find flour dumped all over the floor, find my daughter covered from head to toe in flour and holding the empty bag, and she tells me that she emptied the flour, I have enough information to reach the correct conclusion. The fact that I do not have unlimited information about the event (such as what direction she was facing at the time she spilled it, or the time to the second that it happened) does not mean I can't reach a conclusion.

So go ahead and tell me what the procedure is that eliminates all possibility that scientific studies could generate false results. I'm all ears!

Repeating experiments and observations, submitting your data to others and seeing if they can reach the same conclusion as you, using different techniques to test the same variables...

Doesn't get rid of ALL possibility of false results, but those things (and doubtless a few others that I don't know of) can drastically reduce the chances.

This has nothing to do with anything. We're talking about p-values and their tendency to cluster around p=0.05 in studies. There are peer-reviewed publications about p-hacking abundantly available on the Internet, and these publications include tests for p-hacking that anyone with an Excel spreadsheet can do. We read:

"The p-curve can, however, be used to identify p-hacking, by only considering significant findings [14]. If researchers p-hack and turn a truly nonsignificant result into a significant one, then the p-curve’s shape will be altered close to the perceived significance threshold (typically p = 0.05). Consequently, a p-hacked p-curve will have an overabundance of p-values just below 0.05 [12,40,41]. If researchers p-hack when there is no true effect, the p-curve will shift from being flat to left skewed (Fig. 2A). If, however, researchers p-hack when there is a true effect, the p-curve will be exponential with right skew but there will be an overrepresentation of p-values in the tail of the distribution just below 0.05 (Fig. 2B). Both p-hacking and selective publication bias predict a discontinuity in the p-curve around 0.05, but only p-hacking predicts an overabundance of p-values just below 0.05..."
----------------------
Accordingly, when you present me with multiple studies of an effect and all the p-values are at or just below 0.05, then I smell a rat. If a true relationship exists, at least one of those p-values should be more like 0.02 or even 0.001.
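The kind of spreadsheet-simple test mentioned above can be sketched in a few lines of Python. This is one simple variant in the spirit of the quoted passage, not the paper's exact procedure; the bin edges and the sample p-values are assumptions for illustration:

[code]
from math import comb

def binom_tail(k, n, p=0.5):
    """Exact upper tail P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def p_hack_signal(p_values, lo=0.04, mid=0.045, hi=0.05):
    """Compare the two equal-width bins just below 0.05. A smooth
    p-curve puts roughly equal mass in each; a pile-up in [mid, hi)
    is the p-hacking signature, so test it against Binomial(n, 0.5)."""
    lower = sum(lo <= p < mid for p in p_values)
    upper = sum(mid <= p < hi for p in p_values)
    return upper, lower, binom_tail(upper, upper + lower)

# Illustrative data only: a suspicious cluster just under 0.05.
ps = [0.048, 0.049, 0.047, 0.046, 0.049, 0.041, 0.048, 0.032, 0.020]
print(p_hack_signal(ps))   # (6, 1, 0.0625): 6 vs 1, borderline evidence
[/code]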

And this has nothing to do with whether the average height of 25-year-old men in New York clusters around an average.

So if I conduct an experiment to find out the average height of 25 year old men in New York, you are saying that this problem won't apply?
 
Upvote 0

BrriKerr

Active Member
Dec 15, 2015
237
42
36
UK
✟603.00
Gender
Male
Faith
Atheist
Marital Status
Private
Most scientists would say that "science is a way of knowing".
Science is what all young creationists rely on but are taught to fear and hate. As with socialism, they don't know what science is, but they know it's bad for their religion.
Without science, creationists would be just talking amongst themselves.
 
Last edited:
Upvote 0

Zosimus

Non-Christian non-evolution believer
Oct 3, 2013
1,656
33
Lima, Peru
✟24,500.00
Faith
Agnostic
Marital Status
Married
Do you think he is only going to do the tests once? Of course not. He is going to do it many times. Let's say he does it 100 times.
Completely unrealistic. He'll never get the funding to do the same experiment 100 times. Let's be more realistic and assume that 10 different groups are given the funding to do the same experiment 3 times, with each group keeping only the genes that test positive in all 3 of its runs. The odds tell us that each group will then have, on average, about 17 true positives (20 × 0.95³ ≈ 17.1) and 3 false positives (19,980 × 0.05³ ≈ 2.5). We will also assume that all 10 studies get published.

What are the odds that the groups will have the same false positives? Unlikely. So we are talking about some 30 false positives and 17-20 true positives. This means that after running the test some 30 times we still have more false positives than true positives, by a factor of nearly 2:1.

Plus all of this assumes good study design and no bias. Considering that most published studies are not randomized, this is a generous assumption. Selection bias must surely be a factor in most or all of these studies.

Doesn't it follow that out of those 100 tests, he will see that the 20 genes that actually have an effect will have positive results 95 times, and the 19,980 genes that don't have an effect will have positive results only 5 times?
Will he? Unlikely. More likely he will run the test again on the positives and determine that those positives that do not retest positive are false positives. Since his first test will falsely exclude one of the true positives, and on his second test another of the true positives will fail to retest positive, we are already contemplating that two true positive results are lost by the test procedure. Even so, after 2 iterations he will still have 18 true positives and 50 false positives, for a ratio of about 26.5 percent true positives.
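The expected-value arithmetic in the last two replies can be checked directly. A quick Python sketch using the same assumed numbers (20 real genes out of 20,000, α = 0.05, power 0.95):

[code]
genes, real = 20_000, 20
nulls = genes - real
alpha, power = 0.05, 0.95

# One group keeping only genes that test positive in all 3 of its runs:
print(f"{real * power**3:.1f} true, {nulls * alpha**3:.1f} false")   # 17.1 true, 2.5 false

# Screen once, then retest the positives once:
tp, fp = real * power, nulls * alpha     # 19 true, 999 false positives
tp, fp = tp * power, fp * alpha          # ~18 true, ~50 false remain
print(f"{tp:.0f} true, {fp:.0f} false, {tp / (tp + fp):.1%} true")   # -> 26.5%
[/code]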

Or do you really think that scientists have not yet developed ways to compensate for these sorts of things?
Scientists have no motivation to compensate for these sorts of things. Statisticians have already suggested methods of improving the situation. Scientists are not interested.

Seems to me that the math is being used improperly, as you are applying it to only single tests. There are, as I explained, ways to compensate.
I understand. You don't get math, so you don't think it's a problem.

Again, call me crazy, but in the article you linked to, the word "ten" is not used at all.
Must I do all the leg work for you? Here you go:
The original article points out that although the number of papers being published has increased by 44 percent, the number of retractions has grown from around 30 to "on track to index more than 400." A simple back-of-the-envelope calculation shows that this means some 13.33 times more retractions.

However, the article links to another graph claiming that "In the past decade, the number of retraction notices has shot up 10-fold (top), even as the literature has expanded by only 44 percent."

Now retractions could be spun as a good thing. I'm sure you'll try to say that it's wonderful how bad research is being culled from the scientific literature. However, studies (http://www.nature.com/news/2011/111005/full/478026a/box/1.html) show that retracted research is still frequently cited:
"[John Budd] found that [the retracted studies] were cited in total more than 2,000 times after their withdrawal, with fewer than 8% of the citations acknowledging the retraction. And the rates haven't improved much in the age of electronic publication: in a preliminary analysis of 1,112 retracted papers during 1997–2009, Budd finds them cited just as often, with the retraction mentioned in only about 4% of the citations. Other studies suggest that the situation is even worse for corrections, which are more numerous and often add important updates to a paper."

Citation required.
His model predicted, in different fields of medical research, rates of wrongness roughly corresponding to the observed rates at which findings were later convincingly refuted: 80 percent of non-randomized studies (by far the most common type) turn out to be wrong, as do 25 percent of supposedly gold-standard randomized trials, and as much as 10 percent of the platinum-standard large randomized trials. The article spelled out his belief that researchers were frequently manipulating data analyses, chasing career-advancing findings rather than good science, and even using the peer-review process—in which journals ask researchers to help decide which studies to publish—to suppress opposing views. “You can question some of the details of John’s calculations, but it’s hard to argue that the essential ideas aren’t absolutely correct,” says Doug Altman, an Oxford University researcher who directs the Centre for Statistics in Medicine.

The example you posted (which I replied to earlier in this post) used a sample of ONE test. Show me any scientists who would conduct a single experiment and then claim that this single experiment yielded accurate results.
"We went through the paper line by line, figure by figure," said Begley. "I explained that we re-did their experiment 50 times and never got their result. He said they'd done it six times and got this result once, but put it in the paper because it made the best story. It's very disillusioning."

First of all, don't you dare call me "honey." I am not your honey and you do not have the right to refer to me by some smarmy pet name like you have some relationship with me.
Don't get emotional. It's not worth crying over.
 
Upvote 0

Zosimus

Non-Christian non-evolution believer
Oct 3, 2013
1,656
33
Lima, Peru
✟24,500.00
Faith
Agnostic
Marital Status
Married
It seems you are lacking in imagination if you insist on taking everything so literally.

If I come home and find flour dumped all over the floor, find my daughter covered from head to toe in flour and holding the empty bag, and she tells me that she emptied the flour, I have enough information to reach the correct conclusion. The fact that I do not have unlimited information about the event (such as what direction she was facing at the time she spilled it, or the time to the second that it happened) does not mean I can't reach a conclusion.
Ridiculous. This has nothing to do with scientific research.

Repeating experiments and observations, submitting your data to others and seeing if they can reach the same conclusion as you, using different techniques to test the same variables...
What will happen? Other scientists won't reach the same conclusion, but won't be able to publish because journals generally only publish positive findings.

Doesn't get rid of ALL possibility of false results, but those things (and doubtless a few others that I don't know of) can drastically reduce the chances.
You can say what you want, but most published research findings are false. All the brain twisting in the world won't change that.

So if I conduct an experiment to find out the average height of 25 year old men in New York, you are saying that this problem won't apply?
P-hacking refers to manipulation of the p-value. If you just measure the average height of 25-year-old men in New York and don't try to calculate a p-value off of it, then no, there will be no problem.
 
Upvote 0