Is global warming just another ‘End-of-the-World’ delusion?

Lucy Stulz
Bodenstein's Critique of Anderegg et al. (HERE)

Anderegg et al.'s reply to Bodenstein (HERE)

Anderegg et al.'s General reply to various Critiques (HERE)

My favorite quote from the latter:

Anderegg et al. said:
Furthermore, the vast majority of comments pertain to how the study could have been done differently. To the authors of such comments, we offer two words – do so! That’s the hallmark of science.
 

Poptech
This is called "filtering", PopTech. As I said, I am often published under my first two initials. In fact it is not that uncommon. However if one wishes to ensure they will capture UNRELATED hits on names similar to mine they may wish to drop my second initial. My last name isn't all that common, but you should get the point.
You did not answer the question, as it directly applies to the flawed methods used by Anderegg et al.

3. Did Anderegg et al. apply first and middle initials arbitrarily to the scientists' names?

You know as well as I do that the absolute count is not absolutely critical. There is noise in the data as part of the weakness of ANY database. Databases as you have noted are dynamic and often sloppy. Noise.
An accurate count is absolutely critical. Erroneous data is not "noise" but evidence that Google Scholar cannot be used for this type of study, since it is merely a "scholarly" search engine that does not use robust methods to obtain search results. The only reliable way to use Google Scholar in such a study is to verify every single result, something Anderegg et al. did not do.

With bibliographic databases like Scopus and Web of Science you will not see massive negative "corrections" like you do with Google Scholar. Instead you would likely see an increase as these authors publish more, or at worst the numbers staying the same.

I am not sure why you are trying to dismiss this evidence - instead of being intellectually honest. As an example,

17% (120) of the results used for Phil Jones can be considered erroneous.
51% (290) of the results used for Andrew J Weaver can be considered erroneous.
85% (352) of the results used for Gary L. Russell can be considered erroneous.

The empirical evidence of massive negative "corrections" alone discredits Anderegg et al., since conclusions based on unreliable data are worthless.

I have already addressed this. Do not make yourself out to be a LIAR (as you like to accuse everyone else).

Expertise is subjective and Anderegg et al. at least went so far as to establish a baseline for their study and tested the robustness of this.
It is irrelevant whether they tested it for "robustness"; they are explicitly making the claim that someone who publishes at the 20-paper threshold is an "expert" and anyone else is not.

I will rephrase the questions,

7. Do you consider that a scientist who has published 20 peer-reviewed papers on climate change should be considered an expert while someone who has published only 19 should not?

It shows that no matter how the data is parsed, those with a baseline history of publication and research in the field are OVERWHELMINGLY more likely to be in the CE category than the UE category. By such a large margin that if it were otherwise the p-value would be much, much higher.
11. Is the data used by Anderegg et al. reliable and reproducible?

You failed to answer this question,

8. Did Anderegg et al. fail to validate at least 80% of the data they used?

And again, it would be trivial for folks like YOU to actually do not just a "count" analysis but a CITATION analysis and find an entirely different result. But you don't. Why is that? Because you can't.
Why would I be interested in whether a paper was popular or not? How can that tell me whether a paper is scientifically valid?

And that is flawed on many vectors:
1. A single search is NOT what Anderegg et al. were doing.
2. You are looking at a raw count on one search.
3. This has nothing to do with the TYPE of analysis Anderegg et al. were doing (i.e. it is not a credibility or citation analysis).
I am not talking about Anderegg et al. here. My question was in response to your argument that citation count implies scientific validity,

10. Is "Intelligent design: The bridge between science & theology" scientifically valid because it is cited 353 times?

I await YOUR response to the challenge set out to you: do an actual analysis showing that the UE group is so vast and so well published in the climate sciences that there is almost no STATISTICAL DIFFERENCE in the populations as expressed in Anderegg et al.
The data and methods used by Anderegg et al. are unreliable and thus no meaningful statistical analysis is possible.

Now, if you wish to say that there is NO OBJECTIVE WAY to determine a "consensus" in a field then fine! But that is kind of silly since there is. And an analysis of citations will do that.
Citation counts cannot determine consensus, all they can tell you is popularity.

But it is like missing the forest for the trees. Meanwhile nearly every single climate scientist you will meet (assuming you get out of the skeptic blogger bubble you appear to live in) will probably fall in the "CE" category. There's a 97% likelihood of that.
This is unsubstantiated conjecture. Please provide the objective criteria for determining who is a "climate scientist". The CE group in Anderegg et al. is very specific to those willing to sign a statement in support of political action in response to belief in AGW. This will be quite different from scientists who may accept an anthropogenic component to climate change but differ on degree and/or policy recommendations. Conflating these groups together would be unwise.
 

Poptech
So keep on "assisting" people who get published. Maybe you'll get published one day.
I have no interest in getting published and have made no attempt to. This does not change the fact that I am very familiar with the process.

If your "Expertise" is so solid in this world of "LIARS" and "COMPUTER ILLITERATES" then by all means: DO YOUR OWN ANALYSIS and RUN THE NUMBERS.

Show the world they are LIARS and ILLITERATES.
I don't need to publish in any journal to show this, as it is clearly demonstrated on my website. No one with a background in computer science ever argues with me about my Google Scholar critiques; they either immediately accept them or concede, as the arguments are irrefutable since we are dealing with how the software actually works, not the fantasy land of those like yourself who don't understand it. That speaks volumes to my credibility on this issue. Also, whenever this is discussed in forums, no one with any computer background EVER comes in to embarrass themselves in these debates, because they know better and cannot help people like you.

That's why Creationists do what they do.
Why are you so desperate to smear me with this dishonest ad hominem?
 

Poptech
PopTech has missed the greater picture clearly visible in the statistics. He is claiming that the SELECTION methods of Anderegg et al. are so bad that it calls into question the idea that there is a solid scientific consensus on the topic.
There is no clear picture in statistics derived from flawed data. I am claiming that the conclusions of Anderegg et al. are worthless because they are based on flawed data and methods.

If PopTech is only interested in tossing the 97% figure then will he be OK if HIS analysis shows a 90% figure? Or how about 86%?
Since when is someone who demonstrates that a study's conclusions are worthless required to do their own study? I consider the level of "consensus" on this issue unknown.

But PopTech is mongering doubt.
Nope, I am discrediting the flawed studies you are trying to use to support your "consensus" argument.

He has failed to appreciate the power of statistical analysis. Even if the values of Anderegg et al.'s searches were different due to poor search criteria, etc., they would have to be EXTRAORDINARILY off the mark to produce the horrific errors that PopTech seems to see.
You do not consider these types of errors horrific?

17% (120) of the results used for Phil Jones can be considered erroneous.
51% (290) of the results used for Andrew J Weaver can be considered erroneous.
85% (352) of the results used for Gary L. Russell can be considered erroneous.
 

Poptech
Are you guys seriously arguing about the citation style in scientific publications? It's really arbitrary, but most common is last name comma first initial(s), middle name initial.
Not at all and I am well aware it is arbitrary. I am asking her a specific question relating to the methods used in Anderegg et al.,

3. Did Anderegg et al. apply first and middle initials arbitrarily to the scientists' names?
 

Lucy Stulz
An accurate count is absolutely critical.

So when you said you had statistical training, did you miss the whole "sample vs population" lecture? I don't mean to have to educate you on this, but the idea of a sample is that it is not a PERFECT mirror of the population. Hence there is ERROR associated with the measure.

A count should be accurate, yes, but that is why Anderegg et al. ran a statistical analysis!

When you read Anderegg et al. did you ever notice that they continually refer to the Mann-Whitney U-Test and the associated p-value? I assumed since you have had statistics training that you KNOW what that p-value indicates?

It is a measure of the probability of falsely rejecting the null hypothesis that there is no difference between the two groups (UE and CE) in terms of relative "expertise", as defined by active publication in the field.

The counts themselves are prone to error... Anderegg et al. TOLD YOU THAT EXPLICITLY IN THE PAPER. But, and this is where statistics comes in, the samples they took, imperfect as they may be, showed such VAST differences that there is virtually no way to confuse the relative expertise between the two groups (or perhaps the ability to get one's publications accepted by the overall scientific community, however you wish to view it).

[Anderegg et al. Figure 1: distributions of publication counts for the UE and CE groups]


What this picture tells you is that for an analysis, as explicitly outlined in the study, there is virtually NO WAY to confuse the relative activity of researchers who are CONVINCED vs those who are UNCONVINCED.

Two skewed distributions with means of 60 (UE) and 119 (CE) publications.

The chance of these two means actually being the SAME is about 0.000000000001% (hence the statistical analysis).

Could they be the same? Not very likely. But yes, there is a 1X10^-12% chance of them being the same and that this is merely a difference shown up by random chance.
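
Here is a rough R sketch of that kind of comparison, for anyone who wants to see it run. The counts are invented (negative-binomial draws with means near 60 and 119), NOT the actual Anderegg et al. data; the point is just to show the Mann-Whitney U test (wilcox.test in R) doing its job on two skewed samples:

# Toy illustration only: invented counts, not the Anderegg et al. data.
set.seed(42)
UE <- rnbinom(93, size = 2, mu = 60)     # skewed "unconvinced" sample, mean near 60
CE <- rnbinom(817, size = 2, mu = 119)   # skewed "convinced" sample, mean near 119

mean(UE); mean(CE)     # the sample means land near 60 and 119
wilcox.test(UE, CE)    # Mann-Whitney U test: the p-value is tiny

Run it a few times without the seed and the exact numbers wobble, but the conclusion does not.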

Now let us assume that NOT ALL 908 people in the study KNOW EACH OTHER PERSONALLY and rub each other's necks at the "office party" and rather view the REALITY that this amounts to nearly 1000 individual people spread ALL OVER THE EARTH.

Like I pointed out earlier, WHEN MY PAPERS ARE CITED it is often by people I DON'T EVEN KNOW EXIST. So how could I impact my citation analysis in those instances?

This graph shows whether there is a difference in the NUMBER OF TIMES a given researcher is cited, normalized so that the absolute publication counts of the researcher are no longer an issue:

[Anderegg et al. Figure 3: citation counts of each researcher's four most-cited papers, by group]


This takes each researcher, regardless of how well published they are, chooses their 4 MOST CITED ARTICLES, and compares how many citations those TOP FOUR MOST CITED ARTICLES GET!

It couldn't be more fair than that! AND it is not limited to their "climate only" work! They just want to figure out how "Prominent" they are in the scientific community!

And again, it is nearly impossible to confuse these two populations. For those climate researchers who have 20 climate publications or more, their top most cited publications in ANYTHING, not just climate, are dramatically different!
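
Here is roughly what that kind of "top four most cited" comparison looks like in R. Everything in it is simulated (made-up citation counts, group sizes of 93 and 817), so it only illustrates the mechanics, not the real result:

# Simulated citation counts only: this shows the mechanics, not the real data.
set.seed(1)
top4_mean <- function(cites) mean(sort(cites, decreasing = TRUE)[1:4])

# Give each researcher 30 papers with skewed citation counts, then average their top four.
ue_top4 <- replicate(93,  top4_mean(rnbinom(30, size = 1, mu = 20)))
ce_top4 <- replicate(817, top4_mean(rnbinom(30, size = 1, mu = 60)))

wilcox.test(ue_top4, ce_top4)   # Mann-Whitney U test on the per-researcher top-4 averages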

Erroneous data is not "noise" but evidence that Google Scholar cannot be used for this type of study

SO WHAT METHODOLOGY WOULD YOU USE?

With bibliographic databases like Scopus and Web of Science you will not see massive negative "corrections" like you do with Google Scholar. Instead you would likely see an increase as these authors publish more, or at worst the numbers staying the same.

Then do your analysis using Scopus.

Honestly! I mean, seriously! Anderegg et al. stated explicitly all the limitations of their study and they have asked that critics such as yourself DO YOUR OWN ANALYSIS!

You are spending so much time FIGHTING this publication instead of SHOWING A BETTER RESULT.

This is why I say you are arguing like a creationist. You are mongering doubt for doubt's sake BUT PROVIDING NO ALTERNATIVE DATA AND CONCLUSIONS!

I am not sure why you are trying to dismiss this evidence - instead of being intellectually honest. As an example,

Because Anderegg et al. explicitly state all the limitations of their analysis and within the bounds of the analysis it is STATISTICALLY ROBUST.

And when you compare this to other studies such as Doran et al. (EOS) and other self-reporting type analyses you see the vast majority of climate researchers are "convinced" by the evidence.

Like I said in another post: What if Anderegg et al were wrong? What if only 93%, or 90% or even, gasp, 88% of the world's climate researchers were actually "convinced"......

Would that really change anything?

This is a SAMPLE. It is not intended to be perfect, and is NEVER claimed to be perfect.

IF it is SO FAR OUT from the TRUE mean, then YOU can surely do another analysis, get it published and show the TRUTH.

I will rephrase the questions,

7. Do you consider that a scientist who has published 20 peer-reviewed papers on climate change should be considered an expert while someone who has published only 19 should not?

:doh:

At times like this I wish I was talking to someone who had actually done a scientific study of some sort.

Why would I be interested in whether a paper was popular or not? How can that tell me whether a paper is scientifically valid?

You are so desperately confused by what "popular" means in science. Science cites papers not based on popularity as you may know it from high school, but on whether the data and conclusions are worthy of citation or further testing.

I am not talking about Anderegg et al. here. My question was in response to your argument that citation count implies scientific validity,

I would like to point out that citations don't always mean the author "likes" the study they are citing. I had one study of mine in which I was cited so that the researcher citing me could point out an error I made.

Just a friendly FYI for someone such as yourself who appears to have nearly no actual publication history of his own.

10. Is "Intelligent design: The bridge between science & theology" scientifically valid because it is cited 353 times?

And again, you fail to grasp the point of the article.

The data and methods used by Anderegg et al. are unreliable and thus no meaningful statistical analysis is possible.

Well, if you are NOT like a creationist in debating style, you clearly are able to do your OWN analysis and show it to be false.

Not just find "critiques".

Citation counts cannot determine consensus, all they can tell you is popularity.

:doh:

Ow! My head hurts!!!

Please! Stop! Come back after you've actually gotten some stuff of your own published in a peer reviewed journal.
 

Lucy Stulz
I have no interest in getting published and have made no attempt to.

Of course not!

This does not change the fact that I am very familiar with the process.

Clearly it was not your own research as you only "assisted" and were not a co-author.

No one with a background in computer science ever argues with me about my Google Scholar critiques

-sigh-

Sampling. You must have missed that lecture in your statistics training.

they either immediately accept them or concede, as the arguments are irrefutable

Except that Anderegg et al never make the claim their data is perfect, and in fact go to great lengths to ensure the "raw counts" are not the only metric by which the statistics are run.

That speaks volumes to my credibility on this issue.

Your "credibility", such as it is, is limited to "blogging". Versus the science which is pretty solid and almost every metric by which one can assess the "consensus", there is almost no way in which your "critiques" of the Google Scholar searchwill change the raw fact that nearly every climate researcher on the planet is in the CE group.

YOUR sampling, however, is of a limited number of skeptics and fellow bloggers with almost no CLIMATE RELEVANT expertise or training.

Why are you so desperate to smear me with this dishonest ad hominem?

Because that is how you are debating.

You are casting doubt (and NEVER conduct an analysis of your own...you merely pick apart and find areas to "question", but never answer any questions yourself).

THAT is why. All I would need to do is change a couple nouns in your screeds to something related to "Genesis" and I'd have a carbon copy of most creation vs evolution debates.

You are ignoring the science of AGW to focus on a sideshow, and even doing that you never actually do a contrasting analysis to show how the numbers really look.

Doubt mongering, based on your particular expertise, which won't change a thing (not even the general idea of the consensus).

So is your beef with GOOGLE SCHOLAR or with AGW?

If it's with Google Scholar then, fine! If it's with AGW then you have a LOT more work to do. Your critique of Anderegg et al, even if shown to be spot on...will still fail to show that the majority of climate science researchers are unconvinced of AGW.

And in the end that is the sad part of your little windmill tilting exercise here.

As Anderegg et al. say directly to folks like you:

Furthermore, the vast majority of comments pertain to how the study could have been done differently. To the authors of such comments, we offer two words – do so! That’s the hallmark of science.(HERE)
 

Lucy Stulz
Since when is someone who demonstrates that a study's conclusions are worthless required to do their own study?

In creationist-type circles: NEVER!

I consider the level of "consensus" on this issue unknown.

Of course you do! It is "doubt"! And that's all that counts!

Unless, of course, you actually talk to climate researchers and hang around the world's top earth science research facilities (as I have been lucky enough to occasionally do). Only then do you realize that the "doubt" you wish to live with really isn't that doubtful.

:)
 

Poptech
You continue to dodge this question, as it directly applies to the flawed methods used by Anderegg et al.,

3. Did Anderegg et al. apply first and middle initials arbitrarily to the scientists' names?

So when you said you had statistical training, did you miss the whole "sample vs population" lecture? ...but the idea of a sample is that it is not a PERFECT mirror of the population. Hence there is ERROR associated with the measure.
Strawman; my critiques of Anderegg et al. have to do with flawed methods and erroneous data, not the representativeness of the sample.

A count should be accurate, yes, but that is why Anderegg et al. ran a statistical analysis!
Accurate as in it does not include unreliable and erroneous data. A meaningful statistical analysis cannot be done on unreliable and erroneous data. Using your logic, I could create "Poptech's scholarly database" that was full of unreliable and erroneous data, but so long as I ran a "statistical analysis" on the data my conclusions would always be valid.

You continue to dodge these questions as well,

8. Did Anderegg et al. fail to validate at least 80% of the data they used?

11. Is the data used by Anderegg et al. reliable and reproducible?


Now let us assume that NOT ALL 908 people in the study KNOW EACH OTHER PERSONALLY and rub each other's necks at the "office party" and rather view the REALITY that this amounts to nearly 1000 individual people spread ALL OVER THE EARTH.

Like I pointed out earlier, WHEN MY PAPERS ARE CITED it is often by people I DON'T EVEN KNOW EXIST. So how could I impact my citation analysis in those instances?
The existence of the Internet invalidates the argument that geographic separation is an impediment. It is reasonable to suggest that, with the existence of the Internet and the IPCC, these scientists are more likely to know each other than not. Anderegg et al. even conceded that they were unable to rule out self-citation and clique-citation bias,

"Regarding the influence of citation patterns, we acknowledge that it is difficult to quantify potential biases of self-citation or clique citation in the analysis presented here." - Anderegg et al.

Just because you assert you do not know someone who cites your paper does not rule out the existence of influence. For instance, the influence could come from a co-author of one of your papers or a colleague. The point is the bias cannot be ruled out, and thus makes the metric scientifically worthless.

It couldn't be more fair than that! AND it is not limited to their "climate only" work! They just want to figure out how "Prominent" they are in the scientific community!
This is an argumentum ad populum logical fallacy, as "prominence" does not equal scientific validity.

SO WHAT METHODOLOGY WOULD YOU USE?
No all-inclusive database exists, so there is no methodology to present that would not be biased.

Then do your analysis using Scopus.

Honestly! I mean, seriously! Anderegg et al. stated explicitly all the limitations of their study and they have asked that critics such as yourself DO YOUR OWN ANALYSIS!
I simply used Scopus as an example of a bibliographic database in which you would not see massive negative "corrections" like you do with Google Scholar. I do not have to do my own analysis to prove that Anderegg et al.'s is worthless.

This is why I say you are arguing like a creationist. You are mongering doubt for doubt's sake
This is a dishonest ad hominem and a dishonest circumstantial ad hominem.

Because Anderegg et al. explicitly state all the limitations of their analysis and within the bounds of the analysis it is STATISTICALLY ROBUST.
This is false; if they understood the limitations of using Google Scholar they would never have used it for their study.

And when you compare this to other studies such as Doran et al. (EOS) and other self-reporting type analyses you see the vast majority of climate researchers are "convinced" by the evidence.
I have seen no such evidence to draw any such conclusions; Doran and Zimmerman (2009) suffers from a biased sample of only 75 scientists.

You continue to dodge this question,

7. Do you consider that a scientist who has published 20 peer-reviewed papers on climate change should be considered an expert while someone who has published only 19 should not?

You are so desperately confused by what "popular" means in science. Science cites papers not based on popularity as you may know it from high school, but on whether the data and conclusions are worthy of citation or further testing. [...] I would like to point out that citations don't always mean the author "likes" the study they are citing. I had one study of mine in which I was cited so that the researcher citing me could point out an error I made.
I am not confused at all, as there is no way to objectively determine the motives for which a paper is cited.

And again, you fail to grasp the point of the article.
Strawman - again, you failed to grasp my argument. I am not talking about Anderegg et al. but your argument that citation count implies scientific validity,

10. Is "Intelligent design: The bridge between science & theology" scientifically valid because it is cited 353 times?

Well, if you are NOT like a creationist in debating style
This is again a dishonest ad hominem.
 

Poptech
Clearly it was not your own research as you only "assisted" and were not a co-author.
Strawman, I never claimed it was.

Sampling. You must have missed that lecture in your statistics training.
Which lecture in statistics tells you how sampling solves the problem of unreliable and erroneous data?

Your "credibility", such as it is, is limited to "blogging". Versus the science which is pretty solid and almost every metric by which one can assess the "consensus", there is almost no way in which your "critiques" of the Google Scholar search will change the raw fact that nearly every climate researcher on the planet is in the CE group.
Again, why has no one with a computer science background challenged my arguments about the limitations of Google Scholar for these types of studies? My critiques of their improper use of Google Scholar will certainly make computer literate people wary of citing Anderegg et al. to support a consensus argument.

You have polled every climate researcher on the planet?

YOUR sampling, however, is of a limited number of skeptics and fellow bloggers with almost no CLIMATE RELEVANT expertise or training.
What are you talking about? What sampling?

Please provide the objective criteria for determining if someone has "climate relevant" expertise or training.

Because that is how you are debating.

You are casting doubt (and NEVER conduct an analysis of your own...you merely pick apart and find areas to "question", but never answer any questions yourself).
So you admit to trying to smear me with dishonest ad hominems based on falsely implied motives?

THAT is why. All I would need to do is change a couple nouns in your screeds to something related to "Genesis" and I'd have a carbon copy of most creation vs evolution debates.
More dishonest ad hominems.

You are ignoring the science of AGW to focus on a sideshow, and even doing that you never actually do a contrasting analysis to show how the numbers really look.
No I am focusing on your flawed consensus argument. You were the one that foolishly attempted to use Anderegg et al. to support a flawed consensus argument.

I do not have to do my own analysis to prove that Anderegg et al.'s is worthless.

In creationist-type circles: NEVER!
You argue like a Nazi propaganda minister. I have no idea how you received a Ph.D.

Of course you do! It is "doubt"! And that's all that counts!

Unless, of course, you actually talk to climate researchers and hang around the world's top earth science research facilities (as I have been lucky enough to occasionally do). Only then do you realize that the "doubt" you wish to live with really isn't that doubtful.
I talk to climate researchers on a weekly basis and they do not support your position.

Intellectual honesty is all that counts which is why I honestly state that,

I consider the level of "consensus" on this issue unknown.
 

Lucy Stulz
Which lecture in statistics tells you how sampling solves the problem of unreliable and erroneous data?

Sampling means it is not the full population picture. It is just a sample. With sufficient samples the central limit theorem tells us we will zero in on the "true mean" of the population, but a sample is NOT necessarily the full picture.
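
Here is a two-minute R illustration of that point, using a made-up population that has nothing to do with the climate data: any single sample wanders around the true mean, but the average of many sample means zeroes in on it.

# Toy illustration of sampling error vs. the true population mean.
set.seed(7)
pop <- rnorm(100000, mean = 50, sd = 10)                # a "population" with true mean 50

sample_means <- replicate(1000, mean(sample(pop, 10)))  # 1000 samples of size 10
range(sample_means)   # individual sample means scatter well away from 50...
mean(sample_means)    # ...but their average sits right on top of the true mean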

Again, why has no one with a computer science background challenged my arguments about the limitations of Google Scholar

Because your points about Google Scholar are reasonable....but flawed in their application.

No one expects the Google Scholar sample to be perfect. Not even the authors themselves.

for these types of studies? My critiques of their improper use of Google Scholar will certainly make computer literate people wary of citing Anderegg et al. to support a consensus argument.

You have polled every climate researcher on the planet?

See, that's where you show your failure in this analysis. You seem to be of the opinion that every member of the population MUST BE ANALYZED for a meaningful conclusion to be drawn.

That is where statistics comes in. It is possible to SAMPLE a population and develop a modicum of confidence on the population behavior....but it is always imperfect.

This is the reason I ask about your statistics training. Because it seems you may have missed quite a bit of the key parts.

What are you talking about? What sampling?

You claim you are in contact with scientists and yet have doubts on the AGW consensus. THAT sampling.

Please provide the objective criteria for determining if someone has "climate relevant" expertise or training.

THAT is what Anderegg et al. did in this study. That's the whole point! I wish you would attempt to understand how scientists operate. In the absence of a clearly "standardized" system one has to build a rubric by which to run the analysis.

So you admit to trying to smear me with dishonest ad hominems based on falsely implied motives?

I admit nothing of the sort. And it would be most helpful if you understood the technical details of the ad hominem fallacy. It is not merely an "insult". It is using an unrelated aspect of the PERSON to call into question the content of their argument. THAT is what the ad hominem fallacy is.

If you are "insulted" by my comparing your debates to creationist style, that is totally different.

My point is technically on target and I have supported it with an explanation.

AND it is hardly "dishonest" to point out that you are only casting doubt on the analysis without running a comparative analysis.

More dishonest ad hominems.

More failures to have taken a logic class.

No I am focusing on your flawed consensus argument. You were the one that foolishly attempted to use Anderegg et al. to support a flawed consensus argument.

"flawed consensus"? So Anderegg, Noreskes, Dornan, EVERYONE is wrong? Repeated analyses by INDEPENDENT STUDIES find, time and again, >90% agreement within the community.

I do not have to do my own analysis to prove that Anderegg et al.'s is worthless.

Actually you kinda do! Because Anderegg's results are in line with other independent studies, AND if Anderegg's sampling is so seriously flawed then rather than point to POSSIBLE problems with "citation counts" you should be able to generate a MATHEMATICALLY ROBUST analysis showing the final values to be different.

This is where you will likely make another howler of an error so let me stop you beforehand:

merely pointing out that the search for PD-Jones or the search for X.Y. Johnson results in different numbers of "hits" at different times is insufficient to determine if the final statistical analysis will be different.

Let me help you with a much simpler example:

Let's say I want to find out if there's a difference in two populations A and B.

I sample A and get a distribution "a". I sample B and get a distribution "b".

FIRST off: "a" is NOT A. It is a SAMPLE. That means it may or may not accurately reflect the population A.

So I run a STATISTICAL TEST ON "a" and "b". Perhaps I do a "t-test" (let's assume these are "normally distributed" data). I find a difference between the two populations that has a p-value of <0.05.

What if I then go back and re-sample A and B and get two new samples: "a2" and "b2".

"a2" and "b2" may have different means and be somewhat different from "a" and "b".

But will it necessarily change the results of the t-test?
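
Here is that toy example written out in R, so you can run it rather than take my word for it (the populations are arbitrary made-up numbers):

# The A-vs-B example above, with arbitrary toy populations.
set.seed(3)
A <- rnorm(10000, mean = 2, sd = 4)
B <- rnorm(10000, mean = 7, sd = 4)

a <- sample(A, 30); b <- sample(B, 30)     # first pair of samples
t.test(a, b)$p.value                       # small p-value: the samples differ

a2 <- sample(A, 30); b2 <- sample(B, 30)   # re-sample both populations
t.test(a2, b2)$p.value                     # different numbers, same conclusion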

What you have done is point out a possible problem with sampling.

You have not shown how it will impact the final analysis and conclusions of Anderegg et al.

NOW, here's the shocker: your critiques are not ipso facto "bad". But all they are is a critique.

It proves nothing as to the final analysis. And that is where you seem to miss the boat. This is the power of statistics. It doesn't have to see EVERYTHING PERFECTLY to draw conclusions.

But you are correct to question the sampling methodologies. However, once you have done that you cannot simply blow it off as "worthless", because the results may still be accurate on the whole.

You argue like a Nazi propaganda minister. I have no idea how you received a Ph.D.

I actually did research. Not just critiquing of others work.

Intellectual honesty is all that counts which is why I honestly state that,

I consider the level of "consensus" on this issue unknown.

And since you have no NUMBERS you are free to live with that doubt. Clearly those who bother to do actual research and come up with analyses and numbers are, in your world, worthless.

It is better to be a "doubt monger" than to attempt to do research.

THIS is how I got my PhD. I did RESEARCH. It isn't enough just to scratch your head and say "I don't think you did that right so I think it's all in doubt."

THAT is where you have fallen short of the goal.
 

Lucy Stulz
Accurate as in it does not include unreliable and erroneous data. A meaningful statistical analysis cannot be done on unreliable and erroneous data. Using your logic, I could create "Poptech's scholarly database" that was full of unreliable and erroneous data, but so long as I ran a "statistical analysis" on the data my conclusions would always be valid.

-sigh-

Let's do some real world work here:

In R I have generated two POPULATIONS, A and B. (You are familiar with R no doubt, since you are a computer scientist and have training in statistics).

I used the following commands to generate the two populations:

A <- rnorm(1000, 2, 4)   # a normal distribution with mean 2 and standard deviation 4

B <- rnorm(1000, 7, 4)   # a normal distribution with mean 7 and standard deviation 4

Then I ran R to generate a few samples we'll call "a1","b1", "a2", "b2", "a3","b3", etc.

Each will be 10 items from the POPULATION.

I will use the following command:

a <- sample(A, 10, replace = FALSE, prob = NULL)   # draw a sample of 10 items from population A

Now, you can run this sort of test yourself and you'll get different values but remember: the TRUE MEAN of A is "2" and the TRUE MEAN of B is 7

On repeated SAMPLING I get the following means:

a: 1.6, 0.9, 3.2, 3.5, 3.3
b: 5.3, 8.2, 5.4, 8.6, 7.7

And for each of these sample pairs I ran t-tests which give the following p-values:
0.12, 0.001, 0.324, 0.016, 0.03

MEANING in 3 out of 5 runs I was able to see a significant difference between the two SAMPLES.

In reality the t.test(A,B) command returns a p-value of 2.2e-16 (!!!) That's pretty significant.
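
For the record, the whole exercise above can be reproduced with a short loop; your numbers will differ from mine since the samples are random:

# Reproducing the repeated-sampling exercise (results vary run to run).
A <- rnorm(1000, 2, 4)
B <- rnorm(1000, 7, 4)

for (i in 1:5) {
  a <- sample(A, 10, replace = FALSE)
  b <- sample(B, 10, replace = FALSE)
  cat(sprintf("run %d: mean(a) = %.1f, mean(b) = %.1f, p = %.3f\n",
              i, mean(a), mean(b), t.test(a, b)$p.value))
}

t.test(A, B)   # on the full populations the p-value is vanishingly small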

It is a large data set so it can easily show the differences.

The samples are, NECESSARILY, off from the TRUE POPULATION.

Was my sampling "bad"? Well it was "random" sampling, so it's about as good as I could get.

And in the end, that is what Anderegg et al were doing. SAMPLING A DATABASE. And they clearly outlined the criterion which is the BEST that can be done with a given database. That's it!

Your critiques of the "problems" with the "hit count" for any given researcher have two problems:

1. Will that difference in "hit count" result in a REAL difference in result?

2. Given the nature of the database this is properly called NOISE (error), and as such, since it is not UNDER THE CONTROL OF ANDEREGG ET AL, it is not an induced bias.

ONLY, unlike MY example, the DIFFERENCES ARE SO LARGE in the samples that the p-values from the Mann-Whitney U test make it really hard to see how those numbers could be shifted significantly. (Not to say they can't!)

...
11. Is the data used by Anderegg et al. reliable and reproducible?

In that any sample is "reproducible" from an unknown population, yeah.

No all-inclusive database exists, so there is no methodology to present that would not be biased.

Now you have found BIAS??? What kind of statistical analysis did you use to come to THAT conclusion?

But really, HOW DO YOU KNOW THIS? In fact, ANY database can be thus analyzed. Unless Anderegg went to the "We Hate Denialists" Publication Database there is no good reason NOT to analyze Google Scholar.

But again, Scopus would be a good one. I'd like to see your similar analysis of Scopus (which I know you won't do...that's called actual research, rather than "critique".)

I simply used Scopus as an example of a bibliographic database in which you would not see massive negative "corrections" like you do with Google Scholar. I do not have to do my own analysis to prove that Anderegg et al.'s is worthless.

Again, you actually DO have to do an analysis. Or you have to show mathematically how ANY GIVEN DATABASE SAMPLING will yield a non-robust response.

That's the only way the game can be played.

Oh, and please don't confuse your anecdotal "I got x hits for Phil Jones and they got y" type argument with analysis. It goes much more in-depth than that! That's anecdotal data and has no place in statistics.

(Again, if you don't have a handy stats package like JMP or SPSS or Minitab, you can download R for free! It's a no-brainer for a computer scientist like yourself...it's command line! Hardcore stuff that computer illiterates like myself simply can't do!)
 

Greatcloud
All I see for the coming seasons is global cooling. Can't wait til the June satellite data comes out, more cooling coming. Yes, I will have the last laugh this winter when there is no stopping the cooling from rising above the noise so everyone will see it, brrrrrrrr, keep your coat handy.
 

Lucy Stulz
All I see for the coming seasons is global cooling.

Why? I mean, besides the fact that there is no evidence of cooling right now...why do you think it will start cooling?

Can't wait til the June satellite data comes out

So you can draw a ridiculously NON-SCIENTIFIC conclusion from ONE MONTH'S worth of data???

LOL!

more cooling coming. Yes, I will have the last laugh this winter when there is no stopping the cooling from rising above the noise so everyone will see it, brrrrrrrr, keep your coat handy.

And then you can demonstrate your lack of understanding of statistics! YAY!

Sorry but you won't be able to look at a short period of time in the trend and draw meaningful conclusions.


You can draw the type of conclusions the scientifically uneducated do, though!
 

Lucy Stulz
One really crude mini-analysis.

Using R, again, I generated two Poisson distributions (so I would have a skewed data set, i.e. non-Gaussian) that kind of sort of "mimicked" the performance of the UE and CE data from Anderegg et al.

R commands:
> UE <- rpois(93, 2)    # 93 UE researchers drawn from a Poisson distribution with mean 2
> CE <- rpois(817, 4)   # 817 CE researchers drawn from a Poisson distribution with mean 4

(PopTech will understand what this is doing)

This is what the results look like overlain:

[Histogram of the two simulated UE and CE distributions, overlain]


Compare this to Fig 1 from Anderegg:

[Anderegg et al. Figure 1, for comparison]


Now mind you, I'm just playing here but I'm not too far off in the weeds.

Note that the Wilcoxon rank-sum test (the same as the Mann-Whitney U test) yields a SIGNIFICANT result (meaning the two means are likely not the same).

So now, what if I assume that my sampling was simply awful? Let's say that the mean number of publications attributable to CE researchers was actually 3/4 of the number I got here... a 25% REDUCTION in publication count!

Let's see what the data looks like with these distributions:

> UE <- rpois(93, 2)    # UE group unchanged
> CE <- rpois(817, 3)   # CE mean reduced by 25%

[Histogram of the simulated distributions with the CE mean reduced by 25%]


Wilcoxon rank sum test with continuity correction

data: UE and CE
W = 26251, p-value = 6.395e-07
alternative hypothesis: true location shift is not equal to 0

Note how the DIFFERENCE IS STILL SIGNIFICANT! Even if I were to have found 25% "inflation" in my publication count in the CE group I still get a STATISTICALLY SIGNIFICANT DIFFERENCE.

Again, I know that R is probably second nature to PopTech and he'll just whip up a quick program in R and run a ton of actual analyses!
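
And in case anyone else wants to reproduce this mini-analysis, roughly the following will do it (simulated Poisson counts again, not the real publication data):

# The two toy scenarios above in one script (simulated counts, not real data).
set.seed(11)
UE  <- rpois(93, 2)    # 93 "unconvinced" researchers, mean count 2
CE4 <- rpois(817, 4)   # 817 "convinced" researchers, mean count 4
CE3 <- rpois(817, 3)   # the same group with the mean knocked down by 25%

wilcox.test(UE, CE4)   # Wilcoxon rank-sum / Mann-Whitney U: highly significant
wilcox.test(UE, CE3)   # still highly significant after the 25% reduction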
 

Poptech
Lucy continues to dodge my questions (re-numbering them here) because she cannot answer them, and instead wants to argue strawmen and red herrings to distract from her inability to do so,

1. What is the 1001st result for any Google Scholar search?

2. Did Anderegg et al. apply first and middle initials arbitrarily to the scientists' names?

3. Did Anderegg et al. fail to validate at least 80% of the data they used?

4. Is the data used by Anderegg et al. reliable and reproducible?

5. Are 17% (120) of the results used for Phil Jones erroneous?

6. Are 51% (290) of the results used for Andrew J Weaver erroneous?

7. Are 85% (352) of the results used for Gary L. Russell erroneous?

8. Is a scientist who has published 19 peer-reviewed papers on climate change an expert?

9. Is "Intelligent design: The bridge between science & theology" scientifically valid because it is cited 353 times?


A meaningful statistical analysis cannot be done on bad data.
 

Lucy Stulz
Lucy continues to dodge my questions (re-numbering them here) and instead argue strawman arguments,

Lucy has been busy DOING SOME MATHEMATICAL/STATISTICAL ANALYSES in regards to the topic. It's called doing some work rather than just picking stuff apart.

5. Are 17% (120) of the results used for Phil Jones erroneous?

I showed you, explicitly, in the post above, that CHANGING EVERY COUNT DOWN BY 25% IN JUST THE CE GROUP doesn't change the results from the "example" I gave.

So how would a 17% change IN ONE AUTHOR'S COUNT affect the results???

Out of 817 researchers, this amounts to ONE RESEARCHER whose "count value" (x-axis value) will drop by 17%. Show me how a 0.1% shift in the mean of a skewed distribution will affect the overall conclusion.
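
To make that concrete, here is a quick toy check in R (simulated counts again, not the real data): drop ONE researcher's count by 17% in a group of 817 and see whether the test even notices.

# Toy check: reduce ONE researcher's count by 17% in a group of 817.
set.seed(5)
UE <- rpois(93, 2)
CE <- rpois(817, 4)

CE_adj    <- CE
CE_adj[1] <- round(CE_adj[1] * 0.83)   # one author's count drops by 17%

wilcox.test(UE, CE)$p.value
wilcox.test(UE, CE_adj)$p.value        # essentially unchanged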

6. Are 51% (290) of the results used for Andrew J Weaver erroneous?

Same questions.

8. Is a scientist who has published 19 peer-reviewed papers on climate change an expert?

This is arbitrarily established as a "cutoff filter". But again, YOU could run your own analysis.

But then you wouldn't be doing the same type of research. Because there are people who can get maybe one publication in a field WHO ARE NOT CAPABLE RESEARCHERS, certainly not EXPERT.

9. Is "Intelligent design: The bridge between science & theology" scientifically valid because it is cited 353 times?

This is your funniest question because it really has NO BEARING ON THIS TYPE OF STUDY.

Do you even understand what Anderegg et al were doing? Because when you ask this question you make it clear you really don't.


Perhaps we can talk about the MATHEMATICAL AND STATISTICAL IMPLICATIONS of your critiques.

Of course that will require you to do some work too. I've been the one to actually do some work here.

I can do more if you like.
 

Poptech
Lucy continues to dodge these questions because she cannot answer them,

1. What is the 1001st result for any Google Scholar search?

2. Did Anderegg et al. apply first and middle initials arbitrarily to the scientists' names?

3. Did Anderegg et al. fail to validate at least 80% of the data they used?

4. Is the data used by Anderegg et al. reliable and reproducible?


I showed you, explicitly, in the post above, that CHANGING EVERY COUNT DOWN BY 25% doesn't change the results from the "example" I gave.

So how would a 17% change IN ONE AUTHOR'S COUNT affect the results???
Strawman; those are not the questions I asked,

5. Are 17% (120) of the results used for Phil Jones erroneous?

6. Are 51% (290) of the results used for Andrew J Weaver erroneous?

7. Are 85% (352) of the results used for Gary L. Russell erroneous?


This is arbitrarily established as a "cutoff filter". But again, YOU could run your own analysis.

But then you wouldn't be doing the same type of research. Because there are people who can get maybe one publication in a field WHO ARE NOT CAPABLE RESEARCHERS, certainly not EXPERT.
This is not what I asked you,

8. Is a scientist who has published 19 peer-reviewed papers on climate change an expert?

This is your funniest question because it really has NO BEARING ON THIS TYPE OF STUDY.

Do you even understand what Anderegg et al were doing? Because when you ask this question you make it clear you really don't.
Strawman argument for the third time,

9. Is "Intelligent design: The bridge between science & theology" scientifically valid because it is cited 353 times?

Everyone can see your dodge ball game very clearly.
 

Lucy Stulz
A meaningful statistical analysis cannot be done on bad data.

LOL!

I showed you an example in which I DECREASED EVERY HIT COUNT FOR ONLY ONE CLASS (CE in my example) AND STILL SHOWED THAT A STATISTICALLY SIGNIFICANT DIFFERENCE WAS MAINTAINED.

So far you've presented 3 whole examples of people with potentially inflated "hit counts".

You have 817 members of the "CE" class. Each one has a number of "references".

The statistics rest on how big the difference is between the two groups.

This is something you should be able to clearly show to be flawed IF IT IS.

(Don't worry: as in many Creationist-type debates, I am under no illusion that you will be willing to push one button on a calculator in defense of your "doubt gambit"... ironically enough, you are the "computer expert" here. And no mathematical analysis will fall out of your posts other than anecdotal hit counts for individuals.)

And again, the statistics are more robust than that.
 