An accurate count is absolutely critical.
So when you said you had statistical training, did you miss the whole "sample vs population" lecture? I don't mean to have to educate you on this, but the idea of a sample is that it is not a PERFECT mirror of the population. Hence there is ERROR associated with the measure.
A count should be accurate, yes, but that is why Anderegg et al.
ran a statistical analysis!
When you read Anderegg et al. did you ever notice that they continually refer to the Mann-Whitney U-Test and the associated
p-value? I assumed since you have had statistics training that you KNOW what that p-value indicates?
It is the probability of observing a difference at least as large as the one in the data if the null hypothesis were true, i.e. if there were really no difference between the two groups (UE and CE) in relative "expertise" as defined by active publication in the field.
The counts themselves are prone to error... Anderegg et al. TOLD YOU THAT EXPLICITLY IN THE PAPER. But, and this is where statistics comes in, the samples they took, imperfect as they may be, showed such VAST differences that there is virtually no way to confuse the relative expertise of the two groups (or perhaps the ability to get one's publications accepted by the overall scientific community, however you wish to view it).
What this picture tells you is that for an analysis, as explicitly outlined in the study, there is virtually NO WAY to confuse the relative activity of researchers who are CONVINCED vs those who are UNCONVINCED.
Two skewed distributions with means of 60 (UE) and 119 (CE) publications.
The chance of seeing a difference this large between the two means if the groups were actually the SAME is about 0.000000000001% (hence the statistical analysis).
Could they be the same? Not very likely. Yes, there is a 1x10^-12% chance that this is merely a difference thrown up by random chance.
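To make the test concrete, here is a minimal sketch of a one-sided Mann-Whitney U test on simulated, skewed data. The group sizes (450 each) and the log-normal shape are my own assumptions, chosen only to produce two skewed distributions with means near 60 and 119 publications; this is NOT Anderegg et al.'s actual data or their exact procedure.

```python
# Illustrative Mann-Whitney U test on simulated publication counts.
# Group sizes and distribution parameters are assumptions, not study data.
import math
import random
from statistics import NormalDist

def mann_whitney_one_sided(x, y):
    """Return (U, p) testing whether values in x tend to exceed values in y.

    Uses the large-sample normal approximation and assumes no ties,
    which holds for continuous simulated values like these.
    """
    nx, ny = len(x), len(y)
    pooled = sorted([(v, 0) for v in x] + [(v, 1) for v in y])
    # Rank sum of the x group (ranks are 1-based)
    rank_sum_x = sum(rank for rank, (_, group) in enumerate(pooled, start=1)
                     if group == 0)
    u = rank_sum_x - nx * (nx + 1) / 2
    mean_u = nx * ny / 2
    sd_u = math.sqrt(nx * ny * (nx + ny + 1) / 12)
    z = (u - mean_u) / sd_u
    return u, NormalDist().cdf(-z)  # upper-tail p-value

random.seed(1)
ue = [random.lognormvariate(math.log(60), 1.0) for _ in range(450)]
ce = [random.lognormvariate(math.log(119), 1.0) for _ in range(450)]
u, p = mann_whitney_one_sided(ce, ue)
print(f"U = {u:.0f}, one-sided p = {p:.2e}")
```

Even with heavily skewed samples, a separation like this drives the p-value to essentially zero, which is exactly the point: skew and count error do not rescue the "the groups could be the same" objection.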
Now let us assume that NOT ALL 908 people in the study KNOW EACH OTHER PERSONALLY and rub each other's necks at the "office party" and rather view the REALITY that this amounts to nearly 1000 individual people spread ALL OVER THE EARTH.
Like I pointed out earlier, WHEN MY PAPERS ARE CITED it is often by people I DON'T EVEN KNOW EXIST. So how could I inflate my own citation counts in those instances?
This graph shows whether there is a difference in the NUMBER OF TIMES a given researcher is cited, normalized so that the researcher's absolute publication count is no longer an issue:
It takes each researcher, regardless of how well published they are, selects their FOUR MOST CITED ARTICLES, and compares how many citations those TOP FOUR MOST CITED ARTICLES get!
It couldn't be more fair than that! AND it is not limited to their "climate only" work! They just want to figure out how "Prominent" they are in the scientific community!
And again, it is nearly impossible to confuse these two populations. For those climate researchers who have 20 or more climate publications, their MOST CITED publications in ANYTHING, not just climate, are dramatically different!
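The normalization described above can be sketched in a few lines. The researcher data here is invented for illustration, and I average the top four counts (summing them instead would not change the spirit of the comparison); Anderegg et al. drew the real counts from Google Scholar.

```python
# Sketch of the "top four most-cited papers" normalization: keep only each
# researcher's four best-cited papers, so prolific authors are not favored
# simply for having published more. All numbers below are invented.
def top4_citations(citation_counts):
    """Average citations of a researcher's (up to) four most-cited papers."""
    top = sorted(citation_counts, reverse=True)[:4]
    return sum(top) / len(top)

# Invented example: two researchers with very different publication counts
prolific = [300, 150, 90, 80, 40, 12, 5, 3, 1]   # many papers
focused = [280, 160, 95, 70]                      # only four papers
print(top4_citations(prolific))  # 155.0
print(top4_citations(focused))   # 151.25
```

Note that the well-published researcher gets no advantage: only the four strongest papers count for either author.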
Erroneous data is not "noise" but evidence that Google Scholar cannot be used for this type of study
SO WHAT METHODOLOGY WOULD YOU USE?
With bibliographic databases like Scopus and Web of Science you will not see massive negative "corrections" like you do with Google Scholar. Instead you would likely see an increase as these authors publish more or at best the numbers staying the same.
Then do your analysis using Scopus.
Honestly! I mean, seriously! Anderegg et al. stated explicitly all the limitations of their study and they have asked that critics such as yourself DO YOUR OWN ANALYSIS!
You are spending so much time FIGHTING this publication instead of SHOWING A BETTER RESULT.
This is why I say you are arguing like a creationist. You are mongering doubt for doubt's sake
BUT PROVIDING NO ALTERNATIVE DATA AND CONCLUSIONS!
I am not sure why you are trying to dismiss this evidence instead of being intellectually honest about it.
Anderegg et al. explicitly state all the limitations of their analysis, and within the bounds of the analysis it is STATISTICALLY ROBUST.
And when you compare this to other studies, such as Doran and Zimmerman (2009, EOS) and other self-reporting analyses, you see the vast majority of climate researchers are "convinced" by the evidence.
Like I said in another post: what if Anderegg et al. were wrong? What if only 93%, or 90%, or even, gasp, 88% of the world's climate researchers were actually "convinced"...
Would that really change anything?
This is a SAMPLE. It is not intended to be perfect, and is NEVER claimed to be perfect.
IF it is SO FAR OUT from the TRUE mean, then YOU can surely do another analysis, get it published and show the TRUTH.
I will rephrase the questions,
7. Do you consider that a scientist who has published 20 peer-reviewed papers on climate change should be counted as an expert, while someone who has published only 19 should not?
At times like this I wish I was talking to someone who had actually done a scientific study of some sort.
Why would I be interested in if a paper was popular or not? How can that tell me if a paper is scientifically valid?
You are so desperately confused by what "popular" means in science. Science cites papers not based on popularity as you may know it from high school, but on whether the data and conclusions are worthy of citation or further testing.
I am not talking about Anderegg et al. here. My question was in response to your argument that citation count implies scientific validity.
I would like to point out that citations don't always mean the author "likes" the study they are citing. One of my own studies was cited specifically so that the citing researcher could point out an error I had made.
Just a friendly FYI for someone such as yourself who appears to have nearly no actual publication history of his own.
10. Is "Intelligent design: The bridge between science & theology" scientifically valid because it is cited 353 times?
And again, you fail to grasp the point of the article.
The data and methods used by Anderegg et al. are unreliable and thus no meaningful statistical analysis is possible.
Well, if you are NOT like a creationist in debating style, then you are clearly able to do your OWN analysis and show the result to be false.
Not just dig up "critiques".
Citation counts cannot determine consensus, all they can tell you is popularity.
Ow! My head hurts!!!
Please! Stop! Come back after you've actually gotten some stuff of your own published in a peer reviewed journal.