
Global warming--the Data, and serious debate

grmorton

Senior Member
Sep 19, 2004
1,241
83
75
Spring TX formerly Beijing, China
Visit site
✟24,283.00
Faith
Non-Denom
Marital Status
Married
Let's look at Farmington and Conception. Farmington is down in SEMO. Spent a lot of time down in SEMO myself collecting kerogen samples over the MVT mineralizations waaay back when. Conception is at the polar opposite corner of the state. Let's compare their maxes and mins:

Min:
Farmington was 3.5degF Jan 1918, while at the same time Conception was 2.79degF.

Conception's lowest temp was Jan 1940 at -3degF, but at that same time it was a balmy 7.9degF in Farmington.

Max:
Farmington was in July 1901 (wow, another from that year!) at 100.9degF
Conception was 96.3degF.

But in July 1936 Conception hit its max at 100.9degF.

What does this prove? Nothing really. Just fun. I sure wouldn't want to be in Missouri in 1901.

Well, the thing I have done is go for closely spaced cities, not cities halfway across the universe. We can't repeat a temperature measurement because we can't go back in time. Thus, the nearest we have to repeatability is comparing two nearby cities, which should have about the same yearly average temperature. That is the reason I have compared closely spaced cities, like Hallettsville TX and Flatonia TX, or Stillwater and Perry OK. And none of the raw data is free of the big problems I am talking about.
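The paired-station check described above can be sketched in a few lines of Python. The numbers here are made up for illustration; they are not the actual Hallettsville/Flatonia or Stillwater/Perry records:

```python
import numpy as np

def station_difference(years, temps_a, temps_b, threshold=2.0):
    """Return the year-by-year difference between two nearby stations and
    flag the years where they disagree by more than `threshold` degF."""
    diff = np.asarray(temps_a) - np.asarray(temps_b)
    flagged = [(y, float(d)) for y, d in zip(years, diff) if abs(d) > threshold]
    return diff, flagged

years = [1915, 1916, 1917, 1918]
a = [65.1, 64.8, 66.9, 65.0]   # station A annual means, degF (hypothetical)
b = [64.9, 64.7, 64.1, 64.8]   # station B, ~20 miles away (hypothetical)
diff, flagged = station_difference(years, a, b)
# only 1917 exceeds the threshold: a 2.8 degF gap between neighbors
```

If the two stations really sample the same climate, the flagged years are exactly the ones demanding either a meteorological explanation or a station-quality one.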

Remember, there should be meteorological phenomena to go with a high temperature difference over a short distance. Yet we see none of these expected phenomena. That says the data is crap.

And it is this data which we must use to know what the average temperature was in 1900.

Secondly, if closely spaced cities can be different by 2-5 degrees, what does that say about the intrinsic error in temperature measurement? Doesn't that bother anyone? I am amazed that it doesn't.

I took all the Missouri stations and put them into an expensive mapping program we use in the oil business, and I made maps from the stations. Can you explain why Caruthersville, in SE Missouri, had the coldest annual average temperature in 1950 (see picture below)? One also needs to explain why a lump of cold air sat in SE Missouri for an entire year in 1972.

One would intuitively expect that the coldest part of MO would be in the North. But for some reason the cold air snuck around Missouri and settled on Doniphan. If the temperature data has any validity at all for telling us that the global temperature has changed, we have to understand why it gives us such ridiculous maps.

Any explanation?
 
Upvote 0

thaumaturgy

Well-Known Member
Nov 17, 2006
7,541
882
✟12,333.00
Faith
Atheist
Marital Status
Married
Boy, you twist everything. I said it was boring our readers. And about 1/2 a percent of them have the ability to follow our discussion.

Here's what I was responding to:
I first want to thank Thaumaturgy for making this thread boring where we are discussing things few understand. This will be my last on the nerd grenades.

Then at least do me the favor of responding to some of the temperature records. Or are you afraid of them?

Jeezly pete, man, I've downloaded the data, processed it, done statistics on it and explained my statistical methodologies, does that sound like I'm afraid of it?

I apologize. Can we at least discuss the percentage of stations that are next to air conditioners? Your math suggesting that only 24% of the stations (not 53%) were class 4 assumes that every other unsurveyed station is fine. Doesn't that strike you as odd?

It strikes me as un-qualified data. There's no discussion of "sampling methodology" and there's no indication that that 24% is a true-random sampling. If it were, it would be troubling. It would represent a significant problem.

That's why the statistics is so important.

Let's start over.

FAIR ENOUGH! :thumbsup:

You address the crappy station data and I will address the time series data and statistics.

If you can provide me with the "sampling methodologies" underlying the station data assessment I will gladly pursue that. In the meantime I will, likewise, look for information on my own about station sitings.

Again, don't expect that I will take anecdotal assessments as a significant forcing function to gainsay what the majority of climate scientists are saying, but I will gladly look at this information.

Believe it or not, I do find it bothersome that there are bad gauges. I am not in any way trying to "justify" their existence as bad gauges. That can't be done.

But it's part of the error in real life.

I will skip the rest of what you wrote. We are multiplying posts like bunny rabbits. Since I don't want to leave unanswered something you find important, can you do me the favor of looking through your posts and then posting one per day? I will do the same--one per day for each of us. I am sure that your boss would rather you work than write email at work. I have 2 hours per day when I am not working or eating, so I do like doing a few other things as well.

AGREED. Settled.

If there is a post that you made tonight that you feel I simply must answer, I will do so. But by doing this, I am letting you have the last word on some of those issues unless they come up again.

Perhaps Post #120 (LINK). The assessment of relative error is of importance to the discussion, I think.
 
Upvote 0

grmorton

Senior Member
Sep 19, 2004
1,241
83
75
Spring TX formerly Beijing, China
Visit site
✟24,283.00
Faith
Non-Denom
Marital Status
Married
(This has been one of the most fun discussions I've had on CF in a loooong time.)


Good, glad you had fun even though it is with one of the more foul-tempered people on the internet--me.

Here is some more data. Above you said

thaumaturgy said:
It strikes me as un-qualified data. There's no discussion of "sampling methodology" and there's no indication that that 24% is a true-random sampling. If it were, it would be troubling. It would represent a significant problem.

Now, would it bother you that all 54 California stations have been surveyed and 70% are class 4 or 5?

Let's refresh what the siting document says:

Climate Reference Network, Site Information Handbook, p. 6, said:
Class 1 – Flat and horizontal ground surrounded by a clear surface with a slope below 1/3
(<19º). Grass/low vegetation ground cover <10 centimeters high. Sensors located at
least 100 meters from artificial heating or reflecting surfaces, such as buildings, concrete
surfaces, and parking lots. Far from large bodies of water, except if it is representative of
the area, and then located at least 100 meters away. No shading when the sun elevation
>3 degrees.
Class 2 – Same as Class 1 with the following differences. Surrounding vegetation <25
centimeters. Artificial heating sources within 30m. No shading for a sun elevation >5º.
Class 3 (error 1ºC) – Same as Class 2, except no artificial heating sources within 10
meters.
Class 4 (error ≥ 2ºC) – Artificial heating sources <10 meters.
Class 5 (error ≥ 5ºC) – Temperature sensor located next to/above an artificial heating
source, such as a building, roof top, parking lot, or concrete surface.

http://www1.ncdc.noaa.gov/pub/data/uscrn/documentation/program/X030FullDocumentD0.pdf

The frustrating thing for me, Thaumaturgy, is that you went to the site and some of the answer was there if you did a wee bit of looking around. Just now, I went there and downloaded the station survey spreadsheet. I sorted it for California and then calculated the classes. 70% of ALL California stations are class 4 or 5.
Now, they are divided evenly between 4 and 5, 35% each. That means that California's contribution to global warming is largely error. Class 4 has a 2 deg C error--too hot. One can't claim that putting a thermometer on hot concrete causes it to cool down.

And 35% are near active heat sources--5 deg C error. Once again, one can't logically claim that putting a thermometer next to an air conditioner or other heat source will cause it to cool down. Thus we are dealing with an upward bias, not random noise, and not a cooling bias. Ever tried to walk across hot cement on a hot Texas day?

Now, this isn't random sampling, it is complete sampling. The data from California is crap. Wouldn't you agree?
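The spreadsheet exercise described above is just a class tally. A minimal sketch, with made-up rows standing in for the real surfacestations.org file:

```python
from collections import Counter

# (station, CRN class) rows -- hypothetical stand-ins, not the real CA list
survey = [("A", 4), ("B", 5), ("C", 4), ("D", 2), ("E", 5),
          ("F", 4), ("G", 1), ("H", 5), ("I", 3), ("J", 4)]

counts = Counter(cls for _, cls in survey)
n = len(survey)
pct_4 = 100.0 * counts[4] / n
pct_5 = 100.0 * counts[5] / n
pct_4_or_5 = pct_4 + pct_5
# 7 of these 10 rows are class 4 or 5 -> 70%, mirroring the figure quoted
```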

I just realized that I missed T's question about relative error

Thaumaturgy said:
To wit: you are looking at temperature data taken in 1917 in Oklahoma and drawing detailed conclusions from 2deg temperature differences? Honestly? Really? How accurate do you actually know those gauges to be? What is the relative error?

Now, of course, this will make a difference, the data from across the globe going back over a century is important and indeed errors are known to exist. But think about it for a second. A 2degF temperature "error" in a thermometer set out in Oklahoma in 1917 is hardly a shockingly bad gauge. I don't know what time of year your data comes from but in 1917 in Stillwater the annual average max was about 70deg. That's a 3% relative error! Please give me a break. 3% error is "Happy" time!

If the entirety of the conclusions of Global Warming were drawn solely from Oklahoma's raw absolute temperature data from the early 20th century I'd say it was irrational to claim knowledge about global temperature increase. But it isn't.

The fact that you can find "bad gauges" indicates that the reality is there is error in the data. That's just life. That's why statistics is a real area of study.

You can't just "process the signal" and ignore the statistics. Statistics deals with error and the quantification of error and the appreciation that error informs our every decision.

Decisions based on anecdotal data are bad decisions. There's no "gut check" to be made. You can't look at individual data and draw reasonable conclusions in a global scale.


First off, these errors are not just limited to 1917. Such temperature differences are found in 2004 if you compare Oxford UK with Greenwich UK. In 1963 Oxford was 3 degrees F warmer than Greenwich. Today, Greenwich is 3 deg F warmer than Oxford. (I converted the Met data to F for comparison with the US data.) What caused Greenwich to warm so much in the last 40 years? Could it be that it is in LONDON????

You say that a 2 deg error between Stillwater and Perry is only a 3% error. Why don't you use absolute zero as the comparison? Then a 2 deg error is only a 0.4% error. A better comparison is against the global warming over the past 100 years--1.1 F. A 2 degree error between Perry and Stillwater is almost a 200% error against the presumed global warming.
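The disagreement here is really over which baseline to divide by; the arithmetic itself is trivial, using the values quoted in the thread:

```python
error_degF = 2.0   # station-to-station disagreement

# three candidate baselines (values as quoted in the thread)
annual_max = 70.0              # ~annual average max, degF
absolute = 70.0 + 459.67       # the same temperature measured from absolute zero
warming_signal = 1.1           # claimed century warming, degF

pct_vs_max = 100.0 * error_degF / annual_max          # ~2.9%
pct_vs_absolute = 100.0 * error_degF / absolute       # ~0.4%
pct_vs_signal = 100.0 * error_degF / warming_signal   # ~182%
```

Both posters are doing the same division; the real question is whether an error should be judged against the measured level or against the size of the signal being sought.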

And yes, global warming isn't calculated solely from Stillwater. But every single nearby-city comparison I have made is of this nature.
 
Last edited:
Upvote 0

Chalnoth

Senior Contributor
Aug 14, 2006
11,361
384
Italy
✟36,153.00
Faith
Atheist
Marital Status
Single
The other problem, which people who don't deal with gravity data don't know about, is that the density of the material leaving the Antarctic continent must also be known precisely. How much rock flour, how many rocks do you think the base of the ice carries? That adds to the density of the ice, and if you don't account for that correctly, it will look like more ice is leaving the continent than actually is.
A valid point, and yes, obviously the rock is of higher density, but what percentage of the glaciers' mass is actually rock? As long as it's less than a couple percent, it seems highly unlikely that this is a significant effect.
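Chalnoth's "couple percent" intuition can be put into numbers. Using standard handbook densities for glacier ice and crustal rock (the debris fraction itself is the unknown here):

```python
RHO_ICE = 917.0     # kg/m^3, glacier ice
RHO_ROCK = 2700.0   # kg/m^3, typical crustal rock

def mass_overestimate(debris_volume_fraction):
    """Relative error in a mass-flux estimate if a debris-laden ice column
    is treated as pure ice of the same volume."""
    f = debris_volume_fraction
    rho_mix = f * RHO_ROCK + (1.0 - f) * RHO_ICE
    return (rho_mix - RHO_ICE) / RHO_ICE

err = mass_overestimate(0.02)   # 2% rock by volume -> ~3.9% mass error
```

The effect scales linearly with debris fraction: real, but second-order unless the basal rock load is far larger than a few percent.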

I once oversaw a gravity study of a salt dome. We were trying to use the gravity and magnetics to calculate the thickness of the salt layer in the Gulf of Mexico. The one thing we didn't have was the exact density of the sediments beneath the salt. For very tiny changes in density, the calculated base of salt went up or down by kilometers. We wanted to use this data to help us in a depth migration of a 3D seismic volume. After spending lots of money trying to find the base of salt, we had to give up because of the lack of precise knowledge of the sedimentary density.
Okay, but somehow I doubt that sediments in Antarctic ice are anywhere near as large a fraction of the mass as those in that salt layer.
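The salt-dome story has a simple textbook analogue. In the infinite-slab approximation (an idealization; real inversions are far more elaborate), the anomaly is Δg = 2πGΔρh, so the inferred thickness scales as 1/Δρ and a modest density error moves the interface a long way:

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def slab_thickness(delta_g, drho):
    """Thickness (m) of an infinite slab explaining gravity anomaly
    delta_g (m/s^2) at density contrast drho (kg/m^3)."""
    return delta_g / (2.0 * math.pi * G * drho)

delta_g = 1.2e-4                             # a 12 mGal anomaly (illustrative)
h_assumed = slab_thickness(delta_g, 150.0)   # assumed contrast: ~1.9 km
h_actual = slab_thickness(delta_g, 100.0)    # contrast off by 50 kg/m^3: ~2.9 km
# a 50 kg/m^3 density error moves the inferred interface by nearly a kilometer
```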

I came here to discuss the station data, which, as I said, I will do. If you and thaumaturgy choose to cherry-pick the data you respond to and the data you won't respond to, fine. The temperature data is what is used in the IPCC.
Except focusing on the station data to the exclusion of all other factors demonstrating global warming is cherry picking. Even if there are massive problems with these data, you've got other corroborating measurements, such as satellite measurements and other indirect temperature proxies. The case for global warming, even without the station data, is hardly weakened at all.

Attached are the temperature difference between Perry, Ok and Stillwater OK. From 1899 to about 1917, in general the subtraction is positive meaning Stillwater is hotter than Perry. Now, Perry is NW of Stillwater. Since hot air rises, we should see the air from Perry flowing towards Stillwater for these years. Well, the typical wind direction is from either the NW or SW, so no problem during these years.

But, in 1917 the temperature record says that Perry was hotter than Stillwater by about 2 degrees. Using the logic above, wind should have blown from the east to the west, a very rare direction for wind in Oklahoma and it should have done it for 3 years running. Temperatures have consequences, and we didn't see those consequences in the wind record.
There is not a one-to-one relationship between temperature and pressure, however. It is pressure that directly drives wind, not temperature. Yes, if you take a volume of air and heat it up at constant volume, its pressure will increase, and it will tend to flow outward. But the Earth's weather patterns are a hell of a lot more complicated than just this, and we can't expect this simple situation to hold all the time.

That aside, it is entirely possible that one or both of the temperature stations were in error. This would not surprise me. But nor would it weaken the case for global warming to any noticeable extent. Remember that the reason why the case for global warming is so strong is because of corroboration across multiple lines of evidence, not because of any one line.
 
Upvote 0

corvus_corax

Naclist Hierophant and Prophet
Jan 19, 2005
5,588
333
Oregon
✟22,411.00
Faith
Seeker
Marital Status
Private
Politics
US-Others
(This has been one of the most fun discussions I've had on CF in a loooong time.)
Perhaps because you aren't debating against the "willingly blind faithful" like AV1611Vet or 'dad'?

Good, glad you had fun even though it is with one of the more foul-tempered people on the internet--me.
Actually, Glen, you are one of the forumites that I actually appreciate, because you don't post internet crap.
You present your side, your evidence and your conclusions thereof WITHOUT being a foul mouthed mother[wash my mouth][wash my mouth][wash my mouth][wash my mouth]er (yeah, that "Wash my mouth" is gonna be a LONG string)
I enjoy your statements, thaumaturgy's statements, Blayz's statements, ChordateLegacy's statements and the arguments thereof, simply because none of you jump to the stupid garbage of "superstition", which is far too common on this forum.
I really appreciate watching a discussion-debate between those who actually understand the science behind what they are talking about.

Which is why I have my popcorn out. I'm hoping to actually learn something from this discussion.

You guys are great! :thumbsup:
 
Upvote 0

Chalnoth

Senior Contributor
Aug 14, 2006
11,361
384
Italy
✟36,153.00
Faith
Atheist
Marital Status
Single
But averaging within a year constitutes a discontinuous filter. Each segment/year has a different filter applied. We never do that in time series analysis. So, I agree with your last statement.
I hope you read the correction I posted, because it also points to an error that you made: the averaging does not decrease the granularity in the frequency domain. That is to say, there are exactly the same number of points in frequency space between the 4-year period and 5-year period whether you're averaging or not. The primary effect of the averaging is simply to remove the high-frequency information. Unfortunately some of this high-frequency information is wrapped into lower frequencies, but as long as you go to frequencies much lower than the cutoff point, this effect is small.
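The granularity point is easy to verify numerically: the FFT bin spacing depends only on record length, and a running mean merely suppresses the high-frequency end. A sketch with a synthetic two-tone monthly series:

```python
import numpy as np

n, dt = 360, 1.0 / 12.0                  # 30 years of monthly samples
t = np.arange(n) * dt
x = np.sin(2 * np.pi * 0.5 * t) + np.sin(2 * np.pi * 4.0 * t)

kernel = np.ones(12) / 12.0              # 12-month running mean
x_smooth = np.convolve(x, kernel, mode="same")

f = np.fft.rfftfreq(n, d=dt)             # identical bins for both series
P_raw = np.abs(np.fft.rfft(x)) ** 2
P_smooth = np.abs(np.fft.rfft(x_smooth)) ** 2

bin_spacing = f[1] - f[0]                # 1/30 cycle/yr, smoothed or not
hi = f > 2.0                             # band containing the 4 cycle/yr tone
attenuation = P_smooth[hi].sum() / P_raw[hi].sum()
# smoothing removed the fast tone's power, not any frequency bins
```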

And why are you claiming that a different filter is being applied to each segment/year?

I would point out that if, as is being claimed here, the satellite data is a straight line, it should yield the power spectrum of a line. It doesn't. Note the second line in the following plot. There is a ramp, and its power spectrum doesn't have the bumps that both Thaumaturgy's and my FFT had. Thus, your data isn't linear. QED
Huh? A line has a power spectrum that is large at low frequencies and falls off sharply. The power spectrum of this time line has a very similar feature. Are you attempting to claim that because there is higher-frequency information, there isn't an overall trend? Because that would be silly.
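This claim can be checked directly: the discrete power spectrum of a pure ramp (mean removed) is concentrated in the lowest frequency bins, falling off roughly as 1/f²:

```python
import numpy as np

n = 512
x = np.linspace(0.0, 1.0, n)        # a plain linear trend, no noise
X = np.fft.rfft(x - x.mean())       # demean so the DC bin is ~zero
P = np.abs(X) ** 2

low_frac = P[1:6].sum() / P[1:].sum()
# the five lowest nonzero-frequency bins hold the great majority of the power
```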
 
Upvote 0

thaumaturgy

Well-Known Member
Nov 17, 2006
7,541
882
✟12,333.00
Faith
Atheist
Marital Status
Married
OK, I have re-coded the data to allow for monthly assessment by JMP (it doesn't like "repeats" on the time-axis, so I had to make a "time stamp" series that was {Year + (Month/12)}).
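For readers following along, the time-stamp encoding described here is easy to reproduce (JMP itself is a GUI package; this just shows the arithmetic):

```python
def time_stamp(year, month):
    """Encode (year, month) as one strictly increasing number.
    month runs 1-12, so January 1998 -> 1998 + 1/12."""
    return year + month / 12.0

stamps = [time_stamp(1998, m) for m in range(1, 13)] + [time_stamp(1999, 1)]
# successive months differ by exactly one twelfth of a year
steps = [b - a for a, b in zip(stamps, stamps[1:])]
```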

The results are here:
globaltimeseries.JPG

I think I was mistaken about the Kappa function. It does show a statistical significance for cyclicity when the p-value is low.

No problem. We see from the graph that, as Glenn has pointed out, there is, indeed, cyclicity. AND it has a multi-year period. The residuals bear this out.

HOWEVER, from what I can tell the large peak at or near "zero" on the FREQUENCY graph, as well as the raw data graph itself, show a secular trend.

The way JMP models time-series is, I believe, to assume the larger secular trend is actually just an extremely long-wavelength cyclicity. Hence the "periodogram" in the lower left with an extremely long period peak of importance.

So we are back at square one. Indeed there is a cyclicity that is on a longer time scale than merely the seasonal one that would be expected.

HOWEVER there is obviously a secular trend of some significance. That is what I noted earlier with my oversimplified linear least-squares regression.

I am, as I said, attempting to be as honest as is humanly possible in this debate, I expect no less of anyone else. To that end I'm exposing my own errors and making the caveat that I could still be in error. I have put in a call to a teacher who recently ran our JMP stats class refresher to help me better understand how JMP treats time-series data.

I do not believe, at this time, that it is possible to rule out the importance of the secular trend here in the data.
 
Upvote 0

thaumaturgy

Well-Known Member
Nov 17, 2006
7,541
882
✟12,333.00
Faith
Atheist
Marital Status
Married
www.surfacestations.org
A database of sitings of the US HCN surface stations.

The group is apparently a non-official, wholly volunteer group of people who have as their mission to document and rate the “sitings” of the US HCN surface temperature measurement stations. This is a highly valuable endeavor.

Here’s their map of the stations they have so far surveyed:
surfstation.JPG


The actual surface station page is located here:

http://www.surfacestations.org/USHCN_stationlist.htm

Here are the stats:

Total number of known USHCN Stations: 1221
Number “SurfaceStations.org” has assessed: 534
That is 43.7% of the total number of stations

This is a Volunteer Survey group. They are clearly not developing their database using a true random sampling technique. This is apparent from the maps above. Note how highly populated areas are readily assessed leaving vast swaths of lower population density areas with no coverage so far.

Until this program has completed its surveys and they are covered at or near 100% coverage any statistical inference will be questionable to the extreme.

Currently they have only assessed 43.7% of the known U.S. sites and of those 69% have a CRN rating of 4 or 5 which indicates a bias of >2-5degC.

This means that they have so far shown that about 30% of all known stations have a bias of >2-5degC.

No information is available as to any systematic bias. Is the bias preponderantly positive or negative? We do not know.

However, because this is not verifiably data from a RANDOM SAMPLING it is impossible to draw a statistically significant conclusion from that. In addition since no estimation of directionality of bias is available all we know right now from surfacestations.org is that there are bad gauges in this study.

In addition, once the USHCN survey is completed and statistics and directionality of bias are assessed, the same sort of thing must be done for all gauges including international and satellite systems.
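The percentages being argued over reduce to a few lines of arithmetic, including the point about what one assumes for the unsurveyed remainder:

```python
total_stations = 1221
surveyed = 534
bad_among_surveyed = 0.69            # fraction of surveyed with CRN class 4 or 5

surveyed_frac = surveyed / total_stations                          # ~43.7%
demonstrated_bad = surveyed * bad_among_surveyed / total_stations  # ~30% of all

# The two extreme readings of the incomplete survey:
lower_bound = demonstrated_bad       # every unsurveyed station assumed good
extrapolated = bad_among_surveyed    # unsurveyed assumed like the surveyed
# Without a random sample, nothing pins the truth between these two numbers.
```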
 
Upvote 0

Naraoia

Apprentice Biologist
Sep 30, 2007
6,682
313
On edge
Visit site
✟23,498.00
Faith
Atheist
Marital Status
Single
Note how highly populated areas are readily assessed leaving vast swaths of lower population density areas with no coverage so far.
I hope I don't sound horribly stupid, but doesn't that also mean that they are much more likely to sample stations with a positive bias? I.e. ones that are in or near cities.

(Been watching this thread for a while now, most of the stuff just goes whoosh over my head, but it's interesting nonetheless :D)
 
Upvote 0

thaumaturgy

Well-Known Member
Nov 17, 2006
7,541
882
✟12,333.00
Faith
Atheist
Marital Status
Married
I hope I don't sound horribly stupid, but doesn't that also mean that they are much more likely to sample stations with a positive bias? I.e. ones that are in or near cities.

(Been watching this thread for a while now, most of the stuff just goes whoosh over my head, but it's interesting nonetheless :D)

Possibly. However, there are cases like the Berkeley, CA site, which has a rating of "1" (because it is well placed per the Leroy standards) but sits in the midst of a major city, where one would assume some amount of "heat island effect" (linky); whereas the Mt. Charleston station (described HERE) is quite rural but poorly sited according to the Leroy metrics, which give it a 5 rating (bad).

I think the key is that we are here discussing these "siting" metrics which could lead to bias but perhaps not seeing the whole picture.

Glenn has made a point of focusing on the data at a nearly individual level, which is pointless in a statistical data set.

HOWEVER, he has made a point that nearly all of the California sites are now in the surfacestations.org database and they do have a preponderance of bad sites according to the Leroy scale.

Now, unless I'm very much mistaken (always a possibility), the Leroy scale is one in which errors are more likely, not one that renders the data ipso facto useless. It is a siting guideline based on various studies.

And it includes good common sense. It is silly to place a temperature station right next to a heat generator.

But, and this is big, we are not stuck solely with U.S. surface temperature stations, and the National Weather Service, NOAA, and NASA are all abundantly aware that there is potential error in the data. That is why it is important to look at large "gridded averages" and overall trends checked against other measurement techniques which are not prone to the same errors (i.e. satellite data, etc.)

The problem here is that we are focusing too narrowly on a handful of data and ignoring the fact that global warming is not predicated solely on U.S. surface temperature station measurements.

It is the same with things like "geologic time". We don't just use one technique; there could be flaws in any single one, and we know there are. It's more powerful when two or three techniques are used. Multiple radiometric ages from multiple isotopic systems help us zero in on a "more true" estimate.

This debate really has to move beyond finding some bad gauges and deal with an assessment of the actual error in the data.

To bring the discussion back around to a global perspective, let's look again at the data from NASA:


(Error bars are estimated 2σ (95% confidence) uncertainty.)



The green bars are uncertainty (95% confidence)

Here's an interesting note:

And while it is true that differing weather station locations, from proximity to lakes or rivers or elevation above sea level, probably make it impossible to arrive at a meaningful figure for global average surface temperature, that is not what we are really interested in. The investigation is focused on trends, not the absolute level. Often, as in this case, it is easier to determine how much a given property is changing than what its exact value is. If one station is near an airport at three feet above sea level and another is in a park at 3000 feet, it doesn't really matter -- they both show rising temperature, and that is the critical information.


So how do we finally know when all the reasoning is reasonable and the corrections correct? One good way is to cross check your conclusion against other completely unrelated data sets. In this case, all the other available indicators of global temperature trends unanimously agree. Go ahead, put aside the direct surface temperature measurements -- global warming is also indicated by:
  • Satellite measurements of the upper and lower troposphere
  • Weather balloons show very similar warming
  • Borehole analysis
  • Glacial melt observations
  • Declining arctic sea ice
  • Sea level rise
  • Proxy Reconstructions
  • Rising ocean temperature
All of these completely independent analyses of widely varied aspects of the climate system lead to the same conclusion: the Earth is undergoing a rapid and substantial warming trend.(SOURCE)

So this thread, while loads of fun from a statistics point of view, does sort of miss the whole, larger picture.

Glenn is right to hammer on the uncertainty and the errors in the gauges. But then individual temperature stations' absolute measurement is hardly what we are really dealing with in terms of global warming trend analysis.
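The "trends, not absolute level" point quoted above is easy to demonstrate with synthetic data: adding a constant siting bias to a station shifts its level but leaves the fitted trend untouched. (The trend value and noise level below are made up for the demo.)

```python
import numpy as np

years = np.arange(1950, 2000)
rng = np.random.default_rng(0)
true_trend = 0.02   # degC/yr, made up for the demo
base = 14.0 + true_trend * (years - 1950) + rng.normal(0.0, 0.1, years.size)

station_good = base           # well-sited station
station_biased = base + 3.0   # same climate, +3 degC constant siting bias

slope_good = np.polyfit(years, station_good, 1)[0]
slope_biased = np.polyfit(years, station_biased, 1)[0]
# the fitted slopes agree; only the intercepts differ by the 3 degC offset
```

A bias that *changes over time* (e.g. a city growing up around a station) is the genuinely dangerous case, which is why homogenization and cross-checks against independent records matter.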
 
Upvote 0

grmorton

Senior Member
Sep 19, 2004
1,241
83
75
Spring TX formerly Beijing, China
Visit site
✟24,283.00
Faith
Non-Denom
Marital Status
Married
OK, I have re-coded the data to allow for monthly assessment by JMP (it doesn't like "repeats" on the time-axis, so I had to make a "time stamp" series that was {Year + (Month/12)}).

The results are here:
globaltimeseries.JPG

I think I was mistaken about the Kappa function. It does show a statistical significance for cyclicity when the p-value is low.

No problem. We see from the graph that, as Glenn has pointed out, there is, indeed, cyclicity. AND it has a multi-year period. The residuals bear this out.

HOWEVER, from what I can tell the large peak at or near "zero" on the FREQUENCY graph, as well as the raw data graph itself, show a secular trend.

The way JMP models time-series is, I believe, to assume the larger secular trend is actually just an extremely long-wavelength cyclicity. Hence the "periodogram" in the lower left with an extremely long period peak of importance.

So we are back at square one. Indeed there is a cyclicity that is on a longer time scale than merely the seasonal one that would be expected.

HOWEVER there is obviously a secular trend of some significance. That is what I noted earlier with my oversimplified linear least-squares regression.

I am, as I said, attempting to be as honest as is humanly possible in this debate, I expect no less of anyone else. To that end I'm exposing my own errors and making the caveat that I could still be in error. I have put in a call to a teacher who recently ran our JMP stats class refresher to help me better understand how JMP treats time-series data.

I do not believe, at this time, that it is possible to rule out the importance of the secular trend here in the data.

Sigh, sorry, that simply isn't a secular trend. Fourier analysis is based upon the concept that any function can be represented by a summation of sines and cosines. Now, if you have a box (two step functions going in opposite directions), you will have a high amplitude in the low frequency part of the spectrum (what you are calling a secular trend). Why? Because step functions require it. Secondly, since the FFT works only on lengths that are powers of 2 and you only have 357 months, not 512 months, there is necessarily a step function where the data ends and the zeros begin. Thus, you will get this low frequency bump.
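The zero-padding effect described here can be checked numerically. (Modern FFT libraries accept any length, but padding 357 samples out to 512 really does insert a step wherever the data's mean level is nonzero, and that step feeds the low-frequency bins.) A sketch, assuming a record with a nonzero mean plus an annual cycle:

```python
import numpy as np

n, n_pad = 357, 512
t = np.arange(n)
x = 10.0 + np.sin(2 * np.pi * t / 12.0)   # mean level 10 plus a 12-sample cycle

padded = np.concatenate([x, np.zeros(n_pad - n)])               # step at sample 357
demeaned = np.concatenate([x - x.mean(), np.zeros(n_pad - n)])  # mean removed first

P_pad = np.abs(np.fft.rfft(padded)) ** 2
P_dem = np.abs(np.fft.rfft(demeaned)) ** 2

low = slice(1, 10)   # low-frequency bins, DC excluded
ratio = P_pad[low].sum() / P_dem[low].sum()
# the padded-with-mean spectrum has orders of magnitude more low-frequency power
```

This supports the narrow point that padding artifacts can masquerade as low-frequency power; whether a given low-frequency peak is artifact or genuine trend still has to be settled by demeaning or detrending before the transform.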

Below are two pictures. One is of the thickness of Green River varves. It is taken from John Davis, Statistics and Data Analysis in Geology, John Wiley, 1973, Table 5-17. I plotted it. Note that there is no secular trend. The second picture is the figure in his book (so he did the FFT, not I), and you can see the rise in power in the low-frequency part of the power spectrum--that is normal. But it doesn't represent a secular trend.

Show me the secular trend in the Green River Varve data!
 
Upvote 0

grmorton

Senior Member
Sep 19, 2004
1,241
83
75
Spring TX formerly Beijing, China
Visit site
✟24,283.00
Faith
Non-Denom
Marital Status
Married
www.surfacestations.org
A database of sitings of the US HCN surface stations.

The group is apparently a non-official, wholly volunteer group of people who have as their mission to document and rate the “sitings” of the US HCN surface temperature measurement stations. This is a highly valuable endeavor.

Here’s their map of the stations they have so far surveyed:
surfstation.JPG


The actual surface station page is located here:

http://www.surfacestations.org/USHCN_stationlist.htm

Here are the stats:

Total number of known USHCN Stations: 1221
Number “SurfaceStations.org” has assessed: 534
That is 43.7% of the total number of stations

This is a Volunteer Survey group. They are clearly not developing their database using a true random sampling technique. This is apparent from the maps above. Note how highly populated areas are readily assessed leaving vast swaths of lower population density areas with no coverage so far.

Until this program has completed its surveys and they are covered at or near 100% coverage any statistical inference will be questionable to the extreme.

Currently they have only assessed 43.7% of the known U.S. sites and of those 69% have a CRN rating of 4 or 5 which indicates a bias of >2-5degC.

This means that they have so far shown that about 30% of all known stations have a bias of >2-5degC.

No information is available as to any systematic bias. Is the bias preponderantly positive or negative? We do not know.

However, because this is not verifiably data from a RANDOM SAMPLING it is impossible to draw a statistically significant conclusion from that. In addition since no estimation of directionality of bias is available all we know right now from surfacestations.org is that there are bad gauges in this study.

In addition, once the USHCN survey is completed and statistics and directionality of bias are assessed, the same sort of thing must be done for all gauges including international and satellite systems.


Sigh, this is very frustrating. I showed that every single station in California, all 54, has been surveyed. 35% of them were class 4, meaning a 2 deg C bias upward. And I showed that 35% of them are class 5, meaning a 5 deg C upward bias. I asked you if we could agree that the California data was crap. I got no response to that.

Can you do me the favor of actually responding to this question? If you think that 70% class-4-and-above stations can give a good temperature, please explain your reasoning. It seems to me that ALL you want to talk about is the Fourier data, and you don't understand it because you haven't spent a career with it. So, once again: do you think California's temperature record is giving us a valid measurement of global warming, yes or no?
 

grmorton

I hope I don't sound horribly stupid, but doesn't that also mean that they are much more likely to sample stations with a positive bias? I.e. ones that are in or near cities.

(Been watching this thread for a while now, most of the stuff just goes whoosh over my head, but it's interesting nonetheless :D)


That is what frustrates me about Thaumaturgy's fixation on Fourier analysis rather than the stupidity of putting a thermometer next to an air conditioner exhaust fan and expecting it to give us a good measurement of climatic temperature.

So far, I have hardly gotten any comments on these atrocious sitings. I also got no comment on why, in 1951 and 1972, globs of cold air sat over SE Missouri, which should be the warmest part of the state.

I also have gotten no response to the question of why, if the temperature record is so good, we see 2-10 deg F differences in annual average temperature over merely 20 miles, and yet none of the thunderstorms or strong winds that should accompany such differences.

All I am doing here is trying to educate Thaumaturgy on Fourier transforms. I am about to start ignoring that topic because I have answered everything and, as you say, it goes over y'all's heads. Yep, I just made my last post on Fourier. Thaumaturgy doesn't seem interested in responding to anything except issues on FFT. Thaumaturgy, if you want to start a thread on Fourier analysis, please do. This thread is about global warming and the problems in measuring the temperature. Fourier discussion is fini as of right now.

The other day, I posted a temperature record with a sharp temperature spike in 1912 or thereabouts. The claim was made, by whom I forget, that it wasn't any big deal for a temperature gauge to be bad 100 years ago. Well, the problem isn't limited to the early part of the last century. Below is the temperature record of Fort Valley, Arizona. Note the temperature spikes of 20 degrees in 1978, 1995, 2001, 2002, and 2003. Maybe these guys are using old thermometers, but the fact is, the weather bureau isn't watching the data that comes into their shop.
 

thaumaturgy

Well-Known Member
Nov 17, 2006
7,541
882
✟12,333.00
Faith
Atheist
Marital Status
Married
Sigh, sorry, that simply isn't a secular trend.

How do you know?

Fourier analysis is based upon the concept that any function can be represented by a summation of sines and cosines.

PROVE IT.

Did you completely ignore my gigantic post on time series analysis?

(Or did it go "over y'all's head")
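For what it's worth, the textbook claim in dispute — that suitably well-behaved periodic functions can be built from sums of sines — can at least be illustrated (not proven) numerically. A minimal sketch using the standard Fourier series of a unit square wave:

```python
import math

def square_partial_sum(t, n_terms):
    """Partial Fourier series of a unit square wave:
    (4/pi) * sum over odd k of sin(k*t)/k."""
    return sum(4 / math.pi * math.sin(k * t) / k
               for k in range(1, 2 * n_terms, 2))

# At t = pi/2 the square wave equals 1; the partial sums approach it.
t = math.pi / 2
for n in (1, 10, 100):
    print(f"{n:3d} terms -> {square_partial_sum(t, n):.4f}")
```

Convergence here is pointwise; near the jumps the partial sums overshoot (the Gibbs phenomenon), which is one reason spectral arguments about non-smooth data need care.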
 

thaumaturgy

That is what frustrates me about Thaumaturgy's fixation on Fourier analysis rather than the stupidity of putting a thermometer next to an air conditioner exhaust fan and expecting it to give us a good measurement of climatic temperature.


Glenn, either you haven't read this thread or you are deliberately misrepresenting my repeated stance:

Please note:
Answered already. I find it appalling that anyone would set a gauge in a bad place.

Have I ever said this kind of thing is a "good thing"? So far in this debate I have agreed this type of thing is bad.

Are you deliberately misrepresenting my repeated stance? I have said over and over that I find these "bad gauges" to be Bad. Unequivocally so.

Believe it or not, I do find it bothersome that there are bad gauges. I am not in any way trying to "justify" their existence as bad gauges. That can't be done.

Now, please stop this misrepresentation of the facts. I have agreed bad gauges are bad. I am not happy with them.

So far, I have hardly gotten any comments on these atrocious sitings.

You are mistaken. Now you've been corrected.

All I am doing here is trying to educate Thaumaturgy on Fourier transforms. I am about to start ignoring that topic because I have answered everything and as you say, it goes over y'all's heads.

Yet strangely you keep ignoring my pleas to discuss the statistics.

Funny that. Luke 6:31 pops to mind.

But again, my admitted lack of expertise on FT is in stark contrast to your silence on statistics.

I suspect all the stuff I've posted on statistics here has gone "over y'all's head"

You are offending me at many levels at this point.

I don't appreciate misrepresentation and I am growing unhappy with constantly having to be told how ignorant I am by a man who won't talk statistics in a statistics discussion. (Motes and beams, perhaps?)

yep, I just made my last post on Fourier. Thaumaturgy doesn't seem interested in responding to anything except issues on FFT.

Again, you are either not reading or you are deliberately misrepresenting the facts: if you look at my many posts in this thread you'll see far more are related to statistics.

Of which Fourier Analysis fits in one place (time series analysis). You make an ex-cathedra claim (which you can't even begin to back up) that there's no "secular trend" in data.

Thaumaturgy, if you want to start a thread on Fourier analysis, please do. this thread is about global warming and the problems in measuring the temperature. Fourier discussion is fini as of right now.

Remind me, Glenn, who brought up Fourier Transform in the first place???

Let's rewind the tape:


With cyclical phenomenon a Fourier analysis is far more appropriate.

I was just pointing out that you were applying a linear analysis to an obviously cyclical data set. Cycles come back to close to their starting points. Noisy linear systems don't. They trend one way or the other. Thus, while statisticians are aware of it, and you might be aware of it, you didn't apply it in my opinion.

As I said, a Fourier spectral analysis is far more appropriate. One can't claim that this is rising linearly when it is clearly a cyclical signal.

Woah! It was Glenn Morton!

If you have trouble defending your FFT analysis in a time series discussion (i.e., in a statistical discussion), I recommend that next time you DON'T BRING IT UP.
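Setting the personalities aside, the standard way to reconcile these two positions in time-series work is to estimate and remove the secular (linear) trend first, then examine the spectrum of the residual — a trend and a cycle are not mutually exclusive. A minimal sketch on synthetic data (all numbers invented for illustration):

```python
import cmath
import math
import random

random.seed(1)
n = 120                         # synthetic "monthly" series (invented)
t = list(range(n))

# trend (0.01 per sample) + 12-sample cycle + noise
y = [0.01 * ti + 1.5 * math.cos(2 * math.pi * ti / 12) + random.gauss(0, 0.3)
     for ti in t]

# Step 1: least-squares fit of the secular (linear) trend, then detrend
t_mean = sum(t) / n
y_mean = sum(y) / n
slope = (sum((ti - t_mean) * (yi - y_mean) for ti, yi in zip(t, y))
         / sum((ti - t_mean) ** 2 for ti in t))
resid = [yi - y_mean - slope * (ti - t_mean) for ti, yi in zip(t, y)]

# Step 2: crude periodogram of the residual via a direct DFT
def power(k):                   # k = whole cycles per n samples
    return abs(sum(r * cmath.exp(-2j * math.pi * k * ti / n)
                   for ti, r in zip(t, resid))) ** 2

k_peak = max(range(1, n // 2), key=power)
print(f"fitted slope:    {slope:.4f}")           # close to the 0.01 put in
print(f"dominant period: {n / k_peak} samples")  # the 12-sample cycle
```

On this synthetic series both features are recovered: the least-squares slope is close to the trend that was put in, and the periodogram peak sits at the 12-sample cycle.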
 

grmorton

Are the UHI effects accounted for in addition to this survey, Dr. Morton?


I am not a doctor. In answer to your question, yes and no.

The following is from a NASA web page.

April 26, 1999: As the heat builds during a blistering summer day in Atlanta, Georgia, you can almost hear the clouds overhead cry, "Let's get ready to rumble!"
Urban growth has transformed Atlanta's environment, creating a uniquely altered arena of weather. Because urban areas both generate and trap heat, a bubble or "urban heat island" forms around the city. The temperature in Atlanta is 5 to 8 degrees Fahrenheit higher than outlying areas, and this excess heat produces increased rainfall and thunderstorms.

http://science.nasa.gov/newhome/headlines/essd26apr99_1.htm

Two things to notice: first, the city is quite a lot hotter than the outlying areas. Secondly, the thing that Thaumaturgy and other global warming advocates have not commented on regarding the temperature differences over short distances, as seen in my posts above, is that such temperature differences, if real, cause thunderstorms and rainfall. They would also produce winds. (See those pictures now, before I run out of room for pictures and have to delete them.)

Now, how much does James Hansen correct for the urban heat island effect? 0.3 deg C, or about half a degree F! That is about 1/10th of the amount that NASA says is the heat island effect.

How do I know this? Hansen published this in the Journal of Geophysical Research.

J. Hansen et al said:
available at http://pubs.giss.nasa.gov/docs/2001/2001_Hansen_etal.pdf, p. 6

"The magnitude of the adjustment at the urban and periurban stations themselves, rather than the impact of these adjustments on the total data set, is shown in Plate 2l. The adjustment is about -0.3[deg]C at the urban stations and -0.1 [deg]C at the periurban stations.

So, the 5-8 deg F urban heat island effect is corrected by Hansen with a tiny, tiny correction. Regardless of what the GW advocates say about this, that is incompetence of Biblical proportions in my opinion.
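The magnitude comparison in this post reduces to a unit conversion. A quick check (using the midpoint of NASA's 5-8 deg F Atlanta range as the summary number, which is my own choice):

```python
def c_to_f_delta(delta_c):
    """Convert a temperature *difference* (not a reading) from C to F."""
    return delta_c * 9 / 5

hansen_urban_adj_c = 0.3        # Hansen et al. (2001) urban adjustment, deg C
atlanta_uhi_f = (5 + 8) / 2     # midpoint of NASA's 5-8 deg F figure

adj_f = c_to_f_delta(hansen_urban_adj_c)
print(f"adjustment: {adj_f:.2f} deg F")                         # 0.54 deg F
print(f"fraction of Atlanta UHI: {adj_f / atlanta_uhi_f:.0%}")  # 8%
```

So the adjustment is roughly a twelfth of the Atlanta midpoint, consistent with the "about 1/10th" characterization above.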


Now, the Houston Advanced Research Center (HARC) did a heat island survey of Houston, and they collected some data on an hourly basis, which shows the variability of the effect. It varies almost by the minute depending on the way the wind blows. But notice in the following that the author says the outlying areas around Houston did not heat up between the two survey intervals.

"Growth of the surface temperature urban heat island of Houston, Texas is determined
by comparing two sets of heat island measurements taken twelve years
apart. Individual heat island characteristics are calculated from radiative temperature
maps obtained using the split-window infrared channels of the Advanced Very
High Resolution Radiometer on board National Oceanic and Atmospheric Administration
polar-orbiting satellites. Eighty-two nighttime scenes taken between 1985
and 1987 are compared to 125 nighttime scenes taken between 1999 and 2001. Analysis
of the urban heat island characteristics from these two intervals reveals a mean
growth in magnitude of 0.8 K, or 35%." David R. Streutker, "Satellite-measured growth of the urban heat island of Houston, Texas" p. 1 http://files.harc.edu/Projects/CoolHouston/Documents/GrowthUrbanHeatisland.pdf

and

"For interval 1 the mean rural temperature of the area surrounding the city of
Houston is 17.2 +/- 0.7 [deg]C. (The uncertainty quoted is the standard deviation of
the mean and does not include any attempt to quantify the errors discussed
in the previous section.) The mean rural temperature of the same area for
interval 2 is 17.1 +/- 0.8 [deg]C, virtually identical to the earlier interval.
David R. Streutker, "Satellite-measured growth of the urban heat island of Houston, Texas" p. 5 http://files.harc.edu/Projects/CoolHouston/Documents/GrowthUrbanHeatisland.pdf

From that last sentence one can conclude that CO2 doesn't have any impact on the rural areas around Houston--yet we are told that CO2 is going to burn us up. Yeah right!


Now, for those who want to correct for the urban heat island effect there is a huge problem: what number do you subtract from TODAY's Houston temperature? The picture below is a plot of the urban heat island effect over a 24-hour period. It is a scattergram. No one records the urban-heat-island delta temperature for each day, and it varies each day. All one can do is estimate an average, but that average might not apply on a given day, and it might change in the future.

And then we have Hansen correcting only 1/10th of what is needed. And people wonder why I think there is rank incompetence in the GW industry.

Does any GW advocate want to try to reconcile the previously documented 7.5 deg F warming in Santa Ana, California, with the 1.1 deg F (0.65 deg C) global warming over the past 100 years, with the 5 to 8 deg F warming in Atlanta due to the urban heat island, with the measly -0.3 deg C (0.54 deg F) correction made for it, and with the magnitude of the global warming change compared with the urban heat island effect in Houston?

I am ALL ears.
 