
Global warming--the Data, and serious debate

grmorton

Senior Member
Well, apparently it does. Unless we are working under the assumption that the only data usable is that which you decree valid.

Well, the data you are 'declaring' to be the only valid data has starting and ending temperatures that are almost the same--meaning the average temperature change over the past 30 years is almost zippo! Fine, you want that to be your data, you can have it.


I find many times "raw data" must be processed and, gasp, "normalized" in order to correct for problems. Unless there's some reason to believe that NASA is incompetent in dealing with data as described, I should think that it is now up to you to prove that their conclusion of broader correlation of the "normalized" (anomaly) data is somehow in error.

Pure faith. You actually haven't tried this, have you? No. I know, because if you had, you would have seen some bad things in the data. The problem I have is that people claim things but haven't TESTED those claims. It is very easy to claim things. It takes work to test them.

OK, let's take that Colorado data that the Peterson article said could be fit together, except let's use it from 1890 to the present. I first went through and removed the spikes from the record: I cut out anything below 47 deg F. That is the first picture below. Then I de-biased the 5 stations by adjusting their average temperatures from 1890-2005 to the same number, 52.07 deg F, which was the average temperature of ALL 5 stations (sans numbers below 47). After that adjustment each individual station has an average of 52.07. That is the second picture.

Now, let's take each year for the 5 cities (which are within an area of roughly 85 x 36 miles), find the maximum temperature among the cities for each year, find the minimum for each year, and subtract the min from the max. That is the third picture. Please look at these pictures. I don't think you looked at the satellite picture, because if you had, you would have seen that it was monthly and that it had the periodicities I noted.

Now, what we see is that the temperatures from these five closely spaced cities have a 2 deg F average spread, ~1.1 deg C. That is the error AFTER despiking AND de-biasing the data by making the average temperatures the same. I calculated the standard deviation of all the temperatures from 1890-2005. It is 1.6 deg F. Thus, I would contend that the intrinsic error bar is 1.6 deg F. The world has warmed by 1.1 deg F over the past 100 years--that is the red bar in the last picture. That is less than the average noise, if the noise is defined as one standard deviation. The error in the data is greater than the claimed warming. If I were to do this on Chinese data, it would be even worse.

Now, if the noise is 1.6 degrees F and the claimed 100 year warming is only 1.1 deg F, there really is no way to know from these cities if the world has warmed.
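For anyone who wants to try this themselves, here is a minimal sketch of the procedure just described, in Python with pandas. The DataFrame layout and function name are mine; the 47 deg F floor and the 52.07 deg F grand mean are the numbers from above:

[code]
import pandas as pd

def despike_debias_spread(temps: pd.DataFrame, floor: float = 47.0):
    """temps: annual mean temperatures in deg F, one column per station,
    indexed by year (e.g. 1890-2005). Returns the mean yearly max-min
    spread and the standard deviation of all debiased values."""
    # 1. Despike: drop any annual mean below the floor (47 deg F above).
    t = temps.where(temps >= floor)

    # 2. De-bias: shift each station so its long-term mean equals the
    #    grand mean of all stations (52.07 deg F in the example above).
    grand_mean = t.stack().mean()
    t = t - t.mean() + grand_mean

    # 3. Spread: per-year difference between hottest and coldest station.
    spread = t.max(axis=1) - t.min(axis=1)

    return spread.mean(), t.stack().std()
[/code]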

So, why are you ignoring the raw data and the many air conditioners next to the thermometers?


I am still willing to assume, ceteris paribus, that NASA and NOAA are not systematically lying to me and that they are not wholly incompetent.

Ever hear of Morton's demon? It is a way to fool oneself. One sees it in people attached to a religion, in political partisans, in just about every area of life. They aren't lying. They have dumped their skepticism and look only for confirmation of their previously held beliefs.

Now, if you think my analysis of these data points is wrong, be my guest to edit them differently. The only thing I would rule out as not kosher would be the tilting of the trend, which is what Peterson did in that article you cited.
 

grmorton

Senior Member
Glenn,
You wrote to your statistician friend; I too have one (well, my wife shares a vanpool with a senior statistician). I posed my question about FT and statistical analyses. Here's his response (in part):

Did you show him the picture?

I bet you didn't.

Random noise should go up and down with a higher frequency than is seen in the monthly satellite data. I stand by this. We may have to agree to disagree, but that doesn't look like random noise to me. It is cyclical. USE THE MONTHLY DATA, NOT THE YEARLY, for Pete's sake.
 

grmorton

Senior Member
That's the stuff! Of course it kind of smells like "conspiracy", but it's always a good way to steer the debate.

These "bureaucrats" are quite nefarious now. I am beginning to get a picture of them, let's look at them closer:

1. They are incompetent by continuing to collect bad data

2. They somehow collect "perfect" data (i.e., no noise, so running an FFT on the data results in a "perfect" fit showing obvious periodicity)

No, you once again misunderstood what I said. I said that the FFT was a near-perfect transform. I didn't say there was no noise. Please drop this line. I think we have miscommunicated.

3. They are quite crafty about their incompetence, spinning us a yarn about quality control so that the more gullible among us will buy it hook, line, and sinker.

So, I conclude from this that you think it is highly competent to approve an air conditioner exhaust fan next to an MMTS thermometer. That doesn't strike me as competent.

I did an experiment a couple of months ago, right after Hurricane Ike, when I got my electricity back on. The temperature outside was 86 deg F. I put a thermometer on my air conditioner. When it settled down, it read 108.4 deg F.

Now, there are lots and lots of thermometers next to air conditioners. You don't talk about them. You would rather talk about FFTs and other things, anything except the bad siting of stations. I have posted pictures and you say, gee, that is bad, but then ignore them. What else can I do? You didn't respond to why Lampasas, TX's temperature took off when the station was moved close to parked cars with their hot engines, next to an air conditioner, and onto hot cement, all of which violate the siting guidelines. But hey, no problem, we can divert off to other issues.

I'm going to assume your life in geophysics has kept you insulated from "industrial quality control" like we have to deal with in manufacturing.

No, I am ISO 9003 qualified. Wrong again.

If you, for one microsecond, think there's perfect or even near-perfect QC in anything done by humans, you are sadly, sadly mistaken.

No, I am not expecting 100%, but I do expect that when a thermometer like the one at Watersville, Washington takes a 12 deg jump, is it too much to ask that SOMEONE NOTICE? Who was minding the store? What about Lampasas, Texas? Los Angeles? Halletsville and Flatonia? I have posted these. Unlike you, I have actually looked at about 100 towns and their nearest neighbors around the country. Every single comparison brings out stuff like I have been showing. It isn't that there are one or two of these things; these kinds of errors are EVERYWHERE!

6-Sigma stuff aside, statistical process control exists because there are bad gauges and bad workers. There is, in a word, "error" out there.

Everywhere? Below I did an analysis of every station I could get my hands on in Missouri. I did a chain of city-to-city comparisons, comparing the nearest city (or the 2nd-nearest city) in a chain across the state. Then I took the max-min temperature difference for all the comparisons. Here is the data in the picture below. None of these towns are more than about 80 miles apart. Notice how big the temperature spreads are.

Edited to add, about the Missouri picture: the way to read this is the following. In the first column we have Appleton minus Truman. When Appleton is hotter, the number is positive. Thus there was a year where Appleton was about 4.5 deg F hotter than Truman on average FOR THE ENTIRE YEAR!

Then there was a year where Truman was on average hotter than Appleton by about 1.75 deg F FOR THE ENTIRE YEAR! These are YEARLY AVERAGES. That is one heck of a temperature gradient. Where are the winds that such temperature differences cause? Where are the thunderstorms that last for an entire year?

I have so far seen that you have found a "group" of people who have analyzed 43% of the U.S. temperature stations and found 56% of that 43% to have a bias of 5 deg (I have not been able to determine if they claim a systemic bias; I earlier mistakenly assumed they were suggesting a consistent positive bias, and I think I was wrong in that "accusation", my apologies). They cannot make any claims about the remaining majority of the stations. And they make no claims about the quality of unmanned ocean buoys and probes, they make no claims about satellite data, etc. Do they address the issues around the use of "anomaly" versus raw temp data?

Oh brother. Do you know much about sampling theory? If you had 43% of the US population tell you whom they were voting for next Tuesday, the margin of error in the poll would be less than 1%. Really, I must say this looks like a grasp for a straw.
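For the record, the arithmetic behind that polling comparison is the standard margin-of-error formula with a finite-population correction; a quick sketch with an illustrative population size (and note the formula assumes the sample is a random draw):

[code]
import math

def margin_of_error(p, n, N=None, z=1.96):
    """95% margin of error for a sampled proportion p from n respondents.
    If the population size N is given, apply the finite-population
    correction, which matters when n is a large fraction of N."""
    se = math.sqrt(p * (1 - p) / n)
    if N is not None:
        se *= math.sqrt((N - n) / (N - 1))
    return z * se

N = 150_000_000                    # illustrative number of US voters
n = int(0.43 * N)                  # a 43% sample
print(margin_of_error(0.5, n, N))  # ~0.0001, i.e. ~0.01% -- far below 1%
[/code]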




56% of 43% = 24%

Is that how you do your QC studies? Find the worst points of a limited set of gauges and assume it means your system is messed up?

I think it would be more likely that I could sell you a bridge.

I see you don't understand the nature of random draws and their relationship to statistics. Sigh. No wonder we have communication problems. You are looking for confirmation of your belief and lawyering your way out of the problems.

What do you have except a string of jpegs?

A Fourier analysis, an edited set of data, comparisons of raw data. Are you saying that looking at the raw data upon which the conclusion of global warming is based is not to be done?

So, you think putting an air conditioner blowing on a thermometer is a good thing? You seem so unconcerned about that. Why? Let's look at Happy Camp again, with 20 air conditioners in the area. For your information, those JPGs have data on them. What do you have? Nada.

You like the tropospheric temperature, but from the start of the record up until Aug 2008 it differs by only 0.18 deg. And if you had measured in May 2008, the tropospheric temperature had changed by only 0.01 deg C.
 

Chalnoth

Senior Contributor
Let's use the scientific data (which is online) and scientific articles whenever we can. The journalists are not scientists, and what they say is not peer reviewed.

I read that article and find something really interesting. They say that the instrument is biased to give too low a number. Thus, they did this:
Right, this is what scientists do. They do their best to find and remove any biases in their data. I see that you left out the more detailed description of the process:
To determine the scaling factor for the entire ice sheet, we applied our averaging function to the gravitational signature of a uniform 1-cm water mass change spread evenly over the ice sheet. We obtained an estimate of 0.62 cm. We thus multiplied each GRACE estimate by 1/0.62. Determining scaling factors for WAIS and EAIS is more complicated because the EAIS averaging function extends slightly over WAIS and vice versa. We applied each averaging function to a uniform mass change over each region individually and used the four resulting values to determine the linear combination of WAIS and EAIS results that correctly recovers the mass in each region.
This was an overall scaling factor applied to the regions using a simulation. It doesn't affect the time-dependent nature of the signal, instead using a simulation to estimate the proper scaling factor for the data.
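To make that unmixing step concrete, here is a toy numerical sketch of the linear-combination recovery the quoted passage describes. The sensitivity numbers are hypothetical; only the 1/0.62 whole-sheet factor comes from the quote:

[code]
import numpy as np

# Response of each region's averaging function to a uniform unit mass
# change placed in each region -- the "four resulting values" in the
# quote. These particular numbers are made up for illustration.
A = np.array([[0.60, 0.05],   # WAIS averaging fn applied to (WAIS, EAIS)
              [0.04, 0.65]])  # EAIS averaging fn applied to (WAIS, EAIS)

raw = np.array([-1.2, 0.3])   # raw GRACE estimates (WAIS, EAIS), illustrative

# Solve A @ true = raw: the linear combination that recovers each region.
print(np.linalg.solve(A, raw))

# For the whole ice sheet the same idea reduces to one scale factor:
whole_sheet_raw = -0.9        # illustrative whole-sheet GRACE estimate
print(whole_sheet_raw / 0.62) # the paper's multiply-by-1/0.62 step
[/code]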

Of course, as a gravity probe of the ice level, this is measuring the land ice melt. This may be like the weather service, which changes the trends of bad stations to more suitable trends. Changing data is quite tough to get right. Given the mass of ice on Antarctica, 36 cubic miles is not a large percentage.
If a large percentage of Antarctica melted, many coastal cities would be underwater. This is why we do science: to find out about such problems before they happen.

I have published 2 papers on gravity data. I know that gravity has a high noise level and that it has non-unique solutions to the potential fields.
The high noise problem is solved by averaging over large areas. As for the non-unique solution to the potential field, since the overall value was not of interest, but instead the change in the value over time, I fail to see how this is a problem.

You might potentially have an argument that they have unaccounted-for systematics, but I don't think your claims as to the noise or non-uniqueness of the gravitational potential solutions are valid.

Prove that I am cherry-picking. If you can't withdraw the charge.
Well, let's see:
And once again you ignore the fact that throughout history, from ice core information, when the Arctic melts, the Antarctic ice extends, and when the Antarctic melts, the Arctic ice grows. Why do you ignore what I posted? Does data not matter to you?
This is cherry picking because it doesn't pay any attention to the current data. And, by the way, many models actually do predict that the overall Antarctica ice mass will increase during the first years of global warming, due to increased over-land precipitation. This is what makes the gravity measurements of the mass so shocking.

Or,
Attached are the Fourier transform and the satellite data. See the jitter that takes place on a monthly basis? That is the highest-amplitude frequency. See the 64-month peak? That is the underlying periodicity of the data. I would strongly suggest a perusal of the FFT part of John C. Davis's book, Statistics and Data Analysis in Geology. That will let you understand that the FFT is a perfect fit to the data. But, in general, the biggest peaks are seen in the data without the FFT.
...as if somehow a 3-8 year periodicity contradicted the existence of a decades-long trend.

Or,
Well, I guess I need to point out that that wonderful QC effort is why a 12 degree jump in temperature in Watersville, Washington was allowed to continue. While they were picking their noses, they didn't notice a sudden jump in temperature--yet you believe them when they say they do a good job of QC, just because they say they do. Gullible.

Or even better, this post where you exclusively focus on cherry-picking data:
http://christianforums.com/showpost.php?p=49215997&postcount=88
...showing a shocking disconnect between weather and climate, as well as the fact that some areas are expected to cool while the average temperature of the globe increases.
 

grmorton

Senior Member
Thaumaturgy asked for an explanation of his kappa function. Last night, after getting back from the ranch, I was simply too tired to see it.

I first want to thank Thaumaturgy for making this thread boring where we are discussing things few understand. This will be my last on the nerd grenades.

Thaumaturgy used yearly data; I used monthly. As I commented earlier, averaging a time series is a huge filter. In order to explain this, and why one won't get the same power spectra from averaged time-series data as from unaveraged data, you need to understand the concept of the Nyquist frequency.

If you have a sine wave that you want to represent in digital form, you need at least two measurements spaced by half a wavelength. In other words, you need to sample the positive part of the curve once and the negative part of the curve once. That is the minimal amount of sampling which will allow the detection of a frequency. But you won't (most likely) get the amplitude of the wave right. You might sample at the peak and trough, but more likely you will sample at, say, 37 deg, or 75 deg, or any of the degrees between 0 and 90.

The fact that you have to have two samples per wavelength gives you a hint as to why the fast Fourier transform works on record lengths that are powers of two.

Now, in my case, where I used monthly data, I have 356 points, and so can measure 178 distinct frequencies.

But when you convert the monthly series to yearly averages, you do two things. First, you severely filter the data, converting 356 points into 30. That means you can only detect 15 different frequencies in an FFT, compared with my 178. Thus, you have fundamentally assumed your conclusion, by examining in your FFT only 1/12 of the possible frequencies and 1/12 of the data.

You only let through the filter frequencies that have integral yearly periods: 2-year periods, 3-year periods, 4-year periods. But what you can't see with your approach are the 5.4-year periods, or the monthly periods. You have removed them, and then said, wow, they aren't there.

It is like taking off the wheels on your car and then being surprised that your car doesn't have wheels.
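To make the bookkeeping concrete, here is a small numpy sketch (360 months rather than 356, so the annual reshape is exact; the toy series is mine). It shows both halves of the story: annual averaging cuts the number of resolvable frequencies twelvefold and wipes out the sub-annual cycles, while a multi-year period such as 5.4 years, being below the annual data's Nyquist limit, still shows up:

[code]
import numpy as np

months = 360                  # 30 years of monthly data (illustrative)
t = np.arange(months) / 12.0  # time in years
rng = np.random.default_rng(0)
# Toy series: a 5.4-year cycle, an annual cycle, and noise.
y = (np.sin(2 * np.pi * t / 5.4)
     + 0.5 * np.sin(2 * np.pi * t)
     + 0.3 * rng.standard_normal(months))

# Monthly FFT: 180 nonzero frequencies, Nyquist = 6 cycles/year.
print(len(np.fft.rfftfreq(months, d=1 / 12.0)) - 1)

# Annual averaging: 30 points, 15 nonzero frequencies, Nyquist = 0.5 cycles/year.
y_annual = y.reshape(30, 12).mean(axis=1)
f_annual = np.fft.rfftfreq(30, d=1.0)
spec = np.abs(np.fft.rfft(y_annual - y_annual.mean()))

# The annual cycle is averaged away entirely, but the 5.4-year component
# (0.185 cycles/year) still dominates the remaining spectrum.
print(f_annual[1:][spec[1:].argmax()])
[/code]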

If this is a sign of how well you understand Fourier transforms, I am sorry, I am not impressed.

Now to your kappa function. By filtering the data with an averaging process, you removed the periodicity, and then you draw the mistaken conclusion that the series has no periodicity. But even so, you couldn't remove all of the periodicity--the 4-year cycle had enough residual amplitude to stand out even in your poorly constructed transform.

The raw data of the Global Historical Climatology Network is used in the IPCC. Your desire to ignore this data, when in fact it is used, says to me that you are looking for something other than this weak data to support your position.

I am through with the mathematical nerd grenades. You mangled the data by filtering it and then proclaim there is no periodicity. Such mistakes with time-series data are elementary. Thus, I am going to continue to ask you to respond to why it is that towns separated by merely 20 miles can have a 4 degree temperature difference for decades and then reverse, where the other town is hotter for decades.

The temperature gradients are horrendous and last for an entire year, according to the data. Such temperature differences should at least be reflected in constant winds blowing in one direction throughout the year. At worst, such temperature differences would cause year-long thunderstorms.

Temperature differences, if real, cause meteorological phenomena.

I am through dealing with the bad time-series analysis you have presented. You didn't even seem to know that you can't average a time series without making dramatic changes to the frequency spectrum.
 

Chalnoth

Senior Contributor

But when you convert the monthly series to yearly averages, you do two things. First, you severely filter the data, converting 356 points into 30. That means you can only detect 15 different frequencies in an FFT, compared with my 178. Thus, you have fundamentally assumed your conclusion, by examining in your FFT only 1/12 of the possible frequencies and 1/12 of the data.
This is absurd. Most of the effect of averaging the data is just cutting out the high-frequency information. There are some edge effects at the boundary where the frequency cut is done, and at lower and lower frequencies the effect of the averaging is less and less. Since the only periods we're interested in are much longer than a year, averaging before the FFT is a perfectly valid operation to perform.

You only let through the filter frequencies that have integral yearly periods: 2-year periods, 3-year periods, 4-year periods. But what you can't see with your approach are the 5.4-year periods, or the monthly periods. You have removed them, and then said, wow, they aren't there.
Edit: I made a mistake in this response. See below.
 

thaumaturgy

Well-Known Member
I first want to thank Thaumaturgy for making this thread boring where we are discussing things few understand. This will be my last on the nerd grenades.

My apologies if detailed analyses of technical information "bore" you. I find that an interesting attitude on your part.

I won't, however, stop hammering on the primacy of time series analysis in this respect.

Thaumaturgy used yearly data; I used monthly. As I commented earlier, averaging a time series is a huge filter. In order to explain this, and why one won't get the same power spectra from averaged time-series data as from unaveraged data, you need to understand the concept of the Nyquist frequency.

Indeed you are correct. This weekend I downloaded the statistical programming language "R" in hopes of having a home-based robust stats program; however, learning a whole new "unix-like" command-line programming language in a day is beyond me. Rest assured that I have taken the data and re-coded it so that JMP statistical software (which I have on my work computer) will be more than up to the task of eliminating that "monthly filter".

It is like taking off the wheels on your car and then being surprised that your car doesn't have wheels.

Thanks for the thoughtful explanation. I will remove that filter. It was in error.

If this is a sign of how well you understand Fourier transforms, I am sorry, I am not impressed.

Look, Glenn, I've been more than a little forbearing in this conversation. I think you need to dial back the bitterness a bit. I am more than willing to acquiesce that I am not a signal-processing guru, but you do not strike me as being particularly skilled with statistics, as evidenced by your total avoidance of the statistical analysis of time-series data.

More on that in a later post...

I am through with the mathematical nerd grenades.

Sorry to hear that. I hope you achieved your goal of impressing everyone. Now let me do some....

You see, Glenn, while I may not be as skilled with FFT, and I may be a newbie to statistical analysis of data, I think you are avoiding one in preference to the other. My next post will be a background to this stance and will explain in further detail why your avoidance of statistics in this case is more damaging to your point than it is to mine.

I readily acquiesce that I over-filtered the data. I need to re-code it so JMP will recognize it not as repeats but as individual monthly means.

This will not happen until tomorrow AM at the earliest when I am back at my work computer.

You mangled the data by filtering it and then proclaim there is no periodicity. Such mistakes with time-series data are elementary.

Get this straight: I overfiltered the data and found a secular trend with a statistical significance indicating a real, non-zero trend. It ignored the cyclicity at higher frequencies.

That DOES NOT mean I made a "mistake". It means I made an "OMISSION".

If it were "elementary" in terms of "time series" analysis, you could have done any of the time-series analyses yourself and proven my error.

Try a Durbin-Watson Autocorrelation test.

Thus, I am going to continue to ask you to respond to why it is that towns separated by merely 20 miles can have a 4 degree temperature difference for decades and then reverse, where the other town is hotter for decades.

And I am going to continue to ask you to treat statistical time-series data using statistics.

I am through dealing with the bad time-series analysis you have presented. You didn't even seem to know that you can't average a time series without making dramatic changes to the frequency spectrum.

Thank you for your "kindness". However, you needn't read my next post, which will lay the groundwork for the further analysis. It even includes my mea culpa for overfiltering and ignoring higher-order cyclicities.

Something I see precious little of on your side. You talk big about statistics, but I am so far the only person actually doing statistics.

And ironically enough, this is a statistics conversation.
 

thaumaturgy

Well-Known Member
I have some concern for the debate so far. I fear that in a detailed discussion of data we are splitting into two camps:

Thaumaturgy: Obsession with statistics
Glenn: Obsession with anecdotal analysis of data and avoidance of any statistical discussion

Now I must make the caveat that I am far from an "expert" on statistics; however, statistics are really the only way to make sense of any of this data and any of these trends. I am kind of amazed that Glenn has shown so little interest in any real statistical analyses of the data so far.

So far what Glenn has produced is various jpegs and descriptions of "bad" gauges, which are likely drawn from a database similar to: http://www.surfacestations.org/

(I must add that I mistakenly thought they only assigned positive errors to the station biases, but I don't think that was accurate. I made a mistake. It is unclear, but it appears that they merely quantify the error outside of the bounds, so I assumed +. My apologies for unnecessarily attributing something bad to the folks at surfacestations.org.)

However, I am unsure if there is an assessment of systematic bias. That is a very important aspect. Bias in normalized data is problematic if it is systematically in one direction or another, and that is the important thing that must be assessed, not merely finding "error" in a gauge.
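One way to frame that assessment: a systematic bias means the signed errors average away from zero, which is a testable hypothesis. A minimal sketch, with made-up numbers:

[code]
import numpy as np
from scipy import stats

# Signed siting biases (deg F) for a sample of audited stations --
# hypothetical numbers for illustration only.
biases = np.array([2.1, -0.4, 3.5, 0.2, -1.1, 4.0, 0.8, -0.3, 2.9, 1.5])

# A systematic bias means the *mean signed* error differs from zero,
# not merely that individual gauges have large errors.
t_stat, p_value = stats.ttest_1samp(biases, 0.0)
print(t_stat, p_value)
[/code]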

Then Glenn went on to produce an ex cathedra claim of "periodicity" in a dataset, supported only by a Fourier transform, about which he then proclaimed:

Well, if you truly understand FFT, then you will know that the above data (along with the phase spectrum) will be a complete description of the exact variation of temperature from the satellite. It is a PERFECT fit because I can invert this FFT and end up with the exact temperature curve. I don't think you use FFT as much as you claim or you would know this.

Now I’m as vain as the next scientist so I don’t much like being told the following:

This last sentence clearly convinces me that you know nothing about Fourier transforms.

But I must admit it is true that I clearly don’t have the extensive skills with FFT that Glenn does, however, as the saying goes: we are all ignorant, just in different things.

It is also probably quite reasonable to claim the fit is perfect, but that only hides the inherent error in the data, rendering it useless as a scientific analysis. While the "fit" may be perfect, the data fed into it is, by definition, imperfect.

So I will recommend that it is wrong to say that any data that has merely been passed through an FFT is, ipso facto, a perfect model. Simply t'ain't so. The "fit" may be perfect; the model isn't. It cannot be, unless the data is perfect.

I will further hazard a guess that Glenn, by virtue of his consistent avoidance of any real statistical discussion so far, is doing what we all do to a greater or lesser extent: sticking to his safe place. I'm doing it. I love stats (I'm not really safe there, as I've said numerous times; I'm still new to this field, but it is my love right now). Glenn may not have the stats background necessary to deal with the topic, and I might not have the "signal processing" background.

This is where statistics actually comes to my rescue (yet again). In statistics there is a field which is directly germane to this topic. In fact, apart from its application in economics, it is perfectly tailored for this discussion. I am just learning about it now, as we speak! I would think that maybe Glenn could also benefit from what I'm learning, since it has so far only barely come up from his side of the discussion.

This field of statistics is called TIME SERIES ANALYSIS. This stuff is custom made to deal with periodic data.

Time series are, well, data collected over time at time-intervals! In fact there is an immense body of information about time series analysis developed over the decades in statistics!

Now here's where I excel in these debates: I will readily agree that I have oversimplified the dataset so far under discussion, and in doing so I've made an omission. Fitting a linear (or 2nd- or 3rd-order polynomial) trend, as I have done, is technically incomplete. I have merely modeled the secular trend in the data.

You see, when we are looking at the raw data from this or any time series, we need to be mindful that there may indeed be regular cycles laid on top of larger trends, as Glenn has stated numerous times. In this case the "secular trend" is merely the overall "increase"; in the present case I was able to fit it with a linear least-squares regression and come out with a statistically significant trend! I even got a p-value of 0.0001 that it was a non-flat trend!

That's pretty impressive. But as I noted (I will remind the court that I was the one who noted it; I am doing my work and Glenn's in this case), the adjusted R^2 of the fit was bad. I mean real bad. It was a really noisy data set. The Lack of Fit analysis indicated something else was needed (NOT that the linear least squares was "bad", just insufficient). Higher-order fits proved better with regard to the residuals, but the Lack of Fit still showed something was missing.

Glenn comes along and wipes away all the statistics talk (for reasons known only to him at this point); it was merely waved aside thusly:

BTW, I emailed a statistics professor friend and asked him if linear regressions are good with cyclical data. He said no. So, your linear regression is nice, but not meaningful.

Sadly, his "statistics friend" didn't point out the very real fact that time-series analyses do indeed include linear terms. I'll assume that the statistics source Glenn spoke with didn't bother to go into the topic in depth. So I'll lay a little on ya to help ease the discussion:

The classical time-series model looks like this:

Y_t = T_t + C_t + S_t + I_t

where:
Y_t: the response variable of interest (like temperature anomaly or some such)
T_t: the "secular" trend (the overall trend of the data on long time scales)
C_t: cyclical movement
S_t: seasonal flux
I_t: irregular variation

The key here is that the model relies on an assessment not only of the presence or absence of a larger-scale "secular" trend (which is what I modeled with my linear least-squares regression), but also of cycles on top of that and, and this is key, of IRREGULAR VARIATION.

Irregular variation is, for lack of a more precise term, noise: strange, uncontrolled, or not-modeled variation. The "stuff happens" things (a weird day of strange temperature, the badly placed gauge, whatever).

There simply are no perfect data sets. So when Glenn decrees the FFT is a "perfect" fit to the data, he is ignoring the important fact that the data itself is not perfect. If it were, I would be highly skeptical of the data. There's noise in the model.

Now, when it comes to modeling the time-series data, there are two ways to look at it: additive or multiplicative. Here's the form of the equations:

Y_t = b_0 + b_1 t + bQ_1 + bQ_2 + ... + E

The b-terms are the coefficients of the model. b_0 is the "intercept", and the term b_1 t is the SECULAR TREND. It is the analogue of the linear trend I graphed. However, as noted (by me, not by Glenn), I was able to mathematically show that the linear least-squares model was insufficient. It was likely accurately 'non-zero', but not sufficient.

The cyclicity is shown by the other terms, the bQ_n terms, etc. The very important thing comes at the end: that E is the residual, the "unassigned error".

Glenn would have us assume that there is no error in his model. Sadly, as the folks who do time series analysis would tell us, and as anyone who has ever done statistics in science knows, there is no perfect data set except bad data sets. If there's no error, there's likely no use to the data. If I were to see a perfect fit to data without any assessment of the potential for error, I'd throw it away.

Not all time series analyses are "additive"; sometimes the problems arise in a non-linear change in the response variable. That's where the multiplicative form of the equation comes in:

Y_t = e^(b_0) * e^(b_1 t) * e^(bQ_1) * e^(bQ_2) * ... * e^E
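To make the additive form concrete, here is a minimal least-squares sketch in numpy (a real package such as JMP or R would add proper diagnostics; the function name is mine). The linear term is the secular trend, the dummy coefficients play the role of the bQ cyclical/seasonal terms, and what is left over is E:

[code]
import numpy as np

def fit_additive_model(y, period=12):
    """Least-squares fit of Y_t = b0 + b1*t + (seasonal dummies) + E.
    Returns the secular-trend slope b1 and the residuals E_t."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    t = np.arange(n, dtype=float)
    # Design matrix: intercept, linear trend, and period-1 seasonal
    # dummies (one season dropped to avoid collinearity with the intercept).
    seasons = [(np.arange(n) % period == k).astype(float)
               for k in range(1, period)]
    X = np.column_stack([np.ones(n), t] + seasons)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1], y - X @ beta
[/code]

The residuals returned here are exactly what the autocorrelation test below gets applied to.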

Finally, there's the issue of autocorrelation. If we were to plot the residuals of a linear least-squares regression of cyclical data (thus finding evidence for a "secular trend"), we might note that the residuals are arrayed around 0 in a cyclical manner. In other words, we see that there is cyclicity of a higher frequency in the data. 1st-order autocorrelation is correlation between neighboring residuals, but higher orders of periodicity can be assessed for autocorrelation as well.

Here’s a nice quote from McClave, J.T., and Benson, P.G., Statistics for Business and Economics, 4th edition, Dellen Publishing Co., San Francisco, CA 1988

Rather than speculate about the presence of autocorrelation among time series residuals, we prefer to test for it.

This is the essence of the statistical analysis mindset. Don’t just go with yer gut, test for the results.

The way to test for first-order autocorrelation is the Durbin-Watson d-statistic.

Its general form is:

d = Σ (R_t - R_{t-1})^2 / Σ R_t^2

where R_t is the residual of the time series at time t.

0 < d < 4

If the residuals are uncorrelated, d ~ 2.
If the residuals are positively autocorrelated, d < 2; if the autocorrelation is very strong, d ~ 0.
If the residuals are negatively autocorrelated, d > 2; if strongly so, d ~ 4.
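In code the d-statistic is a one-liner; a minimal sketch, meant to be fed the residuals from a fit like the one above:

[code]
import numpy as np

def durbin_watson(residuals):
    """d = sum((R_t - R_{t-1})^2) / sum(R_t^2). d ~ 2 means uncorrelated
    residuals; d near 0, strong positive autocorrelation; d near 4,
    strong negative autocorrelation."""
    r = np.asarray(residuals, dtype=float)
    return np.sum(np.diff(r) ** 2) / np.sum(r ** 2)
[/code]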

Now, if you've made it this far in, congrats: you've seen what I do on a Saturday afternoon while avoiding the laundry I'm supposed to be doing while my wife works on the weekend. But here's a very important point.

To learn more about these topics I recommend:

Lapin, Lawrence L., Statistics for Modern Business Decisions, Harcourt Brace Jovanovich, 1987.

McClave, J.T., and Benson, P.G., Statistics for Business and Economics, 4th edition, Dellen Publishing Co., San Francisco, CA, 1988.
 

thaumaturgy

Well-Known Member
This is absurd. Most of the effect of averaging the data is just cutting out the high-frequency information. There are some edge effects at the boundary where the frequency cut is done, and at lower and lower frequencies the effect of the averaging is less and less. Since the only periods we're interested in are much longer than a year, averaging before the FFT is a perfectly valid operation to perform.


While true, what happens is that those 5.4-year periods are averaged with the 4.6-year periods, mostly contributing to the 5-year period. Note that this is so because both the averaging and the FFT are linear operations, which means their order can be interchanged. And while the averaging obviously suppresses the relative signal, it suppresses the noise as well. The main problem is that in neither analysis was the noise modeled, which is essential for determining whether the 3-5 year peak is significant or not.

Chalnoth has a point here. While I may have filtered out shorter-period cyclicity, clearly, annual averaging should still have shown any multi-year trend. Which, as I recall, was what I pegged as the secular trend of the data.

JMP and other statistical analysis packages essentially treat secular trends as (as I understand it) extremely long-wavelength periods.

Again, a linear least-squares regression is not wholly inappropriate. It is insufficient to fully describe the data. And I'm even willing to assume that there may be some cyclical trends in the data...but this is a very big point:

In a statistical time-series data set, Glenn has merely asserted that there is a periodicity and shown anecdotal evidence of a periodicity, but he has assiduously avoided all statistical discussion of inherently statistical data.

Glenn, surely you, as an old veteran of scientific debate, understand the logical formalism of debate. You are obligated to support your contention by more than a mere "appearance" when there are numerically robust means to prove it.

You might have the best "answer" but unless you can prove it using the appropriate means, that answer is of little value.
 

Chalnoth

Senior Contributor
Oops! I made a mistake here. Rather than editing, since there has been a post, I thought I'd post a reply.
While true, what happens is that those 5.4-year periods are averaged with the 4.6-year periods, mostly contributing to the 5-year period. Note that this is so because both the averaging and the FFT are linear operations, which means their order can be interchanged. And while the averaging obviously suppresses the relative signal, it suppresses the noise as well. The main problem is that in neither analysis was the noise modeled, which is essential for determining whether the 3-5 year peak is significant or not.
This is not true, it turns out. Having finer data over the same time domain does not add any additional granularity in the frequency domain. All it does is add higher-frequency information. What is happening, then, is that some of the high-frequency information gets wrapped into the low-frequency information as a result of the averaging. As long as you're still at a significantly lower frequency than the frequency at which the averaging was done, however, this contamination isn't that significant.

If you want better granularity in the frequency domain, the only way to do that is to have a longer series of data.
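This is easy to check with numpy's FFT frequency grids: over the same 30-year window, monthly and annual sampling give exactly the same bin spacing, and only a longer record refines it:

[code]
import numpy as np

years = 30
# Same 30-year window sampled monthly vs. annually: the bin spacing is
# 1/T = 1/30 cycles per year either way. Finer sampling only raises the
# Nyquist limit; it does not refine the low-frequency grid.
print(np.fft.rfftfreq(12 * years, d=1 / 12.0)[:3])  # [0. 0.0333 0.0667]
print(np.fft.rfftfreq(years, d=1.0)[:3])            # [0. 0.0333 0.0667]

# Doubling the record length is what halves the bin spacing:
print(np.fft.rfftfreq(2 * years, d=1.0)[1])         # 0.0167 cycles/year
[/code]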
 

thaumaturgy

Well-Known Member
No, you once again misunderstood what I said. I said that the FFT was a near-perfect transform. I didn't say there was no noise. Please drop this line. I think we have miscommunicated.

YOU miscommunicated. We are talking about a statistical data set, and you have chosen to dress up your "fit" as somehow "perfect".

Let's rewind the tape:
I expect no less of you than to fit according to your hypothesis and to back up the claim with an actual analysis. And some assessment of "goodness of fit" would also be of great value.

To which you responded:

Well, if you truly understand FFT, then you will know that the above data (along with the phase spectrum) will be a complete description of the exact variation of temperature from the satellite. It is a PERFECT fit because I can invert this FFT and end up with the exact temperature curve. I don't think you use FFT as much as you claim or you would know this.

Indeed you have answered a direct challenge for a "goodness of fit" to the data by claiming some "perfection" of your model.

You have miscommunicated. Please do not hold me responsible for interpreting your failure to follow the statistical portion of the discussion.


So, I conclude from this that you think it is highly competent to approve an air conditioner exhaust fan next to an MMTS thermometer. That doesn't strike me as competent.

Are you deliberately misrepresenting my repeated stance? I have said over and over that I find these "bad gauges" to be Bad. Unequivocally so.

The fact that I was being rather sarcastic here has clearly escaped you. But I will repeat it yet again, so you will have no actual excuse to cover your false witness:

I find the anecdotal data you present to be examples of bad gauges. Bad data input.

It is incumbent upon you to please support your apparent hypothesis of systemic bias.

Not my job to disprove your anecdotal guesses.

I did an experiment a couple of months ago, right after Hurricane Ike, when I got my electricity back on. The temperature outside was 86 deg F. I put a thermometer on my air conditioner. When it settled down, it read 108.4 deg F.

Who cares? Do you know that I've actually had a thermodynamics class and I know that air conditioners generate heat! Yeah! I know that!

Now, there are lots and lots of thermometers next to air conditioners. You don't talk about them.

How does one get ISO 9003 certified and be so insulated from Statistical Process Control issues?

You would rather talk about FFTs and other things, anything except the bad siting of stations.

No, Glenn, I would rather talk STATISTICS when talking about a STATISTICAL DATA SET.

Look, it is becoming alarmingly clear you are not into stats. That's fine. YOU steered the conversation into FFT; I merely followed, with an explanation that FFT alone was insufficient. It is as incomplete a picture as merely following the secular trend in the data, as I did. (You'll note that in many cases of time-series data a Fourier transform is done; I am not denying the validity of an FT.)

Neither of us is "superior", but at least I was willing to go down the FFT path. Do me the favor of going down the statistics path.

Unless, of course, you are afraid of the actual tools necessary to process this data.

I have posted pictures and you say, gee, that is bad, but then ignore them.

They have not been ignored. Stop misrepresenting my point. My point is that you have anecdotal data which you are trying to foist off in a statistics discussion.

I would think they would cover this in ISO 9003. But I've only been through ISO 9001 and document control. Maybe they don't cover actual statistical information in 9003.

What else can I do?

Discuss some statistics in an inherently statistical discussion?

No, I am ISO 9003 qualified. Wrong again.

Well, then why don't you ever mention any statistical analyses of data in this discussion?

You know what the good book says: "By their fruits ye shall know them".

It isn't that there are one or two of these things; these kinds of errors are EVERYWHERE!

You have just made a statistically unsupported and unsupportable claim. Have you been everywhere? Really?

You have seen photos people have taken from an as-yet-undefined "sampling grid" and found that 24% of this subset have a >5 deg bias, and you are claiming "everywhere"? How truly random is the sampling?

(this isn't just stats anymore, we are also verging on bad "logic" as well.)

Oh brother. Do you know much about sampling theory? If you had 43% of the US population tell you whom they were voting for next Tuesday, the margin of error in the poll would be less than 1%. Really, I must say this looks like a grasp for a straw.

You are too entertaining on this front: Go back to your buds on surfacestations.org and show me their sampling plan.

Are they randomly choosing stations?

Please, stop with the accusation stuff. You spend too much time telling me what you are capable of in regards to statistics and too little time doing actual statistics in this discussion.

I see you don't understand the nature of random draws and their relationship to statistics. Sigh.

Fine. Show me the actual random generation of samples in these various analyses. Don't just accuse me of ignorance, show your stuff.

I'm getting really hot under the collar and that makes me sad.

Rather than simply lobbing accusations of my incompetence, stop blustering and start giving me the goods.

No wonder we have communication problems. You are looking for confirmation of your belief and lawyering your way out of the problems.

Son, if I wanted to 'lawyer' my way out of a problem here, believe me, I could do it. Something I get to do as an unrelated part of my job is "Intellectual Property Law Issues". Part of what we do in innovation in industry.

So far in this discussion, I'm the only one who seems to follow the logical formalisms inherent not only in statistics but in the overall hypothesis-testing protocols.

-sigh-

A Fourier analysis, an edited set of data, comparisons of raw data. Are you saying that looking at the raw data upon which the conclusion of global warming is based is not to be done?

Wow, you really are ISO 9003 certified? Really? So, do you know why people do statistics? Do you really understand the need for an appreciation of "error" in the data?

Huh.

So, you think putting an air conditioner blowing on a thermometer is a good thing?

How many times do I have to answer this question???

Really, please just read my posts. This time follow along. Have I ever said it was a "good" thing or something that was of no concern?

Ever?

Did you miss the Gauge R&R portion of your ISO training?

Please, less bluster and bluff, less "anecdotal data" and more statistical robustness in a statistical discussion.

Is that too much to ask? The title of the thread is "Global Warming--the Data, and serious debate".

Is this data best dealt with statistically or not? Just answer that point.
 

thaumaturgy

Well-Known Member
If he claims that the current level of CO2 is unprecedented, I stand by my claim at least as far as CO2 is concerned.

I don't believe Revelle was making any claims about relative CO2 concentrations over Earth's history. I believe his point was that this is the first time humanity has been, by its own hand, shifting this CO2 balance in a way we know about. Which is why he called it a vast global "experiment".


I am sorry, I still have to laugh at this. CO2 in the atmosphere doesn't care whether or not it is human. The quote I was responding to, as I recall (and I am simply dead tired right now) was

But obviously not tired enough to just go off on a tangent that may not be an accurate representation.


Understood.

exactly how can we claim that this experiment couldn't happen in the past--it did.

I didn't realize mankind was generating a massive CO2 signature from the combustion of fossil fuels, at a knowable rate, in the Miocene. I must have missed that lecture in Historical Geology.

If your guy doesn't know about this, I stand by my claim that he doesn't know very much about earth history. He may know a lot about the ocean, but not about earth history.

[wash my mouth][wash my mouth][wash my mouth]. Please, stop this line of "attack" before you go too far off the deep end. You are going into "strawman" territory now. Maybe you just need to learn a little about the background of the CO2 debate. It goes back to the 1950s and Revelle and Suess's pioneering work. And Revelle wasn't some "global warming" nut or anything. He was merely tracking the CO2 buffering capacity of seawater.

He isn't just "my guy"; he's about as famous in oceanography as Hutton is in geology, or as Vine and Matthews are in plate tectonics.

The only difference between then and now is that humans are the cause of the CO2. But the earth doesn't care about that.

Well, the earth doesn't care, but humanity can quantify humanity's impact, and that's what makes it the "experiment". Not that it is the highest level of CO2 ever encountered.

Was this point too subtle for you? Get some sleep. Think about it. Read up on it.
 

thaumaturgy

Well-Known Member
LOL LOL. This is not a single-point transform. Do you know what the transform of a single point, that is, a spike, is? It produces an amplitude spectrum which is perfectly flat and contains all frequencies with equal amplitudes. Boy, you really don't know Fourier transforms very well.

As a point of information, I had 357 points to transform--just shy of 30 years of monthly data.

You are a smart guy. Don't try to argue about things you don't know. Be skeptical; that is OK, but don't argue about processes you don't know very well.

Wow. I didn't mean that you transformed a literal single point! (Please, if you can possibly spare some, grant me at least a little intelligence; I saw the data too!) I put the quotes around it more out of sloppiness. What I was rather getting at was that you assume all points in the data set are equally "valid" (i.e., no noise), and that this was, after all, a single data set so far discussed in detail, at the expense of masses of other data.

So maybe I did state it poorly and unclearly, but don't go off on a rail about something that ends up with you lambasting my knowledge.

Remember, Matthew 7:3. I suspect you may have a bit of a beam in your eye with regards to statistics. The mote of my FT skill is no less egregious, but please, dial back on the vitriol here.
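For what it's worth, the flat-spectrum property of a single spike that Glenn describes is easy to verify numerically:

[code]
import numpy as np

# FFT of a single spike (a discrete delta function): every frequency
# appears with exactly the same amplitude, i.e. a flat spectrum.
spike = np.zeros(64)
spike[10] = 1.0
amps = np.abs(np.fft.fft(spike))
print(amps.min(), amps.max())  # both 1.0
[/code]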
 

thaumaturgy

Well-Known Member
Everywhere? Below I did an analysis of every station I could get my hands on in Missouri. I did a chain of city-to-city comparisons, comparing the nearest city (or the 2nd-nearest city) in a chain across the state. Then I took the max-min temperature difference for all the comparisons. Here is the data in the picture below. None of these towns are more than about 80 miles apart. Notice how big the temperature spreads are.

Edited to add, about the Missouri picture: the way to read this is the following. In the first column we have Appleton minus Truman. When Appleton is hotter, the number is positive. Thus there was a year where Appleton was about 4.5 deg F hotter than Truman on average FOR THE ENTIRE YEAR!

Then there was a year where Truman was on average hotter than Appleton by about 1.75 deg F FOR THE ENTIRE YEAR! These are YEARLY AVERAGES. That is one heck of a temperature gradient. Where are the winds that such temperature differences cause? Where are the thunderstorms that last for an entire year?

OK, I snagged the Missouri temperature data from USHCN for 1893-1994

Could you please explain what, in detail, your graph shows? Because here's what I'm seeing:

For example, Appleton: the MAX high in the century was in July 1901. The temp: 104.23 deg F.

July 1901 for Truman had a high temp of 102.17 deg F. Are you really concerned that there was a 2 deg F temperature difference?

In Truman, its all-time high for a July was in 1954, at 103 deg F. Appleton that same month? 101.5 deg F. Again, less than a 2 deg difference.


In the "Min" category, Appleton's lowest temp (aside from the occasional "-99.99" which I take to be errors) is 2.52degF in Jan 1940. Truman? Ironically enough it has the exact same month, same year (Jan 1940) as it's lowest recorded temp at 0.19degF.

Let's look at the Lebanon-Lamar couplet:

Lamar's highest monthly average max was, whaddaya know, July 1901, at 103.8 deg F! That sounds familiar...hmmm, hot month in the Show-Me State (I did a graduate degree in Missouri, so I kinda like this "show me" stuff).

Lebanon's highest was in 1934, at 104.9 deg F, but in 1901 it was still pretty smokey, at 101.1 deg F.

Mins? Lamar's was 5.8 deg F, in Jan 1940. A lot like Appleton, wouldn't ya say?

Lebanon's lowest was 4.79 deg F. Same month, same year.

You want to make big deal of anecdotal data? I can do that.

But more importantly, I'd like to note that your graph of the Missouri temp station differences shows a range of +8 deg F, right? Please explain exactly what you were calculating, and describe the adiabatic correction factor you invoked.

But more importantly, you are making a huge deal of a 1.75 deg F temperature difference? Surely in your ISO 9003 training you must have learned something about relative error.

This is why the discussion of statistics is so crucial to this debate. It is irrational and frankly pointless to merely go through, find messed-up gauges and occasions of data inhomogeneities, and assume the entire system is worse than useless.

Again, if relative error is a problem for you, then you really need to learn how to harness statistics. Error can be, to some greater or lesser extent, quantified and dealt with. You cannot, however, simply go anecdotally and assume any single gauge is giving you the entire picture. Please tell me that in your ISO 9003 training you got some appreciation for the fact that there is never any perfect measurement system, right?

Did you do any Gauge R&R studies?
 

grmorton

Senior Member
Right, this is what scientists do. They do their best to find and remove any biases in their data. I see that you left out the more detailed description of the process:

Which is why you cited the Washington Post?

The high noise problem is solved by averaging over large areas. As for the non-unique solution to the potential field, since the overall value was not of interest, but instead the change in the value over time, I fail to see how this is a problem.

The other problem, which people who don't deal with gravity data don't know about, is that the density of the material leaving the Antarctic continent must also be known perfectly. How much rock flour, how many rocks, do you think the base of the ice carries? That adds to the density of the ice, and if you don't account for it correctly, it will look like more ice is leaving the continent than actually is.

I once oversaw a gravity study of a salt dome. We were trying to use the gravity and magnetics to calculate the thickness of the salt layer in the Gulf of Mexico. The one thing we didn't have was the exact density of the sediments beneath the salt. For very tiny changes in density, the calculated base of salt went up or down by kilometers. We wanted to use this data to help us in a depth migration of a 3D seismic volume. After spending lots of money trying to find the base of salt, we had to give up because of the lack of precise knowledge of the sedimentary density.

Now, rock flour and base-load rock in a glacier have about 3x the density of ice. Thus, if across all of Antarctica the glaciers are disgorging slightly more rock than is being picked up, you will skew the results.

I have no doubt you won't care about this fact, but it is true.

You might potentially have an argument that they have unaccounted-for systematics, but I don't think your claims as to the noise or non-uniqueness of the gravitational potential solutions are valid.

They also raise several other limitations on their data. But what the heck, you don't really care about any objections anyway from someone who has actually done gravity modeling. In a career in the oil industry one gets exposed to a huge number of technologies, and as a technology director at one time, the number of technologies I became familiar with was even greater.

As a geophysical manager working from the Atlantic Coast to California and from the Gulf of Mexico to the North Sea, I used millions of dollars of technology to help us find oil. Not all of it was successful. Sometimes you learn the hard way how sensitive a gravitational solution is to the exact density.


This is cherry-picking because it doesn't pay any attention to the current data. And, by the way, many models actually do predict that the overall Antarctic ice mass will increase during the first years of global warming, due to increased over-land precipitation. That is what makes the gravity measurements of the mass so shocking.

Or,

...as if somehow a 3-8 year periodicity contradicted the existence of a decades-long trend.

Or,


Or even better, this post where you exclusively focus on cherry-picking data:
http://christianforums.com/showpost.php?p=49215997&postcount=88
...showing a shocking disconnect between weather and climate, as well as the fact that some areas are expected to cool while the average temperature of the globe increases.

This from the guy whose own cherry-pick was declaring that he wouldn't discuss the weather stations. Since I have downloaded all of Missouri, and large chunks of other states, I don't take your cherry-picking charge very seriously. I will let others judge that question.

I came here to discuss the station data, which, as I said, I will do. If you and thaumaturgy choose to cherry-pick which data you respond to and which data you won't, fine. The temperature data is what is used in the IPCC.

So, let's go back to the temperature data. How about this picture tonight? Let's discuss its consequences from a meteorological perspective.

Attached is the temperature difference between Perry, OK and Stillwater, OK. From 1899 to about 1917 the subtraction is generally positive, meaning Stillwater was hotter than Perry. Now, Perry is NW of Stillwater. Since hot air rises, we should see air flowing from Perry towards Stillwater in those years. The typical wind direction there is from either the NW or the SW, so no problem during those years.

But in 1917 the temperature record says that Perry was hotter than Stillwater by about 2 degrees. By the logic above, the wind should have blown from east to west, a very rare direction for wind in Oklahoma, and it should have done so for three years running. Temperatures have consequences, and we don't see those consequences in the wind record.

Then from 1920 to 1938 Stillwater was once again hotter, by as much as 2 deg in 1930. With the exception of two brief periods, from 1945 to 1974 Perry was hotter than Stillwater, after which, for the most part, Stillwater has been warmer than Perry, by up to 2 degrees F.

Now, these towns are only 16 miles apart as the crow flies. Let's put these temperature differences in other terms. The temperature gradient along cold fronts has been classified as follows:

site below said:
A suggested set of criteria based on the horizontal temperature gradient has been devised. A weak front is one where the temperature gradient is less than 10 deg F per 100 miles; a moderate front is where the temperature gradient is 10 deg F to 20 deg F per 100 miles; and a strong front is where the gradient is over 20 deg F per 100 miles.
http://www.tpub.com/content/aerographer/14312/css/14312_117.htm

Now, for perspective, 10 deg F/100 miles is 0.1 deg F/mile, and 20 deg F/100 miles is 0.2 deg F/mile; that is the threshold for a strong cold front. A 2 deg F difference over the 16 miles between Perry and Stillwater works out to 12.5 deg F per 100 miles. What the data says, then, is that in 1930 the equivalent of a moderate cold front sat between Perry and Stillwater for the entire year, and the same was true in 1917-1919 and again in 1951-1952. Moderate fronts also sat between these two cities for the whole of 1947, 1948 and 1957.
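
If anyone wants to check the arithmetic, here is a minimal Python sketch that converts a station-pair difference into a gradient per 100 miles and applies the criteria quoted above (the 2 degF and 16 miles are the Perry-Stillwater numbers from this post):

Code:
def classify_front(delta_t_degF, distance_miles):
    """Classify a horizontal temperature gradient using the quoted criteria."""
    grad = abs(delta_t_degF) / distance_miles * 100.0  # degF per 100 miles
    if grad < 10.0:
        label = "weak"
    elif grad <= 20.0:
        label = "moderate"
    else:
        label = "strong"
    return grad, label

# Perry, OK and Stillwater, OK: ~16 miles apart, ~2 degF annual-mean difference.
grad, label = classify_front(2.0, 16.0)
print(f"{grad:.1f} degF per 100 miles -> {label} front")  # 12.5 -> moderate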

Now, temperature differences have consequences, like thunderstorms. But there were no year-long thunderstorms in those years. The winds don't match the temperature record. No one knows why the data shows such high temperature gradients. You may think this isn't of any value and cherry-pick which problems you respond to, but I am simply going to keep pointing out the problems. You can choose to respond or not. I really don't care.

One thing is certain: this data is used by the IPCC, and it is crap. How can we know that the climate has changed if the data measuring temperature over the past 100 years is this bad? I know, you won't respond to that question.
 

grmorton

This is absurd. Most of the effect of averaging the data is just cutting out the high-frequency information. There are some edge effects at the boundary where the frequency cut is made, and at lower and lower frequencies the effect of the averaging is less and less. Since the only periods we're interested in are much longer than a year, averaging before the FFT is a perfectly valid operation to perform.


Edit: I made a mistake in this response. See below.


But averaging within a year constitutes a discontinuous filter: each segment/year has a different filter applied. We never do that in time series analysis. Still, I agree with your last statement.

I would point out that if, as is being claimed here, the satellite data is essentially a straight line, it should yield the power spectrum of a ramp. It doesn't. Note the second trace in the plot linked below: a ramp's power spectrum falls off smoothly and doesn't have the bumps that both Thaumaturgy's and my FFTs had. Thus, your data isn't linear. QED

The pic is from http://www.dsprelated.com/blogimages/SteveSmith/SWSmith_Power_Law_Pairs.gif
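
You can check this at home. A short Python/NumPy sketch with synthetic series; the slope, cycle period, and series length are made-up illustration values, not the satellite data itself:

Code:
import numpy as np

n = 360                       # e.g. 30 years of monthly samples
t = np.arange(n)

ramp = 0.01 * t                                   # a pure linear trend
cyclic = ramp + 1.0 * np.sin(2 * np.pi * t / 44)  # same trend plus a ~3.7-year cycle

def power_spectrum(x):
    """One-sided power spectrum of the mean-removed series."""
    X = np.fft.rfft(x - x.mean())
    return np.abs(X) ** 2

p_ramp = power_spectrum(ramp)
p_cyclic = power_spectrum(cyclic)

# The ramp's spectrum falls off smoothly (roughly 1/f^2, no bumps);
# the cyclic series shows an extra peak near bin n/44 ~ 8.
print("ramp,   bins 1-10:", np.round(p_ramp[1:11]).astype(int))
print("cyclic, bins 1-10:", np.round(p_cyclic[1:11]).astype(int))
print("largest peak above bin 3 (cyclic):", int(np.argmax(p_cyclic[3:]) + 3))

If the series really were just a trend, its spectrum would look like the first line; the bumps both of us found say otherwise.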
 

thaumaturgy

Let's look at Farmington and Conception. Farmington is down in SEMO; I spent a lot of time down there myself collecting kerogen samples over the MVT mineralizations waaay back when. Conception is at the opposite corner of the state. Let's compare their maxes and mins:

Min:
Farmington's lowest was 3.5 degF, in Jan 1918; at the same time Conception was at 2.79 degF.

Conception's lowest temp came in Jan 1940 at -3 degF, but at that same time it was a balmy 7.9 degF in Farmington.

Max:
Farmington's max came in July 1901 (wow, another from that year!) at 100.9 degF, while Conception's was 96.3 degF at the time.

But in July 1936 Conception hit its own max of 100.9 degF.

What does this prove? Nothing, really. Just fun. I sure wouldn't want to have been in Missouri in 1901.
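
For anyone who wants to pull the same extremes out of their own download, here is a small pandas sketch; the file name and column names are hypothetical, so adjust them to however your station files are laid out:

Code:
import pandas as pd

# Hypothetical layout: one row per station-month with columns
# "station", "month" (a date), and "temp_degF".
df = pd.read_csv("missouri_stations.csv", parse_dates=["month"])

for station, grp in df.groupby("station"):
    hottest = grp.loc[grp["temp_degF"].idxmax()]
    coldest = grp.loc[grp["temp_degF"].idxmin()]
    print(f"{station}: max {hottest['temp_degF']} degF in {hottest['month']:%b %Y}, "
          f"min {coldest['temp_degF']} degF in {coldest['month']:%b %Y}")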
 

grmorton

My apologies if detailed analyses of technical information "bore" you. I find that an interesting attitude on your part.

Boy, you twist everything. I said it was boring our readers, and only about half a percent of them have the ability to follow our discussion.


I won't, however, stop hammering on the primacy of time series analysis in this respect.

Then at least do me the favor of responding to some of the temperature records. Or are you afraid of them?



Indeed you are correct. This weekend I downloaded the statistical programming language "R" in hopes of having a home-based robust stats program; however, learning a whole new "unix-like" command-line programming language in a day is beyond me. Rest assured that I have taken the data and re-coded it so that JMP statistical software (which I have on my work computer) will be more than up to the task of eliminating that "monthly filter".



Thanks for the thoughtful explanation. I will remove that filter. It was in error.

I wish I remembered how to give blessings on this darn site. Well done. Can you tell me how to send you some blessings? Honesty demands reward. I promise that when (not if) you back me into a corner, I too will acknowledge it. I will be hard to back into that corner, but I do eventually find myself there on some issue or other.


Look, Glenn, I've been more than a little forbearing in this conversation. I think you need to dial back the bitterness a bit. I am more than willing to concede that I am not a signal-processing guru, but you do not strike me as being particularly skilled with statistics, as evidenced by your total failure to deal with the statistical analysis of time-series data.


I will try. You are right, and I apologize; please forgive me. All I will say is that it is frustrating to post abysmal temperature data and get almost no response to it, to be accused of cherry-picking by someone who won't even discuss the temperature record, and to have it said of me that I looked at the data, saw it go up and down, and thus concluded it was cyclical. But I do applaud your statement above.

For the record, I am not an FFT guru either. There are guys in my industry who top almost everyone else; I am merely knowledgeable in it.

And I am going to continue to ask you to treat statistical time-series data using statistics.

...

Thank you for your "kindness".

I apologize. Can we at least discuss the percentage of stations that are next to air conditioners? Your math suggesting that only 24% of the stations (not 53%) are class 4 assumes that every one of the unsurveyed stations is fine. Doesn't that strike you as odd?

Let's start over. You address the crappy station data and I will address the time series and the statistics. I will skip the rest of what you wrote; we are multiplying posts like rabbits. Since I don't want to leave unanswered something you find important, can you do me the favor of looking through your posts and then posting one per day? I will do the same--one per day for each of us. I am sure your boss would rather you work than write email at work. I have two hours a day when I am not working or eating, and I do like doing a few other things as well.

If there is a post that you made tonight that you feel I simply must answer, I will do so. But by doing this, I am letting you have the last word on some of those issues unless they come up again.
 

thaumaturgy

But what the heck, you don't really care about objections from someone who has actually done gravity modeling. In a career in the oil industry one gets exposed to a huge number of technologies, and as a onetime technology director, the number of technologies I became familiar with was even greater.

-sigh-

We sure do spend a lot of time waving our "swords" around, don't we?



Since I have downloaded all of Missouri, and large chunks of other states, I don't take your cherry-picking charge very seriously. I will let others judge that question.

I've downloaded the data as well. What do you want to discuss about it? I like Missouri; I lived there for several years and would have stayed with Peabody Coal in St. Louis if a permanent position had come open.

I came here to discuss the station data, which, as I said, I will do. If you and thaumaturgy choose to cherry-pick which data you respond to and which data you won't, fine. The temperature data is what is used in the IPCC.

Well, be fair, Glenn: the IPCC and actual real-life climate scientists deal with the data as a statistical set, not just anecdotally.

I don't think you're cherry-picking. I think you are stuck on this anecdotal-data kick, and you are unwilling to discuss the statistics of an inherently statistical data set.

Please don't accuse me of "cherry-picking" either. We are both lobbing a ton of data at each other. I look at your fun little jpegs, and I have consistently pointed out that they are a small subset from a group whose sampling methodology you have yet to describe, so I cannot say whether it is or isn't a truly random sample.

YOU brought up sampling theory; back it up in the present case. Are these photos from a truly random sample of the U.S. data stations? Or are they collected however people happen to send them in? The map on the surfacestations.org site looks like there are some big "holes," which suggests perhaps they are not a truly random sample(?). I don't know.

So, let's go back to the temperature data. How about this picture tonight? Let's discuss its consequences from a meteorological perspective.
Yes, let's dive right into more anecdotal data. Thankfully the IPCC, NOAA, NASA, the National Weather Service, the University of East Anglia, etc. don't rely on anecdotal data to verify their models. But let's all of us here stick with anecdotal data analysis.

Attached is the temperature difference between Perry, OK and Stillwater, OK. From 1899 to about 1917 the subtraction is generally positive, meaning Stillwater was hotter than Perry. Now, Perry is NW of Stillwater. Since hot air rises, we should see air flowing from Perry towards Stillwater in those years. The typical wind direction there is from either the NW or the SW, so no problem during those years.

But in 1917 the temperature record says that Perry was hotter than Stillwater by about 2 degrees. By the logic above, the wind should have blown from east to west, a very rare direction for wind in Oklahoma, and it should have done so for three years running. Temperatures have consequences, and we don't see those consequences in the wind record.

Glenn, you've been a Director of Technology for a major corporation, and yet you are providing data wholly without a framework in which to discuss it.

To wit: you are looking at temperature data taken in 1917 in Oklahoma and drawing detailed conclusions from 2 deg temperature differences? Honestly? Really? How accurate do you actually know those gauges to be? What is the relative error?

Now, of course this will make a difference; the data from across the globe going back over a century is important, and indeed errors are known to exist. But think about it for a second. A 2 degF temperature "error" in a thermometer set out in Oklahoma in 1917 is hardly a shockingly bad gauge. I don't know what time of year your data comes from, but in 1917 in Stillwater the annual average max was about 70 degF. That's roughly a 3% relative error! Please give me a break. 3% error is "happy" time!

If the entirety of the conclusions about global warming were drawn solely from Oklahoma's raw absolute temperature data from the early 20th century, I'd say it was irrational to claim knowledge of global temperature increase. But it isn't.

The fact that you can find "bad gauges" simply shows that there is error in the data. That's just life. That's why statistics is a real area of study.

You can't just "process the signal" and ignore the statistics. Statistics deals with error, with the quantification of error, and with the appreciation that error informs our every decision.
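
To make that concrete, here is a small Python simulation; the 1.6 degF station noise and the 1.1 degF signal are Glenn's own numbers from earlier in the thread, and everything else is illustrative. Assuming the station errors are independent (systematic biases are a separate problem, which is what homogenization adjustments target), the error of an N-station mean shrinks like 1/sqrt(N):

Code:
import random
import statistics

random.seed(1)
true_anomaly = 1.1      # degF, the century-scale signal we want to resolve
sigma_station = 1.6     # degF, assumed noise of any single station

for n_stations in (1, 10, 100, 1000):
    # Many trials: average n_stations noisy readings, see how the mean scatters.
    trials = [
        statistics.mean(random.gauss(true_anomaly, sigma_station)
                        for _ in range(n_stations))
        for _ in range(2000)
    ]
    print(f"N={n_stations:4d}: mean {statistics.mean(trials):5.2f} degF, "
          f"scatter {statistics.stdev(trials):.3f} degF "
          f"(theory {sigma_station / n_stations ** 0.5:.3f})")

One noisy gauge cannot see a 1.1 degF trend; a network of hundreds can, provided the errors aren't all biased the same way. That is exactly the question statistics is built to answer.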

Decisions based on anecdotal data are bad decisions. There's no "gut check" to be made. You can't look at individual data points and draw reasonable conclusions on a global scale.


Now, these towns are only 16 miles apart as the crow flies. Let's put these temperature differences in other terms. The temperature gradient along cold fronts has been classified as follows:

You are spending a lot of time hyperanalyzing noise at this point, Glenn. You should be more than familiar with noise from your work in signal processing. Please do us a favor and spare us what we've all seen rookie freshmen do: analyze noise as signal. I've done it myself; I used to try to find peaks of interest in my FTIR spectra all the time. A lot of them were noise.

...cherry-pick which problems you respond to, but I am simply going to keep pointing out the problems. You can choose to respond or not. I really don't care.

Why do you avoid statistics in a statistical discussion?

You are bringing a sword to a gun fight.

One thing is certain: this data is used by the IPCC, and it is crap. How can we know that the climate has changed if the data measuring temperature over the past 100 years is this bad? I know, you won't respond to that question.

I will. I will tell you with a straight face that a 3% error in century-old technology is probably not a particularly bad thing. What is a bad thing is hyperanalysis of noise as signal. That's why we have statistics.

Would you answer my question? Why do you not discuss statistical data using statistics?
 