
Global warming--the Data, and serious debate

thaumaturgy

Well-Known Member
Balling and Idso [2002] found that adjusted HCN temperature trends for the past 30 years were slightly more positive than those calculated using an updated version of the Jones [1994] dataset. This is surprising because the latter contains a nearly complete version of HCN that includes adjustments for the time of observation bias. To resolve this discrepancy, temperature trends derived from the fully adjusted HCN database are compared here with those derived from two subsets of Jones. The first subset consists of all U.S. stations in Jones (1578 in total). The second consists of 248 stations that are not in HCN and that require no adjustments for variations in observation time because they always have an observation hour of midnight. (Source: Vose et al., 2003, Geophysical Research Letters, cited in earlier post)


Then later on:

Consistent with Balling and Idso [2002], HCN has a larger trend during that period (0.29°C dec^-1) than either subset of Jones (each with a trend of 0.23°C dec^-1). However, the difference in trend results from a drastic change in the size of the Jones dataset in 1996; prior to that point the network contains at least 1000 U.S. stations per year (the majority being HCN stations), whereas thereafter it contains no more than 150. When trends are computed for the period 1970–95, HCN and both subsets of Jones exhibit the same rate of warming (0.25°C dec^-1). The fact that the non-HCN subset of Jones has the same trend as HCN suggests that the time of observation bias has been properly treated in the latter. (ibid, emphasis added)


Huh. Whodda thunk it? Statistics does matter!

And further, people aren't just "making up data" to fill in some preconceived warming bias. How do we know this?

Welllll.

A switch to morning observation times has steadily occurred during the latter half of the century to support operational hydrological requirements, resulting in a broad-scale nonclimatic cooling effect. In other words, the systematic change in the time of observation in the United States in the past 50 years has artificially reduced the temperature trend in the U.S. climate record [Hansen et al., 2001]. (ibid, emphasis added)


 

grmorton

Senior Member
To be fair I think most people who are scared off by nerd grenades have left long ago.

I for one would like to see you address the statistics as much as I'd like to see Thaumaturgy address Balling & Idso.

(And I'm not quite sure what you were trying to do with standard deviations either, given that you are discussing trends in the mean... but then two half-hearted semesters don't usually make people expert statisticians.)

One of the reasons I didn't want to get into statistics is that it opens a can of worms and it drives the readers away. There is a rule if you have ever written a book: you cut your readership in half for each equation you publish. That has happened here.

Now, I have been called an amateur on this. In some sense, this is right. I have never been paid as a statistician. I am not a petroleum engineer, but I was in charge of reservoir modeling for 5 years. I am not a geologist and never took a geology course, but the Ph.D. geologists who reported to me when I was technology director had to do extra prep to answer my questions. They couldn't tell that I had never had a geology course (and my 4,000-volume personal scientific library is about 20% geology). So, with that as an admission, let's see what an amateur can do when he, unlike those calling him an amateur, actually looks at the data rather than merely reading the NOAA brochures.

There is an interesting philosophical question that must be answered before one can actually deal in the standard deviations of climate. It is something I have been thinking about but don't quite know what to do with.

Let's take a measurement of the speed of light, which I did long ago in physics. There is a true value for the speed of light and my measurements (hopefully) approach the correct value. I can use multiple measurements to come up with a mean and then calculate the standard deviation from all the data, that is, all the measurements OF THE SAME THING--e.g. the speed of light. If one assumes that my measurement errors are random, then one can use the standard statistical techniques.

Measure a table 50 times and you will get a mean and a standard deviation from those data.

But now let's go to temperature. Each day gets one measurement. There are no repeated measurements, so there is no mean and no standard deviation for an individual day. It is a unique measurement because one can't go back in time and measure June 5th, 1847 again or make several measurements of that day's temperature.
Now, the closest we get to repeated measurements are the measurements at nearby stations--but they are 20 or more miles away and you are not really measuring the same thing. It is a unique one-off measurement. You can average the two and calculate a standard deviation from those two cities, but if you want to add more stations to the party, you have to move further and further away to stations that are more and more different from the ones already being used. How far can one move away and say you are measuring the same thing?


Now, let's look at a yearly temperature record of a single station. You have 365 measurements of 365 different things—the daily temperature. What one does is to measure each day's temperature and then average them for the yearly average. But not a single daily measurement is actually an estimation or measurement of the annual average temperature. So, while I can calculate a mean and all the statistical variables from these 365 measurements, they don't really fit the standard view of what the statistics of measurement are all about. It is taking 365 one-offs and averaging them to get another one-off. One can't remeasure the temperature of St. Paul, Minnesota for the entire year and then compare that to the first measurement.

Another way to look at this is to note that when I measure the Jan 3, 2008 temperature, I am NOT making a measurement of the annual average temperature. I am measuring Jan 3, 2008, and I have no other chance to re-measure it to see what the error bar really is. Thus this data is a time series, not 365 independent measurements of the annual average temperature.

So, how do we measure the error bar for a yearly temperature? Well, we could try to start this by taking the intrinsic error on the daily temperature. The max temperature and the minimum temperature are measured as integers. Below is a picture of a temperature record for 2006 from Florida. One can see the abysmal shape this record is in. But basically what happens is that every day the max and min are measured and written down. Since the numbers are integers, they are 'correct' to +/- 0.5 deg F. So, let's use that as the estimate of the error. If one uses the relevant equation, sigma = sqrt((0.5)^2 + (0.5)^2 + ... [365 terms]) / 365, this works out to a sigma of 0.026 deg F. Thus, theoretically we should know the annual average temperature to that SD.
If we know the temperature to that accuracy, then clearly it would be hard to doubt global warming and the claims made by its advocates. However, it is clear that this cannot be the case.
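For anyone who wants to check that arithmetic, here is a minimal Python sketch of the calculation (it assumes, as above, +/- 0.5 deg F per daily reading and independent errors):

[code]
# Sketch: standard error of an annual mean built from 365 integer readings,
# each assumed accurate to +/- 0.5 deg F with independent errors.
import math

daily_error = 0.5   # deg F, half the integer reporting increment
n_days = 365

sigma_annual = math.sqrt(n_days * daily_error**2) / n_days
print(round(sigma_annual, 3))   # about 0.026 deg F
[/code]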


Remember Susanville, California? It is a class one station. That is as good as it gets. The raw data, if one averages the entire yearly series, has an average of 51.7 deg F. The edited data, which is what is on the HCN website Thaumaturgy sent us to, has an average of 48.9. Now, I don't know, being the amateur I am, but it seems odd to me that if one actually knows the temperature to +/- 0.026, one shouldn't have to do enough editing on the series to move the average 82 standard deviations away! Now, some will criticize this value because it is an average of the entire series. So, how much do they move the averages for individual years? Since the sanitized data only covers 1905 to 1994, I can only discuss that part of the record. In these 90 years, only 13 of the yearly averages were moved less than 3 standard deviations. Twenty were moved between 3 and 10 SD; the rest of the years were moved by more than 10 SD from the original raw value. Some years were moved over 400 standard deviations, if one calculates the standard deviation by this method.
I don't know; I am too amateurish to know for certain, but I suspect that this means they really don't know the temperature to the accuracy one might expect from the instrument accuracy alone.


So, what other way could we determine the standard deviation? Well, we could lay out the variability in temperature on each individual day throughout the year, and then perform the standard calculation one uses when adding numbers with error bars. Below is a picture of the SD per day for St. Paul, Minnesota. Note that it has a maximum SD of 15 deg F in January and a low of around 6 in July. If we calculate the yearly standard deviation from this data by mimicking it with a curve SD = 10.5 + 4.5*sin(2*pi*n/365) (where * is multiplication and n is the day number), we find that the annual SD in St. Paul would be 0.57 deg F. This is a wee bit larger than the instrumental SD. But such a value would still cause problems. Let's assume this is right. Now you want to calculate the difference between two years in the series to determine the change in climate. Well, once again, using the standard equation for the standard deviation of a subtraction (which is the same as for addition, except that one doesn't divide here because one isn't averaging), one finds a resultant SD of 0.81.
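Here is a small Python sketch of that propagation, using the assumed daily-SD curve SD(n) = 10.5 + 4.5*sin(2*pi*n/365); the numbers are illustrative, not the actual St. Paul data:

[code]
# Sketch: SD of an annual mean propagated from an assumed daily-SD curve,
# and the SD of the difference between two such annual means.
import math

daily_sd = [10.5 + 4.5 * math.sin(2 * math.pi * n / 365) for n in range(1, 366)]

sd_annual = math.sqrt(sum(s**2 for s in daily_sd)) / 365
print(round(sd_annual, 2))      # about 0.57 deg F

sd_difference = math.sqrt(sd_annual**2 + sd_annual**2)   # no division: not averaging
print(round(sd_difference, 2))  # about 0.81 deg F
[/code]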

That is an interesting figure, because the claim is that the world has only warmed by 1.1 deg F or so. And that then brings into play another statistical point of interest, the CV, the coefficient of variation. This is the SD/mean * 100. Now, if one reduces the temperature record to an anomaly series (by subtracting some arbitrary value from it), the change in that anomaly would be 1.1 F according to the global warming people. But the SD for the series wouldn't change, because to create an anomaly series one is subtracting an exact number, so there is no change to the SD under such a situation. That means that the coefficient of variation for the anomaly would be


0.81 / 1.1 * 100 ≈ 74.


The interesting thing about the CV is that if one inverts it, one gets the signal-to-noise ratio, at least that is what time series folk, like EEs, do. That means that the strength of the signal (the 1.1 deg F of warming) is only about 0.014 of the strength of the noise. I don't know, being the amateur that I am, but I seem to recall posting pictures of the effect of a signal that is half the noise level. You could no longer see the original signal in that circumstance. But this is about 30 times more noise than that example.
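The same arithmetic in a few lines of Python, using the 0.81 deg F SD from above and the quoted 1.1 deg F of warming (illustrative values only):

[code]
# Sketch: coefficient of variation of the anomaly change and its inverse,
# a rough signal-to-noise ratio.
sd_change = 0.81        # deg F, SD of a year-to-year difference (from above)
claimed_warming = 1.1   # deg F

cv = sd_change / claimed_warming * 100
print(round(cv))                   # about 74

signal_to_noise = 1 / cv
print(round(signal_to_noise, 3))   # about 0.014
[/code]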


So, if one uses this method to calculate the SD, one is faced with the unlikelihood that one can actually measure the global warming in any of the signals. And once again, if one uses 0.57 as the SD, one would find that the editing moved 32 of the yearly values by MORE than 3 SD.


I don't know, but my amateurish ways lead me to believe that if a data value was more than 3 SD away from the mean, it was a significant thing. Here, with this value, one would find that the editing moved more than 1/3 of the values by more than 3 SD.


Admittedly the SD was for St. Paul. So, let's use it on Minneapolis, which is really near to St. Paul. To choose a year at random, in 1964 the raw data says the yearly average was 45.68 but the edited data said it was 43.53, a move of 3.77 SD. So, to my amateurish way of thinking, I suspect that this isn't a good thing, because I was always told not to move things more than 3 SD. But what do I, the amateur, know?

Well, there is another approach we could use. We could lay out all the daily temperatures and simply run a mean and standard deviation on them. But if I mimic the daily temperatures with a function T = 60 + 15*sin(2*pi*n/365), and calculate the mean and standard deviation from just those numbers, as if they were attempts to measure the yearly temperature (which, as I noted, they are not), then we get SDs over 10 deg F, which is clearly too large because one hasn't taken out the seasonal effects, and the daily temperatures are NOT misguided measurements of an annual mean.
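A quick Python sketch of that third approach, with the same assumed seasonal curve standing in for the daily data:

[code]
# Sketch: treat simulated daily temperatures T(n) = 60 + 15*sin(2*pi*n/365)
# as if they were repeated measurements of the annual mean.
import math
import statistics

temps = [60 + 15 * math.sin(2 * math.pi * n / 365) for n in range(1, 366)]

print(round(statistics.mean(temps), 1))   # 60.0
print(round(statistics.stdev(temps), 1))  # about 10.6 deg F (roughly 15/sqrt(2))
[/code]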

The problem here is that there is only one measurement of each day, and they are not measuring the yearly average temperature; they are measuring the temperature on that day. When we average them, we have precisely one measurement of the year's annual average. We can't re-measure the year, so we have no ability to know how bad our measurements are by examining the data itself.
Now, those who claim to be experts think they can quantify the error on the data. I would love to hear just how.


I will discuss some other issues in another note.
 

grmorton

Senior Member
Everything down to the asterisk below was edited in to add this. T cited a sentence:

The fact that the non-HCN subset of Jones has the same trend as HCN suggests that the time of observation bias has been properly treated in the latter. (ibid, emphasis added)

This may not be true:

"The NCDC data show a regional decline in temperature by 0.1°C, whereas the USHCN data shows an increase of 0.4°C. The USHCN results are consistent with the Intergovernmental Panel on Climate Change (2001) and Karl et al. (1996) for New England, which strongly suggests that the region has warmed to an even larger extent than that documented by the New England Regional Assessment Group (2001), who used the NCDC climate divisional data to analyze statewide and regional trends from 1895-1999. Even at the climate divisional level, the USHCN pattern is more geographically cohesive in that no division has cooled over the period of record, and the regions of significant warming are all contiguous divisions in the southeastern portion of the study region (Figure 1). This seems much more logical than the NCDC data pattern where adjacent divisions have significant trends, but in opposing directions, e.g., MA-1 and MA-2." Barry D. Keim, et al., "Are there spurious temperature trends in the United States Climate Division database?" http://www.ccrc.sr.unh.edu/~cpw/papers/Keim_GRL2003.pdf


And if you understand the homogeneity correction which is used, it basically changes the trend to match the approved trend. That was in the article you cited, the Peterson article.

"The homogeneity adjustments applied to the stations with poor siting makes their trend very similar to the trend at the stations with good siting." Thomas C. Peterson, "Examination of Potential Biases in Air Temperature Caused by Poor Station Locations," American Meteorological Society, Aug. 2006, p. 1078, fig. 2

So having the same trend may be an artefact of the editing. Don't rest your entire argument upon this, T.
*


That is a mighty weak response to a curve that shows that the yearly corrections between the raw data and the final edited data grow each year.

I guess I will use this to complete my post from above.


Now there is another issue that my amateur ways draw me to: the issue of significant digits. The attached picture of a temperature report from 2006 from Bartow, Florida shows that all the data is in integer format. Thus, any reporting of temperature with a single decimal digit is questionable (given that there is lots of variation in the single-digits column), but granting that one can go one more digit, the maximum precision for reporting temperatures should be the tenth of a degree. For those who don't know, the rule is that in any mathematical operation the answer ends up having the same number of significant digits as the number with the fewest significant digits involved in the calculation. If you multiply 1.6789 x 2.3, the answer is 3.9, not 3.86147. So, any calculation of global temperature that claims to be more accurate than this is problematical.
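Here is a tiny Python sketch of that rounding rule; the round_sig helper is just an illustration written for this post, not anything from the cited sources:

[code]
# Sketch: rounding a product to the significant figures of its least precise factor.
from math import floor, log10

def round_sig(x, sig):
    """Round x to the given number of significant figures."""
    return round(x, sig - 1 - floor(log10(abs(x))))

product = 1.6789 * 2.3
print(round(product, 5))      # 3.86147
print(round_sig(product, 2))  # 3.9 -- two significant figures, set by 2.3
[/code]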

James Hansen, when correcting the average temperature of the world said this

"Sorry to send another e-mail so soon. No need to read further unless you are interested in temperature changes to a tenth of a degree over the U.S. and a thousandth of a degree over the world."
http://www.columbia.edu/~jeh1/mailings/20070810_LightUpstairs.pdf

But we don't measure temperature to a tenth or a thousandth of a degree. So this is like the engineer with a calculation showing that the oil is flowing at the rate of 50.82045876210934 barrels of oil per day, when we measure it to the half barrel.

I know everyone wants me to discuss the satellite data. I already have. I have no doubt it is rising if one uses a linear regression. The reason I am not impressed is that I know the periodicities of long-term temperature variations. The Vostok core measured the ocean temperature via the proxy deuterium. The hotter the oceans, the more deuterium can be found in the ice. The cooler they are, the more remains in the ocean and the less there will be in the ice. Below is a chart of CO2 (red) and deuterium (blue) in years BP, which starts at 1950 and goes back. Note the huge changes in ocean temperature while the CO2 values don't wiggle much. CO2 seems to be doing very little while the oceans do a lot. And the red bar is exactly the length of time we have been measuring satellite data. It can't detect long-term cycles, but we know they are there. So, any test for whether or not it is rising is simply not relevant to long-term behavior. So, yes, the F-test will show rising, but as a geologist I know the mistakes one can make taking a short-term view.

The mistake of taking 30 years and claiming that we are destined for a GW catastrophe, as some have claimed, is precisely the mistake novice investors make. They think if the market is rising, it will rise forever, and if it is going down it will go down forever. We know this isn't true for the market or the weather. There are feedbacks; the temperature will neither fall to -460 F nor rise to 8000 F, to use extremes, so the temperature must be cyclical. As long as we have an atmosphere and the present solar output, the global temperature is unlikely to fall below freezing. But similarly, since we have already seen CO2 contents of the atmosphere as high as 3000 ppm (10x today's level), it is unlikely that we will kill ourselves by having CO2 go to 1000.

There is another problem I see in the data, given my amateur status, I can ask dumb questions without fear of embarrassment.

Now, let's look at Electra. What is the local SD used as input to calculate the global SD? At Electra the average of the raw data is 61.1 deg F; after editing it is 60.2. The raw SD is 4.26; the SD of the edited data is 1.36. Now, when the climatologists calculate the final SD, what value do they use as input, 4.26 or 1.36? It would make a bit of a difference if the final claims for error were based upon the edited and sanitized data rather than on the actual SD derived from the raw data, which are, after all, the real observations. But then I, the amateur, might be wrong.

One other thing: at Electra, 5% of the measurements are beyond 3 SD. 1986 is 14 deg above the average, 1987 is 16 deg above (almost at 4 SD), 1988 is 15 deg above, 1991 is 14 deg above and 1993 is 15 deg above the mean. With an SD of 4.26, these five points are outside the 3rd standard deviation, or outside the 99% confidence interval. That is not good, at least my novice views tell me so.
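If anyone wants to run the same 3-SD check on a station of their choice, here is the sort of thing I mean (a hedged sketch; the Electra yearly values themselves are not reproduced here):

[code]
# Sketch: flag yearly averages lying more than k standard deviations from the mean.
import statistics

def flag_outliers(yearly_means, k=3):
    """Return the values more than k SDs from the mean of the series."""
    m = statistics.mean(yearly_means)
    sd = statistics.stdev(yearly_means)
    return [t for t in yearly_means if abs(t - m) > k * sd]
[/code]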

By editing the data, they fundamentally change its statistical properties, and they replace the bad values with something judged to be good, but how close is that value to the real average temperature of those years?
So, when I saw the note asking me to get into the statistics, I had to go away and calculate for a while. But if I have made an error, I have little doubt that my superiors will correct (and chide) me for it. I await the lashings from those who feel superior.

I do think it is time to go back to some gut feel stuff for those who don't enjoy nerd grenades.

Below is Roseburg, Oregon's temperature station. Note the air conditioners nearby, and it is on a hot roof. What foresight and planning these weather guys have. They get to ensure their continued employment by moving the raw data 400 standard deviations to its 'correct' value so that we humble peasants can bow and ask them how much money they need to save us from these nasty GW problems.

Also shown is a plot of two towns in Illinois, which are very close together. Note the size of the difference in yearly average temperature over such short distances. I know it doesn't bother Thaumaturgy, but it does bother me. But hey, I am just a stupid amateur. (For those who should know, the word amateur comes from the Latin amo. It means one who loves. I love science, which is why I have ranged all over the place over the years. I am proudly an amateur, even in geophysics, where I get paid for my amateur work.)
 

grmorton

Senior Member
Well, to be fair, because you have now thrown in the towel on statistics in an inherently statistical discussion, it really only leaves one on here who is serious about the discussion.

You danced at your 10-yard line in celebration of the touchdown, but dropped the ball on the 5. My, my, aren't you arrogant and self-confident?

I am merely the amateur. Let's see you defend data that needs to be moved more than 3 SD.

The other one is more like a freshman chem student trying to interpret noise and anecdotal data as meaningful signal.

Maybe, but I really didn't want this to devolve to nerd grenades. In your arrogance you didn't believe me.


Hey, I need to get to a library, dude. I said I'd look at it.

But you're sounding like one big ol' hypocrite yourself because "I see you didn't respond to..." THE STATISTICS.

You must give me some time to do some calculating and actually go to work. I know that you must not do anything at work, since you write posts for this place then. My bosses actually require me to work during office hours.

But remember, at all points in the game: I've gone out more on the line than you have. I don't respect someone who demands more of me than they are willing to give themselves. You brought up FFT, I followed down your path. We are talking statistics here (even your friends Balling and Idso deal with stats), so I suggest you repay me in kind.

Seems that you are celebrating a victory that might not come.


The discipline and drive to study things outside of my comfort zone comes from my experience going for and getting a PhD. (You can call me Dr.)

Herr Doktor, I know that with your piled higher and deeper you can't possibly be wrong or make any error whatsoever and us mere amateurs must bow before you at all times :bow:



Corrections are not necessarily random functions!

Is that why you ad nauseam keep reminding us of:

1. How long you lived in China
2. How long you've been studying this data
3. How rich your friend is who works with math
4. Your unrelated statistics paper rather than doing any stats on the current data
5. How ignorant I am on FFT
6. How many FFT you run in a day

Well, to my knowledge you still haven't corrected your error on FFT's low frequencies requiring a secular trend. Maybe I missed it.


Well, too bad you'll need actual statistics and an appreciation of the entire data set (including satellite, borehole, seasurface and ocean temps) to prove that point.

But don't let your little "freshman anecdotal data game" get sidetracked with how real scientists do data.

Well, this poor pup of a scientist can't match up to the likes of you. I have only been involved in finding a billion barrels of oil and publishing a few papers in a few topics. If you find an error in what I posted, I will be glad to be corrected. It is how I learn. It is why I debate. I learn with these debates. They make me work and it helps keep my mind sharp, and that keeps me as a very highly paid owner of my own consultancy. So, I thank you for the exercise of my mind here.

Now, please explain why the delta T between the edited and raw data is growing with time over the past 70 years? Merely claiming that no one is making things up, when the authors Balling and Idso said that it was adding a spurious long-term trend to the data, doesn't count as much of a response in my book. But then, I forget, I must bow :bow: to the Ph.D. who knows it all.

Like you're avoiding the statistics? Ha ha!



Actually, no, you are completely wrong. I like science and I like statistics and I was rather under the impression you were interested in a real discussion. I didn't realize when I started this you were unskilled in statistics beyond freshman level and you think that "anecdotal data" is somehow meaningful.

My bad. I will be unable to keep the conversation as simple and elementary school as you appear to want. It just won't happen, Mr. Morton. (Oh, and again, if you like you can call me Dr.)
:thumbsup:
 

grmorton

Senior Member
Hey Glenn, since I'm doing my homework on your Balling and Idso paper, do you care to do any homework on THIS paper?

Yes, I already answered that; see above. But I want to talk about geologic history and the CO2 levels.


It is time to talk about the past climate history of the earth.


I am going to show that the present warming is NOT unprecedented in earth history. Indeed, there have been several times when the earth was warmer than it is today, even within the relatively recent geological past. Modern hysteria about polar bears dying or global catastrophes (as predicted by James Hansen, a guy who benefits from the free funding which comes from global warming hysteria) is simply not rational when one looks at geologic history. Let us look first at Greenland over the past half million years.

Eric J. Steig and Alexander P. Wolfe said:
“de Vernal and Hillaire-Marcel analyzed a marine sediment core from the Ocean Drilling Program (ODP) site 646, raised from a depth of 3460 m. At this site, sediment has been deposited continuously since at least MIS 17 (7). The core contains a rich terrestrial pollen record, because the core is located on the south Greenland continental rise, which captures runoff from the adjacent land mass. Taxa currently extant in southern Greenland are well represented, including spores from mosses and club mosses and pollen from shrub birch and alder. During inter-glacials, the record is punctuated by marked increases in total pollen concentrations and additional contributions from boreal coniferous trees, namely spruce and pine, neither of which survives in Greenland today. The pollen assemblages differ tremendously between inter-glacials, with direct implications for the past development of ecosystems in south Greenland. For example, spruce pollen concentrations were three times as high during MIS 13 and 5e, and more than 20 times as high during MIS 11, as during the Holocene. On the other hand, MIS 9 and 7, have unspectacular conifer pollen signatures similar to those in the Holocene.”
see the first picture

For information, MIS 5e is 110,000 years ago, MIS 11 is around 400,000 years ago and MIS 13 is around 500,000 years ago. Why do we find spruce and pine pollen in sediments from Greenland run-off in these interglacials? Because at these times Greenland was WARMER and less ice covered than it is today!

Eric J. Steig and Alexander P. Wolfe said:
“Evidently, the Greenland ice sheet was smaller during MIS 5e and 13 than it is today, but ice probably still covered the location of the Dye 3 ice core. During MIS 11, deglaciation must have been much more extensive. The six-fold increase in spruce pollen abundance during MIS 11 relative to MIS 5e and 13 is unlikely to reflect minor differences in ice sheet size. Spruce is absent in Greenland today not because of the high latitude but because there is no land sufficiently removed from the hostile microclimate at the ice sheet margin. Thus, the Dye 3 area must have been completely deglaciated during MIS 11. For that to occur, most of southern Greenland must have been ice free.”

Dye 3 core site is at 64 degrees in a place that is currently covered with 2 kilometers of ice, yet, 400,000 years ago, it was ice free! Ice free and the polar bears didn’t die off.

Let us go even further back, 34 million years ago. Core data tells an even more amazing story of the Arctic region. I first learned of this in an oil industry publication—one published by seismic contractors, not oil companies. Jane Whaley writes:

“Reading the popular press, one could be forgiven for thinking that the 'greenhouse effect' is an exclusively modern phenomenon. As geoscientists know, however, major changes in temperature have occurred throughout the geological record and particularly in the Cenozoic, where they are now the subject of intense research.”

“When we can no longer use the CO2 bubbles locked into the ice, the graph can be extended further back in time by using 'proxies' such as alkenone carbon isotopes and boron isotopes. Although results from these proxies may differ in detail, they generally agree with those achieved through estimates of palaeotemperatures and climate modelling.”

"What is fascinating is that all of these techniques indicate that pC02 levels were even higher in the past," Jonathan explains. "Between the Oligocene and mid-Miocene, 11 to 35 million years ago, values averaged 600 ppm, get if we extend into the Late Eocene, we see levels possibly up to 2,000 ppm. This reflects a major decrease starting in the Eocene and coinciding with the development of widespread Antarctic glaciation in the earliest Oligocene. An Eocene-Oligocene boundary fall in pC02 is supported by climate models, which indicate that large-scale Antarctic glaciation cannot occur with pC02 values above -850 ppm. Perhaps we can use this to predict the I effect of future increases in pC02 on Antarctic deglaciation?"

“The story told by estimated pCO2 levels becomes even more interesting as we go further back in time to the end of the Paleocene, about 55 million years ago. "At this stage, levels of pCO2 in the atmosphere may have been as high as 3,500 ppm -- that's almost 10 times present day levels," says Jonathan. "Even the more conservative researchers, such as Mark Pagani at Yale, indicate that early Eocene pCO2 values would have been in the region of 2,000 ppm. This was a true greenhouse world, which was inherited from the Mesozoic, continuing into the Paleocene. Then near the beginning of the middle Eocene, about 50 million years ago, pCO2 levels unexpectedly and dramatically fell as low as 600 ppm. At the same time, palaeotemperature proxies indicate decreased temperatures, especially at higher latitudes, reflecting the initial shift from the Mesozoic-Early Eocene greenhouse world towards the modern icehouse climate. Why? We need to find out what happened to cause this dramatic fall in pCO2."

They describe ocean cores taken from the Arctic Ocean which show that a freshwater plant was growing in the Arctic about 50 million years ago.

"ACEX was the first Integrated Ocean Drilling Project (I0DP 302) expedition into the Arctic and recovered over 400m (1,400h) of cores, including 200m (700h) of Paleocene and Eocene deposits:' explains Jonathan. "In the section corresponding to the earliest Middle Eocene - near the point in time when we start to see a sudden shift in CO2 levels –[b ] we found more than eight metres of core composed almost completely of a plant known as Azolla, a floating fern sometime found in suburban ponds.[/b] But Azolla is a freshwater plant, so what is it doing in the middle of the Arctic Ocean? Did it grow there or was it transported to the depositional site?"

"Since the ACEX cores were drilled in 2004, I have been able to confirm the Azolla interval in more than 50 Arctic wells from northern Alaska, the Canadian Beaufort and the Chukchi Sea." Jonathan continues, "As in the ACEX cores, Azolla usually occurs as laminations, reflecting seasonal or longer cycles. This indicates that the plant grew in situ and was not transported to these areas. The 'Azolla interval; as it is termed, occurs within the same biostratigraphic zone in both the ACEX cores and the exploration wells. It is also represented in coeval well and IODP sections to the south, where it probably represents transported material. These provide confirmation that the event lasted about 800,000 years."
"As we have discussed, the Early Eocene Arctic Ocean Basin was largely enclosed, with elevated temperatures, evaporation and precipitation leading to increased runoff and the development of extensive surface freshwater layers," Jonathan explains. "Our model suggests that Azolla was an opportunistic plant, able to repeatedly colonize the freshwater layers which periodically spread across large areas of the Arctic Ocean.”

My emphasis

Fresh water? In the Arctic Ocean? That makes it a HUGE freshwater lake which would dwarf the largest freshwater lakes of the past. And it has been found in 50 Arctic boreholes. The plant attains maximum growth in warm waters with 20 hours of sunlight, doubling its biomass in 2-3 days! This is not your typical cold-weather plant. So what is it doing on the Arctic Ocean bottom?

Anil Ananthaswamy writes:

Anil Ananthaswamy said:
The waters of this mega-lake were a surprisingly warm 10°C, but that's nothing to the temperatures reached a few million years earlier during the hottest part of the Eocene, when the ocean was salty. According to another study of the core, the surface water 55 million years ago was around 18°C, peaking at an incredible 23°C - more than warm enough for a pleasant swim at the North Pole!”

The North Pole was 23 degrees C! And we think a few more degrees will be a catastrophe worthy of the expenditure of several million dollars to avoid. The religion of global warming never looks at geology. Like YECs, they merely read their own material and talk to themselves. They see anyone who disagrees with them as out of touch with reality. The two movements are really quite similar! The Nature article says it was 24 deg C.
Moran et al said:
A global increase in Apectodinium occurred during the Palaeocene/Eocene thermal maximum (PETM), the largest known climatic warming of the Cenozoic. In a companion paper, by TEX86 analysis, we show that even at extreme high latitudes in the Arctic Ocean, peak PETM sea surface temperatures soared to ~24°C.

Is there any other evidence to support this? Yes. Freshwater Asian turtles made it to North America, which would require a non-marine connection, i.e., a freshwater lake in the Arctic.

Anil Ananthaswamy said:
“Most recently, the team has found fossils of a family of turtles called Macrobaenidae on Axel Heiberg Island (the details have yet to be published). These turtles originally lived in Asia, but from the late Cretaceous onwards appeared in North America too. Because turtles are very sensitive to climate, the researchers think they could have survived the migration only if they moved along a route in the far north that was warm all year round. More significantly, these turtles -like the champsosaurs - were freshwater creatures. "They would have required a non-marine connection," says team member Donald Brinkman of the Royal Tyrrell Museum in Drumheller, Alberta, Canada. "If the Arctic was a big freshwater lake, that would have made it possible."”

There is also evidence to support this in the O18 record over the past 120 million years. Notice that the earth has been COOLING since the Cretaceous!

See picture 2, the O18 curve.

Part of the global warming hysteria is due to the concern that animals will go extinct as the Arctic warms. Most of those proclaiming this and arguing against it are evolutionists. Yet somehow they seem to suddenly become believers in the fixity of species, as if species can't evolve to changing conditions. (Some will say that the rate of change is too great, but, in point of fact, evolution is the history of species after species affecting the globe, causing some other species to go extinct. Angiosperms are sometimes said to have caused the demise of the dinosaurs.) Life will continue, but it will be different. That is the story of evolution and one that global warming hystericists should remember.

Life didn’t go extinct during the previous warmings and it won’t go extinct during this one. Nor will life be extinguished if we continue to put CO2 into the atmosphere. If only the global warming hystericals would look at geology rather than the last 30 years of temperature data, they would be calmed and their pulses would drop. And they would then cease hectoring the rest of us to pay for their silliness.

Nicola Scafetta and Bruce West said:
The causes of global warming—the increase of approximately 0.8±0.1 °C in the average global temperature near Earth’s surface since 1900—are not as apparent as some recent scientific publications and the popular media indicate. We contend that the changes in Earth’s average surface temperature are directly linked to two distinctly different aspects of the Sun’s dynamics: the short-term statistical fluctuations in the Sun’s irradiance and the longer-term solar cycles. This argument for directly linking the Sun’s dynamics to the response of Earth’s climate is based on our research and augments the interpretation of the causes of global warming presented in the United Nations 2007 Intergovernmental Panel on Climate Change (IPCC) report.”


Physics Today is not exactly equivalent to the Moon-Landing-is-fake Gazette. And this article above was published 3 months ago!
To believe what some want us to believe, that questioning global warming is equivalent to questioning the moon landing, is ridiculous. The following is from Geophysical Research Letters, a well-respected, peer-reviewed scientific publication.

According to the findings summarized in Table 1 the increase of solar activity during the last century, according to the original Lean et al.’s [1995] TSI proxy reconstruction, could have, on average, contributed approximately 45–50% of the 1900–2000 global warming, and 25–35% of the 1980–2000 global warming.
http://acrim.com/Reference Files/P... warming.pdf

Here is more from this NASA funded study:

GEOPHYSICAL RESEARCH LETTERS said:
“An ACRIM composite TSI time series using the Nimbus7/ERB results [Hoyt et al., 1992] to relate ACRIM1 and ACRIM2 demonstrates a secular upward trend of 0.05 percent-per-decade between consecutive solar activity minima.” Richard Willson and Alexander V. Mordvinov, “Secular total solar irradiance trend during solar cycles 21–23”
http://acrim.com/Reference Files/Se...nce trend during solar cycles 21–23.pdf
 

grmorton

Senior Member
And where did the other 65%-75% of the warming come from if not the Sun?

Some comes from albedo changes. The albedo is the percent of energy reflected by the earth. Part comes from CO2, but it is a much smaller amount than these other effects. Did you see the flat-lined CO2 from the Vostok core while the deuterium curve makes all sorts of excursions? I posted the picture last night. The oceans were making significant coolings and warmings without the CO2 moving much at all, so that shows what the sun can do without even taking into account the CO2.

Below is a chart of the same data, only over 12,000 years. One can see that the deuterium curve was much, much higher in the past, meaning the seas were much, much warmer in the Younger Dryas era 3,000 years ago than they were in the first part of the last century.


Now as to the albedo. I have posted that info before, but it has been lost in the ridiculous discussion of the Fourier charts and the post after post of ridicule towards me (posts that could have been better spent looking at the data).

Carl Sagan said:
"Simple climate models (31) suggest
that if the global albedo changes from its
value of 0.30 by 0.01, a surface temperature
change of - 2 K will result."


Bruce A. Wielicki said:
"...decrease of 0.006. These results stand in stark contrast to those of Palle et al., which show a large increase of 6 W m^-2, or an albedo increase of 0.017, as shown for comparison in Fig. 1."

Putting these two figures together, one gets an approximate warming of 2 × 0.6 = 1.2 deg C (the 0.006 drop is 0.6 of 0.01) possible just from an albedo change, and yet everyone is decrying CO2. Here is the thing: I doubt that the figures are completely accurate, but they are in the neighborhood. Even if one said that the albedo change was only half as effective at raising temperature, you would still have much of the warming due just to the albedo change.
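The arithmetic, spelled out in Python so the assumptions are explicit (roughly 2 K of surface response per 0.01 of albedo change, and the quoted drop of about 0.006; both figures are approximate):

[code]
# Sketch: rough warming implied by an albedo drop, using the two quoted figures.
response_per_albedo_unit = 2.0 / 0.01   # deg C per unit of albedo change (Sagan quote)
albedo_drop = 0.006                     # quoted observed decrease

estimated_warming = response_per_albedo_unit * albedo_drop
print(round(estimated_warming, 1))      # 1.2 deg C
[/code]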



But even Palle et al. note that the earth's albedo has dropped, meaning more heat absorption and thus an increase in temperature.
CO2 isn't the only cause of warming.

E. Palle, P. R. Goode, P. Montanes-Rodriguez, S. E. Koonin, "Changes in Earth's Reflectance over the Past Two Decades," Science 304 (2004), p. 1299:
"We correlate an overlapping period of earthshine measurements of Earth's reflectance (from 1999 through mid-2001) with satellite observations of global cloud properties to construct from the latter a proxy measure of Earth's global shortwave reflectance. This proxy shows a steady decrease in Earth's reflectance from 1984 to 2000, with a strong climatologically significant drop after 1995. From 2001 to 2003, only earthshine data are available, and they indicate a complete reversal of the decline. Understanding how the causes of these decadal changes are apportioned between natural variability, direct forcing, and feedbacks is fundamental to confidently assessing and predicting climate change"


What I think is happening is that all the blame is being laid at the feet of CO2 when it is only a small part of the problem. If you look at the sensitivities of temperature to CO2 used by the IPCC and published elsewhere, the fact is that the values are all over the place. The sensitivity is the temperature change that would be caused by a doubling of CO2. No one knows what that is. Here are the various estimates. Note the scatter, meaning lack of certainty.

"'Climate sensitivity'is a term used to characterize the response of the climate system to an imposed forcing, usually radiative. This term has come to have a variety of usages by the scientific community, but it is most commonly defined as the equilibrium global mean surface temperature change that occurs in response to a doubling of atmospheric carbon dioxide (CO2) concentration.. . .It is presently one of the largest sources of uncertainty in projections of long-term global climate change."
"Based on analysis of several leading climate models, the Intergovernmental Panel on Climate Change/Third Assessment Report (IPCC/TAR) estimated this climate sensitivity to be in the range of 1.7-4.2 oC (IPCC, 2001).

Sensitivity estimate    Source                       Date
5 C                     Arrhenius                    1896
2.3 C                   Manabe and Wetherald         1967
3 C                     Charney Committee report     1979
1.5-4.5 C               IPCC                         1990
1.7-4.2 C               IPCC                         2001
Steering Committee on Probabilistic Estimates of Climate Sensitivity, Estimating Climate Sensitivity (Washington: National Academies Press, 2003), p. 7

I don't know about you but this doesn't make me believe that they actually know what CO2 will do.
 

grmorton

Senior Member

I forgot to post the picture
 

Chalnoth

Senior Contributor
Some comes from albedo changes. The albedo is the percent of energy reflected by the earth. Part comes from CO2, but it is a much smaller amount than these other effects.
When you measure the albedo changes from space, you're including the CO2 as an effect.

But why do you think the CO2 effect is small? You haven't addressed that issue, near as I can tell. Current data puts the climate sensitivity of CO2 at somewhere between 1C and 3C.

Oh, and by the way, most of the greenhouse effect is actually coming from the positive feedback effect of water vapor. Yes, CO2's effect is small, but it's enough to upset the balance and set off warming (note: the above sensitivity includes the water vapor feedback, if I recall correctly).

Did you see the flat-lined CO2 from the Vostok core while the deuterium curve makes all sorts of excursions? I posted the picture last night.
So what? Nobody's debating that climate changes significantly due to other factors as well. We're just saying that this time the primary driver is CO2.

The oceans were making significant coolings and warmings without the CO2 moving much at all, so that shows what the sun can do without even taking into account the CO2.
Yes, this is the essence of the El Nino/La Nina southern oscillation. The problem is that oceans have been warming significantly as well, which means that they've been acting as sort of a buffer for future warming, and another El Nino event will dump a lot of that heat right back into our atmosphere (like it did in 1998).
 

thaumaturgy

Well-Known Member
One of the reasons I didn’t want to get into statistics is that it opens a can of worms and it drives the readers away.


So you're "doing it for the good of everyone else", yes, I'm sure it's not that you don't want to get into the statistics, but gosh ahmighty, it would scare everyone off if you were to flex your mighty brain muscles.

You are really too kind to everyone, Glenn.

There is a rule, if you have ever written a book. You cut your readership in half for each equation you publish. That has happened here.


Too bad that rule doesn't apply to science books, Glenn. You've written enough peer reviewed publications to know better than that?

I keep looking at the title of this thread and wondering: "Global Warming -- The Data and Serious Discussion".

Well, I guess we know how "serious" you wanted.

Now, I have been called an amateur on this. In some sense, this is right. I have never been paid as a statistician.


Nor I. But indeed, I am quite happy you confessed to being an amateur on this. I am as well. Doesn't mean I can't flex and step off into this area. It is very important to me.

You see, Glenn, this is what is needed to go out and get a PhD. A driving interest to expand from your base and explore the caves.

You obviously have done that in your career. It's part of what defines a scientist.

I am not a petroleum engineer, but I was in charge of reservoir modeling for 5 years. I am not a geologist and never took a geology course, but the Ph. D. geologists who reported to me when I was technology director had to do extra prep to answer my questions.


Oh jeez, is this onanism really necessary?

They couldn't tell that I had never had a geology course (and my 4,000-volume personal scientific library is about 20% geology). So, with that as an admission, let's see what an amateur can do when he, unlike those calling him an amateur, actually looks at the data rather than merely reading the NOAA brochures.


You got a geology question? Because I got my BS, MS and PhD in geology. If you need some help, by all means, ask anything. Granted, I'm good enough that I did my postdocs in regular chemistry and I've spent most of my professional life as research chemist, but I've done enough geology teaching and research that if you have any questions, by all means ask.

How far can one move away and say you are measuring the same thing?


I believe I already cited this in an earlier post, but I'll post it again:


The impact of random discontinuities on area-averaged values typically becomes smaller as the area or region becomes larger, and is negligible on hemispheric scales (Easterling et al., 1996). (SOURCE)


If you want a nice overview of the types of data treatment used in the current debate please note the reference (LINK)


Now, lets look at a yearly temperature record of a single station. You have 365 measurements of 365 different things—the daily temperature.


What you are getting at is: how can statistics be done on "daily temps"? Indeed, they probably cannot for a single station, since these appear to be single-point measurements, as you say.

The key is to look at broader trends: the gridded averages spoken of in the various papers on how this stuff is measured. This is a reasonable question you have, but it misses the overall trends in the data. If you have a century's worth of a single station's measurements, you can fit a line to see if there is a trend in how that site's temperature has drifted. There are countless examples of this in science. There are an almost infinite number of examples where people graph one-x/one-y type data.

The correlation and statistics are helped if you have multiple response measurements for each independent X, but a statistical measure of trend is still possible with a true "scatterplot".

That's what I showed with the linear trend on the earlier dataset. The whole dataset was a very large set of data with a broad range on the x-axis, but regardless, the statistics take that into account, and the F-statistic shows that, for the data that is there, I can be about 99.99% sure that the trend is not false.

And that's really what this is all about.

You made a point of showing how the data at the end was about the same level as at the beginning, but that is a flawed way to look at messy, noisy data. It does contain a cycle, as you noted and, more importantly, as the statistics showed. The overall data, fitted by a least-squares regression, resulted in not only a "visible" trend but a statistically significant non-zero trend. Meaning your "gut feel" that the end point was about the same level as the beginning was in error, with about 99.99% likelihood.
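If it helps, here is a minimal Python sketch of what such a trend test looks like; the series is synthetic (an assumed 0.01 deg/yr drift plus noise), not the HCN data, so only the mechanics carry over:

[code]
# Sketch: least-squares trend on a noisy annual series and a significance test
# of the slope (synthetic data, assumed parameters).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(1900, 2000)
temps = 50 + 0.01 * (years - 1900) + rng.normal(0, 0.5, size=years.size)

fit = stats.linregress(years, temps)
print(fit.slope)    # estimated trend in deg per year
print(fit.pvalue)   # small p-value -> slope significantly different from zero
[/code]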


Another way to look at this is to note that when I measure Jan 3,2008 temperature, I am NOT making a measurement of the annual average temperature. I am measuring Jan 3,2008 and I have no other chance to re-measure it to see what the error bar really is. Thus this data is a time series, not 365 independent measurements of the annual average temperature.


But take the temperature on 3 Jan of every year for a century, compare it to neighboring stations, grid the average, and see how it changes. Quality control is obviously done on these measurements, as shown in the earlier citation and the various other papers I've referenced in this discussion (i.e., no one is "making data up"; they are using very specific pre-arranged rules for controlling "accountable error" and letting the chips fall where they may on error they cannot account for).
 

thaumaturgy

Well-Known Member
You danced at your 10 yardline in celebration of the touchdown, but dropped the ball on the 5. My my aren't you arrogant and self confident?

Perhaps only as arrogant as you, Glenn. You see, I've not spent half my time here talking about my unrelated statistics papers to prove my stats chops. I've not spent half my time here reminding everyone how I lived in China, as if that makes some point about the U.S. HCN data set.

I've actually been the one to point out most if not all of my own errors in detail (cf the Kappa discussion etc.)

I am merely the amateur.

Suddenly the man who wrote the big "Global Markov Models for Eukaryotes Nucleotide Data" in the Journal of Statistical Planning and Inference is a mere "amateur"???

Color me shocked!

(I think I see where the path of "arrogance" leads now)


You must give me some time to do some calculating and actually going to work. I know that you must not do anything at work since you write posts for this place then. My bosses actually require me to work during office hours.

LOL! Glenn, this stuff is easy for me! I can pop off one of these posts in a ridiculously short time! I get into work at 5:30AM and stay 'til 4:00PM and I do this stuff on my lunch and in the evenings as well as when I'm walking around thinking about statistics!

I'm working on learning statistics robustly. As such, this thread has provided me with a huge opportunity to expand my stats background (the whole time-series discussion has helped me learn some stuff that has come in quite handy at work recently!). You call it a "nerd grenade" because you presumably don't have to do any science work. I am a scientist. That means that pretty much everything I do every minute of every day is somehow related to science. Even in my brief career as a graphic artist I didn't stray far from science.

Herr Doktor, I know that with your piled higher and deeper you can't possibly be wrong or make any error whatsoever and us mere amateurs must bow before you at all times :bow:

Funny that you should be such a hypocrite here too! You spend all your time trying to impress people on how much you've done, how smart you are, how degreed geologists have to prep to answer your insightful questions, how you've written a big article in a stats journal, etc etc etc etc. And now you decree the PhD "piled higher and deeper".

Funny. Hypocritical, but funny. You really are...well, can't say it here.

Let's just say I'm not impressed by someone who spends half his time telling me what he's capable of but not showing me. Remember our earlier discussion, I did a degree in Missouri. I like the "show me" attitude.

Further I don't much respect people who didn't bother to get a PhD calling it "piled higher and deeper". You wouldn't know what it takes I suppose. So for you, you can just slag it as you wish.

Maybe if you had the cojones for it...but well....

Well, to my knowledge you still haven't corrected your error on FFT's low frequencies requiring a secular trend. Maybe I missed it.

That's because I don't believe the low frequencies require a secular trend (the secular trend was indicated by the linear least-squares regression). I believe I stated repeatedly that a secular trend will show up as a low-frequency peak at or near zero (as quoted from the SAS Institute and verified by a PhD statistician), AND that a low-frequency peak can also be an offset or a very long-period cyclical function (review post #192, where I actually did the math to verify that point to myself).

If you want to review the number of times I've pointed this out please check:
(HERE, HERE, HERE, HERE)


That is simply the fact.

from the SAS Institute:

The data are displayed as a thick black line in the top left plot. The periodogram of the data is shown as dots in the top right panel. Note the exceptionally high periodogram values at low frequencies. This comes from the trend in the data. Because periodogram analysis explains everything in terms of waves, an upward trend shows up as a very long (low frequency) wave.(SOURCE)


(emphasis added)
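A minimal sketch of that point (the series is synthetic, not the station data): a linear trend dumps its power into the lowest frequencies of the periodogram, while a multi-year cycle shows up at its own frequency; detrending first removes the low-frequency "trend" power.

Code:
import numpy as np
from scipy.signal import periodogram

# Synthetic series: linear trend + 11-sample cycle + noise.
rng = np.random.default_rng(1)
t = np.arange(120)
x = 0.02 * t + 0.5 * np.sin(2 * np.pi * t / 11.0) + rng.normal(0.0, 0.3, t.size)

# Raw periodogram (no detrending): the trend loads onto the lowest nonzero frequencies.
f, pxx = periodogram(x, detrend=False)
print("largest non-DC peak at f =", f[np.argmax(pxx[1:]) + 1])

# After removing a linear trend, the remaining peak sits near the cycle, f ~ 1/11.
f2, pxx2 = periodogram(x, detrend='linear')
print("after detrending, largest peak at f =", f2[np.argmax(pxx2[1:]) + 1])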

Well, this poor pup of a scientist can't match up to the likes of you.

Damn straight skippy.

I have only been involved in finding a billion barrels of oil and publishing a few papers in a few topics.

Oh no. Go get the windex, you're going to have to clean off your monitor again.
 
Last edited:
Upvote 0

thaumaturgy

Well-Known Member
Nov 17, 2006
7,541
882
✟12,333.00
Faith
Atheist
Marital Status
Married
Now, let's look at a yearly temperature record of a single station. You have 365 measurements of 365 different things—the daily temperature. What one does is to measure each day's temperature and then average them for the yearly average. But not a single daily measurement is actually an estimation or measurement of the annual average temperature.

An “annual average” is nothing more than the average of the data from the entire year.

So, while I can calculate a mean, and all the statistical variables from these 365 measurements, they don't really fit the standard view of what the statistics of measurement are all about.

You are overly narrowing the definition of statistics.

It is taking 365 one-offs and averaging them to get another one-off.

The key then is to compare this with the 100+ measurements taken over a century. Each measurement shows the spread around each year’s temperature, hence by definition each year’s data point has a given statistical certitude.

One can’t remeasure the temperature of St Paul Minnesota for the entire year and then compare that to the first measurement.

That isn’t the point of an annual mean. Even if it were the exact same measurement over and over again, a mean doesn’t necessarily tell you that you ever measured the exact mean at any point.

So, how do we measure the error bar for a yearly temperature? Well, we could try to start this by taking the intrinsic error on the daily temperature. The max temperature and the minimum temperature are measured as integers. Below is a picture of a temperature record for 2006 from Florida. One can see the abysmal shape this record is in. But basically what happens is that every day the max and min are measured and written down. Since the numbers are integers, they are 'correct' to +/- 0.5 deg F. So, let's use that as the estimate of the error. If one uses the relevant equation, sigma = sqrt(0.5^2 + 0.5^2 + ... [365 terms]) / 365, this works out to a sigma of 0.026 deg F. Thus, theoretically we should know the annual average temperature to that SD.

This is a reasonable set of questions to ask; however, it overly narrows "statistics". There are numerous cases in science where scientists get "one bite at the apple": cases where only one measurement of a response to any given "X-value" is allowed. This does not render statistics unusable, but it does shift from a single-data-point estimate of error to MODEL error estimates.

The key then is to do statistics on the trend.

This is how this is done.

In a given data set of interest we want to see how response (y) and predictor (x) correlate:

Each day’s temperature range gives a “mean” for that day. It has a standard deviation for that day which isn’t very useful. Just the calculation you pointed out earlier: of the square root of the sum of the (min-mean)^2 and the (max-mean)^2 divided by the degrees of freedom (N-1).

Since there’s only two data points not much to be learned from this. BUT take this string of data collected every day for 365 days and you have 365 mean temperatures which will show seasonal cyclicity as expected, but indeed will have a mean temperature. It’s just the math. Nothing mysterious, nothing smoke-and-mirrors. It not only has a meaningful standard deviation (because you have 364 degrees of freedom) but now you’ve got an even more narrow estimate of the true mean for the year given by the 95% confidence interval calculation:

95% CI = t(0.05) * s / sqrt(N)

When N~365 that means you are dividing ~(2*standard deviation) by 19. That means, effectively you have dropped the “range” of your sureness of the mean by ~10X over just the standard deviation alone. That’s pretty big. That’s the key difference between just looking at the standard deviation and looking at the confidence interval on the mean. This is why the Central Limit Theorem is important.
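A minimal sketch of that calculation (the daily values are simulated, not a real station-year):

Code:
import numpy as np
from scipy import stats

# Simulated daily mean temperatures for one station-year (deg F): seasonal cycle + noise.
rng = np.random.default_rng(2)
days = np.arange(365)
daily = 52.0 + 20.0 * np.sin(2 * np.pi * days / 365.0) + rng.normal(0.0, 3.0, days.size)

n = daily.size
s = daily.std(ddof=1)                    # sample standard deviation, 364 d.f.
t_crit = stats.t.ppf(0.975, df=n - 1)    # two-sided 95% critical value, ~1.97
half_width = t_crit * s / np.sqrt(n)     # t * s / sqrt(N)

print(f"annual mean = {daily.mean():.2f} deg F, s = {s:.2f}, 95% CI = +/- {half_width:.2f}")
# The half-width is roughly s/10, i.e. about ten times tighter than the standard
# deviation alone - the Central Limit Theorem narrowing described above.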

The key now is to see how this annual mean changes over time, hence the collection of this “annual mean” over the course of a century. 100 Years’ worth of data. Each year is, as you say, a “one-off” of sorts, but each year has associated with it a standard deviation for that entire year.

So we can "vet" each year's data by doing a variance component analysis. This asks the question: which gives the bigger change in the data, the variance around the annual mean or the variance around the entire data set from 1888 to 1988? (This is standard for data quality control and Gauge R&R analyses.)
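A crude sketch of that comparison (simulated numbers; a proper Gauge R&R or nested ANOVA would be more careful about the decomposition):

Code:
import numpy as np

# Simulated record: 100 station-years, 365 daily means per year (deg F).
rng = np.random.default_rng(3)
year_level = 52.0 + rng.normal(0.0, 0.5, 100)                   # year-to-year drift
daily = year_level[:, None] + rng.normal(0.0, 3.0, (100, 365))  # day-to-day scatter

within_year_var = daily.var(axis=1, ddof=1).mean()   # average spread inside a year
between_year_var = daily.mean(axis=1).var(ddof=1)    # spread of the annual means

print(f"within-year variance  = {within_year_var:.2f}")
print(f"between-year variance = {between_year_var:.2f}")
# Comparing the two components is the quality-control question asked above:
# is a given year's behaviour unusual relative to the record as a whole?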

But, let’s get back to the data for each year as single points. We can run an F-test to see if there is reason to believe there is non-zero trend to the data.

The data is fitted to a least-squares line, which by definition is the best linear fit to that data. The residuals of each data point (the difference between the actual Y-value and the fitted-line Y-value) can be squared and summed to give what is sometimes called a RESIDUAL ERROR. Then you can take the mean of the entire century's worth of data and measure how far each year's data point falls off that mean and get a different ERROR MEASUREMENT (sometimes called a C Total Error, such as in JMP statistical software).

The difference between the RESIDUAL ERROR and the C TOTAL ERROR is called the MODEL ERROR.

From these two errors, "MEAN SQUARES" are calculated (the sum of squares of each error divided by its degrees of freedom). The F-test is the ratio of the Model Mean Square to the Residual Mean Square. This is compared against an F-statistic table, and a probability is generated for the chance of being wrong in rejecting the claim that the data is "flat" with no trend.

Essentially you are comparing how the data is spread around a FLAT LINE (the mean of the entire data set) versus how it is spread around a linear least-squares fit. The higher this F-test value, the more likely the data can be modeled using the LINE than merely being noise around a flat line (no trend).

This does not mean that this simple line is the BEST to model the data, just BETTER THAN NO TREND.
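Here is a minimal sketch of that bookkeeping (simulated annual means; the sums of squares and the F-ratio are assembled exactly as described above):

Code:
import numpy as np
from scipy import stats

# Simulated century of annual means (deg F) with a small upward trend.
rng = np.random.default_rng(4)
years = np.arange(1888, 1989)
temps = 52.0 + 0.012 * (years - 1888) + rng.normal(0.0, 0.5, years.size)

# Least-squares line and the three error terms named above.
slope, intercept = np.polyfit(years, temps, 1)
fitted = slope * years + intercept
ss_residual = np.sum((temps - fitted) ** 2)       # RESIDUAL ERROR
ss_total = np.sum((temps - temps.mean()) ** 2)    # C TOTAL ERROR
ss_model = ss_total - ss_residual                 # MODEL ERROR

# Mean squares (1 model d.f., N-2 residual d.f.) and the F-ratio.
n = years.size
f_ratio = (ss_model / 1) / (ss_residual / (n - 2))
p_value = stats.f.sf(f_ratio, 1, n - 2)
print(f"F = {f_ratio:.1f}, p = {p_value:.2e}")
# A large F (tiny p) means the sloped line beats the flat "no trend" mean,
# which is exactly the comparison described in the paragraphs above.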

It then becomes necessary to do the time-series analyses etc. In the case of the present data we have that example here:

[Figure: Temp_Data_Linear.JPG, the linear least-squares fit to the temperature data]


It has been shown several times over that there is a non-zero trend to this data. We can quantify the trend rather than rely on a "gut feeling": the error you made was looking at just the start and end, decreeing that they were kinda close to each other, and concluding the data set was just cyclical.

It is also why the cyclic component you identified needed to be assessed as well.

In essence your “intuition” that there was cyclicity just by “looking” at the data is borne out by the time-series stuff. Your “intuition” that there was no overall trend in the data was incorrect according to the standard statistics on the trend.

This analysis of the data carries only a 0.01% chance of being in error as to the likely existence of a non-zero trend.

That’s all this says. It does not say that this linear trend is absolutely perfect, just far more likely to be a reasonable assumption versus the "no-trend" hypothesis.


This is why statistics is crucial to this debate.


Of course it's important to look at the data! "Anscombe's Quartet" is an object lesson in going by the fit statistics to the exclusion of the data itself (LINK). There is a place for the gut to start, but there's also an important sense in which the statistics reveal patterns that the gut may miss.

In this data in this discussion:

See the little spike in the dataset up there in the mid-1990's? You could make an argument that that imposes an unnecessary "leveraging" effect (where data concentrated in one spot pulls the line of regression with undue weighting). So I eliminated those points and re-ran the correlation. Still found a significant non-zero trend. p<0.001.

Without this "spike" data the curve equation was Global = 0.012*TimeStamp - 24.
With the spike data the curve equation was Global = 0.013*TimeStamp - 25

(This was a really crude experiment to show the relative weight of that spike, but indeed there are proper, much stricter "outlier tests" that can be run on the data.)
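For anyone who wants to see the shape of that experiment, a sketch with made-up anomaly values (the mid-1990s "spike" is injected deliberately; drop it and refit):

Code:
import numpy as np
from scipy import stats

# Made-up global-anomaly series with a short spike in 1995-1997.
rng = np.random.default_rng(5)
years = np.arange(1979, 2009)
anom = -0.10 + 0.013 * (years - 1979) + rng.normal(0.0, 0.08, years.size)
anom[np.isin(years, [1995, 1996, 1997])] += 0.3

full = stats.linregress(years, anom)
keep = ~np.isin(years, [1995, 1996, 1997])            # drop the spike and refit
trimmed = stats.linregress(years[keep], anom[keep])

print(f"with spike:    slope = {full.slope:.4f}, p = {full.pvalue:.3g}")
print(f"without spike: slope = {trimmed.slope:.4f}, p = {trimmed.pvalue:.3g}")
# If the slope and its significance barely move, the cluster was not leveraging
# the fit; formal outlier/influence diagnostics are stricter than this check.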
 
Last edited:
Upvote 0

grmorton

Senior Member
Sep 19, 2004
1,241
83
75
Spring TX formerly Beijing, China
Visit site
✟24,283.00
Faith
Non-Denom
Marital Status
Married
When you measure the albedo changes from space, you're including the CO2 as an effect.

Since CO2 is a clear colorless gas, it doesn't cause an albedo at most of the frequencies the sun inputs to the earth (yes, one can quibble about minor frequency components). So, no, there is no 'co2' effect for the albedo. Albedo is defined as Reflected electromagnetic radiation/input radiation. What frequencies do you think I have overlooked in the CO2 reflection spectrum?

But why do you think the CO2 effect is small? You haven't addressed that issue, near as I can tell. Current data puts the climate sensitivity of CO2 at somewhere between 1C and 3C.

CO2 has a logarithmic effect. Its effect of the second doubling is less than the first.

Yes, this is the essence of the El Nino/La Nina southern oscillation. The problem is that oceans have been warming significantly as well, which means that they've been acting as sort of a buffer for future warming, and another El Nino event will dump a lot of that heat right back into our atmosphere (like it did in 1998).

You've got to be kidding. You didn't look at the scale of that chart. Those are centuries-long warmings and coolings and are NOT the ENSO. Below is the duration of La Nina.

La Nina
Previous Cold Phases
La Ninas occurred in 1904, 1908, 1910, 1916, 1924, 1928, 1938, 1950, 1955, 1964, 1970, 1973, 1975, 1988, 1995
http://www.publicaffairs.noaa.gov/lanina.html

La Nina duration
1 1950-51
2 1954-56
3 1964-65
4 1967-68
5 1970-72
6 1973-76
7 1984-85
8 1988-89
9 1995-96
10 1998-2000
11 2000-01
http://iri.columbia.edu/climate/ENSO/background/pastevent.html

These scales are nowhere near what is in the picture.
 
Upvote 0

grmorton

Senior Member
Sep 19, 2004
1,241
83
75
Spring TX formerly Beijing, China
Visit site
✟24,283.00
Faith
Non-Denom
Marital Status
Married

So you're "doing it for the good of everyone else", ...

You are really too kind to everyone, Glenn.


I see this isn't actually a discussion of the statistics. I guess you think a data set is just fine if it has 5% of its values beyond the 3 SD range. Hmmm. It lets me know that you may protest too much. Does anyone who knows statistics think a data set is fine if it has 5% of its values out beyond the 3 SD range? Apparently Thaumaturgy isn't bothered by the data at Electra, CA.





Too bad that rule doesn't apply to science books, Glenn. You've written enough peer reviewed publications to know better than that?
I keep looking at the title of this thread and wondering: "Global Warming -- The Data and Serious Discussion".

Well, I guess we know how "serious" you wanted.



Again, not a discussion of statistics. I guess you think it is OK for the 'corrections' to the data to be hundreds of SD (instrumental limits used as the determinant), as is the case at Susanville and Electra, CA.

I guess you would rather spend time insulting me for your childish ego than actually discussing what you have been badgering me about.

But indeed, I am quite happy you confessed to being an amateur on this. I am as well. Doesn't mean I can't flex and step off into this area. It is very important to me.
You see, Glenn, this is what is needed to go out and get a PhD. A driving interest to expand from your base and explore the caves.



More evasions of the statistics issues. After ranting on about how I didn't know anything you now avoid discussing the actual statistics of the data sets you loudly proclaimed as being important. And, unless you have been paid as a professional statistician, you too are as amateur as I.



You got a geology question? Because I got my BS, MS and PhD in geology. If you need some help, by all means, ask anything. Granted, I'm good enough that I did my postdocs in regular chemistry and I've spent most of my professional life as research chemist, but I've done enough geology teaching and research that if you have any questions, by all means ask.


Ah, so you too are an amateur statistician. Interesting Herr Doktor. You want to be called Doktor. People with degrees like yours called me boss. You too may call me boss.

What you are getting at is this: how can statistics be done on "daily temps"? Indeed, they probably cannot for a single station, since these appear to be single-point measurements, as you say.


Golly, maybe I did know something after all, amateur that I am. I am sure that pained you to admit it.



The key is to look at broader trends. The gridded averages spoken of in the various papers on how this stuff is measured. This is a reasonable question you have.


Bait and switch. You wanted a discussion of the statistics. I showed that the standard deviation, by any reasonable estimation of it, was such that the corrections to the data were moving the temperature by vastly more than 3 SD. Using the instrument error as the basis, we should know the annual average to .026 deg F. But the AVERAGE correction is over 2 degrees, which means they are moving the temperature around by about 80 SD. Yet you flee from this simple fact. If you move the observed data by 80 SD, you are making the data up. Sorry, there is little way to view it other than that. The output of the data does not statistically relate well to the input data.


While you can talk about gridded averages, it is the screwed-up, sanitized data, altered vastly more than 3 SD from the observed data, which is then put into those gridded averages. If you put crap into a gridded average, you get crap out. You may be a chemist, but I make maps every day--geologic maps with gridded averages. If I map the top of the Cretaceous and then move it by 2000 feet and tilt it, it doesn't matter that I call it the Top Cretaceous map; it ain't a map of the top of the Cretaceous. Get real, Herr Doktor. For your amusement, below is a time series for a grid of weather stations in Texas. It is from the raw data. Note that the regression line is flat. I guess the global warming must be 'corrected' into the data.


But it misses the overall trends in the data. If you have a century's worth of a single station's measurements you can fit a line to see if there is a trend in how that site's temperature has drifted.


Yes, and you can clearly see if that station has had a 10 deg F step-function jump in temperature. Such things don't happen in the real world, so your theorizing is just silliness and avoidance of the real issue. The instrumental SD is so small over a year's worth of measurements that to correct it like they do is absolutely absurd. 80 or more SD moves of the raw data? Clearly you don't know statistics as well as you claim you do, or you would be appalled, like everyone I mention this to is. Everyone, that is, who isn't hooked on believing the NOAA brochures that say 'Don't worry; be happy!'


There are countless examples of this in science. There are an almost infinite number of examples where people graph one-x/one-y type data.


Blather! This is simply a mechanism for avoiding an actual discussion of the standard deviations and how many of them the data is moved past when it is 'corrected'.



The correlation and statistics are helped if you have multiple response measurements for each independent X, but a statistical measure of trend is still possible with a true "scatterplot".

Nonresponsive to the issues at hand. I thought you wanted to discuss the statistics of the data. Now that I am, you aren't. Does anyone else notice this evasion going on here?




That's what I showed with the linear trend on the earlier dataset shown. The whole dataset was a very large set of data with a broad range on the x-axis, but regardless, the statistics takes that into account and calculates the F-statistic showing that for the data that is there I can be about 99.99% sure that the trend is not false.


Fantastic. But that doesn't address the fact that even if I use the St. Paul type of methodology, where I get the SD for each single day and derive a yearly SD of .57 deg F, I still have 33% of the single-station temperatures being corrected by more than 3 SD.


Speaking of that, if the average 'correction' to the observational data is 2.13 deg F, as at the class 1 station of Susanville, how can you say that we can detect a 1.1 deg F temperature change over the past century? What that would be saying is that the temperature has varied by 1.1 +/- 2.13 deg, which is statistically absurd. I guess you believe that that is statistically reasonable, Herr Doktor.


Sorry, I think your statistical knowledge is as flawed as the logic you use with Fourier.


And that's really what this is all about.


Of course the statistics show the trend that has been adjusted INTO the data. You clearly are not reading the article you pointed me to. The homogeneity adjustment adjusts the trend of the data. Here it is again, my statistically illiterate friend.



"The homogeneity adjustments applied to the stations with poor siting makes their trend very similar to the trend at the stations with good siting." THOMAS C. PETERSON, "EXAMINATION OF POTENTIAL BIASES IN AIR TEMPERATURE CAUSED BY POOR STATION LOCATIONS," American Meteorological Society, Aug. 2006, p. 1078, fig. 2.



Having an F-test find a trend that has been manufactured into the data is quite an easy trick, but it is quite meaningless.



You made a point of showing how the data at the end was about the same level as the beginning, but that is a flawed way to look at messy, noisy data. It does contain a cycle, as you noted and, more importantly, as the statistics showed.


Let's not change history now. You initially denied that it showed cyclicity. I could quote you again on that if you wish. Don't you recall this Herr Doktor?


From post 90:

Thaumaturgy said: "Ask your stats friend to explain Fisher's Kappa function. I would love to learn more about it. But from what I can tell, Fisher's Kappa indicates no such 'cyclicity' among the noise to a 99% level of assurity."

From post 129:

Thaumaturgy said: "I think I was mistaken about the Kappa function. It does show a statistical significance for cyclicity when it is low on the p-value. No problem. We see from the graph that, as Glenn has pointed out, there is, indeed, cyclicity. AND it has a multi-year period. The residuals bear this out."


The overall data, fitted by a least-squares regression, shows not only a "visible" trend but a statistically significant "non-zero" trend. Meaning your "gut feel" that the end point was about the same level as the beginning was in error, with about 99.99% confidence.


Changing history again. It wasn't a gut feel. It was a mathematical subtraction. Subtraction is a wonderful tool. In Dec 1978 the satellite tropospheric temperature anomaly was -.199. In May 2008, the satellite temperature anomaly was -.183. Let me teach you how subtraction works, since you can't seem to get the concept and wish to change history. -.183-(-.199) = .016 C. That is a tiny difference and the satellite temperature data was almost back to where it started 30 years ago. I guess your subtraction ability is as good as your statistics ability. Clearly you don't want to talk about how big the correction factors are compared to the claimed temperature warming and to the instrumentally determined standard deviation. Now that I finally discussed statistics, you seem to want to talk about anything other than standard deviations.



But take the temperature on Jan 3 of every year for a century, compare it to neighboring stations, grid the average, and see how it changes. Quality control is obviously done on these measurements, as shown in the earlier citation and the various other papers I've referenced in this discussion (i.e., no one is "making data up"; they are using very specific pre-arranged rules for controlling "accountable error" and letting the chips fall where they may on error they cannot account for).

That doesn't get one out of the problem of how large a correction is required. If you have to correct the data by 2+ degrees, then you can't claim to actually see a warming of 1.1 degree. Is your statistical ability so lousy that you don't see the stupidity of saying that the temperature has risen by 1.1 +/- 2+ degrees?

And you said I was an amateur. What an utter hoot.

If anyone else has the guts to get into this mud pit, I would be willing to stand corrected on anything I have said. Unlike Herr Doktor, I actually make mistakes and will fix them. So, does anyone want to say that a 1.1 +/- 2 deg F temperature change makes any sense?

T, I also noted that you avoided talking about the coefficient of variation and its relation to the signal-to-noise ratio. I would wonder why, but since it doesn't support your position, and you only talk about things that support your position, it is truly no wonder.

Your other posts are not very interesting and not worthy of responses. If you care to discuss statistics, start with the fact that the average temperature at Susanville California is corrected by 80+ standard deviations from the observational data.

I really am disappointed in you. I thought you would have the courage to actually discuss statistics. I guess we were all fooled by your bluster. Or you might be a statistical chicken who is too afraid to discuss the standard deviation of the data as observed and as corrected.
 
Upvote 0

grmorton

Senior Member
Sep 19, 2004
1,241
83
75
Spring TX formerly Beijing, China
Visit site
✟24,283.00
Faith
Non-Denom
Marital Status
Married
An “annual average” is nothing more than the average of the data from the entire year.



You are overly narrowing the definition of statistics.


Long before you and I debated, I discussed this issue with a statistician. He agrees that the daily temperature isn't a measurement of the annual average. Thus, there is a philosophical issue concerning how to calculate the error bars on the annual average. The best way to do it is the instrumental way--starting with each day having a +/-0.5 deg error. But, that leads to a very small SD which conflicts with the average of 2 deg F corrections made to the raw data. One can't claim that one is measuring the annual average temperature to .026 deg F, and then correct it by 2 degrees. That is inconsistent.

After you badgered me over and over, and insulted me, I really expected you to be more honest with the data than this.


The key then is to compare this with the 100+ measurements taken over a century. Each measurement shows the spread around each year’s temperature, hence by definition each year’s data point has a given statistical certitude.


What naivete! No measurement by mankind has a statistical certitude. I take that term to mean 100% chance of being correct. And you think you know statistics. You clearly don't have the foggiest clue about measuring things--something a physicist, which I am by training, learns about rather quickly. All measurements have error; there is no statistical certitude. What a laugh--after you badgered and insulted me, you say something as stupid as this?


That isn’t the point of an annual mean. Even if it were the exact same measurement over and over again, a mean doesn’t necessarily tell you that you ever measured the exact mean at any point.


Oh Lord. We can always measure the mean--average the data, for Pete's sake. What we can't know is that we have measured the TRUE value. With any sequence of measurements we can arrive at a mean, Herr Doktor. I am incredulous at the silliness of the terms you use after the arrogant head-banging you gave me over not wanting to get into the statistics.

This is a reasonable set of questions to ask; however, it overly narrows "statistics". There are numerous cases in science where scientists get "one bite at the apple": cases where only one measurement of a response to any given "X-value" is allowed. This does not render statistics unusable, but it does shift from a single-data-point estimate of error to MODEL error estimates.

And one must know the model before one can know the answer. We deal with this problem all the time in geophysics. Wanna know something? Multiple models will fit the same data set. It happens all the time in my business.

The key then is to do statistics on the trend.

Well, that is a grand idea, except for the lil' ol' fact that the homogeneity correction changes the trend of the individual stations. Thus they even ruin that idea.

Why you don't pay any attention is beyond me.

Again: "The homogeneity adjustments applied to the stations with poor siting make their trend very similar to the trend at the stations with good siting." THOMAS C. PETERSON, "EXAMINATION OF POTENTIAL BIASES IN AIR TEMPERATURE CAUSED BY POOR STATION LOCATIONS," American Meteorological Society, Aug. 2006, p. 1078, fig. 3.


In a given data set of interest we want to see how response (y) and predictor (x) correlate:
Each day’s temperature range gives a “mean” for that day. It has a standard deviation for that day which isn’t very useful. Just the calculation you pointed out earlier: of the square root of the sum of the (min-mean)^2 and the (max-mean)^2 divided by the degrees of freedom (N-1).

Since there’s only two data points not much to be learned from this. BUT take this string of data collected every day for 365 days and you have 365 mean temperatures which will show seasonal cyclicity as expected, but indeed will have a mean temperature. It’s just the math. Nothing mysterious, nothing smoke-and-mirrors. It not only has a meaningful standard deviation (because you have 364 degrees of freedom) but now you’ve got an even more narrow estimate of the true mean for the year given by the 95% confidence interval calculation:

95% CI = t(0.05) * s / sqrt(N)

When N~365 that means you are dividing ~(2*standard deviation) by 19. That means, effectively you have dropped the “range” of your sureness of the mean by ~10X over just the standard deviation alone. That’s pretty big. That’s the key difference between just looking at the standard deviation and looking at the confidence interval on the mean. This is why the Central Limit Theorem is important.


But it is also important that the corrections to the temperature are more than 2 degrees while the claimed global warming is half that. You seem to once again want to talk about everything that isn't important and avoid the things that are important. If the error in your data set is 2 deg, as evidenced by the amount of correction between the raw and edited data, then you will never be able to see the claimed warming.

The key now is to see how this annual mean changes over time, hence the collection of this “annual mean” over the course of a century. 100 Years’ worth of data. Each year is, as you say, a “one-off” of sorts, but each year has associated with it a standard deviation for that entire year.

I see you don't think instrumental error plays any role in the measurement process. We can collect any ol' bad data and then use it as if it is a statistical certainty. What a hoot!

But, let’s get back to the data for each year as single points. We can run an F-test to see if there is reason to believe there is non-zero trend to the data.


Are we talking the raw data or the edited data?

It then becomes necessary to do the time-series analyses etc. In the case of the present data we have that example here:


This 'example' is not of the actual land temperature data. You seem to have a remarkable ability to avoid the real issues. Should you have to edit your data by moving it more than 3 SD? Can you please answer that simple question?

It has been shown several times over that there is a non-zero trend to this data. We can quantify the trend rather than rely on a "gut feeling": the error you made was looking at just the start and end, decreeing that they were kinda close to each other, and concluding the data set was just cyclical.

Once again, an exercise in evasion. You won't talk about the 2+ deg correction to the raw data and the mathematically required tiny standard deviation (instrumentally determined, or daily SD determined) and how that affects the ability to see global warming. What a weasel you are.

You also haven't mentioned the coefficient of variation, which is important. If the Signal to noise ratio is too low, you will never see the signal of global warming. Stop weaseling and answer the real data. You are evading and avoiding.

In essence your “intuition” that there was cyclicity just by “looking” at the data is borne out by the time-series stuff. Your “intuition” that there was no overall trend in the data was incorrect according to the standard statistics on the trend.


Let's do this again. In Dec 1978 the satellite temp anomaly was -.199. In May 2008, the satellite temperature anomaly was -.183. That is only a .016 C rise over the 30 years. There were periods when the anomaly was higher, but it varied, and there were several times when the temp anomaly was lower than -.199. It was as low as -.486 in the 3rd quarter of 1984, for the whole quarter. In July, Aug, and Sep 1986 the anomaly was lower than -.199; it was in the area of -.258. In 1989, it was cooler. Such subtractions show that the data is cyclical. You just can't come to grips with that.

This analysis of the data carries only a 0.01% chance of being in error as to the likely existence of a non-zero trend.

I showed why I wasn't concerned. Rule out long term cycles, please.

This is why statistics is crucial to this debate.


Yes it is. I wish you would actually discuss some. How about starting with standard deviations and how many of them are required to 'correct' the raw data. Anyone else notice this evasion tactic?

I am very disappointed in your weak responses and evasions Herr Doktor.

Below is something for those who don't care about the evasion tactics of Herr Doktor. It is the University of Tucson's weather station, in the middle of a hot parking lot. And the meteorologists there don't actually fix the problem. It is stupid to try to measure temperature in the middle of a parking lot unless one wants it to be hot.
 
Upvote 0

Chalnoth

Senior Contributor
Aug 14, 2006
11,361
384
Italy
✟36,153.00
Faith
Atheist
Marital Status
Single
Since CO2 is a clear colorless gas, it doesn't cause an albedo at most of the frequencies the sun inputs to the earth (yes, one can quibble about minor frequency components). So, no, there is no 'co2' effect for the albedo. Albedo is defined as Reflected electromagnetic radiation/input radiation. What frequencies do you think I have overlooked in the CO2 reflection spectrum?
Right, and the effect of CO2 is to trap radiation, so its effect is going to be seen in the albedo, particularly at lower frequencies.

CO2 has a logarithmic effect. Its effect of the second doubling is less than the first.
Actually, if it were a logarithmic effect, the effect of the first doubling would be exactly the same as the second. After all:

ln(2x) = ln(2) + ln(x)
ln(4x) = 2ln(2) + ln(x)

Of course, the logarithmic nature is just going to be an approximation, but this is why climate sensitivity is quoted in terms of a doubling of the CO2 content. So, why do you think that the real CO2 sensitivity is lower than 1-3C?
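To put a number on it, here's a quick sketch using the commonly cited simplified expression dF = 5.35 * ln(C/C0) W/m^2 (the coefficient is only an approximation; the point is the logarithmic form, not the exact value):

Code:
import numpy as np

def forcing(c_ppm, c0_ppm=280.0, k=5.35):
    # Simplified logarithmic forcing approximation (coefficient k is approximate).
    return k * np.log(c_ppm / c0_ppm)

for c in (280.0, 560.0, 1120.0):
    print(f"CO2 = {c:6.0f} ppm -> dF = {forcing(c):5.2f} W/m^2")
# Each doubling adds the same increment, k * ln(2) ~ 3.7 W/m^2, because
# ln(2x) - ln(x) = ln(2): the second doubling equals the first.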

You've got to be kidding. You didn't look at the scale of that chart. Those are centuries-long warmings and coolings and are NOT the ENSO. Below is the duration of La Nina.
I didn't say that the ENSO was the be-all and end-all of warming. It's basically just a cause of 1-2 year spikes in the temperature that occur relatively periodically. But since the ENSO is a recirculation of heat between the ocean and air, a warmer ocean means a warmer El Nino and a less cool La Nina.
 
Upvote 0

grmorton

Senior Member
Sep 19, 2004
1,241
83
75
Spring TX formerly Beijing, China
Visit site
✟24,283.00
Faith
Non-Denom
Marital Status
Married
Right, and the effect of CO2 is to trap radiation, so its effect is going to be seen in the albedo, particularly at lower frequencies.

The earth takes high frequencies, absorbs them, and gives them out at low frequencies. One can eliminate the earth's contribution to the IR emissions by calculating the earth's temperature and subtracting that from the known solar spectrum. So, I would disagree with you there.


Actually, if it were a logarithmic effect, the effect of the first doubling would be exactly the same as the second. After all:

ln(2x) = ln(2) + ln(x)
ln(4x) = 2ln(2) + ln(x)

Don't know what I was thinking. I plead temporary insanity and distraction. You are right and I was absolutely wrong. I want it very clear that I will correct my mistakes immediately without weaseling or excuse.

Of course, the logarithmic nature is just going to be an approximation, but this is why climate sensitivity is quoted in terms of a doubling of the CO2 content. So, why do you think that the real CO2 sensitivity is lower than 1-3C?

My point is two-fold. I don't think that they know the amount of variability, because there are so many other factors involved. The earth will respond with NET sensitivity. An increase in clouds due to warming will reduce the NET sensitivity; a decrease in ice cover will, for a while, add to net sensitivity but reduce CO2's contribution to that NET sensitivity. Solar flux changes, which have increased over the past century, also play a role in the warming, and that also reduces the absolute role played by CO2.


I didn't say that the ENSO was the be-all and end-all of warming. It's basically just a cause of 1-2 year spikes in the temperature that occur relatively periodically. But since the ENSO is a recirculation of heat between the ocean and air, a warmer ocean means a warmer El Nino and a less cool La Nina.

As I understood you, you said that the cycles in my chart, which are centuries long, were ENSO cycles. They aren't. My point was that you were wrong in that assertion.

A question for you. Do you think it is ok to 'correct' the annual average temperature data by much more than 3 standard deviations? Do you think that if we have to correct a temperature stream from a GOOD station like Susanville CA (a class 1 station) by 2+ deg F, that we can really claim to have detected a global warming of 1.1 degree?

Here is the statistical reality (pay attention, Thaumaturgy): if the ending year of the time series has a +/- 2 deg uncertainty and the first year has a similar error, then merely subtracting the two numbers results in a 2.8 deg F SD for the difference. Thus, if the temperature went up by 1.1 deg, you have to put error bars of 2.8 deg on it. That is abysmally stupid to do.
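Spelled out, that arithmetic is just errors adding in quadrature (the +/- 2 deg figure is the example value above, nothing more):

Code:
import math

# Independent uncertainties on the first and last annual values (deg F).
sigma_start = 2.0
sigma_end = 2.0

# Uncertainty of the difference (end - start) adds in quadrature.
sigma_diff = math.sqrt(sigma_start**2 + sigma_end**2)
print(f"sigma of (end - start) = {sigma_diff:.1f} deg F")   # ~2.8
# A claimed change of 1.1 deg F would then sit well inside +/- 2.8 deg F,
# which is the point being argued here.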

If I am wrong, as I was abysmally wrong on the logarithmic crack, please explain where I am wrong on this statistically important issue. And don't forget to explain why it is OK for Electra to have 5% of its years beyond the 3 SD range (as calculated from the data series itself). If one were to use the instrumental or daily SD methods, the outliers are hundreds of times beyond the 3 SD range.

In short, do you consider this data good???
 
Last edited:
Upvote 0

Chalnoth

Senior Contributor
Aug 14, 2006
11,361
384
Italy
✟36,153.00
Faith
Atheist
Marital Status
Single
The earth takes high frequencies, absorbs them, and gives them out at low frequencies. One can eliminate the earth's contribution to the IR emissions by calculating the earth's temperature and subtracting that from the known solar spectrum. So, I would disagree with you there.
Right, which has an effect of increasing the albedo at low frequencies.

My point is two-fold. I don't think that they know the amount of variability, because there are so many other factors involved. The earth will respond with NET sensitivity. An increase in clouds due to warming will reduce the NET sensitivity; a decrease in ice cover will, for a while, add to net sensitivity but reduce CO2's contribution to that NET sensitivity. Solar flux changes, which have increased over the past century, also play a role in the warming, and that also reduces the absolute role played by CO2.
Right, so you just model all of these other effects as noise and use long time scales to compute the correlation between CO2 and temperature, combined with precise air column simulations to estimate the strength of the forcing. The fact that these are noisy measurements is why the range is so large, 1-3C.

As I understood you, you said that the cycles in my chart, which are centuries long, were ENSO cycles. They aren't. My point was that you were wrong in that assertion.
Ah, yes, well, perhaps I glossed over the plot too quickly. I was still thinking of the 30-year timescale plots, where the ENSO cycle is the only one visible. The fact remains, however, that climate scientists are not idiots, and have looked at whether or not these other, longer-time cycles (such as the Milankovitch cycle) could potentially be the cause of the current warming. The thing is, none of these long-timescale cycles are active right now (in the sense that none of their drivers are increasing right now).

A question for you. Do you think it is ok to 'correct' the annual average temperature data by much more than 3 standard deviations? Do you think that if we have to correct a temperature stream from a GOOD station like Susanville CA (a class 1 station) by 2+ deg F, that we can really claim to have detected a global warming of 1.1 degree?
Certainly, it all depends upon the how and why of the correction.

Here is the statistical reality (pay attention, Thaumaturgy): if the ending year of the time series has a +/- 2 deg uncertainty and the first year has a similar error, then merely subtracting the two numbers results in a 2.8 deg F SD for the difference. Thus, if the temperature went up by 1.1 deg, you have to put error bars of 2.8 deg on it. That is abysmally stupid to do.
Inflating the statistical error is a hell of a lot better than leaving a systematic error in the data. Basically, if you don't do the systematic error correction in some misguided attempt to keep the statistical error low, your error at the end will be even larger, but you won't have modeled it, which means your answer will be just plain wrong.

And don't forget to explain why it is OK for Electra to have 5% of its years beyond the 3 SD range (as calculated from the data series itself). If one were to use the instrumental or daily SD methods, the outliers are hundreds of times beyond the 3 SD range.

In short, do you consider this data good???
The reason, fundamentally, why I consider the data to be good has nothing whatsoever to do with nit picking on specific aspects of the station data. It has to do with how well it accords with other proxies of climate, such as glacier melt, sea ice melt, and the satellite temperature record.
 
Upvote 0

grmorton

Senior Member
Sep 19, 2004
1,241
83
75
Spring TX formerly Beijing, China
Visit site
✟24,283.00
Faith
Non-Denom
Marital Status
Married
Right, which has an effect of increasing the albedo at low frequencies.

I repeat, that can be corrected for and not counted as albedo.


Right, so you just model all of these other effects as noise and use long time scales to compute the correlation between CO2 and temperature, combined with precise air column simulations to estimate the strength of the forcing. The fact that these are noisy measurements is why the range is so large, 1-3C.

But, if one is to make public policy off of the data, one should first know what the data is.


Ah, yes, well, perhaps I glossed over the plot too quickly. I was still thinking of the 30-year timescale plots, where the ENSO cycle is the only one visible. The fact remains, however, that climate scientists are not idiots, and have looked at whether or not these other, longer-time cycles (such as the Milankovitch cycle) could potentially be the cause of the current warming. The thing is, none of these long-timescale cycles are active right now (in the sense that none of their drivers are increasing right now).

I think they are incompetent to allow thermometers to be placed next to air conditioners. They are the responsible party, and they fail their responsibility miserably. But, as I have pointed out several times, there is one long-term trend that is active. The sun has been more active than at any time in the past 8000 years. Doesn't that count?






Certainly, it all depends upon the how and why of the correction.

We will disagree there. I would have gotten an F in my physics lab for doing that. When one is making corrections, especially homogeneity-type corrections, where the trend of the data is tilted to some pre-determined, pre-approved, or pre-judged 'correct' value, one is changing both the mean and the standard deviation. In other words, one is finding reasons to make the data say what one thinks it ought to say. One thing about us humans: we are quite susceptible to fooling ourselves. It is why so many dry holes are drilled in the oil business. I have seen people make excuse after excuse about why the data doesn't say what it clearly says, and if they have a glib enough tongue, they can talk the investors into drilling an abysmal prospect.


Inflating the statistical error is a hell of a lot better than leaving a systematic error in the data. Basically, if you don't do the systematic error correction in some misguided attempt to keep the statistical error low, your error at the end will be even larger, but you won't have modeled it, which means your answer will be just plain wrong.

Agreed that it is better to remove the bias/systematic error. But when one does that, one must be aware that one is essentially saying, "I know the temperature was not that value, and I believe it to be this value." If one then uses the made-up value (which some will call a correction) in the determination of a final SD for a yearly temperature, it clearly underestimates the SD of the actual measurement. If I want to 'perform' an experiment to measure the speed of light, I can do it with a calculator alone. I know the setup and can calculate what all the parameters ought to be and come up with a beautiful 'measurement' of the speed of light. But, unfortunately, it would all be fantasy because all the observations were made up. Science isn't supposed to work that way. This is why it irritates the heck out of me for them to know that the station data is as bad as it is and do absolutely nothing about it.


The reason, fundamentally, why I consider the data to be good has nothing whatsoever to do with nit picking on specific aspects of the station data. It has to do with how well it accords with other proxies of climate, such as glacier melt, sea ice melt, and the satellite temperature record.

As to agreeing with long-term trends, you are aware, aren't you, that the earth has been warming because it is coming out of the Maunder Minimum, a warming that started long before autos and oil were being burned? The Alaskan glaciers have been melting for 200 years. This is the first year in 200 years that they have grown. Why were they melting before the massive outpouring of CO2? Could it be that the sun has a wee bit to do with it?

See the reconstruction of temperature from that time to the present. The first picture is of sunspot numbers.

The second picture is of the solar irradiance in the IPCC. Note the LONG-TERM RISE OF SOLAR OUTPUT LONG BEFORE THE MASSIVE INPUT OF CO2! Amazing that the increase in the rate of CO2 influx to the atmosphere started about the time that the sun increased its output. Do you know what happens to warmer ocean water? It degasses CO2.

Any way you slice it, there is about a 4 watt per square meter increase in solar irradiance over the past 300 years. That is 4 joules per second per square meter. And even the IPCC knows of this.
 
Upvote 0

grmorton

Senior Member
Sep 19, 2004
1,241
83
75
Spring TX formerly Beijing, China
Visit site
✟24,283.00
Faith
Non-Denom
Marital Status
Married
A bit more on the analysis of the good stations in California. Below is a chart of the average edited temperatures minus the raw temperature for each year. This is the US HCN data. Note that there is, in general, a greater downward editing for the early years than for the later years. The editing alone on these good stations will either help reduce a cooling trend, if one existed, or create an exaggerated heating if the raw trend was either flat or increasing. This is consistent with the bias that was shown by Balling and Idso in the chart I posted here. It seems to me that this kind of bias makes the measurement of global warming invalid.

I would also note that one cannot easily claim great accuracy in measuring the temperature when the standard deviation of these differences is 2.9 deg F. When you have to move a temperature by 3 degrees, for whatever reason, the accuracy of your measurement system can't be better than that!
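For anyone who wants to repeat this, the calculation is nothing more than the following sketch (the adjustment values are simulated stand-ins; the real ones come from the raw and edited US HCN files):

Code:
import numpy as np
from scipy import stats

# Simulated per-year adjustments (edited minus raw, deg F) for one station.
rng = np.random.default_rng(6)
years = np.arange(1900, 2000)
adjustment = -1.5 + 0.015 * (years - 1900) + rng.normal(0.0, 2.5, years.size)

print(f"mean adjustment   = {adjustment.mean():.2f} deg F")
print(f"SD of adjustments = {adjustment.std(ddof=1):.2f} deg F")
fit = stats.linregress(years, adjustment)
print(f"trend in adjustment = {fit.slope * 100:.2f} deg F per century, p = {fit.pvalue:.3g}")
# A positive trend in (edited - raw) means the editing by itself warms the series
# over time, which is the pattern this post describes; the SD of the adjustments
# is the analogue of the 2.9 deg F figure quoted above (here just simulated).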
 
Upvote 0