Readers may recall Pat Frank's excellent essay on uncertainty in the temperature record. He emailed me about this new essay he posted on the Air Vent, suggesting I cover it at WUWT; I regret it got lost in my firehose of daily email. Here it is now. – Anthony
Future Perfect
By Pat Frank
In my recent “New Science of Climate Change” post here on Jeff’s tAV, the cosine fits to differences among the various GISS surface air temperature anomaly data sets were intriguing. So, I decided to see what, if anything, cosines might tell us about the surface air temperature anomaly trends themselves. It turned out they have a lot to reveal.
As a qualifier, regular tAV readers know that I’ve published on the amazing neglect of the systematic instrumental error present in the surface air temperature record. It seems certain that surface air temperatures are so contaminated with systematic error – at least (+/-)0.5 C – that the global air temperature anomaly trends have no climatological meaning. I’ve done further work on this issue and, although the analysis is incomplete, so far it looks like the systematic instrumental error may be worse than we thought. 🙂 But that’s for another time.
Systematic error is funny business. In surface air temperatures it’s not necessarily a constant offset but a variable error. That means it not only biases the mean of a data set, but is also likely to have an asymmetric distribution in the data. Systematic error of that sort in a temperature series may enhance a time-wise trend or diminish it, or switch back and forth in some unpredictable way between these two effects. Since the systematic error arises from the effects of weather on the temperature sensors, it will vary continuously with the weather. The mean error bias will be different for every data set, and so will the distribution envelope of the systematic error.
For right now, though, I’d like to put all that aside and proceed with an analysis that accepts the air temperature context as found within the IPCC ballpark. That is, for the purposes of this analysis I’m assuming that the global average surface air temperature anomaly trends are real and meaningful.
I have the GISS and the CRU annual surface air temperature anomaly data sets out to 2010. In order to make the analyses comparable, I used the GISS start time of 1880. Figure 1 shows what happened when I fit these data with a combined cosine function plus a linear trend. Both data sets were well-fit.
The unfit residuals are shown below the main plots. A linear fit to the residuals tracked exactly along the zero line, to 1 part in ~10^5. This shows that both sets of anomaly data are very well represented by a cosine-like oscillation plus a rising linear trend. The linear parts of the fitted trends were: GISS, 0.057 C/decade and CRU, 0.058 C/decade.
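For readers who want to try the fit themselves, here is a minimal sketch of the cosine-plus-linear OLS approach. The series below is a synthetic stand-in (a 60-year, (+/-)0.1 C cosine on a 0.058 C/decade trend, plus noise), since the actual GISS/CRU numbers are not reproduced here; the functional form and the residual check follow the description above.

```python
import numpy as np
from scipy.optimize import curve_fit

def cos_plus_linear(t, amp, period, phase, slope, intercept):
    """A cosine oscillation superimposed on a linear trend."""
    return amp * np.cos(2 * np.pi * (t - phase) / period) + slope * t + intercept

# Synthetic stand-in for an anomaly series (the real GISS/CRU numbers are
# not reproduced here): a 60-year, +/-0.1 C oscillation on a 0.058 C/decade
# trend, plus noise.
rng = np.random.default_rng(0)
t = np.arange(1880, 2011) - 1880.0
anoms = cos_plus_linear(t, 0.1, 60.0, 20.0, 0.0058, -0.3)
anoms = anoms + rng.normal(0.0, 0.05, t.size)

# Initial guesses matter for a nonlinear fit, especially the period.
p0 = [0.1, 60.0, 10.0, 0.005, 0.0]
params, _ = curve_fit(cos_plus_linear, t, anoms, p0=p0)
amp, period, phase, slope, intercept = params
print(f"period ~ {period:.0f} yr, trend ~ {slope * 10:.3f} C/decade")

# The (data minus fit) residuals should carry essentially no net trend.
resid = anoms - cos_plus_linear(t, *params)
resid_slope = np.polyfit(t, resid, 1)[0]
```

Because the slope parameter absorbs any linear component, the residual slope of a converged fit sits essentially on the zero line, as described above.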
Figure 1. Upper: Trends for the annual surface air temperature anomalies, showing the OLS fits with a combined cosine function plus a linear trend. Lower: The (data minus fit) residual. The colored lines along the zero axis are linear fits to the respective residual. These show the unfit residuals have no net trend. Part a, GISS data; part b, CRU data.

Removing the oscillations from the global anomaly trends should leave only the linear parts of the trends. What does that look like? Figure 2 shows this: the linear trends remaining in the GISS and CRU anomaly data sets after the cosine is subtracted away. The pure subtracted cosines are displayed below each plot.
Each of the plots showing the linearized trends also includes two straight lines. One of them is the line from the cosine plus linear fits of Figure 1. The other straight line is a linear least squares fit to the linearized trends. The linear fits had slopes of: GISS, 0.058 C/decade and CRU, 0.058 C/decade, which may as well be identical to the line slopes from the fits in Figure 1.
Figure 1 and Figure 2 show that to a high degree of certainty, and apart from year-to-year temperature variability, the entire trend in global air temperatures since 1880 can be explained by a linear trend plus an oscillation.
Figure 3 shows that the GISS cosine and the CRU cosine are very similar – probably identical given the quality of the data. They show a period of about 60 years, and an intensity of about (+/-)0.1 C. These oscillations are clearly responsible for the visually arresting slope changes in the anomaly trends after 1915 and after 1975.
Figure 2. Upper: The linear part of the annual surface average air temperature anomaly trends, obtained by subtracting the fitted cosines from the entire trends. The two straight lines in each plot are: OLS fits to the linear trends and the linear parts of the fits shown in Figure 1. The two lines overlay. Lower: The subtracted cosine functions.

The surface air temperature data sets consist of land surface temperatures plus the SSTs. It seems reasonable that the oscillation represented by the cosine stems from a net heating-cooling cycle of the world ocean.
The major oceanic cycles include the PDO, the AMO, and the Indian Ocean oscillation. Joe D’Aleo has a nice summary of these here (pdf download).
The combined PDO+AMO is a rough oscillation and has a period of about 55 years, with a 20th century maximum near 1937 and a minimum near 1972 (D’Aleo Figure 11). The combined ocean cycle appears to be close to another maximum near 2002 (although the PDO has turned south). The period and phase of the PDO+AMO correspond very well with the fitted GISS and CRU cosines, and so it appears we’ve found a net world ocean thermal signature in the air temperature anomaly data sets.
In the “New Science” post we saw a weak oscillation appear in the GISS surface anomaly difference data after 1999, when the SSTs were added in. Prior and up to 1999, the GISS surface anomaly data included only the land surface temperatures.
So, I checked the GISS 1999 land surface anomaly data set to see whether it, too, could be represented by a cosine-like oscillation plus a linear trend. And so it could. The oscillation had a period of 63 years and an intensity of (+/-)0.1 C. The linear trend was 0.047 C/decade; pretty much the same oscillation, but a slower warming trend by 0.01 C/decade. So, it appears that the net world ocean thermal oscillation is teleconnected into the global land surface air temperatures.
But that’s not the analysis that interested me. Figure 2 appears to show that the entire 130 years between 1880 and 2010 has had a steady warming trend of about 0.058 C/decade. This seems to explain the almost rock-steady 20th century rise in sea level, doesn’t it.
The argument has always been that the climate of the first 40-50 years of the 20th century was unaffected by human-produced GHGs. After 1960 or so, certainly after 1975, the GHG effect kicked in, and the thermal trend of the global air temperatures began to show a human influence. So the story goes.
Isn’t that claim refuted if the late 20th century warmed at the same rate as the early 20th century? That seems to be the message of Figure 2.
But the analysis can be carried further. The early and late air temperature anomaly trends can be assessed separately, and then compared. That’s what was done for Figure 4, again using the GISS and CRU data sets. In each data set, I fit the anomalies separately over 1880-1940, and over 1960-2010. In the “New Science of Climate Change” post, I showed that these linear fits can be badly biased by the choice of starting points. The anomaly profile at 1960 is similar to the profile at 1880, and so these two starting points seem to impart no obvious bias. Visually, the slope of the anomaly temperatures after 1960 seems pretty steady, especially in the GISS data set.
Figure 4 shows the results of these separate fits, yielding the linear warming trend for the early and late parts of the last 130 years.
Figure 4: The Figure 2 linearized trends from the GISS and CRU surface air temperature anomalies showing separate OLS linear fits to the 1880-1940 and 1960-2010 sections.

The fit results of the early and later temperature anomaly trends are in Table 1.
Table 1: Decadal Warming Rates for the Early and Late Periods.
| Data Set | C/d (1880-1940) | C/d (1960-2010) | (late minus early) |
|----------|-----------------|-----------------|--------------------|
| GISS     | 0.056           | 0.087           | 0.031              |
| CRU      | 0.044           | 0.073           | 0.029              |
“C/d” is the slope of the fitted lines in Celsius per decade.
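The Table 1 comparison amounts to two separate OLS fits over different year ranges. A minimal sketch, with a synthetic series standing in for the Figure 2 linearized anomalies (the 1950 break year and the noise level are illustration choices, not the post's data):

```python
import numpy as np

# Synthetic stand-in for the Figure 2 linearized series (the real data are
# not reproduced here): 0.056 C/decade before a 1950 break, 0.087 after,
# plus noise.
rng = np.random.default_rng(1)
years = np.arange(1880, 2011)
temps = np.where(years < 1950,
                 0.0056 * (years - 1880),
                 0.0056 * 70 + 0.0087 * (years - 1950))
temps = temps + rng.normal(0.0, 0.03, years.size)

def decadal_rate(lo, hi):
    """OLS slope over the years [lo, hi], converted to C/decade."""
    m = (years >= lo) & (years <= hi)
    return 10 * np.polyfit(years[m], temps[m], 1)[0]

early = decadal_rate(1880, 1940)
late = decadal_rate(1960, 2010)
print(f"early {early:.3f}, late {late:.3f}, excess {late - early:.3f} C/decade")
```

The "excess" is the (late minus early) column of Table 1: the late-period slope minus the early-period slope.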
So there we have it. Both data sets show the later period warmed more quickly than the earlier period. Although the GISS and CRU rates differ by about 12%, the changes in rate (data column 3) are essentially identical.
If we accept the IPCC/AGW paradigm and grant the climatological purity of the early 20th century, then the natural recovery rate from the LIA averages about 0.05 C/decade. To proceed, we have to assume that the natural rate of 0.05 C/decade was fated to remain unchanged for the entire 130 years, through to 2010.
Assuming that, then the increased slope of 0.03 C/decade after 1960 is due to the malign influences from the unnatural and impure human-produced GHGs.
Granting all that, we now have a handle on the most climatologically elusive quantity of all: the climate sensitivity to GHGs.
I still have all the atmospheric forcings for CO2, methane, and nitrous oxide that I calculated for my Skeptic paper (http://www.skeptic.com/reading_room/a-climate-of-belief/). Together, these constitute the great bulk of new GHG forcing since 1880. Total chlorofluorocarbons add another 10% or so, but that’s not a large impact so they were ignored.
All we need do now is plot the progressive trend in recent GHG forcing against the balefully apparent human-caused 0.03 C/decade trend, all between the years 1960-2010, and the slope gives us the climate sensitivity in C/(W-m^-2). That plot is in Figure 5.
Figure 5. Blue line: the 1960-2010 excess warming, 0.03 C/decade, plotted against the net GHG forcing trend due to increasing CO2, CH4, and N2O. Red line: the OLS linear fit to the forcing-temperature curve (r^2 = 0.991). Inset: the same lines extended through to the year 2100.

There’s a surprise: the trend line shows a curved dependence. More on that later. The red line in Figure 5 is a linear fit to the blue line. It yielded a slope of 0.090 C/W-m^-2.
So there it is: every Watt per meter squared of additional GHG forcing, during the last 50 years, has increased the global average surface air temperature by 0.09 C.
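A hedged sketch of the sensitivity-as-slope idea follows. The forcing series here is my assumption, not the post's: a CO2-only logarithmic forcing F = 5.35 ln(C/C0), with concentrations rising roughly exponentially from an assumed 317 ppm (1960) to 390 ppm (2010). It will not reproduce the post's 0.090 figure, which used the full CO2+CH4+N2O series, but it shows the mechanics of regressing the temperature trend against forcing.

```python
import numpy as np

# Assumptions (mine, not the post's): CO2-only forcing F = 5.35*ln(C/C0),
# CO2 rising roughly exponentially from 317 ppm (1960) to 390 ppm (2010).
years = np.arange(1960, 2011)
co2 = 317.0 * (390.0 / 317.0) ** ((years - 1960) / 50.0)
forcing = 5.35 * np.log(co2 / co2[0])        # W/m^2, relative to 1960

excess_warming = 0.003 * (years - 1960)      # the 0.03 C/decade trend line

# Sensitivity = slope of temperature change against forcing, C per W/m^2.
sensitivity = np.polyfit(forcing, excess_warming, 1)[0]
print(f"sensitivity ~ {sensitivity:.3f} C per W/m^2")
```

One caveat: an exponentially rising concentration makes the logarithmic forcing exactly linear in time, so a stand-in like this is perfectly straight by construction and cannot exhibit curvature in the forcing-temperature plot.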
Spread the word: the Earth climate sensitivity is 0.090 C/W-m^-2.
The IPCC says that the increased forcing due to doubled CO2, the bug-bear of climate alarm, is about 3.8 W/m^2. The consequent increase in global average air temperature is mid-ranged at 3 Celsius. So, the IPCC officially says that Earth’s climate sensitivity is 0.79 C/W-m^-2. That’s 8.8x larger than what Earth says it is.
Our empirical sensitivity says doubled CO2 alone will cause an average air temperature rise of 0.34 C above any natural increase. This value is 4.4x to 13x smaller than the range projected by the IPCC.
The total increased forcing due to doubled CO2, plus projected increases in atmospheric methane and nitrous oxide, is 5 W/m^2. The linear model says this will lead to a projected average air temperature rise of 0.45 C. This is about the rise in temperature we’ve experienced since 1980. Is that scary, or what?
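The comparisons in the last few paragraphs reduce to a few lines of arithmetic. The 1.5-4.5 C spread used below for the "4.4x to 13x" figure is the commonly quoted IPCC range, an assumption on my part; the other numbers are from the text.

```python
# Values as given in the text.
empirical = 0.090        # C per W/m^2, the slope from Figure 5
f_2xco2 = 3.8            # W/m^2, forcing from doubled CO2
ipcc_mid = 3.0           # C, IPCC mid-range warming for doubled CO2

ipcc_sensitivity = ipcc_mid / f_2xco2      # ~0.79 C per W/m^2
ratio = ipcc_sensitivity / empirical       # ~8.8x larger

dT_2xco2 = empirical * f_2xco2             # ~0.34 C for doubled CO2 alone
dT_all_ghg = empirical * 5.0               # ~0.45 C for 5 W/m^2 total

# Against the commonly quoted 1.5-4.5 C IPCC range (my assumption):
low, high = 1.5 / dT_2xco2, 4.5 / dT_2xco2
print(f"IPCC {ipcc_sensitivity:.2f} C/(W/m^2), {ratio:.1f}x the empirical value")
print(f"doubled-CO2 warming {dT_2xco2:.2f} C; IPCC range is {low:.1f}x-{high:.0f}x larger")
```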
But back to the negative curvature of the sensitivity plot. The change in air temperature is supposed to be linear with forcing. But here we see that for 50 years average air temperature has been negatively curved with forcing. Something is happening. In proper AGW climatology fashion, I could suppose that the data are wrong because models are always right.
But in my own scientific practice (and the practice of everyone else I know), data are the measure of theory and not vice versa. Kevin, Michael, and Gavin may criticize me for that because climatology is different and unique and Ravetzian, but I’ll go with the primary standard of science anyway.
So, what does negative curvature mean? If it’s real, that is. It means that the sensitivity of climate to GHG forcing has been decreasing all the while the GHG forcing itself has been increasing.
If I didn’t know better, I’d say the data are telling us that something in the climate system is adjusting to the GHG forcing. It’s imposing a progressively negative feedback.
It couldn’t be the negative feedback of Roy Spencer’s clouds, could it?
The climate, in other words, is showing stability in the face of a perturbation. As the perturbation is increasing, the negative compensation by the climate is increasing as well.
Let’s suppose the last 50 years are an indication of how the climate system will respond to the next 100 years of a continued increase in GHG forcing.
The inset of Figure 5 shows how the climate might respond to a steadily increased GHG forcing right up to the year 2100. That’s up through a quadrupling of atmospheric CO2.
The red line indicates the projected increase in temperature if the 0.03 C/decade linear fit model was true. Alternatively, the blue line shows how global average air temperature might respond, if the empirical negative feedback response is true.
If the climate continues to respond as it has already done, by 2100 the increase in temperature will be fully 50% less than it would be if the linear response model was true. And the linear response model produces a much smaller temperature increase than the IPCC climate model, umm, model.
Semi-empirical linear model: 0.84 C warmer by 2100.
Fully empirical negative feedback model: 0.42 C warmer by 2100.
And that’s with 10 W/m^2 of additional GHG forcing and an atmospheric CO2 level of 1274 ppmv. By way of comparison, the IPCC A2 model assumed a year 2100 atmosphere with 1250 ppmv of CO2 and a global average air temperature increase of 3.6 C.
So let’s add that: Official IPCC A2 model: 3.6 C warmer by 2100.
The semi-empirical linear model alone, empirically grounded in 50 years of actual data, says the temperature will have increased only 0.23 of the IPCC’s A2 model prediction of 3.6 C.
And if we go with the empirical negative feedback inference provided by Earth, the year 2100 temperature increase will be 0.12 of the IPCC projection.
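The quoted fractions of the IPCC number reduce to simple ratios of the 2100 projections listed above:

```python
# The three year-2100 warming projections from the text, in Celsius.
linear_2100 = 0.84       # semi-empirical linear model
feedback_2100 = 0.42     # fully empirical negative-feedback model
ipcc_a2_2100 = 3.6       # IPCC A2 projection

print(round(linear_2100 / ipcc_a2_2100, 2))    # fraction of IPCC A2, linear
print(round(feedback_2100 / ipcc_a2_2100, 2))  # fraction of IPCC A2, feedback
```

The feedback figure is also exactly half the linear one, which is the "fully 50% less" claim a few paragraphs up.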
So, there’s a nice lesson for the IPCC and the AGW modelers, about GCM projections: they are contradicted by the data of Earth itself. Interestingly enough, Earth contradicted the same crew, big time, at the hands of Demetris Koutsoyiannis, too.
So, is all of this physically real? Let’s put it this way: it’s all empirically grounded in real temperature numbers. That, at least, makes this analysis far more physically real than any paleo-temperature reconstruction that attaches a temperature label to tree ring metrics or to principal components.
Clearly, though, since unknown amounts of systematic error are attached to global temperatures, we don’t know if any of this is physically real.
But we can say this to anyone who assigns physical reality to the global average surface air temperature record, or who insists that the anomaly record is climatologically meaningful: The surface air temperatures themselves say that Earth’s climate has a very low sensitivity to GHG forcing.
The major assumption used for this analysis, that the climate of the early part of the 20th century was free of human influence, is common throughout the AGW literature. The second assumption, that the natural underlying warming trend continued through the second half of the last 130 years, is also reasonable given the typical views expressed about a constant natural variability. The rest of the analysis automatically follows.
In the context of the IPCC’s very own ballpark, Earth itself is telling us there’s nothing to worry about in doubled, or even quadrupled, atmospheric CO2.

Dave, does the convention make derivative analysis easier?
There is not much point delving into AMO, PDO and solar relationships unless science can establish what is causing what.
AMO and PDO ‘drivers’ as I’ve identified from available data, are not perfectly synchronised either among themselves or with the solar activity.
http://www.vukcevic.talktalk.net/DAP.htm
However, one can’t escape the impression that since the 1900s (the period of reasonable data reliability) there is a loose relationship to the solar output – not perfect, but there is some commonality.
Since none of the data I used are TSI related, then one can say that the solar science is partially correct to say ‘it is not TSI’.
On the other hand solar scientists do not have monopoly on the Sun-Earth link knowledge.
Fitting certain “data” = numerology. LOLOL!
Mr. Frank
Sin/Cos correlation is usually described as numerology (my personal experience).
However in this case there is no need to use a Cos function; just superimpose the true North Magnetic Pole (until 1996/7 located in the Hudson Bay area) magnetic flux and you will get just as good a match.
See graph on the index page of my website:
http://www.vukcevic.talktalk.net/
@Leif Svalgaard June 2, 2011 at 11:37 am
You’re out of your league facing off against Kravtsov & Tsonis.
Paul Vaughan says:
June 2, 2011 at 1:32 pm
You’re out of your league facing off against Kravtsov & Tsonis
Numerology is numerology regardless who commits it. That is not to say that numerology cannot at times be useful.
http://www.squaw.com/uber-cam
Too bad they closed. There’s more snow now than there was a month ago. They close leaving all the snow to go to waste due to insurance reasons plus, the flatlanders watching their boob tubes probably think there is no snow and everything is going tropical up there. After all, da man on da news sed dat dem tornados is doo to gwobo warmin’, can’t be no snow up dhere.
Just like models.
James Sexton says:
“Bob, all that is fine, except, we will never, (I repeat for emphasis) never, come to an understanding of all of the forcings and specific weights to each forcing that goes into our climatology. It’s a pipe-dream and a fool’s errand to go chasing such. It would be much easier to state we don’t know and move on.”
I am sorry, but that attitude belongs in the Medieval Warm Period.
If we don’t try and understand what causes the climate to change, then we will never be able to answer the question of whether or not burning fossil fuels will be detrimental to the human race.
I think a big problem currently before us is determining what the current forcings are. I don’t think we have a good measure of some of the currently important ones, such as the amount of aerosols currently being emitted.
And without that, predictions on which way the climate will go in the future are fraught with peril.
BTW involved in a debate (between the name calling and pseudo science put downs) on the ‘Say Yes Australia’ FB page and someone put in a reference to
http://www.columbia.edu/~jeh1/mailings/2011/20110118_MilankovicPaper.pdf
Anyone care to comment and contrast? I’d like to further the meaningful debate. Thanks
It is incredibly easy to see sinusoids and linear trends in any set of data. Absent a reason or an explanation, one cannot just subtract something like that out of the temp series. And to the extent that there is a natural warming trend “before human influence” (1900-1950), you can’t just assert that the natural trend would have continued indefinitely the same way over the next 50 years, and then just take the difference to be the man-made contribution. That is a moronic and naive assumption, absent any further information or theory.
Real scientists who calculate climate sensitivity account for countless factors such as solar irradiance, volcanic activity, and orbital variations when they “subtract out” natural effects. They also account for many different human effects that push the climate in opposing directions (after all, the human impact is not monolithic).
For your readers who find this article to be an exciting example of “real science!”, I suggest that they make a genuine effort to learn about the science of climate sensitivity rather than latching on to the first “scientificy” article on a political blog that reinforces their prior beliefs. Real science requires genuine skepticism and commitment to rigor, not sloppy contrarianism.
Thanks for your impressive post, Dr. Frank.
I feel, however, the whole picture may be drastically changed if you use the UAH-MSU satellite temperature data, which already has a 32-year history, instead of the error-prone surface station data.
@Leif Svalgaard says:
June 2, 2011 at 1:49 pm: “…Numerology is numerology regardless who commits it. That is not to say that numerology cannot at times be useful….” I think that applies here. Pat Frank caught a fine trout. After gutting it, there was nothing left. Using the warmist’s bullshit data, he showed there’s nothing in that data.
@Ammonite says:
June 2, 2011 at 10:12 am: …Advice for general readers. Please check any post that describes a “central AGW tenet” or “major assumption” or “fundamental prediction” etc against the relevant IPCC chapter….” I’m sure that they state, somewhere in all their bloviation, that it is “likely” that pigs can fly. Likely, loosely defined, isn’t a scientific term, but then, the IPCC isn’t a scientific body.
@Doug Proctor says:
June 2, 2011 at 9:14 am: Good appeal to stop being the smartest guy in the classroom and actually pitch in and see if one can make a useful contribution to the discussion (in the case that the discussion isn’t widely viewed as idiotic).
@tallbloke (June 2, 2011 at 8:54 am )
Genuinely looking for some clarification on your essay…
You say the tail doesn’t wag the dog, but then you go on to emphasize the importance of the sunshade in the sky (clouds). Despite drawing attention to decadal patterns in specific humidity, you appear to restrict your conceptualization to anomalies, ignoring annual & semi-annual heat-pump cycles and the related intertwining of circulation geometry, oceans, & sunshade (which Stephen Wilde so strongly emphasizes in recent months). Reading your essay helped me understand vukcevic’s perspective (ocean-centric), but other perspectives also reveal a climate-dog chasing its tail (i.e. looping spatiotemporal causation chain that moots debate), so a line of productive inquiry is to step back far enough from the neverending loop to see what drives changes in the rate & amplitude of “tail chasing” […at scales supported by observation].
So my question is (& I sure hope it’s obvious by now that this is where I was heading)…
Do you disagree with Sidorenkov (2003 & 2005) on ice?
My understanding (A G Foster comment in a somewhat-recent WUWT thread) is that NASA’s R. Gross is now pursuing exactly this (…which, as I hope you know, matches -LOD, AMO, PMO, etc. in multidecadal phase).
@aaron chmielewski: The derivative of cos(x) is -sin(x), whereas the derivative of sin(x) is cos(x), so if a negative sign is a significant complication, the cosine convention actually makes a derivative analysis significantly more complicated. On the other hand, since the integral of cos(x) is sin(x), the convention would similarly significantly simplify an integral analysis.
Still, in the Figure 2 CRU curve, the period looks more like 70 years (pre-1940 to post-2000) than the 60-year period called out in Figure 3. Something is amiss.
Thank-you very much, Anthony, for picking up my essay at Jeff’s tAV. It was a happy surprise to find it here today.
Thanks also to everyone for your very thoughtful comments. I’m a little overwhelmed with all your responses, and with 88 of them so far to go through. I’m a little stuck for time just now, but hope to post replies this weekend.
I’d like to acknowledge Bob Tisdale’s comment, though. As he mentioned, we’ve discussed the PDO+AMO periodicity described in my analysis. Those interested are encouraged to read the exchanges at tAV, at the link above.
It’s clear though, that to get to a place that benefits us all, Bob, you’ll have to work out your differences directly with Joe D’Aleo and Marcia Wyatt, et al., and make the conclusion public.
Later . . . 🙂
Good luck getting this analysis published. I doubt even E & E would touch it with a bargepole. It’s so full of holes it makes the Titanic look watertight.
@Matt
“It is incredibly easy to see sinusoids and linear trends in any set of data.”
Is it? If a data set shows a straight line then it clearly can’t fit to a sinusoid. The point of this analysis is that the peaks and troughs shown in the data set are of a similar amplitude to the underlying straight-line trend, so it is reasonable to see what happens mathematically if you fit the data set to a sine.
“Absent a reason or an explanation, one cannot just subtract something like that out of the temp series.”
Fair enough, and a valid criticism of Pat Frank’s explanation here, since he really concentrates on a simple mathematical analysis. But actually there is a perfectly good theory underlying Pat Frank’s analysis.

There are two competing theories for observed variations in climate. One is held by the IPCC: that AGW, caused by a significant uptick in CO2 output after 1950, is causing a significant increase in the rate of warming after 1950. The other is held by the sceptic camp – since it denies AGW has a serious impact on climate, it is safe to assume that the sceptic camp believes the climate is fairly “stable”. Now we have to be careful here, since “stable” can mean many things in this case – it can mean “flatlining”, but it can also mean continuous oscillation and limit cycling (limit cycling is the dramatic and rapid change from one condition to another – the ice age/interglacial climate oscillation is an example of a limit cycle).

Oscillation tends to occur in systems where there is negative feedback and energy storage. In any system there can be multiple sources of feedback and energy storage, and hence multiple sources of oscillation, all at different frequencies and amplitudes, that could be superimposed on each other. Pat Frank identifies one possible source of energy storage as the ocean, since water has a very high specific heat capacity and therefore can store enormous amounts of energy while showing only a small rise in temperature. Given this scenario it is perfectly reasonable to look for sinusoidal oscillation within a climate data set and propose ocean heat storage as a possible cause of that oscillation. It is not pure “numerology” – it is fitting a function to a data set based on a hypothesis and looking to see how good the fit is.
The fit certainly looks as good as fitting a pure linear trend to the data, and the reasoning behind is certainly no worse than fitting a pure linear trend to the post 1950 data and then deriving a gradient from that trend and proclaiming it to be the climate sensitivity.
“And to the extent that there is a natural warming trend “before human influence” (1900-1950), you can’t just assert that the natural trend would have continued indefinitely the same way over the next 50 years, and then just take the difference to be the man-made contribution.”
No, you can’t. But doing the reverse and trying to ignore a trend that existed before 1950 is even worse. The fact is that the data in the range 1910 to 1943 have the same gradient as the data in the range 1970 to 2000 – how can we say that the 1970-2000 trend is purely due to AGW? We can’t – the data don’t allow us to do that. The dataset is completely inconclusive. CRU and GISS have been wasting their time. We cannot say that the gradient after 1950 is in any way exceptional and therefore related, even in part, to AGW. You could perfectly well derive from the dataset that AGW has no impact – in fact, since that is the default position, that would normally be the approach science would take, but proponents of AGW are claiming a special case here because they say the risks are very high (they neglect the risks of rolling back the great technological advances made in the West that are currently responsible for the survival of about 1 billion people).
“That is a moronic and naive assumption, absent any further information or theory.”
Pat Frank’s sine analysis is actually somewhat less moronic than fitting a straight line to the 1950 to 2000 data, deriving a gradient, and then proclaiming not only that the rise is due to AGW but also that it is likely to be accelerating. The dataset shows no such thing. Even a simple eyeballing of the data shows that there is not a pure linear trend, so subtracting a sine from the data to see where that leaves you is perfectly reasonable if you want to understand the real limits of the post-1950 gradient. Pat Frank is correct in that, at the very least, the gradient after 1950 is hardly any worse than the gradient before 1950 when AGW was minimal (since the CO2 in the atmosphere before 1950 is proposed to have been stable) – and this is before we even get into the relatively small contribution to the acceleration that might be related to a sine oscillation in the climate with a period of 60 years. Furthermore, the most recent data from 2000 to 2010 show deceleration, not acceleration, so they hardly support the theory that AGW is becoming the dominant contributor to temperature trends in the new century.
“Real scientists who calculate climate sensitivity account for countless factors such as solar irradiance, volcanic activity, and orbital variations when they “subtract out” natural effects.”
Shame you missed out cloud cover and wind direction. As an example, looking at the data for Lerwick in July 2002, we can see it was three Celsius higher than July 2001. What the hell happened there? An enormous cow fart? I doubt it. I doubt it had anything at all to do with AGW, and yet that one month was 3 Celsius higher than a year previous. Smooth that month out over a whole year and it would still contribute a 0.3 Celsius increase in temperature for the whole year! In fact the difference over the whole year was much bigger than that – because all but one month in 2002 was warmer than 2001 for Lerwick, and in each case by at least 1.2 Celsius. Why? Well, not because of CO2. Those thermometer readings were measuring a temperature anomaly, year to year, that had nothing to do with CO2. I’m guessing cloud cover. 2002 was sunny and 2001 was cloudy, would be my guess (and that fits my memory of 2002 as well). But maybe wind direction made a difference too. So when we look year to year at any location we can be 100% certain that differences in temperature have little to do with CO2 but are entirely due to cloud cover and wind direction. And yet, when we average out all these thermometric measurements of cloud cover and wind direction, we assume that what we are left with is the contribution due to CO2? That’s like measuring the speed of vehicles on a motorway/freeway over a 50-year period and coming to the conclusion that bicycles must be getting faster.
“For your readers who find this article to be an exciting example of “real science!”” – Well I don’t. There ain’t much science involved. The maths is OK however.
“I suggest that they make a genuine effort to learn about the science of climate sensitivity rather than latching on to the first “scientificy” article on a political blog that reinforces their prior beliefs.”
My prior belief was that AGW was real. My genuine effort to learn about the science of climate led me to ice core lies which led me to question what the “scientists” were saying. Since then I have seen a whole lot of other lies of which quite deliberate misinterpretation of thermometer data is one. I have come to the conclusion that climatology attracts a poor calibre of graduate – no big surprise there I guess since the bright sparks are in microbiology and nuclear physics.
The conclusion is this: Pat Frank’s analysis is no more and no less invalid than the IPCC analysis. No surprise there. Thermometers in Stevenson screens at ground level can be used to measure cloud cover anomalies but not atmospheric temperature anomalies. Human development likes clouds because clouds = rain = drinking water + irrigation. So that’s where the thermometers tend to be – in cloudy places. What you have above is two graphs showing how cloud cover has decreased slightly over the last 100 years. Worrying in itself perhaps, but it has no connection with AGW.
@Vince Whirlwind (June 2, 2011 at 10:24 pm)
Rather than blasting unsupported cheap shots from the safe cover of the periphery, please step right out into the open, volunteering to the community your alternative to Pat Frank’s approach.
Here’s a critique of this post: http://tamino.wordpress.com/2011/06/02/frankly-not/
REPLY: Heh, he’s got what he thinks is a clever label, “mathurbation”, which kills any rebuttal integrity right there. The faux Tamino, as self-appointed time-series policeman, would complain about a straight line with two data points if it appeared here, so it’s just the usual MO for him. I’ll leave it up to Pat Frank to respond if he wishes; my advice would be to provide an updated post here rather than there, because as we all know and has been demonstrated repeatedly, Grant Foster can’t tolerate any dissenting analysis/comments there.
– Anthony
Ryan: “that’s where the thermometers tend to be – in cloudy places.”
What? You lost me there.
Ryan: “Is it? If a data set shows a straight line then it clearly can’t fit to a sinusoid.”
It most certainly can fit, if you choose a sinusoidal period on the order of 4+ times the length of the data. You fit a sinusoid by choosing a frequency w, calculating sin(w*t) and cos(w*t), and then doing the same old ordinary least squares process to fit the model Y(t) = b0 + b1*cos(w*t) + b2*sin(w*t). The amplitude of the sinusoid is then sqrt(b1^2 + b2^2), and the phase of the equivalent cosine is atan2(b2, b1). Some harmonic analysis codes even use the infinitely slow frequency of zero to model a constant intercept term, so a sinusoid can even model a constant.
Including the intercept term, the sinusoid/cosine model does have 1+3=4 parameters (intercept, frequency, and two coefficients) compared to a linear model’s 1+1=2, but clearly it can fit the data at least as well as a straight line.
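The fitting recipe above can be sketched in a few lines of Python (my own illustration, not from the thread; the data, period choice, and variable names are invented for the demo). Using a slow sinusoid whose period is ten times the record length, the least-squares fit reproduces a perfectly straight line almost exactly:

```python
import numpy as np

# Toy data: a perfectly straight line over t = 0..9 (ten "years").
t = np.arange(10, dtype=float)
y = 2.0 + 0.5 * t

# Choose a slow sinusoid: period ten times the record length.
period = 10 * len(t)
w = 2 * np.pi / period

# Ordinary least squares on the model y = b0 + b1*cos(wt) + b2*sin(wt).
X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]

amplitude = np.hypot(b1, b2)   # sqrt(b1^2 + b2^2)
phase = np.arctan2(b2, b1)     # phase of the equivalent cosine

residual = y - X @ np.array([b0, b1, b2])
print(f"amplitude = {amplitude:.2f}, RMS residual = {residual.std():.5f}")
```

The RMS residual comes out tiny relative to the spread of the data, i.e. the slow sinusoid is statistically indistinguishable from a straight line over a record much shorter than its period.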
Frank’s methodology might be good math, but whether or not it is good stats would depend on a residuals-and-validation analysis, which in the above seems to be limited to visual inspection with repeated assertions of “clearly.”
“The anomaly profile at 1960 is similar to the profile at 1880, and so these two starting points seem to impart no obvious bias.”
Yes, but since you’re looking at a cosine function to normalize your data, you really should pick similar points and durations along your curve. The 1880 point is near the top of the curve, and has a duration of 60 years (to 1940).
However, your 1960 start is less than midway up the curve, and since your duration (to 2010) is 50 years, shorter than the 60-year period of your cosine function, the start and end points bias the results. In this case, the end point necessarily sits artificially higher on the curve, resulting in a greater slope.
The upshot is that, while you showed a minor sensitivity for CO2, the unbiased 1960-2010 slope actually should show an even lower sensitivity.
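The window-placement effect is easy to demonstrate with a toy example (my own construction, not Frank’s fitted parameters; I assume a 60-year-period cosine peaking near 1880, as described above, with no underlying trend at all):

```python
import numpy as np

# A pure 60-year-period cosine with NO underlying trend, peaking at 1880.
years = np.arange(1850, 2011, dtype=float)
cycle = np.cos(2 * np.pi * (years - 1880) / 60.0)

def ols_slope(t0, t1):
    """Least-squares slope of the cycle over the window [t0, t1]."""
    mask = (years >= t0) & (years <= t1)
    return np.polyfit(years[mask], cycle[mask], 1)[0]

# A full-period window (1880-1940) centered on the curve returns no slope;
# a 50-year window starting partway up the curve (1960-2010) does not.
print(f"1880-1940 slope: {ols_slope(1880, 1940):+.5f} per year")
print(f"1960-2010 slope: {ols_slope(1960, 2010):+.5f} per year")
```

Even with zero trend in the data, the 50-year window starting partway up the curve returns a distinctly positive least-squares slope, while the full-period 1880–1940 window returns essentially none.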
Otherwise, nice job, especially in using assumptions which give conservative results.
Look on the visible satellite on the side bar of this Blog:
DISCUSSION…AS OF 9:30 AM PDT FRIDAY…MID AND HIGH CLOUDS ARE STREAMING OVER THE DISTRICT IN ADVANCE OF THE APPROACHING LATE- SPRING STORM. THE UPPER LOW CENTER…CURRENTLY LOCATED NEAR 40N/130W…IS DROPPING SOUTHWARD OVER THE COASTAL WATERS…AND IS DUE TO REMAIN OFF THE COAST UNTIL LATE SUNDAY WHEN IT IS PROGGED TO FINALLY SWING EASTWARD OVER CENTRAL CALIFORNIA. PLENTY OF MOISTURE HAS BEEN ENTRAINED INTO THIS SYSTEM…AND THE WAY THE SYSTEM WILL INTERACT WITH THE COAST IN TERMS OF OROGRAPHIC ENHANCEMENTS…THIS LOOKS LIKE A POTENTIALLY RECORD BREAKING EVENT FOR OUR AREA FOR EARLY JUNE.
LATEST AMSU PRECIPITABLE WATER ESTIMATES GIVE WELL OVER AN INCH OF RAIN WRAPPED UP IN THIS SYSTEM. MODELS CONTINUE PREVIOUS TRENDS OF BRINGING LIGHT RAIN TO THE NORTH BAY TODAY…AND SPREADING SOUTH THROUGH THE GREATER SF BAY BY EVENING…THEN REACHING THE MONTEREY BAY AREA BEFORE MIDNIGHT. HEAVIEST RAIN IS EXPECTED OVERNIGHT TONIGHT INTO SATURDAY MORNING. BUT AS THE UPPER LOW IS FORECAST TO REMAIN WOBBLING OFF THE COAST THROUGH SUNDAY…SHOWER CHANCES WILL PERSIST THROUGH THE WEEKEND.
CONFERRING WITH THE CALIFORNIA/NEVADA RIVER FORECAST CENTER ON QPFS…2-5 INCHES STORM TOTAL ARE POSSIBLE ACROSS THE WETTEST AREAS INCLUDING NORTH BAY HILLS…SANTA CRUZ MOUNTAINS…AND THE SANTA LUCIAS. INLAND LOWER AREAS COULD GET UPWARDS OF 1-2 INCHES TOTAL. ALTHOUGH THE BASINS CAN HANDLE THIS AMOUNT OF RAINFALL SPREAD OUT OVER TWO DAYS…THESE ARE STILL BIG NUMBERS GIVEN WHERE WE ARE IN THE CALENDAR. THUS…SOME RECORD RAINFALL AMOUNTS ARE HIGHLY LIKELY FOR JUNE.
GIVEN THE PROXIMITY OF THE COLD UPPER LOW…THUNDERSTORMS ARE ALSO A POSSIBILITY…AND WILL ADD A SLIGHT CHANCE TO THE AFTERNOON FORECAST PACKAGE.
SHOWERS TO END LATE SUNDAY AS THE UPPER LOW FINALLY EJECTS TO THE EAST. THE REST OF THE FORECAST PERIOD IS EXPECTED TO CONTINUE COOL AS A LONG-WAVE UPPER TROUGH REMAINS OVER THE WEST COAST. NOT RULING OUT FUTURE SHOWER CHANCES AS WELL…GIVEN THE PRESENCE OF THIS TROUGH.
=================================
Thank goodness this system has a very cold core. Otherwise, we would face a rather cataclysmic situation given the massive snow pack in the high country.
Now for a quick primer regarding the Pacific/Hawaiian High. This feature, one of the famous semi-permanent subtropical (horse latitudes) highs, is normally well up into the mid-latitudes by this time of year. But not this year. It is stuck in the tropics.
Consider this. What is described here, given the relative extents and masses of the Pacific and Atlantic Oceans, is essentially a low frequency input signal being applied to the global climate circuit. Draw your own conclusions.
@Ryan
I appreciate the thoughtful response.
“But doing the reverse and trying to ignore a trend that existed before 1950 is even worse. The fact is that the data in the range 1910 to 1943 has the same gradient as the data in the range 1970 to 2000 – how can we say that the trend 1970 – 2000 is purely due to AGW?”
Nobody is ignoring the trend before 1950 and no one is saying that the trend from 1970 to 2000 is purely AGW. Read the 4th assessment IPCC report.
The problem is: the climate system is driven by the interplay of multiple natural and multiple human forcings. In order to separate human and natural forcings, you need to meticulously account for these effects. You cannot just take the difference between a slope before and after some arbitrary year. That is nonsense.
“My prior belief was that AGW was real. My genuine effort to learn about the science of climate led me to ice core lies which led me to question what the “scientists” were saying. ”
I have the opposite story. I grew up an ardent “skeptic”. In grad school, I met some real climate scientists. At their encouragement, I started reading the literature, and I was shocked to discover that the work is very thorough. I was also surprised at how open the community was about its uncertainties, contrary to how I was raised. I am not a climate scientist and do not purport to be an expert. However, as an experimental particle physicist I hope I can claim to see the difference between mature, rigorous scholarship and sloppy hand-waving. This article is sloppy hand-waving.
“Pat Frank’s sine analysis is actually somewhat less moronic than fitting a straight line to the 1950 to 2000 data, deriving a gradient and then proclaiming not only that the rise is due to AGW but also that it is likely to be accelerating. The dataset shows no such thing. Even a simple eyeballing of the data shows that there is not a pure linear trend, so subtracting a sine from the data to see where that leaves you is perfectly reasonable if you want to understand the real limits of the post 1950 gradient.”
Again, read the attribution (fingerprint) analyses. No one is following the procedure you have described; you are invoking a straw man for what climate science is saying about temperature trends and human impact. First, aerosols have a cooling effect that obscured the full impact of greenhouse gases for much of the 60s and 70s (pre-Clean Air Act). Second, most of the known natural climate forcing mechanisms plateaued and even reversed over the last 50 years of the 20th century. Given this change in natural forcings, it is certainly wrong to subtract the trend of the first 50 years from the trend of the second 50 years. It also suggests that the observed warming over much of the last 50 years is building on what otherwise would probably have been a cooling, absent human impact. One needs to understand the magnitude and direction of this natural trend before one can begin to separate out the human effect.
“Shame you missed out cloud cover and wind direction. As an example, looking at the data for Lerwick in July 2002 we can see it was three Celsius higher than July 2001…”
I only listed some of the factors, but these are accounted for in the climate literature. Water vapor is admittedly one of the most poorly understood of the feedbacks, but there is tremendous work being directed toward this question. I don’t know anything about your Lerwick story, but it sounds like an anecdote (the favorite tool of contrarians). Very large month-to-month temperature fluctuations often occur at particular localities. This is meaningless to the global average temperature anomaly.
“Since then I have seen a whole lot of other lies of which quite deliberate misinterpretation of thermometer data is one. ”
What deliberate misinterpretation of temperature data?
“I have come to the conclusion that climatology attracts a poor calibre of graduate – no big surprise there I guess since the bright sparks are in microbiology and nuclear physics.”
Why do you come to this conclusion? I think the work of the climate science community is of a very high caliber. There is a cottage industry built around maligning the climate science establishment. This is the part of the whole skeptic thing that really turns me off. These personal attacks and accusations go far beyond academic discussions about the science. You really seem sincere and I strongly urge you to visit a local University and talk to actual publishing climate scientists. They will appreciate your tough questions, as long as they are coming from sincere curiosity and not with a rhetorical and cynical tone. You will be surprised at the experience.
“The conclusion is this: Pat Frank’s analysis is no more and no less invalid than the IPCC analysis. ”
Read the IPCC AR4 report from Working Group 1. Not the summaries, but the actual report. It is a really good summary of the state of climate science, despite all of the attempts to paint it as a global liberal conspiracy. Even giving Frank the benefit of the doubt, this article is, at best, some preliminary speculation. But I am afraid that it isn’t even interesting speculation. This talk of sinusoids and slopes is repeated again and again in the contrarian rumor-mill, as if no one has thought about this stuff before. I’m sorry, but it is embarrassing that this guy would be so arrogant as to proclaim that people should “spread the word” of this calculation. It is such a rudimentary and flawed line of reasoning that it is utterly meaningless and not in the same universe as the established attribution analyses.