Guest Post by Willis Eschenbach
My theory is that the BEST folks must have eaten at a Hollywood Chinese restaurant. You can tell because when you eat there, an hour later you find you’re hungry for stardom.
Now that the BEST folks have demanded and received their fifteen minutes of fame before their results have gone through peer review, now that they have succeeded in deceiving many people into thinking that Muller is a skeptic and that somehow BEST has ‘proven the skeptics wrong’, and now that they’ve returned to the wilds of their natural scientific habitat far from the reach of National Geographic photographers and people asking real questions, I thought I might take a look at the data itself. Media whores are always predictable and boring, but data always contains surprises. The data can be downloaded from the bottom of this page, but please note that the page does not show the actual results; it shows smoothed results. Here’s their actual un-smoothed monthly data:
Figure 1. BEST global surface temperature estimates. Gray bars show what BEST says are the 95% confidence intervals (95%CI) for each datapoint.
I don’t know about you, but Figure 1 immediately made me think of the repeated claim by Michael Mann that the temperatures of the 1990s were the warmest in a thousand years.
WHAT I FIND IN THE BEST DATA
Uncertainty
I agree with William Briggs and Doug Keenan that “the uncertainty bands are too narrow”. Please read the two authors to see why.
I thought of Mann’s claim because, even with BEST’s narrow uncertainty figures, their results show we know very little about relative temperatures over the last two centuries. For example, we certainly cannot say that the current temperatures are greater than anything before about 1945. The uncertainty bands overlap, and so we simply don’t know if e.g. 2010 was warmer than 1910. Seems likely, to be sure … but we do not have the evidence to back that up.
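To make that overlap test concrete, here’s a minimal sketch in Python. The anomaly values and confidence-interval half-widths below are hypothetical placeholders for illustration, not numbers taken from the BEST file:

```python
# Minimal sketch of the CI-overlap test described above. The numbers are
# hypothetical placeholders, NOT values from the BEST file.

def intervals_overlap(mean_a, half_a, mean_b, half_b):
    """True if the [mean +/- half-width] bands of two datapoints overlap."""
    return (mean_a - half_a) <= (mean_b + half_b) and \
           (mean_b - half_b) <= (mean_a + half_a)

# Illustrative anomalies (deg C) with 95% CI half-widths:
t_1910, ci_1910 = 0.20, 0.50    # hypothetical
t_2010, ci_2010 = 0.60, 0.20    # hypothetical

if intervals_overlap(t_1910, ci_1910, t_2010, ci_2010):
    print("CIs overlap: the record cannot distinguish the two years")
else:
    print("CIs disjoint: the later year is distinguishably warmer")
```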
And that, of course, means that Mann’s claims of ‘warmest in a mill-yun years’ or whatever he has ramped it up to by now are not sustainable. We can’t tell, using actual thermometer records, if we’re warmer than a mere century ago. How can a few trees and clamshells tell us more than dozens of thermometers?
Disagreement with satellite observations
The BEST folks say that there is no urban heat island (UHI) effect detectable in their analysis. Their actual claim is that “urban warming does not unduly bias estimates of recent global temperature change”. Here’s a comment from NASA, which indicates that, well, there might be a bias. Emphasis mine.
The compact city of Providence, R.I., for example, has surface temperatures that are about 12.2 °C (21.9 °F) warmer than the surrounding countryside, while similarly-sized but spread-out Buffalo, N.Y., produces a heat island of only about 7.2 °C (12.9 °F), according to satellite data. SOURCE
A 22°F (12°C) UHI warming in Providence, and BEST says no UHI effect … and that’s just a couple of cities.
If there were no UHI, then (per the generally accepted theories) the atmosphere should be warming more than the ground. If there is UHI, on the other hand, the ground station records would have an upwards bias and might even indicate more warming than the atmosphere.
After a number of adjustments, the two satellite records, from RSS and UAH, are pretty similar. Figure 2 shows their records for global land-only lower tropospheric temperatures:
Figure 2. UAH and RSS satellite temperature records. Anomaly period 1979-1984 = 0.
Since they are so close, I have averaged them together in Figure 3 to avoid disputes. You can substitute either one if you wish. Figure 3 shows a three-year centered Gaussian average of the data. The final 1.5 years are truncated to avoid end effects.
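For anyone who wants to reproduce the smoothing, here’s a sketch of the procedure in Python, assuming the RSS and UAH anomalies are monthly NumPy arrays on the same time base. Treating the “three-year” width as the filter’s full width at half maximum is my choice of convention; the 18-month truncation matches the 1.5 years noted above:

```python
# Sketch of the satellite averaging and smoothing described above,
# assuming rss and uah are monthly anomaly arrays on the same time base.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smoothed_satellite(rss, uah, fwhm_months=36, trunc_months=18):
    avg = (np.asarray(rss, float) + np.asarray(uah, float)) / 2.0
    # Treat the "3-year" width as the Gaussian's full width at half
    # maximum; sigma = FWHM / (2 * sqrt(2 * ln 2)) ~= FWHM / 2.355
    sigma = fwhm_months / 2.355
    smoothed = gaussian_filter1d(avg, sigma=sigma, mode="nearest")
    return smoothed[:-trunc_months]   # drop the final 1.5 years
```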
Remember what we would expect to find if all of the ground records were correct. They’d all lie on or near the same line, and the satellite temperatures would be rising faster than the ground temperatures. Here are the actual results, showing BEST, satellite, GISS, CRUTEM, and GHCN land temperatures:
Figure 3. BEST, average satellite, and other estimates of the global land temperature over the satellite era. Anomaly period 1979-1984 = 0.
In Figure 3, we find the opposite of what we expected. The land temperatures are rising faster than the atmospheric temperatures, contrary to theory. In addition, the BEST data is the worst of the lot in this regard.
Disagreement with other ground-based records
The disagreement between the four ground-based results also begs for an explanation. Note that the records diverge at the rate of about 0.2°C in thirty years, which is about 0.7°C per century. Since this is roughly the size of the entire last century’s warming, this is by no means a trivial difference.
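The back-of-envelope arithmetic, for anyone checking:

```python
# The conversion in the paragraph above.
divergence_c = 0.2          # spread between the ground records, deg C
over_years = 30             # roughly the satellite era
print(divergence_c / over_years * 100)   # ~0.67 deg C/century, call it 0.7
```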
My conclusion? We still have not resolved the UHI issue in any of the land datasets. I’m happy to discuss other alternative explanations for what we find in Figure 3; I just can’t think of many. With the ground records, nobody has looked at the other guys’ analyses and algorithms harshly, aggressively, and critically. They’ve all taken their own paths, and they haven’t disputed much with each other. The satellite data algorithms, on the other hand, have been examined minutely by two very competitive groups, UAH and RSS, in a strongly adversarial scientific manner. As is common in science, the two groups have each found errors in the other’s work, and once those were corrected the two records agree quite well. It’s possible they’re both wrong, but that doesn’t seem likely. If the ground-based folks did the same, we might get better agreement. But as with the climate models and modelers, they’re all far too well-mannered to critically examine each other’s work in any serious fashion. Because heck, if they did that to the other guy, he might return the favor and point out flaws in their work. Don’t want that kind of ugliness to intrude on their genteel, collegial relationship. Can’t we just be friends and not look too deeply? …
w.
PS—I remind folks again that the hype about BEST showing the skeptics are wrong is just that: hype. Most folks already knew that the world has been generally warming for hundreds of years, and BEST’s results in that regard were no surprise. BEST showed nothing about whether humans are affecting the climate, nor could it have done so. There are still large unresolved issues in the land temperature record which BEST has not clarified or solved. The jury is still out on the BEST results, and that is only partly because they haven’t yet gone through peer review.
PPS—
Oh, yeah, one more thing. At the top of the BEST dataset there’s a note that says:
Estimated 1950-1980 absolute temperature: 7.11 +/- 0.50
Seven degrees C? The GISS folks don’t even give an exact average; they just say it’s globally about 14°C.
The HadCRUT data gives about the same global temperature, 13.9°C, using a gridded absolute temperature dataset. Finally, the Kiehl/Trenberth global energy budget gives a black-body radiation value of 390 W/m2, which converts to 14.8°C. So I figured that was kind of settled, that the earth’s average temperature (an elusive concept, to be sure) was around fourteen or fifteen degrees C.
Now, without a single word of comment that I can find, BEST says it’s only 7.1 degrees … say what? Anyone have an explanation for that? I know that the BEST figure is just the land. But if the globe is at say 14° to make it easy, and the land is at 7°, that means that on average the ocean is at 17°.
And I’m just not buying that on a global average the ocean is ten degrees C, or 18 degrees F, warmer than the land. It sets off my bad number detector.
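For anyone checking the arithmetic in this PPS, both numbers fall out of a few lines of Python. The 29% land fraction is the usual round figure, and the black-body inversion is just Stefan-Boltzmann:

```python
# Checking the PPS numbers: invert the Kiehl/Trenberth 390 W/m^2 figure
# via Stefan-Boltzmann, then back out the ocean temperature implied by a
# 14 C globe and a 7 C land average (land ~29% of the surface).
SIGMA = 5.670e-8                 # Stefan-Boltzmann constant, W m^-2 K^-4

t_bb_c = (390.0 / SIGMA) ** 0.25 - 273.15
print(f"390 W/m^2 as a black body: {t_bb_c:.1f} C")    # ~14.8 C

land_frac = 0.29
t_globe, t_land = 14.0, 7.0
t_ocean = (t_globe - land_frac * t_land) / (1.0 - land_frac)
print(f"implied ocean average: {t_ocean:.1f} C")       # ~16.9 C
```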
Anyone think we need to go back to the raw data produced by the stations and throw out all the data recorded in GHCN by the NCDC?
Has anyone been able to download the individual (264 MB) station list in the Berkeley record?
From Bill Illis on October 26, 2011 at 6:01 pm:
Referencing this previous post of mine,
Just for shots and goggles, I tried downloading the 253MB zipped text version, on dial-up, which ideally would take about 11hrs with an otherwise-unused connection.
A day later (last night), I had a file the Archive Manager on my Debian Linux system thought was damaged. The built-in zip -FF command fixed it, with a warning that data.txt was truncated; the repaired version was reported as 181.7MB. Archive Manager was able to extract the files, barely; it sapped the resources of my old Dell P4 so horribly it would have repeatedly crashed if running Windoze, and it left the machine unusable for about ten minutes. Of the resultant files, there is data.txt, 590MB. After 10 minutes of the text editor trying to load it, watching more records keep filling in at the bottom (uncertainty is always 0.0000?), I canceled that. The other largest file is flags.txt, reported as 12.6GB (yes, giga). I haven’t tried opening it. There are three other much-smaller files, less than 6KB each.
The second downloading will be finishing in about two hours. I’ll see then if this copy is likewise reported as damaged.
michael hammer says:
October 24, 2011 at 11:11 pm
“Something I don’t quite understand. In the early 1970s the National Academy of Sciences published a climate reconstruction showing that temperatures fell by 0.7°C (about 1.3°F) between 1940 and 1970. This was the basis for many articles in both science and environmental journals suggesting we were heading for dangerous global cooling (anthropogenic, of course). The modern reconstructions now show no cooling over this period.”
====================================================================
Your observation is spot on. As best as can be determined, the disappearance of the decline to a strong minimum in 1976 is due to the inconsistency of the set of stations used in modern reconstructions of GMT. Thousands of stations came online in the post-war era in cities that had not reported temperatures before. The post-war period was also one of intense urbanization and motorization of society in many countries. That is the factor that makes all reconstructions which fail to maintain a FIXED set of stations over the entire stretch of time ultimately unreliable indicators of climatic “trends.” Contrary to the impression they present, BEST’s statistical massage of record fragments doesn’t begin to address that problem.
@Bill Illis:
Download successful, 10 files unpacked.
data.txt 590.0 MB
data_flag_definitions.txt 5.7KB
flags.txt 12.6GB (Archive Manager reports an expected 631.8MB size)
README.data 1.6KB
README.txt 3.2KB
site_detail.txt 8.7MB
site_flag_definitions.txt 1.6KB
site_summary.txt 1.3MB
source_flag_definitions.txt 2.8KB
sources.txt 7.1GB
Text editor flatly said it couldn’t open the two biggest files, flags.txt and sources.txt. It made a brave attempt on the next largest, data.txt, but after about ten minutes of sluggish computing and the text editor being unresponsive I mercifully killed it. The rest opened fine.
Want to play with the BEST files? Got supercomputer? With at least 30-40GB of RAM?
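Actually, you don’t need a supercomputer; files that size can be processed line-by-line in constant memory instead of being loaded into an editor. A sketch, assuming (check README.txt, this is an assumption on my part) that the BEST text files mark header/comment lines with ‘%’:

```python
# Stream a multi-gigabyte BEST text file in constant memory rather than
# loading it into an editor. Assumes '%'-prefixed header/comment lines;
# verify that against README.txt, it's an assumption here.
def count_records(path):
    count = 0
    with open(path, "r", errors="replace") as f:
        for line in f:                       # reads one line at a time
            if line.strip() and not line.lstrip().startswith("%"):
                count += 1
    return count

print(count_records("data.txt"))   # constant memory, even on an old P4
```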
Willis,
Thanks for the pointer about comparing land with land/ocean – I really should have read into it more! I’ll add the land-only datasets you suggest when I get a chance and retry.
This might solve another conundrum that one of my users, Mike Scott, pointed out a couple of days ago, and actually prompted me to add BEST in the first place: we can’t match Berkeley’s depiction of “HADCRU” in their analysis graphs with any variety of HADCRUT3. Is it possible they are using CRUTEM (land only) and mislabelled it?
Paul
Climate change charlatans do not do science. They manipulate science. They have been doing it for many years under the auspices of the IPCC. It is just that they have now well and truly been exposed by their own peers. They know they have been exposed … so what they are now doing is simply trying to desperately preserve what little is left of their once impressive reputations, every which way they can.
These disgusting climate change charlatans cannot bring themselves to acknowledge that they may have been wrong all along about catastrophic man-made global warming. So, for them, it is all or nothing. I mean, come on, after all we have now learnt about the shortcomings with the US surface temperature data and about UHI effects, these climate change charlatans think they can still con the world?
I cannot wait for the day when all these climate change rogues (well, they are certainly not honest scientists) are dragged into the courts to be held accountable for grossly misrepresenting the science and for engaging in misleading and deceptive conduct.
Here’s their actual un-smoothed monthly data:
;———————————————————-
It’s cooked data.
Your bias is in the mean used to calculate the anomaly.
Why can’t you show us the un-molested data?
Where’s the beef?
Agile Aspect says:
October 28, 2011 at 11:56 pm
That’s what they released …
w.
Further to the above, in the unlikely event anyone is still reading here – I have now added land-only versions of GISTEMP, CRUTEM, RSS and UAH and compared with BEST, with baseline adjustment – see http://www.woodfortrees.org/notes#best
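For anyone curious, the baseline adjustment is nothing mysterious: each anomaly series is simply re-zeroed on a common reference window (here 1979-1984, matching Willis’s figures). A minimal sketch, assuming parallel year/anomaly arrays:

```python
# Re-zero an anomaly series on a common baseline window so that series
# defined against different reference periods can be compared directly.
import numpy as np

def rebaseline(years, anoms, start=1979, end=1984):
    years = np.asarray(years)
    anoms = np.asarray(anoms, dtype=float)
    window = (years >= start) & (years <= end)
    return anoms - anoms[window].mean()   # anomalies now average 0 over window
```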
Me and this article, sitting in a tree, L-E-A-R-N-I-N-G!
woodfortrees (Paul Clark) says:
October 29, 2011 at 6:47 am
You da man, Paul, and woodfortrees is a great resource.
My thanks,
w.
Just one more thing – I had it pointed out to me that GISS dTs is not a land dataset but a land-ocean dataset extrapolated from land data. I’ve renamed and reclassified it accordingly.
HTML quirk/typo above; the carets meaning “does not equal” didn’t reproduce. Here’s a “forced” version:
Energy < > temperature.
Theo
“Your thought “think about “power” in a statistical sense” is a clear recommendation that we should make inferences from our characteristics of our system of representation to the world.”
No. It’s a suggestion about looking at the property of tests we call “power”, which is not what you think.