On the Elusive Absolute Global Mean Surface Temperature – A Model-Data Comparison

Guest Post by Bob Tisdale

With the publication of the IPCC 5th Synthesis Report, I thought there might be some interest in a presentation of how well (actually poorly) climate models simulate global mean surface temperatures in absolute terms. That is, most climate model outputs are presented in terms of anomalies, with data shown as deviations from the temperatures of a multi-decadal reference period. See Figure 1.

Figure 1

Rarely are models and model-data comparisons shown in absolute terms. That’s what’s presented in this post, after a discussion of the estimates of Earth’s absolute mean surface temperature from the data suppliers: GISS, NCDC and BEST. Afterwards, we return to anomalies.

The following illustrations and most of the text were prepared for my upcoming book. I’ve changed the figure numbers for this post and reworded the introduction (the two paragraphs above). This presentation provides a totally different perspective on the differences between modeled and observed global mean surface temperatures. I think you’ll enjoy it…then again, others of you may not.

This chapter appears later in the book, following (1) the preliminary sections that cover the fundamentals of global warming and climate change, (2) the overview of climate models, (3) the introductory discussions about atmospheric and ocean circulation and natural modes of variability, and (4) the detailed discussions of datasets. It would be one of many model-data comparison chapters.

One last note: you’ll find chapters of the book without chapter numbers referenced as “Chapter ___”. I simply haven’t written those chapters yet, or I’ve written them but haven’t placed them in the final order.

[Start of book section.]

THE ELUSIVE ABSOLUTE GLOBAL MEAN SURFACE TEMPERATURE DISCUSSION AT GISS

Some of you may already know the origin of this chapter’s title. It comes from the GISS Surface Temperature Analysis Q&A webpage The Elusive Absolute Surface Air Temperature (SAT). The initial text on the webpage reads:

The GISTEMP analysis concerns only temperature anomalies, not absolute temperature. Temperature anomalies are computed relative to the base period 1951-1980. The reason to work with anomalies, rather than absolute temperature is that absolute temperature varies markedly in short distances, while monthly or annual temperature anomalies are representative of a much larger region. Indeed, we have shown (Hansen and Lebedeff, 1987) that temperature anomalies are strongly correlated out to distances of the order of 1000 km.

Based on the findings of Hansen and Lebedeff (1987) Global trends of measured surface air temperature, GISS created a dataset that uses land surface air temperature anomalies in place of sea surface temperature data. That is, GISS extended land surface air temperature data out over the oceans. GISS has replaced that older dataset with their GISS Land-Ocean Temperature Index, which uses sea surface temperature data for most parts of the oceans and serves as their primary product. They still use the 1200km extrapolation for infilling land surface areas and areas with sea ice where there are no observations-based data.

Back to the GISS Q&A webpage: After answering a few intermediate questions, GISS closes with (my boldface):

Q: What do I do if I need absolute SATs, not anomalies?

A: In 99.9% of the cases you’ll find that anomalies are exactly what you need, not absolute temperatures. In the remaining cases, you have to pick one of the available climatologies and add the anomalies (with respect to the proper base period) to it. For the global mean, the most trusted models produce a value of roughly 14°C, i.e. 57.2°F, but it may easily be anywhere between 56 and 58°F and regionally, let alone locally, the situation is even worse.

In other words, GISS bases its understanding of absolute global surface temperatures on climate models, specifically “the most trusted models”. And they are saying, based on those “most trusted” climate models, that the global mean surface temperature during their base period of 1951 to 1980 (their climatology) is roughly 14 deg C, +/- about 0.6 deg C.

The 14 deg C on that GISS webpage coincides with the value listed at the bottom of the data webpage for the GISS Land-Surface Air Temperature Anomalies Only (Meteorological Station Data, dTs) product, which is based on Hansen and Lebedeff (1987). At the bottom of that webpage, they write:

Best estimate for absolute global mean for 1951-1980 is 14.0 deg-C or 57.2 deg-F, so add that to the temperature change if you want to use an absolute scale (this note applies to global annual means only, J-D and D-N!)

That’s the same adjustment for absolute temperatures that GISS recommends for their Land-Ocean Temperature Index. See the bottom of the data webpage here.
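For readers who want to reproduce that step, here is a minimal sketch (in Python, with made-up anomaly values rather than GISS’s actual numbers or code) of what “add 14.0 deg C to the anomalies” amounts to:

```python
# A minimal sketch: convert GISS global annual anomalies (deg C, relative to
# 1951-1980) to rough absolute temperatures by adding the 14.0 deg C climatology
# GISS recommends for global annual means. The anomaly values are placeholders.
import pandas as pd

GISS_BASELINE_ABS_C = 14.0  # GISS "best estimate" absolute mean for 1951-1980

def anomalies_to_absolute(anomalies_c: pd.Series,
                          baseline_c: float = GISS_BASELINE_ABS_C) -> pd.Series:
    """Shift a series of global annual anomalies onto an absolute scale."""
    return anomalies_c + baseline_c

anoms = pd.Series({2011: 0.61, 2012: 0.65, 2013: 0.66})  # illustrative anomalies
print(anomalies_to_absolute(anoms))  # roughly 14.6 to 14.7 deg C
```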

NOTE: Some people might think it’s odd that GISS uses the same adjustment factor for both datasets. One of the GISS datasets (GISS dTs) extends coastal and island land surface air temperatures out over the oceans by 1200 km, while the other GISS dataset (GISS LOTI) uses sea surface temperature data for most of the global oceans. (With the LOTI data, GISS replaces sea surface temperature data with land surface air temperature data only in the polar oceans where sea ice has ever existed.) If we assume the coastal and island land surface air temperatures are similar to marine air temperatures, then the bias is only about 0.2 deg C, maybe a little greater. The average ICOADS absolute global sea surface temperature for the last 30 years (1984 to 2013) is 19.5 deg C (about 67.1 deg F), while their absolute global marine air temperature is 19.3 deg C (about 66.7 deg F). The reason for “maybe a little greater”: shipboard marine air temperature readings can also be impacted by a “heat island effect”, and the ICOADS data have not been corrected for it. [End of note.]

THE NCDC ESTIMATE IS SIMILAR THOUGH DERIVED DIFFERENTLY

NCDC also provides an estimate of absolute global mean temperature. See the Global Analysis webpage from their 2013 State of the Climate (SOTC) report. There they write under the heading of Global Highlights (my boldface):

The year 2013 ties with 2003 as the fourth warmest year globally since records began in 1880. The annual global combined land and ocean surface temperature was 0.62°C (1.12°F) above the 20th century average of 13.9°C (57.0°F).

And not too coincidentally, that 13.9 deg C (57.0 deg F) from NCDC (established from data, as you’ll soon see) is within 0.1 deg C of the GISS value of 14.0 deg C (57.2 deg F), which might suggest that GISS’s “most trusted models” were tuned to the data-based value.

The source of that 13.9 deg C estimate of global surface temperature is identified on the NOAA Global Surface Temperature Anomalies webpages, specifically under the heading of Global Long-term Mean Land and Sea Surface Temperatures. That discussion was written in 2000, so it’s 14 years old, and the data have changed drastically in those 14 years. You may also have noticed on that webpage that the absolute temperature averages are for the period of 1880 to 2000, while NCDC applies the same 13.9 deg C (57 deg F) absolute value to the 20th Century. That’s not a concern; it’s splitting hairs. There is only a 0.03 deg C (0.05 deg F) difference in the average anomalies for those two periods.

Like GISS, NOAA describes problems with estimating an absolute global mean surface temperature:

Absolute estimates of global mean surface temperature are difficult to compile for a number of reasons. Since some regions of the world have few temperature measurement stations (e.g., the Sahara Desert), interpolation must be made over large, data sparse regions. In mountainous areas, most observations come from valleys where the people live so consideration must be given to the effects of elevation on a region’s average as well as to other factors that influence surface temperature. Consequently, the estimates below, while considered the best available, are still approximations and reflect the assumptions inherent in interpolation and data processing. Time series of monthly temperature records are more often expressed as departures from a base period (e.g., 1961-1990, 1880-2000) since these records are more easily interpreted and avoid some of the problems associated with estimating absolute surface temperatures over large regions. For a brief discussion of using temperature anomaly time series see the Climate of 1998 series.

So the NCDC value is based on observations, albeit old data, while the very similar GISS value, for a different base period, is based on climate models. Let’s compare the GISS and NCDC datasets in absolute terms.

COMPARISON OF GISS AND NCDC DATA IN ABSOLUTE FORM

NCDC Global Land + Ocean Surface Temperature data are available by clicking on the “Anomalies and Index Data” link at the top of the NCDC Global Surface Temperature Anomalies webpage. And the GISS LOTI data are available here.

Using the factors described above, Figure 2 presents the GISS and NCDC annual global mean surface temperatures in absolute form from their start year of 1880 to the most recent full year of 2013. The GISS data run a little warmer than the NCDC data, on average about 0.065 deg C (0.12 deg F) warmer, but all in all, they track one another. And they should track one another. They use the same sea surface temperature dataset (NOAA’s ERSST.v3b), and most of the land surface air temperature data are the same (from NOAA’s GHCN database). GISS and NCDC simply infill data differently (especially for the Arctic and Southern Oceans), and GISS uses a few more datasets to supplement regions of the world where GHCN sampling is poor.
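A sketch of that construction follows (hypothetical anomaly values and the suppliers’ recommended climatologies; not their actual file formats):

```python
# A sketch of the Figure 2 construction: add each supplier's recommended
# climatology to its annual anomalies and compare the resulting absolute series.
import pandas as pd

def to_absolute(anoms: pd.Series, climatology_c: float) -> pd.Series:
    return anoms + climatology_c

# Hypothetical annual anomalies (deg C), indexed by year:
giss_anoms = pd.Series({1880: -0.20, 1881: -0.12, 2013: 0.65})
ncdc_anoms = pd.Series({1880: -0.13, 1881: -0.08, 2013: 0.62})

giss_abs = to_absolute(giss_anoms, 14.0)  # GISS 1951-1980 climatology
ncdc_abs = to_absolute(ncdc_anoms, 13.9)  # NCDC 20th-century climatology

diff = (giss_abs - ncdc_abs).dropna()
print("Average GISS-minus-NCDC offset:", round(diff.mean(), 3), "deg C")
```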

Figure 2

ALONG COMES THE BEST GLOBAL LAND+OCEAN SURFACE TEMPERATURE DATASET WITH A DIFFERENT FACTOR

That’s BEST as in Berkeley Earth Surface Temperature, which is the product of Berkeley Earth. The supporting paper for their land surface air temperature data is Rohde et al. (2013) A New Estimate of the Average Earth Surface Land Temperature Spanning 1753 to 2011. In it, you will find they’ve illustrated the BEST land surface air temperature data in absolute form. See Figure 1 from that paper (not presented in this chapter).

Their climatology (the reference temperatures for anomalies) was presented in the methods paper Rohde et al. (2013) Berkeley Earth Temperature Averaging Process, with the appendix here. Under the heading of Climatology, Rohde et al. write in their “methods” paper:

The global land average from 1900 to 2000 is 9.35 ± 1.45°C, broadly consistent with the estimate of 8.5°C provided by Peterson [29]. This large uncertainty in the normalization is not included in the shaded bands that we put on our Tavg plots, as it only affects the absolute scale and doesn’t affect relative comparisons. In addition, most of this uncertainty is due to the presence of only three GHCN sites in the interior of Antarctica, which leads the algorithm to regard the absolute normalization for much of the Antarctic continent as poorly constrained. Preliminary work with more complete data from Antarctica and elsewhere suggests that additional data can reduce this normalization uncertainty by an order of magnitude without changing the underlying algorithm. The Berkeley Average analysis process is somewhat unique in that it produces a global climatology and estimate of the global mean temperature as part of its natural operations.

It’s interesting that the Berkeley surface temperature averaging process furnishes them with an estimate of global mean land surface air temperatures in absolute form, while GISS and NCDC find it to be a difficult thing to estimate.

The reference to Peterson in the above Rohde et al. quote is Peterson et al. (2011) Observed Changes in Surface Atmospheric Energy over Land. The 8.5 deg C (about 47 deg F) from Peterson et al. for absolute land surface air temperature is the same value listed in the table under the heading of Global Long-term Mean Land and Sea Surface Temperatures on the NOAA Global Surface Temperature Anomalies webpages.

Berkeley Earth has also released data for two global land+ocean surface temperature products. The existence of sea ice is the reason for two. Land surface air temperature products obviously do not include ocean surfaces where sea ice resides, and sea surface temperature products do not include air temperatures above polar sea ice when and where it exists. Of the 361.9 million km^2 (about 139.7 million miles^2) total surface area of the global oceans, polar sea ice covered on average only about 18.1 million km^2 (about 6.9 million miles^2) annually for the period of 2000 to 2013. While polar sea ice covers only about 5% of the surface of the global oceans and only about 3.5% of the surface of the globe, the climate science community endeavors to determine the surface air temperatures there. That’s especially true in the Arctic, where the naturally occurring process of polar amplification causes the Arctic to warm at exaggerated rates during periods when Northern Hemisphere surfaces warm (and cool at amplified rates during periods of cooling in the Northern Hemisphere). See the discussion of polar amplification in Chapter 1.18 and the model-data comparisons in Chapter ___.

[For those reading this blog post, see the posts Notes On Polar Amplification and Polar Amplification: Observations versus IPCC Climate Models.]

As of this writing, there is no supporting paper for the BEST land+ocean surface temperature data available from the Berkeley Earth Papers webpage, and there is nothing shown for it on their Posters webpage. There is, however, an introductory discussion on the BEST data page for their combined product. The BEST land+ocean data are their land surface air temperature data merged with a modified version of the HADSST3 sea surface temperature data, which they have infilled using a statistical method called kriging. (See Kriging, written by Geoff Bohling of the Kansas Geological Survey.)
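Berkeley Earth’s actual kriging is considerably more sophisticated, but the basic idea can be illustrated with a toy Gaussian-process interpolation, which is closely related to simple kriging. Everything below, including the length scale and the fake observations, is an assumption for illustration only, not Berkeley Earth’s code:

```python
# Toy illustration of kriging-style infilling: treat sparse temperature-anomaly
# observations as samples of a smooth field and use Gaussian-process regression
# (closely related to kriging) to interpolate the gaps.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

# Hypothetical sparse observations: (latitude, longitude) -> anomaly in deg C.
obs_coords = rng.uniform(low=[-60, 0], high=[60, 360], size=(40, 2))
obs_anoms = 0.5 * np.cos(np.radians(obs_coords[:, 0])) + rng.normal(0, 0.1, 40)

# Treating lat/lon as planar coordinates ignores spherical geometry; fine for a toy.
# The ~10-degree length scale loosely echoes the ~1000 km correlation distances
# discussed by Hansen and Lebedeff; the value here is purely illustrative.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=10.0), alpha=0.01)
gp.fit(obs_coords, obs_anoms)

# Infill a regular 5-degree grid from the sparse observations.
lats, lons = np.meshgrid(np.arange(-60, 61, 5), np.arange(0, 360, 5), indexing="ij")
grid = np.column_stack([lats.ravel(), lons.ravel()])
infilled = gp.predict(grid).reshape(lats.shape)
print(infilled.shape)  # (25, 72) grid of interpolated anomalies
```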

The annual Berkeley land+ocean surface temperature anomaly data are here, and their monthly data are here. Their reasoning for providing the two land+ocean products supports my discussion above. Berkeley Earth writes:

Two versions of this average are reported. These differ in how they treat locations with sea ice. In the first version, temperature anomalies in the presence of sea ice are extrapolated from land-surface air temperature anomalies. In the second version, temperature anomalies in the presence of sea ice are extrapolated from sea-surface water temperature anomalies (usually collected from open water areas on the periphery of the sea ice). For most of the ocean, sea-surface temperatures are similar to near-surface air temperatures; however, air temperatures above sea ice can differ substantially from the water below the sea ice. The air temperature version of this average shows larger changes in the recent period, in part this is because water temperature changes are limited by the freezing point of ocean water. We believe that the use of air temperatures above sea ice provides a more natural means of describing changes in Earth’s surface temperature.

The use of air temperatures above sea ice may provide a more realistic representation of Arctic surface temperatures during winter months, when sea ice butts up against the land masses and those land masses are covered with snow, so that the ice and land surfaces have similar albedos. However, during summer months the albedo of the sea ice can differ from that of the land masses (snow melts, exposing the land surfaces surrounding temperature sensors, and the albedos of land surfaces differ from those of sea ice). Open ocean also separates land from sea ice in many places, further compounding the problem. There is no easy fix.

Berkeley Earth also lists the estimated absolute surface temperatures during their base period for both products:

Estimated Jan 1951-Dec 1980 global mean temperature (deg C):

  • Using air temperature above sea ice: 14.774 +/- 0.046
  • Using water temperature below sea ice: 15.313 +/- 0.046

The estimated absolute global mean surface temperature using the air temperature above sea ice is about 0.5 deg C (0.9 deg F) cooler than the version that uses sea surface temperature data where there is sea ice. The models presented later in this chapter simulate surface air temperatures, so we’ll use the Berkeley data that use air temperature above sea ice. That choice is also consistent with how GISS treats sea ice in their LOTI data.

The sea surface temperature dataset used by Berkeley Earth (HADSST3) is provided only in anomaly form. And without a supporting paper, there is no documentation of how Berkeley Earth converted those anomalies into absolute values. The source ICOADS data and the HADISST and ERSST.v3b end products are furnished in absolute form, so one of them likely served as a reference.

COMPARISON OF BEST, GISS AND NCDC DATA IN ABSOLUTE FORM

The BEST, GISS and NCDC annual global mean surface temperatures in absolute form from their start year of 1880 to the most recent full year of 2013 are shown in Figure 3. The BEST data run warmer than the other two, but, as one would expect, the curves are similar.

Figure 3

In Figure 4, I’ve subtracted the coolest dataset (NCDC) from the warmest (BEST). The difference has also been smoothed with a 10-year running-average filter (red curve). For most of the term, the BEST data in absolute form are about 0.8 deg C (about 1.4 deg F) warmer than the NCDC estimate. The hump starting around 1940 and peaking about 1950 is likely caused by the adjustments the UKMO has made to the HADSST3 data that have not been made to the NOAA ERSST.v3b data (used by both GISS and NCDC). Those adjustments were discussed in Chapter ____. I suspect the smaller difference at the beginning of the data is also related to the handling of sea surface temperature data, but there’s no way to tell for sure without access to the BEST-modified HADSST3 data. The recent uptick is likely caused by the difference in how the two suppliers (BEST and NCDC) handle the Arctic Ocean data. Berkeley Earth extends land surface air temperature data out over the sea ice, while NCDC excludes sea surface temperature data in the Arctic Ocean when there is sea ice and does not extend land-based data over the ice at those times.
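A sketch of that calculation (with placeholder series standing in for the real BEST and NCDC absolute values):

```python
# A sketch of the Figure 4 calculation: subtract the coolest absolute series (NCDC)
# from the warmest (BEST) and smooth the difference with a 10-year running mean.
import numpy as np
import pandas as pd

years = np.arange(1880, 2014)
# Placeholder absolute series (deg C); replace with the real BEST and NCDC values.
best_abs = pd.Series(14.2 + 0.005 * (years - 1880), index=years)
ncdc_abs = pd.Series(13.4 + 0.005 * (years - 1880), index=years)

diff = best_abs - ncdc_abs
smoothed = diff.rolling(window=10, center=True).mean()  # 10-year running average
print(smoothed.dropna().head())
```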

Figure 4

And now for the models.

CMIP5 MODEL SIMULATIONS OF EARTH’S ABSOLUTE SURFACE AIR TEMPERATURES STARTING IN 1880

As we’ve discussed numerous times throughout this book, the outputs of the climate models used by the IPCC for their 5th Assessment Report are stored in the Coupled Model Intercomparison Project Phase 5 (CMIP5) archive, and those outputs are publicly available to download in easy-to-use formats through the KNMI Climate Explorer. The CMIP5 surface air temperature outputs at the KNMI Climate Explorer can be found at the Monthly CMIP5 scenario runs webpage and are identified as “TAS”.

The model outputs at the KNMI Climate Explorer are available for the historic forcings with transitions to the different future RCP scenarios. (See Chapter 2.4 – Emissions Scenarios.) For this chapter, we’re presenting the historic forcings and the worst-case future scenario, RCP8.5. We’re using the worst-case scenario solely as a reference for how high surface temperatures could become, according to the models, if emissions of greenhouse gases rise as projected under that scenario. The use of the worst-case scenario has little impact on the model-data comparisons from 1880 to 2013. As you’ll recall, the future scenarios start after 2005 for most models (others start later), so there’s very little difference between the model outputs for the different scenarios in the first few years. Also, for this chapter, I downloaded the outputs separately for all of the individual models and their ensemble members. There are a total of 81 ensemble members from 39 climate models.

Note: The model outputs are available in absolute form in deg C (as well as deg K), so I did not adjust them in any way.
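For those who want to assemble the same spaghetti graph, here is a rough sketch. It assumes each ensemble member has already been exported from the KNMI Climate Explorer to its own two-column text file of year and global mean TAS in deg C; that layout is my assumption about the export step, not a description of KNMI’s formats:

```python
# A sketch of collecting the 81 ensemble members into one table for plotting.
# Assumes local files "cmip5_tas/*.txt", each with two whitespace-separated
# columns (year, global-mean TAS in deg C) and "#" comment lines.
import glob
import pandas as pd

def load_member(path: str) -> pd.Series:
    df = pd.read_csv(path, sep=r"\s+", comment="#", names=["year", "tas_c"])
    return df.set_index("year")["tas_c"]

members = {path: load_member(path) for path in sorted(glob.glob("cmip5_tas/*.txt"))}
ensemble = pd.DataFrame(members)          # one column per ensemble member
annual_1880_2100 = ensemble.loc[1880:2100]
print(annual_1880_2100.shape)             # (years, number of members)
```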

With that as background, Figure 5 is a spaghetti graph showing the CMIP5-archived outputs of the climate model simulations of global surface air temperatures from 1880 to 2100, with historic and RCP8.5 forcings. A larger version of the graph with a listing of all of the ensemble members is available here.

Figure 5

Whatever the global mean surface temperature is now, or was in the past, or might be in the future, the climate models used by the IPCC for their 5th Assessment Report certainly have it surrounded.

Some people might want to argue that absolute temperatures are unimportant—that we’re concerned about the past and future rates of warming. We can counter that argument two ways: First, we’ve already seen in Chapters CMC-1 and -2 that climate models do a very poor job of simulating surface temperatures from 1880 to the 1980s and from the 1980s to present. Additionally, in Section ___, we’ll discuss model failings in a lot more detail. Second, absolute temperatures are important for another reason. Natural and enhanced greenhouse effects depend on the infrared radiation emitted from Earth’s surface, and the amount of infrared radiation emitted to space by our planet is a function of its absolute surface temperature, not anomalies.
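A quick back-of-the-envelope illustration of that second point, using the Stefan-Boltzmann law with an idealized emissivity of 1 (my simplification, not a model output):

```python
# Why the absolute value matters for the greenhouse effect: surface infrared
# emission scales with the fourth power of absolute temperature (Stefan-Boltzmann),
# so a few deg C of error in the absolute mean shifts the emitted flux noticeably.
SIGMA = 5.670374419e-8  # W m^-2 K^-4

def surface_emission(t_kelvin: float, emissivity: float = 1.0) -> float:
    """Idealized surface emission in W/m^2 (emissivity of 1 for simplicity)."""
    return emissivity * SIGMA * t_kelvin**4

for t_c in (12.0, 14.0, 15.0):
    t_k = t_c + 273.15
    print(f"{t_c:4.1f} deg C -> {surface_emission(t_k):6.1f} W/m^2")
# The ~3 deg C spread among the models corresponds to roughly 16 W/m^2 in
# idealized surface emission -- far larger than the forcings being debated.
```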

As shown above in Figure 5, the majority of the models start off at absolute global mean surface air temperatures that range from near 12.2 deg C (54.0 deg F) to about 14.0 deg C (57.2 deg F). But with the outliers, that range runs from 12.0 deg C (53.6 deg F) to 15.0 deg C (59.0 deg F). The range of the modeled absolute global mean temperatures is easier to see if we smooth the model outputs with 10-year filters. See Figure 6.

Figure 6

We could reduce the range by deleting outliers, but one problem with deleting outliers is that the warm ones are relatively close to the more recent (better?) estimate of Earth’s absolute temperature from Berkeley Earth. See Figure 7, in which we’ve returned to the annual, not the smoothed, outputs.

Figure 7

The other problem with deleting outliers is that the IPCC is a political body, not a scientific one. As a result, that political body includes models from agencies around the globe, even those that perform worse than the (already poor) others, further dragging down the group as a whole.

With those two things considered, we’ll retain all of the models in this presentation, even the obvious outliers.

Looking again at the broad range of model simulations of global mean surface temperatures in Figure 5 above, there appears to be at least a 3 deg C (5.4 deg F) span between the coolest and the warmest. Let’s confirm that.

For Figure 8, I’ve subtracted the coolest modeled global mean temperature from the warmest in each year, from 1880 to 2100. For most of the period between 1880 and 2030, the span from coolest to warmest modeled surface temperature is greater than 3 deg C (5.4 deg F).
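A sketch of that per-year span calculation, using a synthetic stand-in for the real ensemble (the real spread would come from the downloaded CMIP5 members):

```python
# A sketch of the Figure 8 calculation: for each year, take the spread between the
# warmest and coolest modeled global mean temperature across all ensemble members.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
years = np.arange(1880, 2101)

# Synthetic stand-in: 81 "members" with different baselines and a common trend.
baselines = rng.uniform(12.0, 15.0, size=81)
trend = 0.004 * (years - 1880)
ensemble = pd.DataFrame(
    {f"member_{i}": b + trend + rng.normal(0, 0.1, len(years))
     for i, b in enumerate(baselines)},
    index=years,
)

span = ensemble.max(axis=1) - ensemble.min(axis=1)  # warmest minus coolest, per year
print(round(span.loc[1880:2030].mean(), 2))  # ~3 deg C here by construction; the
                                             # real ensemble shows a similar span
```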

Figure 8

That span helps to highlight something we’ve discussed a number of times in this book: the use of the multi-model mean, the average of all of the runs from all of the climate models. There is only one global mean surface temperature, and the estimates of it vary. There are obviously better and worse simulations of it, whatever it is. Does averaging the model simulations provide us with a good answer? No.

But the average, the multi-model mean, does provide us with something of value. It shows us the consensus, the groupthink, behind the modeled global mean surface temperatures and how those temperatures would vary, if (big if) they responded to the forcings used to drive climate models. And as we’ll see, the observed surface temperatures do not respond to those forcings as they are simulated by the models.

MODEL-DATA COMPARISONS

Because of the differences between the newer (BEST) and the older (GISS and NCDC) estimates of absolute global mean temperature, they’ll be presented separately. And because the GISS and NCDC data are so similar, we’ll use their average. Last, for the comparisons, we won’t present all of the ensemble members as a spaghetti graph. We’ll present the maximum, mean and minimum.

With that established, Figure 9 compares the average of the GISS and NCDC estimates of absolute global mean surface temperatures to the maximum, mean and minimum of the modeled temperatures. The model mean is reasonably close to the GISS and NCDC estimates of absolute global mean surface temperatures, with the model mean averaging about 0.37 deg C (0.67 deg F) cooler than the data for the period of 1880 to 2013.

Figure 9

In Figure 10, the BEST (newer = better?) estimate of absolute global mean surface temperatures from 1880 to 2013 is compared to the maximum, mean and minimum of the modeled temperatures. In this case, the BEST estimate lies closer to the model maximum and farther from the model mean than the GISS and NCDC estimates did. The model mean averages about 1.14 deg C (about 2.04 deg F) cooler than the BEST estimate for the period of 1880 to 2013.

Figure 10

MODEL-DATA DIFFERENCE

In the next two graphs, we’ll subtract the data-based estimates of Earth’s absolute global mean surface temperatures from the model mean of the CMIP5 simulations. Consider this when viewing the upcoming two graphs: if the average of the models properly simulated the decadal and multidecadal variations in Earth’s surface temperature, but simply missed the mark on the absolute value, the difference between the models and data would be a flat horizontal line, offset from zero by that constant error.

Figure 11 presents the difference between the model mean of the simulations of Earth’s surface temperatures and the average of the GISS and NCDC estimates, with the data subtracted from the models. The following discussion keys off the 10-year average, which is also presented in red.
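A sketch of that difference calculation (placeholder series; in practice the model mean would be the average across the downloaded ensemble members, and the observations would be the GISS+NCDC average in absolute form):

```python
# A sketch of the Figure 11 step: subtract the observations-based estimate from
# the multi-model mean, year by year, then smooth with a 10-year running average.
import numpy as np
import pandas as pd

years = np.arange(1880, 2014)
# Placeholders; in practice: model_mean_abs = ensemble.loc[1880:2013].mean(axis=1)
model_mean_abs = pd.Series(13.6 + 0.005 * (years - 1880), index=years)
obs_abs = pd.Series(13.9 + 0.005 * (years - 1880), index=years)

difference = model_mean_abs - obs_abs                 # models minus data, deg C
difference_10yr = difference.rolling(10, center=True).mean()
print(difference_10yr.dropna().tail())
```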

Figure 11

The greatest difference between models and data occurs in the 1880s. The difference decreases drastically from the 1880s to the 1910s. The reason: the models do not properly simulate the observed cooling that takes place at that time. The model-data difference grows once again from the 1910s until about 1940. That indicates that, because the models failed to properly simulate the cooling from the 1880s to the 1910s, they also failed to properly simulate the warming that took place from the 1910s until 1940. The difference then cycles, gradually increasing again until the 1990s. And from the 1990s to present, because of the hiatus, the difference has decreased to its smallest value since 1880.

Figure 12 shows the difference between the model mean of the simulations and the BEST estimate of Earth’s surface temperature. The curve is similar to the one above for the GISS and NCDC data. The BEST global temperature data show less cooling from the 1880s to the 1910s, and as a result there is not as great a decrease in the temperature difference between models and data. But there is still a major increase in the difference from the 1910s to about 1940, when the models fail to properly simulate the warming that took place then. And, of course, the recent hiatus has caused another decrease in the temperature difference.

Figure 12

CHAPTER SUMMARY

There is about a 0.8 deg C (about 1.4 deg F) span in the estimates of absolute global mean surface temperature, with the warmer value coming from the more recent estimate, which is based on more up-to-date global surface temperature databases. In other words, the BEST (Berkeley Earth) estimate seems more likely to be accurate than the outdated GISS and NCDC values.

There is a much larger span in the climate model simulations of absolute global surface temperatures, averaging about 3.15 deg C (about 5.7 deg F) from 1880 to 2013. To put that into perspective, starting in the 1990s, politicians have been suggesting we limit the warming of global surface temperatures to 2.0 deg C (3.6 deg F). For another way to put that 3.15 deg C (about 5.7 deg F) model span into perspective, consider that, in the IPCC’s 4th Assessment Report, they basically claimed that all the global warming from 1975 to 2005 was caused by manmade greenhouse gases. That claim was based on climate models that cannot simulate natural variability, so it was a meaningless claim. Regardless, global surface temperatures only warmed about 0.55 deg C (1.0 deg F) between 1975 and 2005, based on the average of the linear trends of the BEST, GISS and NCDC data.

And the difference between modeled and observed absolute global mean surface temperature is yet another way to show how poorly global surface temperatures are simulated by the latest-and-greatest climate models used by the IPCC for their 5th Assessment Report.

BUT

Sometimes we can learn something else by presenting data as anomalies. For Figures 13 and 14, I’ve offset the model-data differences by their respective 1880-2013 averages. That converts the absolute differences to anomalies. We use the average of the full term of the data as a reference to ensure that we’re not biasing the results by the choice of reference period. In other words, no one can complain that we’ve cherry-picked the reference years. Keying off the 10-year averages (red curves) helps to put the impact of the recent hiatus into perspective.
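A sketch of that offsetting step (the difference series here is a placeholder):

```python
# A sketch of the Figure 13/14 step: re-express the model-minus-data difference as
# anomalies by removing its own 1880-2013 mean, so no reference period is cherry-picked.
import pandas as pd

def to_anomalies(difference: pd.Series, start: int = 1880, end: int = 2013) -> pd.Series:
    """Offset a series by its own mean over start..end (inclusive)."""
    return difference - difference.loc[start:end].mean()

diff = pd.Series({1880: -0.6, 1945: -0.2, 2013: -0.1})  # placeholder values
print(to_anomalies(diff))
```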

Keep in mind, if the models properly simulated the decadal and multidecadal variations in Earth’s surface temperatures, the difference would be a flat line, and in the following two cases, those flat lines would be at zero anomaly.

For the average of the GISS and NCDC data, Figure 13, because of the recent hiatus in surface warming, the divergence between models and data today is the worst it has been since about 1890.

Figure 13

And looking at the difference between the model simulations of global mean temperature and the BEST data, Figure 14, as a result of the hiatus, the model performance during the most recent 10-year period is the worst it has ever been at simulating global surface temperatures.

Figure 14

[End of book chapter.]

If you were to scroll back up to Figure 7, you’d note that there is a small subset of model runs that underlie the Berkeley Earth estimate of absolute global mean temperature. They’re so close it would seem very likely that those models were tuned to those temperatures.

Well, I thought you might be interested in knowing whose models they were. See Figure 15. They’re the 3 ensemble members of the MIROC5 model (a Japanese model developed by AORI, NIES and JAMSTEC) and the 3 ensemble members of the GISS ModelE2 with Russell Ocean (GISS-E2-R).
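One way to pick those runs out (a sketch with a tiny synthetic ensemble in place of the 81 real members) is to rank each member’s 1951-1980 mean by its distance from the Berkeley Earth value:

```python
# A sketch of identifying the runs in Figure 7/15 that sit closest to the BEST
# absolute estimate: compute each member's 1951-1980 mean and rank by distance
# from Berkeley Earth's 14.774 deg C.
import numpy as np
import pandas as pd

BEST_1951_1980_C = 14.774  # Berkeley Earth estimate, air temperature above sea ice
rng = np.random.default_rng(2)
years = np.arange(1880, 2014)

# Tiny synthetic stand-in for the CMIP5 ensemble (absolute TAS, deg C).
ensemble = pd.DataFrame(
    {f"member_{i}": rng.uniform(12.5, 15.0) + 0.004 * (years - 1880)
     for i in range(10)},
    index=years,
)

base_1951_1980 = ensemble.loc[1951:1980].mean()          # per-member climatology
ranked = (base_1951_1980 - BEST_1951_1980_C).abs().sort_values()
print(ranked.head(3))  # with the real ensemble, this surfaces the MIROC5 and
                       # GISS-E2-R runs
```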

Figure 15

That doesn’t mean the MIROC5 and GISS-E2-R are any better than the other models. As far as I know, like all the other models, the MIROC5 and GISS-E2-R still cannot simulate the coupled ocean-atmosphere processes that can cause global surface temperatures to warm over multidecadal periods or stop that warming, like the AMO and ENSO. As noted above, their being closer to the updated estimate of Earth’s absolute temperature simply suggests those two models were tuned to it. Maybe GISS should consider updating their 14.0 deg C estimate of absolute global surface temperatures for their base period.

Last, we’ve presented climate model failings in numerous ways over the past few years. Topics discussed included:

Those posts were also cross posted at WattsUpWithThat.

Just in case you want to know a whole lot more about climate model failings and can’t wait for my upcoming book, which I don’t anticipate finishing for at least 5 or 6 months, I expanded on that series and presented the model faults in my last book Climate Models Fail.

rgbatduke
November 10, 2014 4:34 am

About time that somebody wrote a post addressing this issue. Excellent job, Bob. This has irritated me for years — we don’t know global average surface temperature itself within one whole degree C either way (as the curves above clearly show), only it is much worse than you present!
You forgot the error bars. Per computation. The true range of the error is at least 50% larger than what you obtain from the spread in the per model averages, as they typically have nearly symmetric error bars and even the anomalies have significant error bars, increasing as one goes back in time.
The one other thing I’d point out is that when one compares the GCMs to the global averages, one is comparing one kind of model to another kind of model. Neither one has been, or can be, validated in the past. In order to validate the GCMs one has to compare the global anomaly averages produced by the named models (e.g. HadCRUT4, GISTEMP…) to the GCM anomalies, per model. As we have seen and continue to see, that doesn’t work too well. Neither does comparing the CMIP5 MME mean to the model anomalies.
But then how can one validate the anomalies themselves? They are the results of model calculations also. We treat them as if they are empirical data, but they are not. They are heavily reprocessed, selected, kriged, infilled, and tweaked. And they do not agree! This alone suggests that the error in the anomaly is even larger than is generally acknowledged (where it is for various reasons almost never plotted with the data, even on these august pages).
rgb

Neil
Reply to  Bob Tisdale
November 10, 2014 6:16 am

How come this line (from Trenberth’s blog post, referenced above) isn’t being screamed from the rooftops:
“However, the science is not done because we do not have reliable or regional predictions of climate. But we need them. Indeed it is an imperative! So the science is just beginning. Beginning, that is, to face up to the challenge of building a climate information system that tracks the current climate and the agents of change, that initializes models and makes predictions, and that provides useful climate information on many time scales regionally and tailored to many sectoral needs.”
We do not have reliable or regional predictions of climate. Dear god; there is the money quote that sinks the entire alarmist edifice. Because, as an AGW skeptic, I completely agree with Trenberth: understanding the climatic system and building reliable models – even if only short term models – is something we should be doing.
Instead, as we know, the entire argument was hijacked by the Green anti-progress movement.

Brandon Gates
Reply to  Bob Tisdale
November 10, 2014 9:32 am

Bob, you write:

Maybe I need to present that as a separate post.

I’d be interested to read it. Bonus points if you don’t reduce Kevin’s post to a single soundbite. Neil put his finger on a particularly poignant quote … were I you, I’d definitely include that one. I have another unsolicited suggestion — his concluding sentence:

We will adapt to climate change. The question is whether it will be planned or not? How disruptive and how much loss of life will there be because we did not adequately plan for the climate changes that are already occurring?

In risk management, which this is, greater uncertainty is often perceived as greater risk. One of the great ironies of the AGW political debate is that liberals act more like conservatives and vice versa in terms of risk aversion.

RACookPE1978
Editor
Reply to  Bob Tisdale
November 10, 2014 10:45 am

Brandon Gates:
In risk management, which this is, greater uncertainty is often perceived as greater risk. One of the great ironies of the AGW political debate is that liberals act more like conservatives and vice versa in terms of risk aversion.

No.
The entire “theme” of the liberal-enviro “control” behind their “Precautionary Principle” is the ASSUMPTION that
“There WILL BE Catastrophic Future Damage from increasing global temperatures caused by future CO2 levels, therefore we “MUST” limit CO2 levels NOW!”
Rather, the true Precautionary Principle Statement actually is:
“There MIGHT BE future problems due to a POTENTIAL future increase in global average temperatures far in the future that MIGHT be associated with FUTURE increases in CO2 levels, therefore we must BALANCE the known ABSOLUTE and POSITIVE BENEFITS from today’s use of fossil fuel against the ABSOLUTE and totally NEGATIVE PROBLEMS caused by limiting fossil fuel production today and increasing tomorrow’s energy prices.”
However, the way today’s enviro’s apply the Precautionary Principle is to say
” We must absolutely and knowingly kill millions of people today and harm billions in the future order to reduce even the possibility of a much-feared and greatly exaggerated beneficial rise in global average temperatures.”

Reply to  rgbatduke
November 10, 2014 6:48 am

Just as a note, the average of 95+ million surface station records from 1940 to 2013 is 11.39C, most of the 2000’s averaged ~12.2C, with a number of years in the 40’s and 50’s being ~1C warmer, and the years between averaging ~2C cooler (~10C).

Brandon Gates
Reply to  Mi Cro
November 10, 2014 11:37 am

RACookPE1978, you write:

Rather, the true Precautionary Principle Statement must be:
There MIGHT BE future problems due to a POTENTIAL future increase in global average temperatures far in the future that MIGHT be associated with FUTURE increases in CO2 levels, therefore we must BALANCE the known ABSOLUTE and POSITIVE BENEFITS from today’s use of fossil fuel against the ABSOLUTE and totally NEGATIVE PROBLEMS caused by limiting fossil fuel production today and increasing tomorrow’s energy prices.

You and I agree more than you apparently think. I’m a big one for preaching that the cure must not be worse than the disease. The first time I was introduced to Bjørn Lomborg via his documentary film Cool It, I literally stood on the couch and applauded because I sensed in his arguments a pragmatism often completely absent in the environmental left when they speak to mitigation policy. I danced a jig on the same couch when James Hansen and several other brave souls broke rank and came out for accelerating development and deployment of nuclear fission plants as a medium term mitigation solution. Now Lomborg’s solutions may ultimately be impractical … hard to tell because he is not much loved in certain political circles which quite likely has tainted the scientific critiques against him, just as surely as the backlash against the open letter from Hansen et al. is based on irrational fear and the momentum of adhering to ideological tradition.
Two things jump out of my anecdotal examples: economics and epidemiology are far more art than science, especially when compared to physics. Even so, some climatologists are, and have been, quite candid about the levels of their own uncertainty. Trenberth has already been invoked on this thread, to that hopper I add this (somewhat infamous) quote from AR5 lead author Richard Betts in the comment thread in this Bishop Hill blog post: http://www.bishop-hill.net/blog/2014/8/22/its-the-atlantic-wot-dunnit.html:

Everyone* agrees that the greenhouse effect is real, and that CO2 is a greenhouse gas.
Everyone* agrees that CO2 rise is anthropogenic
Everyone** agrees that we can’t predict the long-term response of the climate to ongoing CO2 rise with great accuracy. It could be large, it could be small. We don’t know. The old-style energy balance models got us this far. We can’t be certain of large changes in future, but can’t rule them out either.

Mr. Tisdale has some commentary that can be found here: http://bobtisdale.wordpress.com/2014/08/26/a-lead-author-of-ipcc-ar5-downplays-importance-of-climate-models/
My favorite quote:

As noted earlier, it appears extremely odd that a climate modeler is downplaying the role of—the need for—his products.

I don’t think it’s odd at all for a scientist to be candid about the limitations of his or her own work. In fact, I call it the expected standard to be forthright and rigorous when it comes to discussing uncertainty. When an IPCC lead author says, “We don’t know” my little ears perk up. That tells me that balancing out the risk/reward analysis is not just going to be next to impossible from an economic perspective, it may very well be impossible from a physical science perspective as well. Faced with that kind of problem, it’s my opinion that going first and foremost with what is known to work is best policy. Nukes top my list on that score, and it pisses me off to no end that the bulk of enviros are still adamantly opposed to them.
OTOH, we don’t know that increased levels of CO2 will have net beneficial long-term benefits either. The ASSUMPTION that it will, or that the effects will be effectively neutral is just as egregious in my mind as the assumption of catastrophe if no mitigation happens at all.
The optimal climate for humanity from a risk management perspective is the one we’ve already adapted to: the one we’ve got right now, not 2 degrees warmer OR cooler. The risk of risk management is that you’ll panic and be overly precautionary. I’m not a fan of alarmism any more than I’m a fan of do-nothingism or — speaking of overconfident hubris — its first cousin, “CO2 is plant food and warm is better than cool.”

Reply to  Brandon Gates
November 10, 2014 11:53 am

the one we’ve got right now, not 2 degrees warmer OR cooler

Since you highlighted my earlier response (or just hit the reply button in it), I’ll note we’ve recently (last 75 years) lived in 1C warmer and 2C cooler already.

Nick Stokes
Reply to  rgbatduke
November 10, 2014 12:12 pm

RGB,
“we don’t know global average surface temperature itself within one whole degree C either way”
We don’t. And the real message of what is quoted from GISS and NOAA is, don’t even ask. There is a good reason for that, which I’ll come to.
But we’re used to dealing with situations where we know more about changes to something than we know about the thing itself. Who knows their total worth to within $1000? Trying to pin it down would create definitional problems. Yet we monitor changes to within $1000 sensitively. You don’t know your total wealth, but you know if you are getting wealthier (or not).
GAST is an integral. To get a meaningful integral, you have to take the values you know, estimate the values you don’t know (everywhere else), and add it all up. We can’t do that with absolute temperatures because of all the local variations. Microclimates, altitude variation, N vs S hillsides etc. But when temperature varies, it does so fairly uniformly. On a warm day, warm air blows in, and it affects everything. As a matter of measurement, anomalies are correlated between measuring points, and so it is reasonable to interpolate in between. Anomalies can be integrated.
And they are consistent vertically as well, at least for some distance. Absolute temperature is not. It is taken (on land) 1.5 m above ground, and that matters. The average temperature of a notional membrane 1.5m above the surface is not very meaningful, and of course goes haywire when connected to SST. But anomalies don’t have this sensitivity.
Integrating temperature is important because it is a measure of total heat (coupled with specific heat information). That 1.5m surface doesn’t correspond to any useful volume. But anomalies do. The relation between temperature and conserved heat is of course vital in GCM’s (or any CFD).

k scott denison
Reply to  Nick Stokes
November 10, 2014 3:57 pm

Horrible analogy Nick. Please explain how I know my net worth is going up or down without knowing what the total is.

rgbatduke
Reply to  Nick Stokes
November 10, 2014 9:55 pm

Sure, Nick, but what about drift? What about UHI? What about the fact that you have to model and guestimate and infill and krige and trim outliers and systematize everything you do into an algorithm that supposedly works just as well for 1877 data as it does for 1977 data as it does for 2014 data in spite of all of the many, many changes in just about everything in between.
You cited money as a metaphor. OK, fine. My income is much higher than it was when I was twenty as measured in dollars. But when I was twenty, Duke cost $4000 or so a year in tuition. A nice car cost $2000 to $4000. A house cost perhaps $20,000. Now, Duke costs around $60,000 a year. A nice car costs $20,000 to $50,000. A house costs $200,000. Back then, I could work all summer and make around $2000, roughly half of Duke’s tuition etc. Now my son works all year to make roughly $20,000.
So, is my son’s annual income higher or lower than my summer income in (say) 1974 or 1975? Is it even possible to tell? Would it be any more possible to tell if one monitored the entire “cash anomaly” over the years in between? I’d argue that this is a very difficult question to answer, because even though I cited a couple of points of reference — Duke cost a car a year in 1975 and it still costs a car a year, just a nicer car — that is far from the only referent in a complex economy. How are sales and other taxes now compared to then? What is the relative quality of Duke’s education or a new car now compared to then? How much does it cost to run a house now vs then? How much does borrowed money cost now compared to then?
In just 40 years, the economy and technology have drifted enough that nobody can answer the question. I’m typing on a device that is five or six years old and a bit flaky at this point that contains more storage and more processing power than maybe a billion dollars worth of 1975 computers. In fact, my laptop would have been classified as a munition in 1975. It’s scary to think how much of the eastern US and/or continental US my laptop would be able to outcompute with all of the 1975 resources working at the same time. The television over against the wall — flat panel the size of the “fireplace” it is sitting in — has already depreciated by a factor of four or five in the handful of years since I bought it and would be replaced by far more powerful technology and even bigger size and more features for far less money if I were to replace it tomorrow. But compared to 1975 TVs? Puh-leeze. Music? I can listen to the entire discography of nearly any combination of musicians in the known universe for less money a month than one single album cost in 1975 dollars. Food? Health care? Risk of dying in a war (that’s gotta count for something)?
It is really, really difficult to assess whether or not we are, in fact, getting wealthier even when we can state for certain that our dollar income is rising and falling! And as for my net worth — perhaps you know the market went through a correction of a bit over 10% over the last four weeks, then recovered all of its value and is heading up again. Or not. But if you weren’t watching, and had substantial investments in market securities, you could easily have lost a half year’s income in a week and a half — or more — and never noticed it, any more than you noticed it when the market recovered. Tomorrow the bottom could fall out — the market is arguably increasingly margin-based paper and is ripe for yet another crash. Or not — just try to predict it. You could be as aware as you like about how much you are spending and earning as a salary or for groceries, and utterly fail to account for large scale fluctuations in currency, in the value of money, in the market where a great deal of your hard wealth is invested, in the value of your house that is contingent upon people having enough money to be able to buy it and hence upon the general state of the economy.
To put it bluntly, your example is precisely backwards. It illustrates in no uncertain terms why it is nearly impossible to rationally compare a global temperature anomaly relative to some comparatively modern baseline that was evaluated with modern instrumentation that is comparatively carefully located and still does a terrible job in many well-documented cases of representing even the local temperature variation either absolute or as an “anomaly” and an enormously more sparse grid of thermometers operated with indifferent skill by humans in the 19th century or early 20th century. We do not even know if the circulation patterns in the atmosphere or ocean are comparable then to now. Whole continents were virtually unsampled then. The entire ocean is still poorly sampled now, and it covers 70% of the surface. Comparing the climate now to the climate then is like trying to compare the world economy in 1850 to the world economy now. In 1850 there were still parts of the world where we just didn’t know much about the economy because people that had a word for the economy pretty much had never visited them. Try using patterns of wealth and cash flow from today to model wealth 100 years ago, or 150 years ago. Oops.
Let’s take this one step further. We have a pretty good idea of what the world economy is doing at any given time. We have this idea not because we track and tally the exchange of “money”. In fact, if we took your suggestion and used the amount of money I gain or lose in each day — my net cash “anomaly” — and generalized it to the world, we would make horrendous errors. Why? Because there are so very, very many things that simple bookkeeping ignores or is simply ignorant of. Net return on investment? Taxes? Depreciation? Appreciation? Laundering? Inflation? Deflation? War? Natural Disaster? Invention? New Construction? It’s not a zero sum game, and we simply do not know enough to track all of the flow of money even if it were. People get rich selling pet rocks! Who could have foretold that?
In no possible sense is the net present value of the wealth of the entire world in any possible sense related to the sum of all of the cash flow that created it. Governments just print money. It has no real value. And the money in actual circulation is the constantly varying tip of a veritable iceberg in non-monetary assets.
This indeed is not that different from the problem of climate and a global temperature “anomaly”. In climate science, one has the advantage that it really is a zero sum game — oh, oops, no it’s not, that’s what all of the argument is about. The Earth is an open system. It is an open non-Markovian system. It is an open, non-Markovian system with vast thermal and chemical reservoirs. It is an open, non-Markovian system with vast thermal and chemical reservoirs being fundamentally driven by several sources of “external” energy, the most important of which varies by as much as 100% (from “illuminated” to “not illuminated at all”, both diurnally and annually/regionally), and which varies at the top of the atmosphere by around 7% over the course of a year. We are barely — maybe, possibly — able to start thinking about being able to track even the most elementary part of the global climate bookkeeping — energy coming in relative to energy coming out — and we have absolutely not the slightest clue in the world what this critical balance was or was doing twenty, forty, or a hundred and forty years ago. We lacked credible instrumentation to be able to measure it even locally until perhaps 50 years ago.
Suppose I want to value my house. Well, no problem — Zillow will do it for me, right on line. It does so by taking its real estate and tax listing at the time of its last sale or refinance, averaging the “cost per square foot” for houses recently sold in my neighborhood, and applying a simple “kriging” formula, as it were — it multiplies my square footage by the cost per foot, maybe tweaks it a bit, and there it is. Basically, it tracks the anomaly in the cost of my house, as a function of the related anomalies in other houses sampled in what might or might not be a representative physically contiguous spatial domain.
Go try to borrow money against the Zillow appraisal.
Oh, wait, you mean the bank doesn’t give a rat’s ass about the computed, smoothed, adjusted, sampled anomaly? They insist on sending a qualified expert to actually look at the house and appraise its actual value, because they are at substantial risk if they finance a purchase of the house for $500,000 and it turns out that Zillow didn’t realize that I’ve got termites, there was a small unreported fire a few months ago (which is why I’m selling), and that my neighbors are a sweet little Mafia don on one side and a drug dealer on the other and it is barely worth $100,000. Or, they don’t realize that I finished my attic in the meantime, replaced all of the furnaces, and that the siding and roof are now new as well so it is worth $600,000. And even then it isn’t simple, because there are limits both up and down (mostly up!) due to the values of neighboring homes. Sometimes, anyway.
Sometimes there really is no substitute for computing the real thing. Indeed, sometimes one has to compute the real thing just so you know what the anomaly is the anomaly of, so that you can work off of a precise and accurate baseline. The anomaly is going to strictly add its own uncertainty to the uncertainty of the baseline in the usual way, after all, in any honest statistical appraisal.
If I went to my local high school and measured the changes in height of every single pupil every year, I would learn that the average increase per pupil is maybe 6, maybe 7 centimeters per year. If I sampled a handful of those students — mostly from one or two classrooms because the others are too far to walk to to do the measurements or the students already went home on the bus — I might conclude that the mean height of the students in the school is (say) 1.5 meters (give or take 0.3 meters, big error sloppy sample).
So, should I go to the school board and say “Look, we’ve got a disaster on our hands. Students in school are a meter and a half tall now, but they are growing 7 centimeters a year! In a decade, they will be well over two meters, and by 2100 they will be approaching seven or eight meters in height! Our buildings won’t hold them! We won’t be able to feed them! We need to pass a law right away forcing all students to undergo a mandatory amputation every year or Catastrophic Average Growing (like a) Weed theory says that they, like whales, won’t be able to move!”?
I don’t think so.
And note well! Over the century, it is entirely possible that the mean height of the students in the school will, in fact, increase, as diet and genetics work their magic on a healthier, wealthier, better fed world. You would never be able to detect this by measuring an anomaly, but you sure as heck can detect it by the simplest and most direct means available. Take all of the students in the school. Measure their height. Divide the result by the number of students in the school.
You could even do a decent job (for a large enough school) just sampling the heights of the students, provided that you rigorously use a blind, random selection process to pick a sample of students and refuse to reject measurements just because some of them appear to be “outliers” according to your preconceived biases of what student height should reasonably be. Yes, those are called “members of the basketball team” and they count as people going to school along with everybody else. Or maybe they are just short people. Short people happen too.
I’m not saying that tracking anomalies is completely without value. I’m saying that it is a process fraught with statistical peril, especially when one asserts that conclusions based on an anomaly smaller than 1% apply to a mean that you cannot measure to within 1%, and assert that the algorithm used to compute the anomaly — relative to some selected, but unknown and completely arbitrary baseline temperature that you just define within the model used to compute the anomaly in the first place — can be applied without fear to data taken with different methods and instrumentation in different locations by different people in a different century.
That’s just silly. You can do it, but do it without fear? In particular, without the fear that somebody will figure out that there is a rather high probability that the result is pure bullshit? Or do it without openly and honestly acknowledging the serious limitations associated with publishing an anomaly of a number that you cannot consistently compute (given that we cannot go into the past and make our past data somehow magically consistent with modern data) or compare to the numerical value of the number it is supposed to be the anomaly of?
rgb

Reply to  Nick Stokes
November 11, 2014 2:35 am

Nick,
Surely, almost by definition, the error bars (as opposed to statistical variation bars) should encompass all of the recognised estimates of global mean temperature; and these same, wide error bars should also apply to anomaly temperatures.
Thus we have an uncertainty over the time spans shown of some +/- 3 deg C. At least.

Reply to  Nick Stokes
November 11, 2014 1:31 pm

Geoff,
What are these “recognised estimates of global mean temperature”? Again, what NOAA and GISS are saying, loud and clear, is that the temperature record does not provide such an estimate. Sites weren’t chosen to be representative of topography, for example. Even the 1997 paper that is said to claim 16.5°C in fact doesn’t. It just has, in parenthesis, a definition:
“(Normal is defined by the mean temperature, 61.7 degrees F (16.5 degrees C), for the 30 years 1961-90)”
Mean would be the average of those readings. They are not claiming a measure of the actual average temperature of the Earth.
I gave an example above of how we can manage our finances accurately with only a fuzzy idea of our total wealth. There are plenty of scientific parallels. Fahrenheit could use thermometers with no concept of absolute temperature at all. Many thermodynamic quantities are expressed as differences. You’ll see lots of data about enthalpy changes due to reaction, and enthalpies expressed relative to some standard state (analogous to a base period). It’s harder to find an accurate absolute enthalpy for, say, a kilo of water at STP. That doesn’t affect normal thermo.

mwh
November 10, 2014 4:35 am

I agree with you entirely Bob good article. Any model trying to pick a way through all the variables and feedbacks of global climate is going to fail fairly quickly after inception no matter what IMO.
However, seeing as virtually all of us here would completely dismiss RCP8.5 as a ridiculous scenario, isn’t it a bit disingenuous to use this data set to discredit all models? To me it does make the case appear somewhat cherry picked (I did read the para explaining why, but my question still stands).

johnmarshall
November 10, 2014 4:53 am

Thanks Bob, lots of work there, look forward to the book.
I must agree with rgbatduke. All the models show is that their basis is wrong, time for a total rethink.

Brandon Gates
Reply to  Bob Tisdale
November 10, 2014 9:54 am

Bob,
At the risk of publishing spoilers before the premiere, may I ask if your list includes: Some climate models leak energy, thereby violating the 1st law of thermodynamics. To balance energy at TOA, models are often tuned by arbitrarily tweaking cloud parameters and/or albedo? Or worse, by tweaking unobservable parameters? Those and many others here: http://www.mpimet.mpg.de/fileadmin/staff/klockedaniel/Mauritsen_tuning_6.pdf
An eminently (mis)quotable document if you ask me, but if I were to cherry pick a soundbite from it, I’d start with the first sentence of the abstract:

During a development stage global climate models have their properties adjusted or tuned in various ways to best match the known state of the Earth’s climate system. These desired properties are observables, such as the radiation balance at the top of the atmosphere, the global mean temperature, sea ice, clouds and wind fields.

I’d think that should put a few rumors here to rest. Hope springs eternal.

jjs
November 10, 2014 4:58 am

Starting to get cold again this year in the north, much earlier than in the past. Arctic is on an upward trend. http://arctic-roos.org/observations/satellite-data/sea-ice/observation_images/ssmi1_ice_ext.png

November 10, 2014 4:59 am

Very nicely done!
I continue to be fascinated by the “ability” of climate science to begin from “data” with whole degree uncertainties and produce analyses to two or even three decimal places.
I also continue to be fascinated by the apparent confidence of climate science that sensors, once placed, do not change calibration sufficiently over time to impact anomaly calculations reported to two or even three decimal places.

Hugh
Reply to  firetoice2014
November 10, 2014 9:47 am

“I also continue to be fascinated by the apparent confidence of climate science that sensors, once placed, do not change calibration sufficiently over time to impact anomaly calculations reported to two or even three decimal places.”
You are kidding. They admit that the calibration is erroneous and must be homogenised upwards recently and downwards back in time. Systematically.

November 10, 2014 5:24 am

Digital micrometers read to 1/10,000″ but no one believes that last place. But in the world of climate science and the power of large numbers a millionth of an inch accuracy would probably be reported.

george e. smith
Reply to  Steve Case
November 10, 2014 7:10 am

Well you have to differentiate between the accuracy of the micrometer, and the measurability of the part. If the object being measured is not plane parallel to better than 0.0001″ then you can’t expect a micrometer to measure it.

Muzz
November 10, 2014 5:28 am

There’s something anomalous about anomalies and all this carry on about going to the umpteenth degree over a fraction of a degree. Exactly by how much is this AGW increasing Earth’s energy level? That surely has got to be the real measure of the effect of AGW on the climate.

Brandon Gates
Reply to  Muzz
November 10, 2014 10:35 am

Muzz, you wrote:

Exactly by how much is this AGW increasing Earth’s energy level? That surely has got to be the real measure of the effect of AGW on the climate.

Well yes and no. Yes because rate of change in total solar energy retained in the system is the best indicator of changing radiative forcings, albeit with a lot of dips, dives and — probably most significant — time lags due to thermal inertia. Actually measuring it “exactly” is the bugaboo. If you think — as many on this thread obviously do — that spatial distribution of surface temperature stations results in an impossible interpolation problem wrt atmospheric surface temperature, going to three dimensions to get an instantaneous scalar value expressed in joules becomes even more impossible … if there is such a concept. Not that it isn’t being attempted, mind.
No, because change in absolute energy retained does not tie directly to climate effects. Where, when and how energy gets shuffled around via circulation affects temperatures, which ultimately affect phase changes. Weather is nothing if not circulation and phase change, and climate is weather averaged over time … right? Right … else the thermometer on your porch, thermostat — or your oven — would read in joules.
I’ve long thought temperature anomalies were the wrong metric to be using for a litany of scientific and political reasons. Thing is, there is an even larger litany of practical reasons why energy anomalies haven’t been used as much, and probably will not gain much traction in policy and media circles as the predominant published indicator. The big exception is OHC, which the consensus community prefers to publish for political reasons that are just as obvious as why skeptics prefer to speak in terms of OT. Unfortunately, the scientific merits of why to choose one over the other often get lost in the shuffle, given that nuance and politics are generally not compatible with each other.

Bernd Palmer
November 10, 2014 5:41 am

Again, Bob, thanks for your enlightening and easy to understand explanations.

Frank K.
November 10, 2014 6:06 am

Thank you, Thank you, Bob! I am glad this topic has been investigated as it really displays one the many problems with climate models. A couple of points related to your post:
(1) “Indeed, we have shown (Hansen and Lebedeff, 1987) that temperature anomalies are strongly correlated out to distances of the order of 1000 km.”
No, NO, NO!!! I invite everyone to read the original Hansen and Lebedeff paper (available at the NASA GISS site). They only show “strong correlation” for limited parts of the Earth, and even then there is a tremendous amount of scatter in their data. I once asked the question as to what constitutes a “strong correlation” numerically. 0.9, 0.8, 0.5??? No one seemed to know.
(2) “Q: What do I do if I need absolute SATs, not anomalies?”
“A: In 99.9% of the cases you’ll find that anomalies are exactly what you need, not absolute temperatures.”
This response from GISS is puzzling since radiation heat transfer, which is the driving force for global warming, depends on ABSOLUTE TEMPERATURES!! Please repeat this to yourself over and over. And this is temperature in degrees Kelvin and NOT Celsius. Temperature anomalies are meaningless for the actual radiation physics embodied in the models. (Of course the big uncertainty in radiation modeling is the data required to model absorption, scattering, emission, reflection, and diffraction processes upon which the radiation models depend).
Anomalies are only useful for detecting trends, and say nothing about the underlying physics.
(3) Bob’s Figure 6 is an excellent illustration of the fact that climate modeling is an INITIAL value problem in mathematical physics and NOT a BOUNDARY value problem, as is often erroneously stated. It is truly appalling that the modelers can’t even start their models with the same (or even superficially similar) initial conditions (i.e. initial surface temperatures). How can they even hope to compare the results on a uniform basis?

rd50
Reply to  Frank K.
November 10, 2014 6:49 am

Thank you.

Rud Istvan
Reply to  Frank K.
November 10, 2014 8:28 am

Judith Curry’s testimony to the APS inquiry led by Dr. Koonin made an additional point about the models and actual temperature rather than anomalies. Most of the important feedbacks depend crucially on actual (absolute) temperature. Evaporation, precipitation, clouds, ice in clouds, … So the climate models cannot possibly get future temperature right because they don’t reflect the underlying temperature dependent processes. This is best seen in their tuning-period failings (hindcasts) concerning clouds and precipitation.

Brandon Gates
Reply to  Frank K.
November 10, 2014 9:09 am

Frank K.,
(1) From page 13,347 of H&L (1987): “The 1200-km limit is the distance at which the average correlation coefficient of temperature variations falls to 0.5 at middle and high latitudes and 0.33 at low latitudes.” GISS says about this paper: http://pubs.giss.nasa.gov/abs/ha00700d.html “The temperature changes at mid- and high latitude stations separated by less than 1000 km are shown to be highly correlated; at low latitudes the correlation falls off more rapidly with distance for nearby stations.” So, “strongly” or “highly” correlated is > 0.5 … on average, as there is a great deal of scatter, increasing as one moves from north to south since station density falls off at lower latitudes.
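For anyone who would rather compute the number than argue about the adjective, the check is easy enough to sketch in Python (a rough sketch only; the great_circle_km and pairwise_correlation helpers and the station inputs are mine, purely for illustration, not anything pulled from GISS or GHCN):

import numpy as np

def great_circle_km(lat1, lon1, lat2, lon2):
    # Haversine distance between two points, in kilometers.
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 6371.0 * 2 * np.arcsin(np.sqrt(a))

def pairwise_correlation(stations):
    # stations: list of (lat, lon, annual_anomaly_array), all arrays the same length.
    # Returns (separation_km, correlation) for every station pair.
    results = []
    for i in range(len(stations)):
        for j in range(i + 1, len(stations)):
            lat1, lon1, a1 = stations[i]
            lat2, lon2, a2 = stations[j]
            d = great_circle_km(lat1, lon1, lat2, lon2)
            results.append((d, np.corrcoef(a1, a2)[0, 1]))
    return results

Bin those (distance, r) pairs by distance band and you get your own version of the H&L scatter, spread and all.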
(2) You make a good point here about modeling the underlying physics, and indeed GCMs spit out absolute temperatures in K. Not only are anomalies useful for detecting trends, they’re pretty much essential given the nature of the sampled data. In a perfect world, no weather station would ever have been moved; all would sit at the same altitude, away from population centers in the middle of grassy fields instead of parking lots, never drift out of calibration, report every hour of every day of every year, and be geographically equidistant across the entire surface. Unfortunately, the planet is not so homogeneous, so we have to do our best to homogenize the data ourselves, which includes subtracting each station’s own baseline average from its record and reporting the aggregate as a deviation relative to that zeroed reference period. Can you think of any other reasonable way to do it?
(3) You’re appalled at a strawman: GCMs are fully capable of being set to initial conditions based on observation, and constantly are for experimental purposes. For current production output (read: IPCC ARs) they deliberately are not set to initial conditions. By design, as in ON PURPOSE — which is contentious. Judith Curry quite nicely sums up a (somewhat adversarial) debate between Lennart Bengtsson (arguing the initial value problem approach) and Andy Lacis (arguing the boundary value problem approach): http://judithcurry.com/2014/05/26/the-heart-of-the-climate-dynamics-debate/
Bengtsson’s arguments more fully laid out here: http://judithcurry.com/2014/05/24/are-climate-scientists-being-forced-to-toe-the-line/
And Lacis going on record three years ago here: http://judithcurry.com/2011/10/09/atmospheric-co2-the-greenhouse-thermostat/
Also going back to 2011, Gavin Schmidt weighs in: http://www.realclimate.org/index.php/archives/2011/08/cmip5-simulations/ The money quote:

The ‘decadal prediction’ simulations are mostly being run with standard GCMs (see the article by Doblas-Reyes et al, p8). The different groups are trying multiple methods to initialise their ocean circulations and heat content at specific points in the past and are then seeing if they are able to better predict the actual course of events. This is very different from standard climate modelling where no attempt is made to synchronise modes of internal variability with the real world. The hope is that one can reduce the initial condition uncertainty for predictions in some useful way, though this has yet to be demonstrated. Early attempts to do this have had mixed results, and from what I’ve seen of the preliminary results in the CMIP5 runs, significant problems remain. This is one area to watch carefully though.

If Gavin has written a follow-up (I’d think he probably has) I haven’t read it. But his post raises a number of interesting questions about GCM capabilities, the most important of which (in my mind) is that expecting a GCM to act like a probabilistic weather forecast model based on initial values may be attempting to fit a square peg into a round hole. IOW, such an approach is applying them to a use case outside their design parameters. A little thinking about the definition of climate may shed some light. From the first link, Curry quoting Bengtsson:

Climate is nothing but the sum of all weather events during some representative period of time. The length of this period cannot be strictly specified, but ought to encompass at least 100 years. Nonetheless, for practical purposes meteorologists have used 30 years.

A statement which I think should be uncontroversial in this debate. The bifurcation begins in the penultimate sentence of Bengtsson’s quote:

Because of chaos theory it is practically impossible to make climate forecasts, since weather cannot be predicted more than one or several weeks.

Which is essentially your argument if I’m reading between the lines correctly. It’s also a common AGW skeptic refrain, so perhaps my biases are showing. Regardless, based on my readings of primary literature, I disagree with him — among many things, weather forecasts are concerned with timing down to daily and even hourly resolution. Predicting the chance of inches of rain on July 4, 2050 in Toronto is something that will never happen. Demanding anything close to that time resolution from a GCM that number of years out is an impossible standard and is setting up a condition where 100% failure is guaranteed. I rather suspect that’s the point of invoking weather forecasts in this debate. What Bengtsson leaves out of his reference to chaos theory is the concept of attractors, so not only do I disagree with his argument, I don’t trust that he is making it in good faith. What are the Navier–Stokes equations applied to incompressible fluids if not a classic boundary value problem in physics? You’d think such an accomplished fluid dynamicist would know this.

Frank K.
Reply to  Brandon Gates
November 10, 2014 2:11 pm

Thanks for your reply, Brandon.
(1) So a correlation > 0.5 is “strong”? Really??? Hmmm. That certainly is questionable in Hansen’s paper. Of course, they can assume anything they want for their algorithm, but it doesn’t make it correct. In this case, I believe it is grossly incorrect.
(2) “Can you think of any other reasonable way to do it?”
Yes, don’t homogenize! Anything. Present the data as it is, warts and all (outside of corrections for changed locations or obvious data entry errors). If you wish to integrate temperatures regionally, that’s fine as long as you clearly state your assumptions. However, developing a result based on your preferred approach does not necessarily mean that it has any meaningful connection to the underlying physics.
The attempt to simplistically characterize GMST through questionable homogenization techniques is misleading at best, and totally incorrect at worst. Anomalies say NOTHING about the complex thermodynamic/heat transfer mechanisms that govern our atmosphere and climate, but are merely a way of attempting to characterize temperature “trends”. Unfortunately, I’m sure you’ll agree that these trends have been abused by the IPCC, government officials, and activists as a means of effecting political change. Fortunately, their efforts are failing.
(3) “What are the Navier–Stokes equations applied to incompressible fluids if not a classic boundary value problem in physics? You’d think such an accomplished fluid dynamicist would know this.”
What are you talking about here? First, please note that the “Navier-Stokes equations” (which, by the way, are a specific set of partial differential equations and in no way resemble the equations GCMs solve) have time derivative terms. The ** steady state ** NS equations are classified as elliptic PDEs and hence only depend on boundary conditions. The unsteady NS equations constitute a Parabolic/Elliptic system and in many cases the solutions depend ** strongly ** on the initial conditions. The only way you can classify it as a “boundary value” problem is if there is in fact a steady or quasi-steady (time periodic) solution towards which the unsteady solution asymptotes in the limit as time –> infinity. With climate, both the equations AND boundary conditions are unsteady and highly non-linear. For some parameters (such as the orbit of the Earth, spin on axis, extent of the atmosphere, solar radiation hitting the Earth, etc.) the boundary conditions are regular in time and impose their cycles on the solutions (e.g. the seasons). However, others are not – for example the evolution of surface temperature on the land and oceans. And the non-linearity of the equations makes predictions of climate evolution very difficult.
And again, for the thousandth time, the equations that climate models solve are formulated as initial value problems!! Why people like Gavin Schmidt wish to misinform people about this point is anyone’s guess, but they are clearly wrong.
Finally, there is nothing wrong with running models and attempting to gain some understanding about the climate system as long as the models are clearly documented and the boundary/initial conditions (and forcings) made known. In many cases, however, these critical details are glossed over and the scientists seem to want to draw unwarranted conclusions from their work in an effort to bolster their pet theories. We could do with many fewer models, and put our money into developing models which have the highest probability for success. Unfortunately, modern climate science is too inefficient and politically connected for this to ever happen.

November 10, 2014 6:11 am

This is a very interesting comparison of model calculations with data on the annual basis. What does a comparison on a monthly basis look like? The global temperature varies by about 4 °C during a year. How well is this reproduced by the models?

Brandon Gates
Reply to  Paul Berberich
November 10, 2014 6:20 pm

Frank K.,
(1) Correlation strength is relative to the problem domain. One may really want 0.9 or better, but if it just isn’t there all the wishing in the world won’t make it so. We can only use the past data we have.
(2) For the sake of absolute transparency, I personally think it would be a good idea for GISS, NCDC, BEST, CRU, etc., to periodically publish what their anomaly products look like with raw vs. adjusted data. But GMST trend indices don’t, won’t ever, make sense without at least calculating anomalies from a baseline period or limiting the number of stations to those which have contiguous records throughout the period in question and doing a heck of a lot (more) area weighting and interpolation than is already being done.
I do agree with you that anomalies do nothing for the underlying physics. That you’re picking on ST anomalies as the big thing being abused by the IPCC, enviro-leaning pols and climate alarmists seems strange to me. For one thing, I’ve got my own laundry list of abused memes based on scientifically dubious (if not flat out wrong) premises. Temperature anomaly just doesn’t rank; it’s a yardstick, nothing more.
(3) The key to your response on N-S is “towards which the unsteady solution asymptotes in the limit as time –> infinity”. That, in a nutshell, is the crux of this discussion: how big must t get before you can reasonably integrate out a term or ten(thousand)? One year is too small a t … seasonal variations are not much more than semi-predictable weather. IOW, if you live in a temperate zone in the NH, it’s a safe bet that January is going to be cooler on average than July for the foreseeable future. But for the past 10k years, climate has been relatively stable as compared to, oh, the Younger Dryas between 12,800 and 11,500 years ago … which makes the LIA look like a golf divot next to an empty swimming pool. If you’re looking for serious non-linearities in climate, look no further than (geologically) rapid ice melt and note that 1,300 years is considered an eye blink. Therein I think lies a good argument for not radically perturbing the system … but only if your planning horizon goes much beyond 50 years. If not, well, keep on keeping on.
Why do you think you know better than Gavin Schmidt how GCMs work? Have you read his code?

Reply to  Brandon Gates
November 10, 2014 7:05 pm

Brandon,
” But GMST trend indices don’t, won’t ever, make sense without at least calculating anomalies from a baseline period or limiting the number of stations to those which have contiguous records throughout the period in question ”
You don’t have to do either (though you do have to limit it to stations that have most of a complete year); you can measure the change of a station against the station itself. This is a meaningful anomaly.
“seasonal variations are not much more than semi-predictable weather. ”
I disagree, you can use it to calculate the rate of change based on the changing length of day.

Rob
November 10, 2014 6:26 am

Great information. There is no way that SST data can be even approximately accurate
for the 1880’s. It is my firm belief that the BEST data (using land station data) is far more accurate than CRU. Hence, the 1880’s (not 1910) were almost certainly the coldest decade. I also prefer the BEST methodology over CRU, NOAA, and GISS.

HankHenry
November 10, 2014 6:29 am

Surface Air Temperature or even air and sea surface temperature are beside the point. The earth’s true surface temperature (in absolute terms) resides in the ocean deep. Kevin Trenberth is correct to wonder if there might be missing heat in the oceans. Nobody knows one way or the other because the Argo data is so recent.
It’s depressing and reminds me of Macbeth’s soliloquy.
Climate Science … is but a walking shadow, a poor player
That struts and frets his hour upon the stage… It is a tale
Told by an idiot, full of sound and fury,
Signifying nothing.
The earth’s true surface temperature is not being studied when we discuss air temperatures.

Dawtgtomis
Reply to  HankHenry
November 10, 2014 7:10 am

Bob, I commend you for presenting a complicated issue in a way that can be comprehended by climate-concerned lay people like myself. Part of why I encourage others to visit this site is the comparative lack of the overly scientific “mumbo-jumbo” I have found on IPCC-sponsored sites, which I suspect is used to make those of average intellect feel that if they don’t possess the degree of education the author has, then the author has automatic authority and is therefore correct. After baffling us with ultra-technical double-speak, they simplify to the masses in the form of ultimatum, keeping with the acronym I coined yesterday, KISASS (Keep It Simple And Sufficiently Scary). I find it is much easier to actually learn something here and hold Mr. Watts in high regard for his efforts.
I find it ironic that their “proof of the hockey stick” may actually be seen in the rear view mirror as a cryptic symbol of ice that’s to come.

Dawtgtomis
Reply to  Dawtgtomis
November 10, 2014 7:16 am

I’d also like to say that if I want to research a ‘myth’ I’ll ask Snopes rather than the IPCC.

November 10, 2014 6:47 am

Typo before Figure 12?

And, of course, the recent hiatus has caused there to be another decrease in the temperature difference.

Should that be “another increase”?
But thanks for the presentation and sorry about being a pedant.

Reply to  Bob Tisdale
November 10, 2014 7:19 am

Sorry, my mistake.
Now I need to reread again and work out exactly when the models deviate most from the observations.

November 10, 2014 6:50 am

All the tampered data gets warmer and warmer and the boots on the ground keep shoveling deeper and deeper on the sidewalk to where the driveway used to be….earlier and earlier….
and none of it means anything to those outside the reality distortion field….

Dawtgtomis
Reply to  Larry Butler
November 10, 2014 7:22 am

In the “Green Reich” all are expected to accept that getting colder is part of our punishment for causing it to be warmer…

November 10, 2014 6:56 am

Oh, and GISS has looked at model-derived surface temps vs measured temps at the regional level and found differences as large as 30C, but since the global average is pretty close, they think they are doing pretty good.
http://icp.giss.nasa.gov/research/ppa/2001/mconk/
http://icp.giss.nasa.gov/research/ppa/2002/mcgraw/

george e. smith
November 10, 2014 7:02 am

So in the actual time period of 1951-1980, what was the best estimate of the mean global absolute Temperature ??
In other words, what did THEY report in 1951-1980, not what they have reconstructed recently??

Reply to  george e. smith
November 10, 2014 7:50 am

George,
25 million surface records from 1951-1980 average to 10.64C

george e. smith
Reply to  Mi Cro
November 10, 2014 10:36 am

Well Mi Cro, those 25 mega surface records are all pre-1980, aren’t they? Now 1980 is about the start of the oceanic surface buoy record, which in Jan. 2001 was reported by John Christy et al. to show that oceanic near surface water Temperatures (-1 m) and oceanic near surface air Temperatures (+3 m) are NOT equal, and more importantly are NOT correlated.
Ergo, those 25 million pre 1980 records are sheer junk, rubbish, poppycock, balderdash, whatever.
Are all 25 million of those numbers actually real absolute Temperature measurements, of the complete earth ??
I would think that 25 million samples over the complete earth surface, still barely reaches the proper Nyquist spatial sampling density required, for just a single “measurement”.
Now that is just my opinion of course,
Well, a quickie calculation gives me a circle of 2.8 miles radius on the earth surface, and all those temperatures would have to be taken simultaneously. If you took four measurements per day, allowing for at least a second harmonic component in the diurnal Temperature cycle, then the circle would have to be 5.6 miles radius. I don’t believe in 1200 km sampling radius numbers.

Reply to  george e. smith
November 10, 2014 11:35 am

George,
I think you’d mentioned 1951-1980, so I created an average using just those values.
So, to the rest of your comment. I use NCDC’s Global Summary of Days data set. These are what they publish as the readings from the surface stations they include (which was a good match to the stations CRU used). It isn’t really meant to be a surface “average”; it is an average of what was actually measured, as opposed to the “trash” that’s published as surface temp averages. GSoD averages are made from the min and max temps that are recorded on that day at a particular station. The GSoD data has a small number of buoys included, not enough to alter the results much (IMO, I’ve not done any calculations).
Giving each station a 100 mile diameter of measurement, together they add up to a very small % of the surface, iirc ~10% of the land area, and a couple % of the total area.
Like many have said before, this is the surface data we have; mostly what I was looking at is day-over-day change at each station to calculate an anomaly (overnight cooling compared to prior day’s warming). From looking at this, the temp for most of the actual stations hasn’t changed a lot (as I noted above, from the 40’s to 2013 the change is basically -0.00035F). I think this is important because it shows the warming in published values comes from the processing of the data, filling in areas that have not been measured (among other processing), as the measurements do not show such a change.
Even the baseline temp published does not match what’s actually been measured. That was why I mentioned them.

Reply to  Mi Cro
November 10, 2014 12:37 pm

averaging will give you the wrong answer.

Reply to  Steven Mosher
November 10, 2014 2:03 pm

averaging will give you the wrong answer

No, it gives you an average of the measurements.
But you’re right, there’s no accounting for stations dropping in and out. Which is why I use the station’s prior day’s reading to create the anomaly that I use, but that still leaves CRU, GISS, and dare I say BEST, who make up a bunch of data to fill in all the missing spots to create a baseline, to subtract from the station reading to create an anomaly.

Reply to  Mi Cro
November 10, 2014 3:45 pm

You know Steven, after thinking about this for a few minutes, what’s being done (by CRU, GISS and BEST) isn’t much different than using absolute values, it’s just a different baseline figure. Or if you calculate a latitude / altitude based baseline, you’re just stacking uncertainty on top of uncertainty.
The best way of generating an anomaly with the least amount of uncertainty is exactly what I’m doing. The only complaint you can have is that I don’t homogenize and interpolate the data, which BTW is why I get a different answer: I get the measured anomaly for the stations we have, not the ones we wish we had.

Frank
November 10, 2014 7:05 am

Surface temperature could mean different things to modelers and meteorologists. Modelers need to know the temperature of the earth’s actual surface to determine how much infrared radiation it emits. The sand on a beach on a calm summer day may be too hot to walk on in bare feet (150 degF/65 degC), and modelers should be using temperatures this high to calculate surface OLR from desert regions. Meteorologists measure temperature at convenient locations that are often less useful for modelers: 2 m above the surface on land, marine air temperature, a variety of methods for measuring ocean temperatures (including BEST’s below sea ice temperature), etc. Measuring SSTs by remote sensing from space is the methodology that comes closest to measuring the “surface temperature” a modeler would actually use in a calculation.

MikeB
November 10, 2014 7:27 am

When we talk about global warming we are talking about the temperature on the SURFACE of the earth, because that is where we live. So, how do we measure the absolute temperature of the surface of the earth? There is no practical way to do that. Although there are a large number of weather stations dotted around the globe do they provide a representative sample of the whole surface? Some stations may be on high mountains, others in valleys or local microclimates.
What is more, the station readings have not been historically cross-calibrated with each other. Although we now have certain standards for the siting and sheltering of thermometers, a famous study found that many U.S. temperature stations are located near buildings, in parking lots, or close to heat sources and so do not comply with the WMO requirements (and if a developed country like the USA doesn’t comply, what chance the rest of the world?).
It is difficult (impossible) therefore to determine the absolute temperature of the earth accurately with any confidence. We could say that it is probably about 14 or 15 deg.C (my 1905 encyclopaedia puts it at 45 deg.F). If you really want absolute temperatures then just add the anomalies to 14 or 15 degrees. It doesn’t change the trends.
However, we have a much better chance of determining whether temperatures are increasing or not, by comparing measurements with the corresponding ones in the past, hence anomalies. That is to say, my thermometer may not be accurately calibrated, but I can still tell if it is getting warmer or colder. Absolute temperatures are not important for this purpose and are something of a red herring.

Reply to  MikeB
November 10, 2014 7:57 am

However, we have a much better chance of determining whether temperatures are increasing or not, by comparing measurements with the corresponding ones in the past, hence anomalies. That is to say, my thermometer may not be accurately calibrated, but I can still tell if it is getting warmer or colder.

The problem with the way they are calculating the anomaly is they are still comparing the surface measurement to some arbitrary average (say 1951-1980), so they don’t eliminate the issue with using absolutes, because they still use absolutes.
This is why I look at the change against the station’s prior day’s value; this has to be the most accurate anomaly available. Average temp anomaly calculated this way for 1940 to 2013 is -0.00035F based on 95 million measurements.
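For anyone who wants to see what I mean, the calculation boils down to something like this (a minimal sketch of the idea only; the dictionary layout, the function name and the gap handling here are illustrative, not my actual GSoD code):

import numpy as np

def day_over_day_anomaly(daily_temps_by_station):
    # daily_temps_by_station: dict of station_id -> array of daily mean temps,
    # with np.nan wherever the station did not report.
    diffs = []
    for temps in daily_temps_by_station.values():
        temps = np.asarray(temps, dtype=float)
        d = np.diff(temps)             # today minus yesterday, per station
        diffs.append(d[~np.isnan(d)])  # drop pairs that span a missing day
    all_diffs = np.concatenate(diffs)
    return all_diffs.mean(), all_diffs.size  # mean change and sample count

Each station is only ever compared against itself, so there is no baseline period and no infilling anywhere in it.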

Reply to  Mi Cro
November 11, 2014 6:33 pm

OMG Mi Cro, it took me a bit to figure out what you are doing but I think it is brilliant.

Reply to  Genghis
November 11, 2014 7:16 pm

Of course it is 🙂
But the real question is why hasn’t any of the big brained climate scientists figured it out?

ripshin
Editor
November 10, 2014 7:30 am

I understand the need / desire to use an “anomaly” to describe warming or cooling. Since it’s a comparative analysis, you need the delta for comparison. What I don’t understand, and was noted above by another commenter, is how one can achieve an anomaly of such precision from base data that has several times that in variability. Wouldn’t this, in any rational analysis, indicate that the so-called anomaly is just noise?
rip

Dodgy Geezer
November 10, 2014 7:31 am

It would be interesting to see EARLIER model divergencies.
The impression I get is that the models are always tuned to show that the temperature will be pretty flat NOW (whenever NOW is), but will suddenly soar upwards next year. Or in about 5 years time. Where the elapsed time depends on how long their funding is going to last.
If earlier models are ALL much further out of sync with current temps, that would validate my feeling…

November 10, 2014 7:42 am

I can’t understand why anyone here takes surface measurements seriously considering they are inaccurate, non-global, and too often “adjusted” to get the ‘right curve’ on a chart by people who
get government grants only if they claim a global warming catastrophe is ahead.
.
I would like someone here to direct me to the raw data for 1880, since it is shown on a chart.
If no one has ever seen the raw data, I wonder why it’s included on the chart?
.
I’d like to know the number of thermometers used in the 1800’s, where they were located, and how accurate thermometers of that era were compared with today’s thermometers. Does anyone know?
.
I’ve read that thermometers from the 1800’s that still exist today tend to read low, which would exaggerate warming if measurements start in the 1800’s. Is that true?
.
I also can’t understand why anyone here would put data on charts showing months and years after November 10, 2014, as those are not data at all — they are computer game wild guesses — at the very least they should be shown as faint dotted lines, as projections often are on a chart.
.
Any focus here on inaccurate surface data, and projections of the future climate, is just falling into the trap set by the warmunists — surface data and projections do not deserve the respect they inadvertently get when they are placed on a chart by people here who should know their lack of accuracy.
.
This is a website where I enjoy seeing useful satellite and weather balloon data. I want to see the results of climate proxy studies. I want to see the lack of logic and accurate data coming from the warmunists ridiculed and debunked.
.
I’ve read enough articles and reports about US weather station siting and “adjustments” to surface data to realize those surface data are equivalent to having no data at all, or worse, since the “adjustments” seem to all have a global warming bias.
.
After reading Saul Alinsky’s two books, and his Playboy interview, this year, I think I understand why the warmunists refuse to debate people who disagree — when you try to debate someone, you give them credibility — implying their ideas are worth debating. That’s why warmunists prefer ridicule and character attacks.
.
If you put climate projections for future decades on a chart in 2014, that means the projections are being taken seriously, and they should not be. They are climate astrology!
.
When I send a friend to this website and he reads this report (or just looks at the charts!) filled with charts showing surface data, and a few charts that look like the infamous “Hockey Stick Chart” , I wonder why? … He could have seen the same charts at a pro-global warming website.
.

Nick Boyce
November 10, 2014 8:06 am

(1) According to NOAA (1998), 16.5°C is the average global surface temperature of the earth between 1961 and 1990.
http://www.ncdc.noaa.gov/oa/climate/research/1997/climate97.html
(2) According to Jones et al (1999), 14°C is the average global surface temperature of the earth between 1961 and 1990. This Jones is Prof Phil Jones who created HadCruts 3 and 4 etc.
http://www.st-andrews.ac.uk/%7Erjsw/papers/Jones-etal-1999.pdf
(3) Nobody has even come close to measuring the earth’s global surface temperature during the instrumental temperature record.
http://data.giss.nasa.gov/gistemp/abs_temp.html
(4) Therefore, it is impossible to verify or refute the claims (1) and (2) above. Therefore, for all we know, the average global surface temperature of the earth between 1961 and 1990 could easily be as high as 16.5°C, or as low as 14°C, or anywhere in between.
(5) According to the July 2014 edition of HadCrut4 (published by the UK Met. Office), of the all the calendar years from 1850 to 2013, the calendar year 1911 alone has the lowest annual global surface air temperature, [Q-0.554]°C, and the calendar year 2010 alone has the highest annual global surface air temperature, [Q+0.547]°C, where Q°C represents the average global surface air temperature of the earth during 1961 to 1990. NB any of HadCrut4’s global surface temperature “anomalies” might change when UKMO up-dates HadCrut4 each month.
http://www.metoffice.gov.uk/hadobs/hadcrut4/
(6) Therefore, for all we know, the average global surface temperature of the earth during 1911 could be as high as (16.5-0.554=15.946)°C, or as low as (14-0.554=13.446)°C, or anywhere in between, and the average global surface temperature of the earth during 2010 could be as high as (16.5+0.547=17.047)°C, or as low as (14+0.547=14.547)°C, or anywhere in between, in which case, the average global surface temperature of the earth during 1911 could be as high as 15.946°C, and the average global surface temperature of the earth during 2010 could be as low as 14.547°C, and consequently, nobody has the foggiest idea of which years have the highest and lowest average global surface temperatures since 1850.
(7) More of this tedious arithmetic at:-
http://the-great-global-warming-swindle.blogspot.
But, I do not advise you to read it, because it really is so very boring, tedious and repetitive. God alone knows why I continue to do it. It’s like an itch that I can’t stop scratching, even though the scratching doesn’t make the itch go away.

ripshin
Editor
Reply to  Nick Boyce
November 10, 2014 10:19 am

Nick,
As I think about it, the problems you uncover demonstrate the reason for using an anomaly instead of an absolute value. It’s the change (represented by the anomaly) that has the AGW crowd so worried. Thus, whether the average from one set of instruments is 16 or 14 or whatever, it’s the change from this value that is relevant.
Of course, if the reported anomalies are in the same scale as known instrument uncertainty (like, perhaps, tenths of a degree), AND readings are taken over long periods of time, then things like instrument calibration, training on how to read consistently, measurement drift, etc., all become factors that have to be considered. And frankly, unless you have every single one of these under control, your measurements are highly suspect and become “for information only,” which means invalid, but mildly interesting.
rip

Billy Liar
Reply to  Nick Boyce
November 10, 2014 10:28 am

Fascinating. The problem for climatologists and surface temperatures is that the surface goes from -427m to 8,848m. There is quite a chunk of the surface above 3,000m and there have been and remain very few temperature measurements in this area. The result is a surface temperature biased by the clustering of thermometers within a few hundred meters of sea level. Unlike atmospheric pressure, it is not possible to relate a measurement to some pre-defined datum eg sea level. Hence the use of anomalies which, as someone pointed out above, cannot be related to the physical properties of, primarily, water which changes state at real temperatures, not anomalies. You can imagine the difficulty of attempting to incorporate the phase changes of water into calculations which only use anomalies.

tty
Reply to  Billy Liar
November 10, 2014 12:52 pm

“Unlike atmospheric pressure, it is not possible to relate a measurement to some pre-defined datum eg sea level.”
Actually it is, though only very approximately, since the lapse rate varies from c. 5.5 to 10 degrees per 1000 meters depending on the humidity of the air (drier air has a lower specific heat and hence a larger lapse rate). Usually an average of 6.5 or 7 degrees is assumed.
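The reduction itself is one line of arithmetic; the assumed rate is the whole problem (a sketch only, using the usual 6.5 degrees per 1000 m, which is exactly the approximation in question; the function name and the example numbers are illustrative):

def reduce_to_sea_level(t_station_c, altitude_m, lapse_rate_c_per_km=6.5):
    # Sketch: add back the temperature "lost" to altitude at an assumed
    # constant lapse rate. Real lapse rates vary from roughly 5.5 to 10 C/km.
    return t_station_c + lapse_rate_c_per_km * (altitude_m / 1000.0)

# Example: a -5 C reading at 3,000 m reduces to about +14.5 C at sea level
# with 6.5 C/km, but to anywhere from +11.5 C (at 5.5 C/km) to +25 C
# (at 10 C/km), a spread of well over ten degrees from the assumed rate alone.

So yes, it can be done, but the uncertainty introduced by the assumed rate is far larger than the quantity being sought.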

tty
Reply to  Billy Liar
November 10, 2014 1:01 pm

By the way I forgot to mention that recalculating atmospheric pressure to sea level is not as unproblematic as you seem to think, since this is also dependent on the lapse rate, though less strongly than temperature.

Billy Liar
Reply to  Billy Liar
November 10, 2014 2:51 pm

I realise that referring a barometric pressure to sea level is not accurate. Makes you wonder how you can get an ‘accurate’ sea level when you apply the IB correction to measured sea level where there are neither simultaneous pressure measurements nor other measured column properties as the satellite flies over the ocean.

MikeUK
November 10, 2014 8:21 am

Anomalies are very useful for doing averages, absolutes are not. Typically temperature records cover different periods, so imagine averaging absolutes where polar (cold) stations only start in (say) the last 50 years. The resulting average would have an artificial dip 50 years ago.

Frank K.
Reply to  MikeUK
November 10, 2014 8:57 am

However, to get an anomaly you need to average absolute temperatures, correct?
Anomalies are calculated:
T-anomaly(x,y,t) = T-absolute(x,y,t) – T-absolute-tavg(x,y)
where (x,y) is a point on the Earth’s surface at 2 meters altitude (exactly! heh), and t is time. Note that the average temperature T-absolute-tavg varies over the Earth’s surface, and is entirely arbitrarily defined (i.e. you can choose any time period to do the average). Moreover, there is no accounting for any other meteorological variables such as pressure, moisture, ground cover etc. Is a 0.2 C increase in the temperature anomaly at the South Pole the same as a 0.2 C increase in Miami? In the Sahara desert? How about in the middle of the Indian Ocean? This is why anomalies are such a useless metric. They may give some fuzzy indication of temperature trends, but whether or not those trends have any meaning for our lives and the future of the Earth is a very open question.
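Written out as code, the definition above is trivial (a sketch, assuming annual values at a single location; the function name and the NumPy plumbing are just for illustration):

import numpy as np

def anomaly(t_absolute, years, base_start=1951, base_end=1980):
    # t_absolute: absolute temperatures at one location, one value per year
    # years:      matching array of calendar years
    t_absolute = np.asarray(t_absolute, dtype=float)
    years = np.asarray(years)
    in_base = (years >= base_start) & (years <= base_end)
    climatology = t_absolute[in_base].mean()   # T-absolute-tavg(x,y)
    return t_absolute - climatology            # T-anomaly(x,y,t)

Change base_start and base_end and every anomaly shifts by a constant; the trend is untouched, which is the only thing the anomaly was ever good for anyway.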

Duster
Reply to  MikeUK
November 10, 2014 4:36 pm

“MikeUK
November 10, 2014 at 8:21 am
Anomalies are very useful for doing averages, absolutes are not. …”
That is profoundly absurd. You can’t obtain an anomaly without an average from which to estimate the deviation. The use of a “baseline” period is there in order specifically to estimate an average from which anomalies can be estimated. The entire problem of “cherry picking” start and end points for base periods is because the inclination and direction of any anomaly trend can be deliberately biased by careful choices in the length and location of the base period.

Reply to  Duster
November 10, 2014 4:40 pm

“That is profoundly absurd. You can’t obtain an anomaly without an average from which to estimate the deviation.”
You can, but I don’t think that is what they are doing.

Dawtgtomis
November 10, 2014 8:36 am

A proph. at the university where I spent my career once told me “Those who claim the sky is falling are usually located close to an acorn tree”. In 2012 the Midwest (where I live) was hotter and drier than anytime since the thirties. Everyone locally agreed that global warming was taking its terrible toll. Only the 80+ codgers were stoic about it, saying they had seen this come and go all their lives. Now many people here agree with the old farts, because the heat and drought are somewhere else. We are just more aware now that there is always change in the climate, as record cold and rainfall hit us.
I wonder what the demographics are on this, worldwide. I’d like to know how ACGW opinions in the general public are influenced by local weather and track it over time by region. (I suppose that warmunists would also like that data so they could turn up the propaganda when support drops.)

Dawtgtomis
Reply to  Dawtgtomis
November 10, 2014 11:52 am

Oops, I meant to type prof. but subliminally must have been thinking prophet

November 10, 2014 8:41 am

Thanks, Bob, for this very wide look at the arts of measuring and modeling global temperatures.
I especially liked your “Whatever the global mean surface temperature is now, or was in the past, or might be in the future, the climate models use by the IPCC for their 5th Assessment Report certainly have it surrounded.”

November 10, 2014 9:51 am

Time is the warmunist’s ultimate enemy. Day after day, month after month, year after year, the actual data rolls as an unstoppable flow.
Right now, as Bob’s Figure 7 suggests, they are in a race with reality. In the real world, nature of course always ultimately wins and does whatever it is going to do with climate. But it is the perception in the minds of the masses that the warmunists desire.

Curious George
Reply to  Joel O'Bryan
November 10, 2014 11:59 am

Time is actually the warmist’s friend, as long as generous paychecks keep coming.

michael hart
November 10, 2014 10:28 am

Bob, when you post a ~30 page document, I think it is helpful to add a “contents page”.
I honestly think you are losing readers with long posts that don’t have a defined structure (and I am certainly not the most organized person in the universe).

November 10, 2014 11:41 am

“If you were to scroll back up to Figure 7, you’d note that there is a small subset of model runs that underlie the Berkeley Earth estimate of absolute global mean temperature. They’re so close it would seem very likely that those models were tuned to those temperatures.”
That would be 100% wrong.

Tonyb
Reply to  Steven Mosher
November 10, 2014 12:35 pm

So, please tell us what is 100 percent correct?
Tonyb

DHF
Reply to  Steven Mosher
November 10, 2014 1:53 pm

Your argument reminds me of the arguments by John Cleese in the sketch “Argument Clinic” from Monty Python’s Flying Circus 🙂
http://youtu.be/kQFKtI6gn9Y
I am sure there is more behind your assertion – I would also be happy to know more about it.

Matthew R Marler
November 10, 2014 12:26 pm

Thank you again. I am glad that you are doing this work.

tty
November 10, 2014 12:43 pm

“Indeed, we have shown (Hansen and Lebedeff, 1987) that temperature anomalies are strongly correlated out to distances of the order of 1000 km.”
This is an excellent example of a whopping big lie that has been repeated so long and so often that a lot of people think it is actually true.
If you go back to the actual paper (available e.g. at http://www.snolab.ca/public/JournalClub/1987_Hansen_Lebedeff.pdf) you will find that the average correlation at 1000 km is about 0.5, except in the tropics (40 % of the entire surface area of the Earth) where it is more like 0.3. That is not a strong correlation. And the spread is enormous, particularly in the tropics where a large proportion of stations have zero or negative correlation even at distances of a few hundred kilometers.
By the way, it would be very interesting to see the geographical distribution of actual temperatures for the lowest-running GCMs. If they manage to simulate arctic amplification they will almost certainly predict large-scale continental glaciation at higher latitudes, while if they spread the cooling more evenly the tropics will probably be too cool for either hurricanes or coral reefs to form.

November 10, 2014 1:22 pm

Nick.
“GAST is an integral. To get a meaningful integral, you have to take the values you know, estimate the values you don’t know (everywhere else), and add it all up. We can’t do that with absolute temperatures because of all the local variations. Microclimates, altitude variation, N vs S hillsides etc. But when temperature varies, it does so fairly uniformly. On a warm day, warm air blows in, and it affects everything. As a matter of measurement, anomalies are correlated between measuring points, and so it is reasonable to interpolate in between. Anomalies can be integrated.”
“We can’t do that with absolute temperatures because of all the local variations. ”
yes you can do it.

FightingScallion
November 10, 2014 3:00 pm

Count me amongst those who would love to see a real uncertainty discussion. I’ve been doing that at work lately for some wind tunnels. Dealing with the way things are propagated, a 0.030 psi uncertainty on each of a couple of sensors can easily turn into more than 0.1 psi error on a figure of merit.
Uncertainty propagation is a very big deal, and one that makes me very, very curious. Even assuming best case scenario on a lot of these models, I’d be absolutely floored if any of them could truly show that the expected temperature increases were outside of uncertainty bounds.
For an extremely simplistic case, let’s say the uncertainty of the mean over time is 0.1K and the uncertainty of the global temp is 0.1K. The anomaly itself is given by A = Global – Mean. The simplest uncertainty would be given by ((.1K)^2 + (.1K)^2)^(1/2). Thus, the uncertainty is 0.14K.
That would seem to be really optimistic to me, since so many temperature measurements were only taken to an ideal eyeball accuracy of 0.5K. The full uncertainty buildup would be fascinating to read, though I’m sure the partial derivatives would be painful to perform.
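For the record, the root-sum-square step itself is trivial; it is the inputs, and the correlations between them, that do the damage (a sketch, assuming independent error sources, which for temperature records is already a generous assumption; the rss helper is mine):

import math

def rss(*uncertainties):
    # Root-sum-square combination of independent uncertainties.
    return math.sqrt(sum(u * u for u in uncertainties))

print(rss(0.1, 0.1))  # ~0.14 K, the simple case above
print(rss(0.5, 0.5))  # ~0.71 K if each term is only good to 0.5 K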

TW
November 10, 2014 5:01 pm

Great, thorough post as usual, Bob. I’ve learned so much from you about ENSO and SSTs.
I think there is a fundamental problem with this analysis because the absolute actuals happen to be above the models. What triggered my thinking was what was mentioned above a couple times, that the hiatus reduces the actual/model difference. The hiatus produces a divergence from the trend of the two series, and I think it’s fundamentally misleading when it reduces the difference between the two.
Most would agree that the models are heating up faster than the actuals. If the actuals are above the models, this difference in trend overall will produce a narrowing of the difference between the value of one subtracted from the other.
I think if this were redone with, say, actuals minus some constant so that the actuals were below the models, the points you make in your analysis would remain valid, but be easier to see, and the difference in overall trend would produce a growing difference between the two series, as it should, since the trends are different.

Brandon Gates
November 10, 2014 7:46 pm

Mi Cro,
I hit reply to your post responding to someone else because for some reason threading isn’t working properly. I can’t tell if that’s a WordPress issue or if it’s just my browser … I can’t respond directly to you now either, so I hope you find this one wherever it lands in the thread.
By way of response, yes, the history of the instrumental record — so far as anyone here trusts it (and apparently you do to some extent) — shows some fairly rapid temperature swings. But globally (I’m looking at monthly GISTemp at the moment) I’m not seeing 2 degrees cooler or 1 degree warmer since 1940, or even 1880. The min anomaly in the past 75 years was -0.43, March of 1951, or 1.35 cooler than the 0.92 max in 2007. Anyway, it’s not the monthly, or even annual fluctuations like we’ve seen over the last 135 years that would be my biggest concern … we already know we can handle a fair amount of wiggles.
Personally, my expiration date is well before the worst of the nightmarish scenarios are proposed to play out. My nephews’ grandkids might not like what 4 degrees of warming does for things though. I think they’ve got a fair chance of finding out, too.

Reply to  Brandon Gates
November 10, 2014 8:41 pm

Brandon,
Yes I did find this.
“By way of response, yes, the history of the instrumental record — so far as anyone here trusts it (and apparently you do to some extent) ”
Not really, but if it is good enough to prove AGW, it’s good enough to be used to disprove it.
“But globally (I’m looking at monthly GISTemp at the moment) I’m not seeing 2 degrees cooler or 1 degree warmer since 1940, or even 1880.”
I shouldn’t have used those numbers, I don’t normally, but average temps were the topic and they were “handy” and having to explain what I do with the data set every time I mention it sounds like a broken record.
But, if you follow the url in my name you can find the output, and I explain what I’ve done here http://www.science20.com/virtual_worlds
As long as I know you are responding to me, I will see it and reply.

Brandon Gates
November 10, 2014 9:18 pm

Mi Cro, you write:

You don’t have to do either (though you do have to limit it to stations that have most of a complete year); you can measure the change of a station against the station itself. This is a meaningful anomaly.

And that’s exactly where the anomaly calculation process begins. Problem at the global level, which is the most meaningful anomaly, is that some stations either go away before, or don’t come online until after, the desired baseline period. So for GISS, the standard baseline is 1951-1980. Any station with fewer than 20 years of data — 2,121 out of 7,280 total found in the GHCN database — in that range either needs to be tossed out completely, or needs to have enough neighbors within 1200 km and some significant number of years of overlap to be reasonably useful. GISS only throws out 1,243 of the 7,280, but still there’s a lot of infilling going on since some of those station years are missing months and because there are, well, lots of places on the planet that have never had a weather station within 1200 km. That’s not GISS being schlocky but a simple reality of the nature of the dataset — for perspective, there are only 108 GHCN stations with continuous annual records since 1880. Does that sound like enough spatial coverage to you? I didn’t think so.
I encourage anyone who doesn’t like how GISS does their patchwork quilting to download the data, raw and/or adjusted since both are out there, and try their own hand at it. A fun game is to see how many stations you can knock out and still get a global trend that reasonably matches the published results.
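Something like this is enough to get started (a bare-bones sketch; the function names are mine, and station selection, gridding and area weighting, which is most of the real work, are deliberately left out):

import numpy as np

def global_trend(anomalies, years):
    # anomalies: 2-D array, one row per station, one column per year (NaN ok).
    global_mean = np.nanmean(anomalies, axis=0)
    ok = ~np.isnan(global_mean)
    slope, _ = np.polyfit(np.asarray(years)[ok], global_mean[ok], 1)
    return slope  # degrees per year

def knockout_test(anomalies, years, keep_fraction=0.25, seed=0):
    # Randomly keep a fraction of the stations and compare the two trends.
    rng = np.random.default_rng(seed)
    n = anomalies.shape[0]
    keep = rng.choice(n, size=max(1, int(n * keep_fraction)), replace=False)
    return global_trend(anomalies, years), global_trend(anomalies[keep], years)

Run it a few dozen times with different seeds and keep fractions and see how far the subsampled trend actually wanders.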

I disagree, you can use it to calculate the rate of change based on the changing length of day.

Sure, I guess but a fat lot of good it would do. That’s not what Frank is talking about though. He’s essentially defined a boundary value problem very narrowly to include only regular (i.e. predictable) cycles and used seasonal variation over the course of a 1-year trip around the sun as an example. What he’s missed is that even a 1st year physics student could tell you that if atmospheric CO2 increased ten-fold tomorrow the oceans wouldn’t boil off in the next decade, there’s simply too much ocean water for an additional 12 W/m^2 to heat up that rapidly. Basic thermodynamics gets that guesstimate, no need for the brain damage of N-S equations.
In fact they probably wouldn’t boil at all … the back of my napkin tells me that at 4,000 ppm CO2, all else being equal, equilibrium temperature would bump 30 degrees — about 10 of it from the increased forcing alone and the balance in water vapor feedback. I assume, of course that clouds are a neutral feedback, a highly contested assumption to be sure, and completely ignore reduced albedo from melted ice sheets. To say nothing of the gobs of methane the arctic tundra would cough up … but those frozen wastes might turn into darn good farmland, hey?
What pure descriptive statistics doesn’t tell us about temperature variability, pretty simple physics does. Those are the boundary constraints on the system. Large uncertainties do exist about where the means are and how long it will take to realize them, but there are pretty good constraints on the edges of the envelope just from observational evidence, esp. including the paleo reconstructions. All this talk of initial conditions is trying to shoehorn weather forecasting into a climate problem. It would never work, and isn’t really appropriate according to my understanding of things. YMMV.
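If anyone wants to check that napkin, here’s the arithmetic as a minimal Python sketch. The 5.35·ln(C/C0) forcing formula is the usual simplified approximation, and the Earth area, ocean mass and heat capacity are round textbook values I’m assuming, not measurements:

import math

# Back-of-napkin version of the "oceans can't boil in a decade" point. The forcing
# formula is the standard simplified 5.35*ln(C/C0) approximation; the ocean mass,
# heat capacity and Earth area are assumed round textbook values, not measurements.
dF = 5.35 * math.log(10.0)       # ~12.3 W/m^2 for a ten-fold CO2 increase
earth_area = 5.1e14              # m^2
seconds_per_decade = 10 * 3.156e7
ocean_mass = 1.4e21              # kg, the whole ocean
cp_seawater = 4.0e3              # J/(kg K), roughly

energy_in = dF * earth_area * seconds_per_decade          # joules over one decade
dT_whole_ocean = energy_in / (ocean_mass * cp_seawater)   # if ALL of it went into the ocean

print(round(dF, 1), "W/m^2")          # ~12.3
print(round(dT_whole_ocean, 2), "K")  # a few tenths of a degree, nowhere near boiling

Even dumping every last joule of that extra forcing into the ocean for ten straight years only moves the whole-ocean average by a few tenths of a degree, which is the point.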

Brandon Gates
November 10, 2014 10:09 pm

Mi Cro, you write:

Not really, but if it is good enough to prove AGW, it’s good enough to be used to disprove it.

I’ll spare you my full pedantry on the topic of “proof” in science. Suffice it to say that proof is for math and logic. Basically my view is this: if you wish to overturn a theory, you need to provide credible alternative evidence against it. By admitting you don’t trust the temperature record, you’ve got no position to argue from in my book, and the best answer you can give me that I’ll accept as rational is an agnostic “I don’t know.” Further, you’ve got to demonstrate to me that you’ve got evidence against the actual theory. I haven’t read the stuff on your blog — I’ll take a look at it tomorrow when I’m fresh — but thus far I’m getting the vibe that your argument is that we’ve seen a three-degree range of transient swings in temperature over the past 150 or so years, therefore AGW = DOA. No. Better might be: we’ve seen a 0.8 degree increase since 1850 and haven’t cooked yet, but there you’d be ignoring the ~0.5 W/m^2 TOA imbalance, which is good for another 0.4 degrees of warming even if CO2 levels flatlined tomorrow. That gets us to 1.2 above the beginning of the industrial revolution, which the mean of the CMIP5 ensemble from AR5 says we’re going to hit around 2025. The crossing of the magic 2.0 degree avoid-at-all-costs line doesn’t happen in the models until about 2050. Even at that, no serious working climatologist I’ve ever read is on record as saying that pushing our toe over that line will cause instant calamity — that’s the kind of alarmist crap churned out by activists and politicians via their media lackeys.
What is popular among the consensus science crowd is talk of tipping points and irreversible effects … like WAIS collapse. What gets missed in popular press and Twitter soundbites is that “irreversible” means “not stoppable” in any sort of reasonable planning horizon, and that refreezing those ice sheets would take on the order of a thousand years. All estimates of course, and I’m not shy about calling them very uncertain ones at that. For sake of argument, assume they’re accurate. Miami becomes the new Venice, but it takes half a millennium to get there. Not an extinction-level event. Frankly we’re more likely to do ourselves in via Pakistan corking off a few nukes at India and getting themselves glassed by the retaliation.
My position, which I believe is fairly well grounded in the far more sober deliberations of most consensus climatologists, is that as temperature rises incrementally, so does risk. By how much, we don’t know. Best not to find out for sure, but also don’t break the bank trying to avoid the worst of it. Not an easy analysis to do, and certainly not one which calls for a binary, all-or-nothing “proof” for or against AGW or its postulated effects.
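For the curious, the arithmetic behind that “another 0.4 degrees” is nothing fancy. A sketch, assuming an equilibrium sensitivity of roughly 3 K per CO2 doubling (an assumption, and a contested one):

# Rough arithmetic behind the "another ~0.4 degrees in the pipeline" figure above.
# The ~3 K-per-doubling equilibrium sensitivity is an assumption for illustration.
f_2xco2 = 3.7           # W/m^2 per CO2 doubling
ecs = 3.0               # K per doubling (assumed)
lam = f_2xco2 / ecs     # feedback parameter, ~1.2 W/m^2 per K
toa_imbalance = 0.5     # W/m^2, the figure quoted above

print(round(toa_imbalance / lam, 2), "K of further warming already committed")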

DirkH
Reply to  Brandon Gates
November 11, 2014 1:26 am

Brandon Gates
November 10, 2014 at 10:09 pm
“What is popular among the consensus science crowd is talk of tipping points and irreversible effects … like WAIS collapse. What gets missed in popular press and Twitter soundbites is that “irreversible” means “not stoppable” in any sort of reasonable planning horizon, and that refreezing those ice sheets would take on the order of a thousand years.”
The warmist scientists, journalists and politicians have, from the start and without exception, used words in an Orwellian, Humpty-Dumptyesque manner: average temperature was never a physical entity, homogenization is in fact the invention of data where nothing was measured, and anomalies are used to disconnect the “temperature products” even further from reality.
So it’s no surprise that they also use “tipping point” and “irreversible” only as marketing terms, not to convey meaning.
Every professor or PhD should be ASHAMED, ashamed to be connected to the field of warmist climate science.
There are two ways to use language, the warmist way and the engineer’s way. Engineers cannot use language the way the warmists do – they would end up in jail.

Brandon Gates
Reply to  DirkH
November 12, 2014 7:20 pm

DirkH: So you’re not a fan of averaged temperature trends. Fine. Tell me how you’d calculate any net change in retained solar energy since 1850, in joules, without “inventing physical entities”. Bonus points if your answer does not include anything remotely resembling a thermometer. Double bonus points if you can then explain how to predict phase changes using only measurements of net energy flux. Triple bonus points if you can find a GCM that uses temperature anomalies as input parameters or in intermediate calculations.
Let’s say you calculate a 25% chance that a heavily travelled highway bridge will collapse within the next 10 years without repairs, and the DOT head honcho tells you to sod off. What do you say to a newspaper reporter — if anything — that doesn’t warrant a trip to a federal pen? Bonus question: are bridges known for rebuilding themselves after they collapse? Double bonus question: has anyone ever successfully engineered a 2.2 million km^3 ice sheet? Triple bonus question: explain how it is you know for certain that WAIS collapse is not “irreversible” as defined, and therefore why anyone should be ashamed of even so much as hinting at such a thing.
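And just so we’re clear about what that first calculation involves, here’s a sketch of the arithmetic. The 0.5 W/m^2 imbalance and the 50-year window are placeholder numbers for illustration, and of course the imbalance itself only comes from measurements, which is rather the point:

# The sort of arithmetic the question is pointing at: turning an energy imbalance
# into joules. The 0.5 W/m^2 figure and the 50-year window are placeholders for
# illustration, not claimed measurements.
earth_area = 5.1e14        # m^2
imbalance = 0.5            # W/m^2, assumed average over the period
seconds = 50 * 3.156e7     # 50 years

print(f"{imbalance * earth_area * seconds:.2e} J retained")   # on the order of 4e23 J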

Reply to  Brandon Gates
November 11, 2014 6:53 am

Basically my view is this: if you wish to overturn a theory, you need to provide credible alternative evidence against it.

I’ve taken their data and can show there’s little evidence of a change in surface cooling.

By admitting you don’t trust the temperature record, you’ve got no position to argue from in my book,

Really? How enlightened.

Further, you’ve got to demonstrate to me that you’ve got evidence against the actual theory.

I do. But it has nothing to do with the swing in the average of average temps. Those numbers were the average of the average, but the stations were not controlled, so they are basically meaningless; I shouldn’t have posted them. On the other hand, I do think the anomaly data I’ve generated is very valid.

And that’s exactly where the anomaly calculation process begins. Problem at the global level, which is the most meaningful anomaly,

But generating an anomaly against some baseline is meaningless. And when you don’t restrict yourself to global anomalies, you find that what’s changed over the last 75 years isn’t max temps, it’s regional min temps, with large swings. My current belief is that they are due to changes in ocean surface temps.

I encourage anyone who doesn’t like how GISS does their patchwork quilting to download the data, raw and/or adjusted since both are out there, and try their own hand at it. A fun game is to see how many stations you can knock out and still get a global trend that reasonably matches the published results.

I did, but why would I try to reproduce something that their data (well NCDC’s data) shows is all due to processing?

I assume, of course that clouds are a neutral feedback, a highly contested assumption to be sure

Clouds do far more to regulate surface temps than CO2 does; this is easy to see with an IR thermometer. I routinely measure clear-sky zenith temps 100F to 110F colder than air/ground temps. CO2 would change that temp by a few degrees; cloud bottoms, on the other hand, can reduce that difference to only 10F colder than air/surface temps.

and completely ignore reduced albedo from melted ice sheets.

I think this is a red herring. When you account for the large incident angle of the Sun at high latitudes (see http://www.iwu.edu/~gpouch/Climate/RawData/WaterAlbedo001.pdf) and calculate the energy balance with the S-B equation, it looks like open water in the Arctic cools the oceans far more than it warms the water. At this point it is my belief that this is a temp regulation process that occurs naturally. And as in the ’30s and ’40s, the open Arctic is dumping massive amounts of energy to space.
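Here’s the sort of back-of-envelope I mean, as a Python sketch. The albedo is just the Fresnel reflectance of flat water (so no waves, foam or ice), and the solar constant, atmospheric transmission, downwelling IR, zenith angle and water temperature are assumed round numbers, not measurements:

import math

# Back-of-envelope energy balance for open Arctic water at a low sun angle.
# The angle-dependent albedo is just the Fresnel reflectance of flat water
# (n ~ 1.33); transmission, downwelling IR, water temperature and the zenith
# angle are assumed round values, not measurements.
def fresnel_reflectance(zenith_deg, n=1.33):
    # Unpolarized Fresnel reflectance, air to water, at the given zenith angle.
    ti = math.radians(zenith_deg)
    tt = math.asin(math.sin(ti) / n)    # Snell's law
    rs = ((math.cos(ti) - n * math.cos(tt)) / (math.cos(ti) + n * math.cos(tt))) ** 2
    rp = ((math.cos(tt) - n * math.cos(ti)) / (math.cos(tt) + n * math.cos(ti))) ** 2
    return 0.5 * (rs + rp)

sigma = 5.67e-8
S, transmission = 1361.0, 0.6        # TOA solar constant; assumed clear-sky transmission
T_water, emissivity = 271.3, 0.97    # near-freezing seawater
downwelling_ir = 200.0               # W/m^2, assumed clear Arctic sky
zenith = 80.0                        # assumed high solar zenith angle

absorbed = (1 - fresnel_reflectance(zenith)) * S * transmission * math.cos(math.radians(zenith))
net_ir_loss = emissivity * sigma * T_water ** 4 - downwelling_ir

print(round(absorbed), "W/m^2 absorbed")
print(round(net_ir_loss), "W/m^2 net IR loss")

Change the assumed numbers and the balance moves around, but at high sun angles the absorbed side shrinks fast while the S-B emission side, set by the water temperature, barely budges.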

For sake of argument, assume they’re accurate.

Why would you do that? Why not assume we’re going to get hit with an extinction level asteroid in the next 100 years? Or a flu (or Ebola) pandemic?
I’ll know “warmists” are serious when they all start saying they want to start building Nukes (and yes I know some are doing that, but they waited a long long time to adopt that opinion).
I found one bit of possible evidence in the surface record: a change in the rate of warming and cooling as the length of day changes. But it looks like it might have hit an inflection point and changed direction in 2004-2005, and I don’t have enough data yet to know for sure.
So right now, the only real evidence we have that anything we’ve done has impacted climate is GCMs.
Oh, and BTW, I have read Hansen’s paper on GCMs, and I have a lot of experience in modeling and simulations, so I’m quite aware that how you think something works is what gets encoded into a model. Nothing wrong with this for a hypothesis, but until they can show that CO2 is actually the cause, it’s just a hypothesis. This is where I will say “we don’t know what’s going on”, but there is plenty of evidence (when you look for it, as opposed to ignoring or deliberately hiding it) that the planet is still cooling like it’s supposed to be doing.

Brandon Gates
Reply to  Mi Cro
November 12, 2014 11:13 pm

Mi Cro,

But generating an anomaly against some baseline is meaningless.

y = mx + b. Change b to b’. Tell me why m must also change, and/or why m is any less meaningful using b’ instead of b.
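Or numerically, with a throwaway sketch (the series is made up; the only point is that subtracting a constant baseline leaves the slope untouched):

import numpy as np

# Throwaway numerical version of the point: subtracting a constant baseline
# changes the intercept, never the slope. The series is made up.
x = np.arange(1950, 2015)
y = 14.0 + 0.01 * (x - 1950) + np.random.default_rng(0).normal(0, 0.1, x.size)

m1 = np.polyfit(x, y, 1)[0]                    # trend of the raw series
m2 = np.polyfit(x, y - y[:30].mean(), 1)[0]    # trend after a 1950-1979 baseline
m3 = np.polyfit(x, y - 273.15, 1)[0]           # trend after an arbitrary constant shift

print(m1, m2, m3)   # identical slopes, to machine precision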

And when you don’t restrict yourself to global anomalies, you find that what’s changed over the last 75 years isn’t max temps, it’s regional min temps, with large swings.

Looking at the first chart in your post, http://content.science20.com/files/images/SampleSize_1.jpg (number of daily observations) I notice a big downward spike circa 1972. Scanning down to the next figure, http://content.science20.com/files/images/GB%20Mn%20Mx%20Diff_1.png, which you describe as “Global Stations, This is the annual average of the difference of both daily min and max temperatures, the included stations for this chart as well as all of the others charts have at least 240 days data/year and are present for at least 10 years”, I notice big upward bumps in both the DIFF and MXDIFF curves during the same time period. There’s a less pronounced swing in both charts around 1982. When I see such obvious correlations between the number of observations and the aggregate totals, I begin to suspect that n is having an undue influence on y, and I start wondering if I’ve bodged up a weighting factor somewhere.
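Here’s the kind of artifact I’m worried about, as a toy sketch. It isn’t your method; it’s just an unweighted average over an invented station network with zero underlying trend, but it shows how a change in which stations report can masquerade as a climate signal:

import numpy as np

# Toy demonstration of the suspicion above. This is NOT the method under discussion,
# just an unweighted average over an invented station network with zero underlying
# trend -- but it shows how a change in which stations report can look like a signal.
rng = np.random.default_rng(1)
n_years, n_stations = 60, 100
offsets = rng.normal(10.0, 5.0, n_stations)                           # each station has its own climate
data = offsets[None, :] + rng.normal(0, 0.5, (n_years, n_stations))   # no trend anywhere

mask = np.ones_like(data, dtype=bool)
mask[30, offsets > 10.0] = False          # the warm stations go missing in year 30

naive = np.where(mask, data, np.nan)
print(np.nanmean(naive, axis=1)[28:33])   # a spurious cold spike appears in year 30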

My current belief is that they are due to changes in ocean surface temps.

What do you propose is changing SSTs? And which direction(s) is/are the trend(s)?

I did, but why would I try to reproduce something that their data (well NCDC’s data) shows is all due to processing?

Because if you can get a reasonably similar curve from the adjusted data as GISS, NCDC or CRU does, then you know you’ve adequately replicated their method. That gives you an apples to apples comparison when you pump the raw data through your reverse-engineered process.
Not that cooking up a new method from scratch is a Bad Thing per se, but while I’m talking about it, I did dig into yours somewhat. I’ll start here: “The approach taken here is to generate a daily anomaly value (today’s minimum temp – yesterday’s minimum temperature) for each station, then average this Difference value based on the area and time period under investigation.”
I’m with you on the first step, which basically gives you the daily rate of change for each individual station. I understand averaging the daily result by region, but your plots are generated from annual data so I’m guessing you took your daily regional averages and then summed them by year to get the net annual change?
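One property of that kind of day-over-day differencing is worth keeping in mind (tiny sketch, made-up numbers): summed over a period the differences telescope, so the sum is just the last value minus the first, and the average difference is that net change divided by the number of steps.

# Tiny sketch, made-up numbers: day-over-day differences telescope, so summed over
# a period they equal (last value minus first value), and the average difference is
# just that net change divided by the number of steps.
temps = [5.0, 7.0, 6.5, 9.0, 8.0]                  # hypothetical daily minima
diffs = [b - a for a, b in zip(temps, temps[1:])]  # today minus yesterday

print(sum(diffs), temps[-1] - temps[0])   # 3.0 and 3.0 -- always equal
print(sum(diffs) / len(diffs))            # the average difference = net change / N

If that’s what’s being summed, the annual number really reflects the endpoints of each station’s year more than anything in between.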

Clouds do far more to regulate surface temps than CO2 does; this is easy to see with an IR thermometer. I routinely measure clear-sky zenith temps 100F to 110F colder than air/ground temps. CO2 would change that temp by a few degrees; cloud bottoms, on the other hand, can reduce that difference to only 10F colder than air/surface temps.

Mmmhmmm. Clouds are also a highly localized phenomenon, so the observations from your backyard don’t tell you much about what’s going on in Jakarta. As not many places on the planet regularly have 100% (or 0%) cloud cover, your observations need to be very routine over long periods of time, logged and tallied up to have any hope of telling you something other than the current weather. There is ZERO dispute among the atmospheric guys that the instantaneous forcings are dominated by water vapor and clouds, not CO2. As in NO question whatsoever. The perennial uncertainty is on the magnitude of feedbacks from clouds in response to forcings from … wherever (pick one) … and how best to model the damn things.

I think this is a red herring. When you account for the large incident angle of the Sun at high latitudes and calculate the energy balance with the S-B equation, it looks like open water in the Arctic cools the oceans far more than it warms the water.

Props for whipping out an old-school reference from ’79. What I know of the S-B law as it relates to sea (and landed) ice is that you’ve got to be really careful about emissivity. GCM programmers have assumed unity, which is looking like a bad call because the models have been understating observed warming in the Arctic. I think albedo is better understood, and it does need to account for angle of incidence — but that’s completely irrelevant when it comes to a body radiating energy back out.

At this point it is my belief that this is a temp regulation process that occurs naturally.

Well duh. 🙂 “Nobody” is saying that there aren’t natural temperature regulation processes. Assuming your ocean theory of cooling is correct, the only way they could do it is by pumping up warmer water from depth and radiating it out. The data that I’ve been looking at say the exact opposite, especially for the past 20 years — it’s been cold water upwelling cooling off the atmosphere while previously warmed surface water has been churned under and sequestered.

Why not assume we’re going to get hit with an extinction level asteroid in the next 100 years?

I think we should assume an extinction level rock is going to hit us in the next 100 years. We have the basic technology to deploy a deflection solution, there’s really no excuse to not be actively working on it. It doesn’t make much sense to employ so many resources tracking NEOs if we’re not also concurrently developing a solution to do something about the one with our name on it when it’s detected. But there I go again using logic.
Back to the WAIS. You are aware that the models have been grossly conservative in estimating ice mass loss, aren’t you? Perhaps not, since the favorite thing for this crew to do is point out how non-conservative GCMs have been about estimating SATs. Funny how y’all assume that most climate-model-related mistakes will err on the side of benefit, not detriment. But then you say the same about the warmists. Well, some of us definitely do deserve it.

I’ll know “warmists” are serious when they all start saying they want to start building Nukes (and yes I know some are doing that, but they waited a long long time to adopt that opinion).

Far too long in my book. I think I mention somewhere in this post how much it pisses me off. Seriously, it kills me that the anti-nuke contingent hasn’t put it together that coal power causes between 30,000 and 60,000 premature deaths per year in the US alone, while the worst-case risk of deaths (based on Fukushima and Chernobyl) would be < 100 per year if nukes replaced the coal plants. The ongoing move from coal to natural gas is a decent enough trade … bbbbbbut FRACKING! Oh noes!
People make me tired sometimes.

So right now, the only real evidence we have that anything we’ve done has impacted climate is GCMs.

A popular slogan, but every working climatologist I’ve read strenuously disagrees. I’m gonna go with the guys doing the actual work, thanks.

Nothing wrong with [GCMs] for a hypothesis, but until they can show that CO2 is actually the cause, it’s just a hypothesis.

And how exactly would one go about doing that to your satisfaction?

This is where I will say “we don’t know what’s going on”, but there is plenty of evidence (when you look for it, as opposed to ignoring or deliberately hiding it) that the planet is still cooling like it’s supposed to be doing.

How do you know how the planet is supposed to be cooling?!!? That’s one of them there, how do you say, rhetorical questions … but answer it straight if you wish. I actually am interested in what you’d come up with.

Reply to  Brandon Gates
November 12, 2014 11:48 pm

Brandon Gates commented
First, it’s late, and I have to get up in a few hours, so I’m going to start with a couple of the first questions. I’ll look at the rest of your comments when I get some time tomorrow.

But generating an anomaly against some baseline is meaningless.
y = mx + b. Change b to b’. Tell me why m must also change, and/or why m is any less meaningful using b’ instead of b.

If your baseline is a constant, why bother to make the anomaly in the first place? Basically I was interested in how much it cools at night while I was standing out taking images with my telescope. The constant baseline just adds error, whereas I believe what I do keeps station measurement error as small as possible.

Looking at the first chart in your post, http://content.science20.com/files/images/SampleSize_1.jpg (number of daily observations) I notice a big downward spike circa 1972. Scanning down to the next figure, http://content.science20.com/files/images/GB%20Mn%20Mx%20Diff_1.png

That was an older chart; further down the page you’ll find this:
http://www.science20.com/files/images/global_1.png
The big difference is that I realized that when I was doing my station year selection/filtering, once a station was selected, all years were included, even ones with fewer counts per year than my cutoff value. That was what got updated in ver 2.0 of my code (2.1 added average temp). That was the main cause of that spike.
And I’d like to thank you for digging into it.
I was going to send you off to SourceForge, but the reports there haven’t been updated to the new code yet, though you might still find them interesting. A few are done; I just need to zip them up and upload them. Others need to be rerun and will take longer to get updated.

Reply to  Brandon Gates
November 14, 2014 7:03 am

Brandon, let me touch on the rest of your comments.

What do you propose is changing SSTs? And which direction(s) is/are the trend(s)?

Oceans accumulate energy, causing them to warm. But in general the bulk of the water is near freezing. Then you have winds, tides, and currents that self-organize into things like the AMO, the PDO, the Gulf Stream, the currents that circle Antarctica, and so on. In electronic terms (it’s funny how many physical systems have the exact same rules as electronic systems) the oceans are a capacitor: they store energy, but unlike a cap the oceans have all of the inhomogeneity I mentioned and then some. But all they can do is store and then release energy, which would be expressed by the S-B equation. While some claim they know the oceans heat content is increasing, I don’t believe OHC has been monitored at the detailed level, over time, that’s required to know this is truly the case; those who think it’s increasing are really just engaging in wishful thinking.

Back to the WAIS

Isn’t the WAIS moving because of geothermal activity? One of the reasons (which I failed to mention above) for the big swirling mix of thermal energy that is our planet is the tilt of the axis relative to the orbital plane (as well as the variation in orbital eccentricity); this smears out the distribution of incoming energy, which also drives circulation. We don’t even understand the time constants of all of the systems involved well enough to know whether we have measurements of thermal energy that cover even a single time constant, so melting here or there is an interesting fact, but its meaning is unknown.

“So right now, the only real evidence we have that anything we’ve done has impacted climate is GCMs.”
A popular slogan, but every working climatologist I’ve read strenuously disagrees. I’m gonna go with the guys doing the actual work, thanks.

There are lots of interesting facts showing that the surface climate of the Earth changes over a human lifespan, but it does that whether we’re here or not. None of that makes AGW a fact, and those scientists are basically lying by omission. Sure, CO2 absorbs and emits 15-16 micron photons; that is the sole fact, that’s it, nothing else. They then took that fact and wrote a model that made it the key to warming. Where they totally fail is that they haven’t been able to tie anything the climate is doing to increasing CO2.
When you take the derivative of 20-some thousand measurement stations, 95 million samples, and the average is -0.00035, what do you think that means? This isn’t some bull$hit homogenized, infilled pile of steaming cr@p; these are the fracking measurements. And that’s the derivative of the average temp; the derivative of min temp is -0.097, which is about 95 times larger than the derivative of max temp’s 0.001034. Tell me what that means for surface temps!
How are you going to justify how wrong your scientists are once people really understand how little they actually know? It’s just like all of us who told everyone how screwed we’d be over Obamacare. Oh, we were all nut cases, racists, idiots, and now look: we know the President lied (one of the many lies he told to get re-elected), Pelosi lied, Reid lied, Gruber lied (well, maybe he didn’t lie, but he admits they knew all of this and wrote the bill in a way to hide the facts; yeah, I guess that’d be lying). The same is going to happen to climate scientists; they are going to be outed as idiots, so you can have them.

Nylo
November 11, 2014 2:28 am

Natural and enhanced greenhouse effects depend on the infrared radiation emitted from Earth’s surface, and the amount of infrared radiation emitted to space by our planet is a function of its absolute surface temperature, not anomalies.

This is partially incorrect. Textually yes, it depends on the infrared radiation emitted from Earth’s surface, and yes, this is a function of the absolute surface temperature, but NO, it is NOT a function of the AVERAGE absolute surface temperature, which is what the models’ graphic shows. It is in fact strongly affected by how temperature is distributed across the surface, in time and space. Two planets with the same average temperature but very different temperature distributions can radiate very different amounts of energy. As a rule of thumb, for the same average temperature, the bigger the temperature differences across the surface, the more energy is radiated in total. Or, put another way, the bigger the temperature differences, the lower the average temperature you need to radiate the same amount of energy. That’s why the average temperature of the moon is so cold despite it being as close to the sun as we are; the lack of greenhouse gases is only a tiny part of the effect. The key is the huge difference in temperature between the illuminated and non-illuminated parts of the moon, due to the lunar day/night cycle lasting roughly 28 times as long as the terrestrial one. This makes the moon radiate a lot of energy for its average temperature, compared to Earth.
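To put a number on it, a quick Stefan-Boltzmann sketch (the two temperature distributions are invented purely to show the T^4 effect; they are not meant to represent the real Earth or Moon):

# Quick Stefan-Boltzmann illustration: two "planets" with the same average surface
# temperature but different distributions radiate very different totals. The
# temperatures are invented purely to show the T^4 effect.
sigma = 5.67e-8

uniform = [255.0, 255.0]     # K everywhere
lopsided = [155.0, 355.0]    # same 255 K average, huge day/night contrast

def mean_emission(temps):
    return sum(sigma * t ** 4 for t in temps) / len(temps)

print(round(mean_emission(uniform)), "W/m^2")    # ~240
print(round(mean_emission(lopsided)), "W/m^2")   # ~467, nearly double for the same mean T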

Reply to  Nylo
November 11, 2014 8:02 pm

+1 And may I add that on the Earth, half of the surface area is between the 30° latitudes; it receives 64% of the solar insolation and has the smallest temperature swings. The tropical ocean is a net absorber and the higher-latitude ocean is a net emitter.

November 11, 2014 9:46 am

Thanks again, Bob.
I now have an article on the Berkeley Earth Land + Ocean Data anomaly dataset (with graphic).

November 11, 2014 7:29 pm

It’s kind of hard to model something that has no physical existence or meaning.

Doug Proctor
November 11, 2014 9:02 pm

Each of the IPCC scenarios is internally consistent; there are “rules” that take them from the start to the finish. Yet the range runs from not much at all in 2100 to absolute disaster. Gore, McKibben, Brune and others make much of the possibility of the disaster scenario. Yet no scenario has an abrupt change from nothing happening to calamity. So how is the current situation justified as somehow becoming the disaster of 2100?
“How Do We Get From Here to There (Disaster)” may be a good post.
Without ignoring the IPCC analyses, I don’t see how the catastrophe in 2100 can be held within “settled science” any more.

Pierhache
November 12, 2014 10:10 am

Thanks Bob. Great post as usual. But one question is certainly of importance: the very notion of a global mean temperature is very questionable for our asymmetric globe. If radiative forcing is estimated with absolute temperature (in Kelvin), the failure to estimate the budget of mother Earth is certain. I agree with Nylo: the IR emitted does not depend on an evenly distributed mean temperature. Moreover, the losses can differ depending on the global figure used, as T^4 at 16°C and T^4 at 14°C differ by roughly 3%.

GregK
November 12, 2014 7:12 pm

A quote from…
http://www.pnas.org/content/111/34/E3501.full
“This model-data inconsistency demands a critical reexamination of both proxy data and models”
So while they are “warmist modellers”, they admit that they don’t understand why their models don’t work.
The science is settled………..hmmmnn

Brandon Gates
Reply to  GregK
November 12, 2014 11:29 pm

GregK,
No … they identified a discrepancy between model output and observations, which suggests that both may be biased in ways that call for a re-examination of both, so as to arrive at a hopefully more correct conclusion. I recognize that this is a foreign concept to those who think they already know all there is to know … or to those who would rather the opposition didn’t admit their errors, so that it gives more credence to allegations of malfeasance and fraud. Take your pick.

Brandon Gates
November 14, 2014 4:00 pm

Mi Cro,

If your baseline is a constant, why bother to make the anomaly in the first place?

The offset is a constant for each month of each station, not across all stations. The slope of any trend for any station will remain the same regardless of where the intercept is.
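In sketch form, that scheme looks roughly like this. To be clear, this is my shorthand for the general idea of a per-station, per-month climatology, not GISS’s actual code, and the station is invented:

# Sketch of what "a constant for each month of each station" means in practice.
# This is shorthand for the general idea, not GISS's actual code; 'series'
# maps (year, month) -> monthly mean temp for ONE invented station.
def monthly_climatology(series, base=(1951, 1980)):
    # Twelve baseline offsets, one per calendar month, for a single station.
    clim = {}
    for m in range(1, 13):
        vals = [t for (y, mm), t in series.items() if mm == m and base[0] <= y <= base[1]]
        clim[m] = sum(vals) / len(vals)
    return clim

def to_anomalies(series, clim):
    return {(y, m): t - clim[m] for (y, m), t in series.items()}

# Invented station with a seasonal cycle plus a slow warming trend:
series = {(y, m): 10 + 8 * ((m - 6.5) / 5.5) + 0.01 * (y - 1950)
          for y in range(1950, 2015) for m in range(1, 13)}
anoms = to_anomalies(series, monthly_climatology(series))
print(round(anoms[(2014, 1)], 2), round(anoms[(2014, 7)], 2))   # seasonal cycle gone, trend kept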

The big difference is that I realized that when I was doing my station year selection/filtering, once a station was selected, all years were included, even ones with fewer counts per year than my cutoff value.

I still see a big spike circa 1972 which coincides with a dip in observations. That does not give me warm fuzzies.

And I’d like to thank you for digging into it.

Sure thing. It’s an interesting approach that seems like it should work, but I don’t understand all the calculations you’re making.

Oceans accumulate energy, causing them to warm. But in general the bulk of the water is near freezing …

Again, trend is independent of the absolute value, so a warming ocean will be ever so slightly further from freezing; or ever so much closer to freezing if it were cooling. You brought up AMO and PDO; think about what those indices represent.

While some claim they know the oceans heat content is increasing, I don’t believe OHC has been monitored at the detailed level …

I was going to ask you whether the oceanic capacitor is filling with cool or filling with warm, but it’s difficult for you to answer given that you don’t know the long-term trend.

Isn’t the WAIS moving because of geothermal activity?

Perhaps to some extent.

We don’t even understand the time constants of all of the systems involved well enough to know whether we have measurements of thermal energy that cover even a single time constant, so melting here or there is an interesting fact, but its meaning is unknown.

That does make it difficult to predict timing, distribution of energy and effects, etc. One thing we do know is that if incoming flux is greater than outgoing, net energy in the system will rise. If an internal heat source was melting Antarctic ice (and Arctic, Greenland, various other large glaciers, etc.) AND the oceans were losing energy on balance … wouldn’t you expect outgoing flux to be greater than incoming?

Where they totally fail is they haven’t been able to tie anything the climate is doing to increasing Co2.

I’ve asked you at least once now how someone could demonstrate to you that CO2 is a factor and you have not answered.

There’s lots of interesting facts that the surface climate of the Earth changes over the human lifespan, but it does that whether we’re here or not.

Tides come in, tides go out. Nobody knows why.

When you take the derivative of 20 some thousand measurement stations, 95 million sample, and the average is -0.00035 what do you think that means?

At present, that there’s an error in how you’re aggregating your derivatives.

Reply to  Brandon Gates
November 14, 2014 4:53 pm

Brandon Gates commented
“The offset is a constant for each month of each station, not across all stations. The slope of any trend for any station will remain the same regardless of where the intercept is.”
I didn’t realize this was how they were creating their baseline. I need to ponder this for a while, but I’m inclined to attribute the differences between the two methods to infilling and homogenization.
“I still see a big spike circa 1972 which coincides with a dip in observations. That does not give me warm fuzzies.”
I don’t do anything to reduce “weather” in my process, except take advantage of the averaging of large collections of numbers, so a significant reduction of values increases the bleed through of “noise”. For better or worse.
“I don’t understand all the calculations you’re making.”
I create my version of an anomaly, then average them together. I do other things in other fields, but that’s beside the point.
“You brought up AMO and PDO; think about what those indices represent.”
Self organizing pooling of energy. But we don’t know everything about them, particularly the time constants of all of their parts, so it’s difficult to nail down the impact on the overall energy balance.
“One thing we do know is that if incoming flux is greater than outgoing …. wouldn’t you expect outgoing flux to be greater than incoming?”
Only if we understand all of the time constants of all of the climate systems, and I don’t believe we do yet. But if we did, then yes.
“I’ve asked you at least once now how someone could demonstrate to you that CO2 is a factor and you have not answered.”
I don’t know, he!! When I started looking at AGW 15+ years ago I left open the possibility that they might even be right; I have become more and more convinced they are wrong, though.
“At present, that there’s an error in how you’re aggregating your derivatives.”
And if there isn’t?
The math part of the code is stone simple as such things go, and it’s posted at the URL in my name. It’d be pretty easy to prove me wrong; that would shut me up about temp trends, and I’m sure it would make Mosh happy.
Oh, and the Moon causes the tides (or at least most of them) : )

Brandon Gates
Reply to  Mi Cro
November 17, 2014 1:19 pm

MiCro,

I didn’t realize this was how they were creating their baseline. I need to ponder this for a while, but I’m inclined to attribute the differences between the two methods to infilling and homogenization.

I’ve had the GHCN monthly data for some time now. When I replicate the anomaly method (to my best understanding of it) and compare raw to adjusted, I come up with this:
https://drive.google.com/file/d/0B1C2T0pQeiaST082SnBJdXpvTVk
So, NCDC’s homogenization process adds on the order of 0.6 C to the trend since 1880 from raw to adjusted (yellow curve), which does not make me happy. At first blush I’d expect such adjustments to be more or less trendless … unless there is some sort of systemic/environmental bias causing long-term trends. And there is: UHI. GISS knows this, so their process adjusts the trend downward by about 0.2 C since 1880 (blue curve), starting from the already upward-adjusted data. I can make these comparisons because I’ve attempted to replicate the anomaly calculation as done in the final products. Even so, I don’t have it quite right yet, so take my own output with a good many grains of salt. However, my conclusion at present is that GISS’s land-only anomaly product has on the order of a net 0.4 C of temperature trend that is best explained by NCDC’s adjustments to surface station data.
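For anyone who wants to see the shape of that comparison without downloading anything, here’s a stand-in sketch. The two series are invented, with the adjusted one carrying extra trend by construction; the real curves are at the link above:

import numpy as np

# Stand-in sketch of the comparison described above: build an annual anomaly series
# from raw and from adjusted data, fit a trend to each, and difference the trends.
# The two series here are invented, not the actual GHCN output behind the linked chart.
years = np.arange(1880, 2015)
rng = np.random.default_rng(2)
raw = 0.004 * (years - 1880) + rng.normal(0, 0.1, years.size)   # pretend raw anomalies
adjusted = raw + 0.0045 * (years - 1880)                        # pretend adjustments add trend

trend_raw = np.polyfit(years, raw, 1)[0] * (2014 - 1880)
trend_adj = np.polyfit(years, adjusted, 1)[0] * (2014 - 1880)
print(round(trend_adj - trend_raw, 2), "C of extra trend from the adjustments")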

I don’t do anything to reduce “weather” in my process, except take advantage of the averaging of large collections of numbers, so a significant reduction of values increases the bleed through of “noise”. For better or worse.

I’m not talking about that kind of short-term noisy variability here. Your curves show rather large deviations which coincide with similarly large changes in the number of observations. I’m suggesting that your derivative calculations show artifacts which are somehow sensitive to the number of observations. It could be a coding error, or a problem with your method. I can’t tell which, but I can point out what I see.

The math part of the code is stone simple as such things go, and it’s posted at the URL in my name. It’d be pretty easy to prove me wrong; that would shut me up about temp trends, and I’m sure it would make Mosh happy.

It would help if you pointed me to a specific file in your SourceForge repo. Even better if you could write up a step-by-step description of each calculation. Your blog post thus far has not been sufficient for me to understand exactly what you’re doing. Aside from that, you know your code and method best, so you’re the best one to answer challenges to their output.
I’m not privy to your conversations with Mosh, so I have no comment.

Self organizing pooling of energy.

Sure, that describes the climate system as a whole. However, indices such as AMO and MEI attempt to describe internal energy fluxes based on anomalies of various meteorological variables. Or, IOW, how energy is moving around in the system in such a way that can be tied to weather phenomena such as surface temps, precipitation, storm frequency/intensity, etc.

But we don’t know everything about them, particularly the time constants of all of their parts, so it’s difficult to nail down the impact on the overall energy balance.

The problem with this tautology is that measurements of internal variability don’t speak directly to the net energy contained in the system — only (mostly) to energy moving around inside it. I say “mostly” parenthetically because when, for example, ENSO is in a positive phase, SST anomalies are higher than average, which tends toward more energy loss from the system as a whole. It’s one of those seemingly counter-intuitive things that most everyone screws up — contrarian and consensus AGW pundits alike.
But I digress. The main point I’d like to make is that our lack of understanding of a great deal about how ocean and atmospheric circulations work is more a problem for short- and medium-term prediction. In the long run, such interannual and decadal fluctuations revert to a mean. If you trusted the temperature records, you could see how evident it is that the past 20 years of flat surface temp trends are not at all novel, and do not necessarily signal a natural climate regime change, nor a falsification of CO2’s external forcing role. There are large uncertainties in the timing, magnitude and location of effects, but directionality is fairly well constrained … at least for those of us who don’t think that climatologists are part of some nefarious worldwide conspiracy, or are, to a man and woman, wholly incompetent.

Only if we understand all of the time constants of all of the climate systems, and I don’t believe we do yet. But if we did, then yes.

Some of the time constants involved exceed tens if not hundreds of your lifetimes. Fortunately we think we do know something about them, because we’ve got hundreds of thousands of years of data on how the planet responds to external forcings in the absence of our influences. Uncertainty abounds, of course … we all know how reliable treemometers are.

I don’t know, he!! When I started looking at AGW 15+ years ago I left open the possibility that they might even be right; I have become more and more convinced they are wrong, though.

Over our exchanges I’ve noticed two main themes of your contrarianism: the data cannot be trusted and we don’t know enough to form any conclusions. That kind of position is pretty much unassailable because it leaves very little avenue for debate based on anything remotely resembling quantifiable objective factual information. Given all that we don’t understand about the human body and the health care industry’s love of profit, one wonders if you ever seek medical treatment?

And if there isn’t?

Then you should publish because it means that every single analysis done by multiple independent teams is wrong.

Oh, and the Moon causes the tides (or at least most of them) : )

Indeed, but there’s a lot we don’t know about gravity, or how the moon got there to begin with. Tide prediction tables are routinely in error by half a foot or more for a number of reasons, including that we can’t predict barometric and wind conditions on a daily basis a year in advance.

Reply to  Brandon Gates
November 17, 2014 2:48 pm

Brandon Gates commented
It will take me a few days to respond appropriately, but I had a couple of questions.

I’ve had the GHCN monthly data for some time now. When I replicate the anomaly method (to my best understanding of it) and compare raw to adjusted

My anomaly method?

Then you should publish because it means that every single analysis done by multiple independent teams is wrong.

And they all feed the data into a surface model prior to generating a temp series.
Ok, the code.
Here’s the readme for the data

MAX 103-108 Real Maximum temperature reported during the
day in Fahrenheit to tenths--time of max
temp report varies by country and
region, so this will sometimes not be
the max for the calendar day. Missing =
9999.9
….
MIN 111-116 Real Minimum temperature reported during the
day in Fahrenheit to tenths--time of min
temp report varies by country and
region, so this will sometimes not be
the min for the calendar day. Missing =
9999.9

A SubStr starting at 111 is the Min temp, and at 103 the Max temp.
Here’s the field parsing code

ymxtemp - ymntemp,                                    -- RISING_TEMP_DIFF: yesterday's max minus yesterday's min
ymxtemp - to_number(trim(SubStr(file_line,111,6))),   -- FALLING_TEMP_DIFF: yesterday's max minus today's min
ydate,                                                -- ODATE: observation date
to_number(trim(SubStr(file_line,103,6))) - ymxtemp,   -- MXDIFF: today's max minus yesterday's max
ymxtemp,                                              -- YMXTEMP: yesterday's max
trim(SubStr(file_line,111,6)) - ymntemp,              -- MNDIFF: today's min minus yesterday's min
(ymxtemp - ymntemp) - (ymxtemp - to_number(trim(SubStr(file_line,111,6)))),   -- DIFF: rising minus falling
ymntemp,                                              -- YMNTEMP: yesterday's min
trim(SubStr(file_line,25,6)) - ytemp,                 -- AVDIFF: today's average temp minus yesterday's
ytemp                                                 -- YAVTEMP: yesterday's average temp

The ‘y’ indicates a saved variable from ‘yesterday’, followed by a usable label.

“RISING_TEMP_DIFF” NUMBER(6,1),
“FALLING_TEMP_DIFF” NUMBER(6,1),
“ODATE” TIMESTAMP,
“MXDIFF” NUMBER(8,2),
“YMXTEMP” NUMBER(6,1),
“MNDIFF” NUMBER(8,2),
“DIFF” NUMBER(8,2),
“YMNTEMP” NUMBER(6,1),
“AVDIFF” NUMBER(8,2),
“YAVTEMP” NUMBER(6,1)

These are the matching table field names, where the parsed data ends up.
So with this you can trace how the values in each record are derived.
Then this is the average function on a field

avg(MNDiff) as MNDiff,
avg(MXDiff) as MXDiff,
avg(AVDiff) as AVDiff,

I also sum them

sum(MNDiff) as MNSum,
sum(MXDiff) as MXSum,
sum(AVDiff) as AVSum

For yearly reports I use ‘Group By year’ where year is the numeric value assigned to each record.
Lastly this uses Mn/Mx Lat/Lon values provided during invocation to select some area.

to_number(asr.LAT) >= to_number(''' || mnLat || ''') and to_number(asr.LAT) <= to_number(''' || mxLat || ''') and to_number(asr.LON) >= to_number(''' || mnLon || ''') and to_number(asr.LON) <= to_number(''' || mxLon || ''')

Since Lat/Lon are stored as text strings, and strings sort by character values, multi-digit numbers have to be converted to numbers to compare correctly.
That’s it: some subtraction to create the record values based on the prior day, and then an average function over a group of stations. There’s other stuff to select which years to include, whether a station has enough samples, etc. But the math is avg(diff) by year (or by day for daily reports).
I don’t think it’s station count. I’ve been very restrictive (in other testing) to reduce the number of samples to very low numbers, and it doesn’t take a lot of stations (10-20) to push the significant digits further out to the right. I’m sure there’s some function that would tell us how much of a difference the sample size makes, but I don’t know what I’d need to do to get the db to spit out the right info.

Brandon Gates
November 17, 2014 7:34 pm

Mi Cro,

My anomaly method?

No, my understanding of the GISS anomaly method … I was trying to figure out what GISTemp would look like run over the unadjusted raw data from GHCN. Thanks for the code and for pointing me to the specific readme file; I’ll have a go at it over the next few days.