I’m happy to present this essay created from both sides of the aisle, courtesy of the two gentlemen below. Be sure to see the conclusion. I present their essay below with only a few small edits for spelling, format, and readability. Plus an image, a snapshot of global temperatures. – Anthony

By Zeke Hausfather and Steven Mosher
There are a variety of questions that people have about the calculation of a global temperature index, ranging from the selection of data and the adjustments made to that data, to the actual calculation of the average. Some even question whether the measure makes any sense at all. It’s not possible to address all of these questions in one short piece, but some of them can be addressed and reasonably settled. In particular, we are in a position to answer the questions about potential biases in the selection of data and in how that data is averaged.
Before the discussion can move on to the important matters of adjustments to data or, for example, UHI issues in the source data, it is important to settle some answerable questions. Namely: do the averaging methods used by GISS, CRU, and NCDC bias the result? There are a variety of methods for averaging spatial data; do the methods selected and implemented by the big three bias the result?
There has been a trend of late among climate bloggers on both sides of the divide to develop their own global temperature reconstructions. These have ranged from simple land reconstructions using GHCN data (either v2.mean unadjusted data or v2.mean_adj data) to full land/ocean reconstructions and experiments with alternative datasets (GSOD, WMSSC, ISH).
Bloggers and researchers who have developed reconstructions so far this year include:
Steven Mosher
And, just recently, the Muir Russell report
What is interesting is that the results from all these reconstructions are quite similar, despite differences in methodologies and source data. All are also quite comparable to the “big three” published global land temperature indices: NCDC, GISTemp, and CRUTEM.
[Fig 1]
The task of calculating global land temperatures is actually relatively simple, and the differences between reconstructions can be distilled down to a small number of choices:
1. Choosing a land temperature series.
Ones analyzed so far include GHCN (raw and adjusted), WMSSC, GISS Step 0, ISH, GSOD, and USHCN (raw, time-of-observation adjusted, and F52 fully adjusted). Most reconstructions to date have chosen to focus on raw datasets, and all give similar results.
[Fig 2]
It’s worth noting that most of these datasets have some overlap. GHCN and WMSSC both include many (but not all) of the same stations. GISS Step 0 includes all GHCN stations in addition to USHCN stations and a selection of stations from Antarctica. ISH and GSOD have quite a bit of overlap, and include hourly/daily data from a number of GHCN stations (though they have many, many more station records than GHCN in the last 30 years).
2. Choosing a station combination method and a normalization method.
GHCN in particular contains a number of duplicate records (dups) and multiple station records (imods) associated with a single wmo_id. Records can be combined at a single location and/or grid cell and converted into anomalies through the Reference Station Method (RSM), the Common Anomalies Method (CAM), the First Differences Method (FDM), or the Least Squares Method (LSM) developed by Tamino and Roman M. Depending on the method chosen, you may be able to use more stations with short records, or end up discarding station records that do not have coverage in a chosen baseline period. Different reconstructions have mainly made use of CAM (Zeke, Mosher, NCDC) or LSM (Chad, Jeff Id/Roman M, Nick Stokes, Tamino). The choice between the two does not appear to have a significant effect on results, though more work could be done using the same model and varying only the combination method.
[Fig 3]
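To make the combination step concrete, here is a minimal sketch of the Common Anomalies Method in Python. The station records, years, and function names are invented for illustration, not taken from any of the reconstructions above: each record is converted to anomalies against its own mean over a common baseline period, then duplicate records at a location are averaged.

```python
# Sketch of the Common Anomalies Method (CAM) on a toy dataset.
from statistics import mean

def to_anomalies(series, baseline_years):
    """series: {year: temp}. Returns {year: anomaly vs baseline mean}."""
    base = [t for y, t in series.items() if y in baseline_years]
    if not base:
        return {}                      # record discarded: no baseline coverage
    b = mean(base)
    return {y: t - b for y, t in series.items()}

def combine_duplicates(records, baseline_years):
    """Average the anomalies of all duplicate records at one location."""
    anoms = [to_anomalies(r, baseline_years) for r in records]
    years = sorted({y for a in anoms for y in a})
    return {y: mean(a[y] for a in anoms if y in a) for y in years}

# Two hypothetical duplicate records with a constant offset between them;
# converting to anomalies removes the offset before they are merged.
dup1 = {1961: 10.0, 1975: 10.5, 1990: 11.0}
dup2 = {1961: 12.0, 1975: 12.5, 1990: 13.0}
combined = combine_duplicates([dup1, dup2], baseline_years=range(1961, 1991))
```

Note how the two records disagree by a constant 2 °C yet produce identical anomalies, which is the point of anomalizing before combining.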
3. Choosing an anomaly period.
The choice of the anomaly period is particularly important for reconstructions using CAM, as it will determine the number of usable records. The anomaly period can also produce odd behavior in the anomalies if it is too short, but in general the choice makes little difference to the results. In the figure that follows, Mosher shows the difference between picking an anomaly period like CRU does, 1961-1990, and picking the anomaly period that maximizes the number of monthly reports in a 30 year period, which turns out to be 1953-1982 (Mosher). No other 30 year period in GHCN has more station reports. This refinement, however, has no appreciable impact.
[Fig 4]
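Finding the report-maximizing anomaly period is a simple sliding-window count. A sketch, using made-up yearly report totals rather than actual GHCN station counts (the toy numbers are chosen only so the best window lands on the period named above):

```python
# Slide a 30-year window over the record, count station reports in each
# window, and keep the window with the most.
def best_window(reports, width=30):
    """reports: {year: number of reports}. Returns ((start, end), count)."""
    years = sorted(reports)
    best, best_count = None, -1
    for start in range(years[0], years[-1] - width + 2):
        count = sum(reports.get(y, 0) for y in range(start, start + width))
        if count > best_count:
            best, best_count = (start, start + width - 1), count
    return best, best_count

# Toy counts: a flat baseline with a mid-century peak in reporting.
toy = {y: 100 for y in range(1880, 2010)}
for y in range(1953, 1983):
    toy[y] = 300
period, n = best_window(toy)
```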
4. Gridding methods.
Most global reconstructions use 5×5 grid cells to ensure good spatial coverage of the globe. GISTemp uses a rather different method of equal-area grid cells. However, the choice between the two methods does not seem to make a large difference, as GISTemp’s land record can be reasonably well replicated using 5×5 grid cells. Finer-resolution grid cells can improve regional anomalies, but will often introduce spatial bias in the results, as there will be large missing areas during periods or in locations where station coverage is limited. For the most part, the choice is not that important, unless you choose extremely large or small grid cells. In the figure that follows, Mosher shows that selecting a smaller grid does not impact the global average or the trend over time. In his implementation there is no averaging or extrapolation over missing grid cells: all the stations within a grid cell are averaged and then the entire globe is averaged. Missing cells are not imputed with any values.
[Fig 5]
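A minimal sketch of the gridding step described above, with invented stations: anomalies are averaged within each 5×5 cell, and cells are combined with area weights proportional to the cosine of the cell-center latitude. Empty cells are simply skipped, not infilled, matching the no-imputation approach just described.

```python
# Average stations into 5x5 degree cells, then area-weight the cells.
from math import cos, radians

def grid_average(stations, cell=5.0):
    """stations: list of (lat, lon, anomaly). Returns area-weighted mean."""
    cells = {}
    for lat, lon, anom in stations:
        key = (int(lat // cell), int(lon // cell))
        cells.setdefault(key, []).append(anom)
    num = den = 0.0
    for (ilat, _), anoms in cells.items():
        center_lat = (ilat + 0.5) * cell     # latitude of the cell center
        w = cos(radians(center_lat))         # cell area shrinks toward poles
        num += w * sum(anoms) / len(anoms)
        den += w
    return num / den

# Two stations share one high-latitude cell; one sits in a tropical cell.
stations = [(62.0, 10.0, 1.0), (63.0, 12.0, 3.0), (2.0, 12.0, 0.0)]
avg = grid_average(stations)
```

The unweighted mean of the two cell anomalies would be 1.0; area weighting pulls the result down because the warm cell sits at high latitude, where cells cover less area.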
5. Using a land mask.
Some reconstructions (Chad, Mosh, Zeke, NCDC) use a land mask to weight each grid cell by its respective land area. The land mask determines how much of a given cell (say, 5×5) is actually land. A cell on a coast, thus, could have only a portion of land in it; the land mask corrects for this. The percent of land in a cell is constructed from a 1 km by 1 km dataset. The net effect of land masking is to increase the trend, especially in the last decade. This factor is the main reason why recent reconstructions by Jeff Id/Roman M and Nick Stokes are a bit lower than those by Chad, Mosh, and Zeke.
[Fig 6]
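The land-mask step amounts to multiplying each cell’s area weight by its land fraction. A sketch with hypothetical cells (the land fractions and anomalies below are invented; a real implementation would derive the fractions from the 1 km dataset mentioned above):

```python
# Weight each cell by (cell area) x (fraction of the cell that is land).
from math import cos, radians

def land_masked_average(cells):
    """cells: list of (center_lat, land_fraction, anomaly)."""
    num = den = 0.0
    for lat, land_frac, anom in cells:
        w = cos(radians(lat)) * land_frac    # coastal cells count less
        num += w * anom
        den += w
    return num / den

cells = [(52.5, 1.0, 0.9),    # interior cell, all land
         (52.5, 0.3, 0.5),    # coastal cell, only 30% land
         (2.5, 1.0, 0.2)]     # tropical cell, all land
avg = land_masked_average(cells)
```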
6. Zonal weighting.
Some reconstructions (GISTemp, CRUTEM) do not simply calculate the land anomaly as the size-weighted average of all grid cells covered. Rather, they calculate anomalies for different regions of the globe (each hemisphere for CRUTEM, 90°N to 23.6°N, 23.6°N to 23.6°S and 23.6°S to 90°S for GISTemp) and create a global land temp as the weighted average of each zone (weightings 0.3, 0.4 and 0.3, respectively for GISTemp, 0.68 × NH + 0.32 × SH for CRUTEM). In both cases, this zonal weighting results in a lower land temp record, as it gives a larger weight to the slower warming Southern Hemisphere.
[Fig 7]
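The zonal weightings quoted above are straightforward to write down. A sketch with made-up zone anomalies (only the weights come from the text; the anomaly values are illustrative):

```python
# The two zonal-weighting schemes described in the text.
def gistemp_style(north, tropics, south):
    # 90N-23.6N, 23.6N-23.6S, 23.6S-90S weighted 0.3 / 0.4 / 0.3
    return 0.3 * north + 0.4 * tropics + 0.3 * south

def crutem_style(nh, sh):
    # hemispheric averages weighted 0.68 NH + 0.32 SH
    return 0.68 * nh + 0.32 * sh

# With a faster-warming north and slower-warming south, the zonal scheme
# gives the south its full weight regardless of how many stations it has.
g = gistemp_style(north=1.0, tropics=0.6, south=0.3)
c = crutem_style(nh=0.9, sh=0.4)
```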
These steps will get you a reasonably good global land record. For more technical details, look at any of the many different models that have been publicly released:
http://noconsensus.wordpress.com/2010/03/25/thermal-hammer-part-deux/
http://residualanalysis.blogspot.com/2010/03/ghcn-processor-11.html
http://rankexploits.com/musings/2010/a-simple-model-for-spatially-weighted-temp-analysis/
http://drop.io/treesfortheforest
http://moyhu.blogspot.com/2010/04/v14-with-maps-conjugate-gradients.html
7. Adding in ocean temperatures.
The major decisions involved in turning a land reconstruction into a land/ocean reconstruction are choosing a SST series (HadSST2, HadISST/Reynolds, and ERSST have been explored so far: http://rankexploits.com/musings/2010/replication/), gridding and anomalizing the series chosen, and creating a combined land-ocean temp record as a weighted combination of the two. This is generally done by: global temp = 0.708 × ocean temp + 0.292 × land temp.
[Fig 8]
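The land/ocean combination is just the weighted sum given above, with the weights reflecting that oceans cover roughly 70.8% of the globe. A one-line sketch with illustrative anomaly values:

```python
# Combine land and ocean anomalies by global area fraction.
def combine_land_ocean(land_anom, ocean_anom):
    return 0.708 * ocean_anom + 0.292 * land_anom

# Hypothetical anomalies: land warming faster than the ocean.
global_anom = combine_land_ocean(land_anom=0.8, ocean_anom=0.45)
```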
8. Interpolation.
Most reconstructions only cover 5×5 grid cells with one or more stations reporting in any given month. This means that any areas without station coverage in a given month are implicitly assumed to have the global mean anomaly. This is arguably problematic, as high-latitude regions tend to have the poorest coverage and are generally warming faster than the global average.
GISTemp takes a somewhat different approach, assigning a temperature anomaly to all missing grid boxes located within 1200 km of one or more stations that do have defined temperature anomalies. They rationalize this based on the fact that “temperature anomaly patterns tend to be large scale, especially at middle and high latitudes.” Because GISTemp excludes SST readings from areas with sea ice cover, this leads to the extrapolation of land anomalies to ocean areas, particularly in the Arctic. The net effect of interpolation on the resulting GISTemp record is small but not insignificant, particularly in recent years. Indeed, the effect of interpolation is the main reason why GISTemp shows somewhat different trends from HadCRUT and NCDC over the past decade.
[Fig 9]
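A sketch of the 1200 km infilling idea, with invented stations and a weight that falls linearly to zero at the radius, matching GISTemp’s published description. GISTemp’s actual procedure is more involved; this only illustrates the weighting form.

```python
# Infill a cell with a distance-weighted anomaly from stations within
# 1200 km; great-circle distance via the haversine formula.
from math import radians, sin, cos, asin, sqrt

def km_between(p, q):
    """Great-circle distance in km between (lat, lon) points p and q."""
    lat1, lon1, lat2, lon2 = map(radians, (p[0], p[1], q[0], q[1]))
    h = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def infill(point, stations, radius_km=1200):
    """stations: list of ((lat, lon), anomaly). None if nothing in range."""
    num = den = 0.0
    for loc, anom in stations:
        d = km_between(point, loc)
        if d < radius_km:
            w = 1.0 - d / radius_km      # linear taper to zero at the radius
            num += w * anom
            den += w
    return num / den if den else None

# Two hypothetical high-latitude stations flanking an empty Arctic cell.
stations = [((70.0, 20.0), 2.0), ((70.0, 40.0), 1.0)]
anom = infill((72.0, 30.0), stations)    # borrows from both stations
far = infill((0.0, 0.0), stations)       # out of range: left empty
```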
9. Conclusion
As noted above, there are many questions about the calculation of a global temperature index. However, some of those questions can be fairly answered, and have been fairly answered, by a variety of experienced citizen researchers from all sides of the debate. The approaches used by GISS, CRU, and NCDC do not bias the result in any way that would erase the warming we have seen since 1880. To be sure, there are minor differences that depend upon the exact choices one makes: choices of ocean data sets, land data sets, rules for including stations, rules for gridding, area weighting approaches. But all of these differences are minor when compared to the warming we see.
That suggests turning the discussion to the matters that have not been as thoroughly investigated by independent citizen researchers on all sides: data adjustments, metadata accuracy, and UHI. Now, however, the community on all sides of the debate has a set of tools to address these questions.

I’m impressed by the effort. However, as an engineer concerned about heat-balances, measuring the surface air temperature of the earth doesn’t tell us much of anything. The air has negligible mass compared to the oceans/earth for one thing.
Measuring OHC (ocean heat-content) is the only rational way to make global heat-balance determinations or even determining reasonable trends.
Nice graph from Smokey. Kind of puts things into perspective.
What I want to see is the result of the questioning of the data that should begin from this point. But I want the various “temperature trenders” whose graphs appear above to consider the point made by Geoff Sherrington in relation to raw data. Is there any change when truly raw data (as reported in the original observations taken at each climate station) is used? I am not really interested in the tortured excuse for data that we find in the NOAA, GISS and CRU/Hadley databases. And what happens when the most obvious anthropogenic influences (eg. UHI) are eliminated at each (preferably rural) site?
I am also rather suspicious of the anomaly method and would prefer careful construction of annual altitude- and latitude-adjusted temperatures for equal-area grid cells (or parts thereof).
I think the “temperature trenders” also need to consider the findings of Roy Spencer in relation to the degree of UHI associated with population density. The greatest impact appears to occur at the beginning, i.e. from essentially zero population density to a few 10’s or 100’s of people per square km, rather than from 100’s to 1000s. So sites that appear on most measures to be rather rural may nevertheless be impacted by substantial UHI effects regardless of proximity to a major centre. I suspect that James Hansen’s “nightlights” strategy is not an adequate solution to the problem.
And of course there is Clear Climate Code, a reconstruction, in Python, of GISTEMP, for clarity.
Because clarity is our goal, we think that the source code should be of interest to people who want to see the nuts and bolts of one particular implementation. Source code is here.
Where is the code for MoshTemp? No Graphs Without Code!
DirkH: Is the SH warming faster than the NH?
No. The SH is warming slower than the NH.
DirkH: So global warming seems to affect foremost landmasses with a lack of thermometers.
No surprise there. Oceans are vast and they warm slowly.
DirkH: Hmm, what could one do to find out more?
Check out the bloggers linked in the original post. They are all doing more.
DirkH:Add thermometers? I don’t know if that is a scientific answer, though, me not being a scientist…
Neither am I, but I found more thermometers in the GSOD data set mentioned in posts above.
Can anyone shed light on this for me please. UAH trend 1980-2010 is almost identical to HADsst2gl trend.
http://woodfortrees.org/plot/hadsst2gl/from:1980/trend/offset:-0.104/plot/uah/from:1980/trend
But we see tropospheric temps rise much more than sst’s when there is an ENSO event like in 1998 or 2009-10.
So how can the Hadley SST trend be the same as the UAH tropospheric trend over the longer term? Or am I missing something obvious?
Smokey: I don’t think the global temperature is roughly zero, as that graph seemingly shows. Not temperature then, is it?
beng says:
July 14, 2010 at 5:36 am
I’m impressed by the effort. However, as an engineer concerned about heat-balances, measuring the surface air temperature of the earth doesn’t tell us much of anything. The air has negligible mass compared to the oceans/earth for one thing.
Measuring OHC (ocean heat-content) is the only rational way to make global heat-balance determinations or even determining reasonable trends.
Agreed. However, the current OHC record is unreliable. 🙁
drj11,
You’re right, it’s a temperature anomaly chart.
Mosh’s Figure 8 Gistemp/hadley area Land/Ocean graph shows an overall warming of around 0.6C but Gistemp shows around 0.8.
http://woodfortrees.org/plot/gistemp/from:1900/mean:36
WUWT?
How much overlap is “some” overlap in the datasets? Unless you show a chart indicating otherwise, it looks from the highly congruent curves that “some” is pretty high. So then the issue is not agreement of seemingly independent reconstructions, but the reliability of the source data — which the SurfaceStations project has shown to be suspect. The bias in reconstruction sausage-making is not in the grinding method, it’s in the raw meat.
Please generate a comparison of the overlap in datasets, Mosh.
Zeke Hausfather says:
July 13, 2010 at 8:23 pm
“Bill Illis,
I’m not sure where you are getting those slopes from, but they are much more similar than that.
1900-2009 (degrees C per decade)
All stations: 0.072
Long-lived: 0.086
Long-lived rural: 0.071”
Thanks for the data: Going back to 1880 using the same data:
1880-2009 (degrees C per decade)
All stations: 0.063
Long-lived: 0.072
Long-lived rural: 0.051
1880-2009 (Increase over 129 years)
All stations: 0.808C
Long-lived: 0.933C
Long-lived rural: 0.662C
That is different enough in my opinion.
Steven Mosher says: July 13, 2010 at 3:29 pm
3. underlying mechanism. Well, the results are consistent with and confirm the theory of GHG warming, espoused BEFORE this data was collected. They don’t prove the theory; no theory is proven.
Steven, look at the trend from ~1917-1943. Steven, look at the trend from ~1975-1999 (both periods equally cherry picked). Please explain how “the results are consistent with and confirm the theory of GHG warming”.
Hasn’t NASA/GISS more important things to do than just taking care of weather/climate: “… foremost … to find a way to reach out to the Muslim world … to help them feel good about their historic contribution to science … and math and engineering.” How about climate?
Sorry, this time I couldn’t resist.
Gary,
There are four datasets, ISH, GSOD, WMSSC, and GHCN, but ISH/GSOD and WMSSC/GHCN are mostly overlapping. However ISH/GSOD has -many- more stations (20,000+) post-1970 than GHCN (~6000, and only ~2000 post 1990).
Fig 2 shows reconstructions from GSOD, WMSSC, and GHCN. You could add in UAH or RSS as well if you want, though they are measuring something slightly different.
Bill Illis,
Well, there are < 100 long-lived rural stations prior to 1900, so I'd imagine there is some spatial bias creeping in prior to then unless they are remarkably well distributed. I'll look into it some more when I have a chance.
Mosher: “High spatial correlation means fewer stations are required to capture the signal.”
2006 GHCN has 2 stations reporting July Max in Canada. Do you think 2 is “too few”?
Anyway … it’s colder now in July than it was. A lot colder.
GHCN v2 July Max in Canada (yearly mean of station July maxima, °C, and count of reporting stations):
Year JulMax JulCount
1840 24.4 1
1841 25.8 1
1842 25.4 1
1843 25.5 1
1844 25.8 1
1845 25.4 1
1846 26.2 1
1847 25 1
1848 22.7 1
1849 24.8 1
1850 25.8 1
1851 22.4 1
1852 23.9 1
1853 25.1 1
1854 29.3 1
1855 24.8 1
1856 26.8 1
1857 24.9 1
1858 24.2 1
1859 23.7 1
1860 22.8 1
1861 23.7 1
1862 24.7 1
1863 23.8 1
1864 26.7 1
1865 23.3 2
1866 27.4833333333333 6
1867 25.9166666666667 6
1868 30.6166666666667 6
1869 23.8833333333333 6
1870 25.9666666666667 6
1871 24.51 10
1872 25.3214285714286 14
1873 25.0733333333333 15
1874 24.2857142857143 14
1875 23.7705882352941 17
1876 24.452380952381 21
1877 25.0129032258065 31
1878 24.9714285714286 28
1879 23.7583333333333 36
1880 24.0323529411765 34
1881 24.2055555555556 36
1882 23.6194444444444 36
1883 23.2057142857143 35
1884 21.4066666666667 45
1885 24 41
1886 24.2955555555556 45
1887 25.4851063829787 47
1888 23.2377777777778 45
1889 23.6916666666667 48
1890 24.2 50
1891 22.5545454545455 55
1892 24.401724137931 58
1893 23.8954545454545 66
1894 25.0913043478261 69
1895 23.1041095890411 73
1896 24.3 74
1897 23.7575342465753 73
1898 24.3728395061728 81
1899 23.6303797468354 79
1900 23.1247191011236 89
1901 23.8759036144578 83
1902 22.9685393258427 89
1903 21.952808988764 89
1904 23.3631578947368 95
1905 23.784375 96
1906 25.0797872340426 94
1907 23.05 96
1908 24.5416666666667 108
1909 23.0342105263158 114
1910 24.2504347826087 115
1911 23.3418032786885 122
1912 22.2008196721311 122
1913 22.875 140
1914 24.8126506024096 166
1915 22.4983695652174 184
1916 24.4989473684211 190
1917 24.6735751295337 193
1918 23.5094059405941 202
1919 24.771144278607 201
1920 24.3913705583756 197
1921 25.6245283018868 212
1922 23.7359090909091 220
1923 23.9490990990991 222
1924 23.9655462184874 238
1925 23.8238493723849 239
1926 24.5540983606557 244
1927 23.273640167364 239
1928 23.6304721030043 233
1929 24.145867768595 242
1930 24.2072874493927 247
1931 24.4859922178988 257
1932 22.5463035019455 257
1933 24.0145038167939 262
1934 23.803007518797 266
1935 24.4981684981685 273
1936 25.4597826086957 276
1937 24.6967971530249 281
1938 24.3885416666667 288
1939 24.3436241610738 298
1940 23.7579124579125 297
1941 25.1128712871287 303
1942 22.974375 320
1943 23.7377643504532 331
1944 23.3178885630499 341
1945 23.1789473684211 342
1946 23.0744186046512 344
1947 24.1592261904762 336
1948 22.802915451895 343
1949 22.8943502824859 354
1950 22.1146814404432 361
1951 22.7132075471698 371
1952 23.1331606217617 386
1953 22.4753886010363 386
1954 21.9177215189873 395
1955 23.3086294416244 394
1956 21.6509900990099 404
1957 22.0631707317073 410
1958 22.098313253012 415
1959 23.031990521327 422
1960 22.4906474820144 417
1961 22.5366197183099 426
1962 20.7688073394495 436
1963 21.9846681922197 437
1964 21.8165909090909 440
1965 21.1193333333333 450
1966 22.01431670282 461
1967 21.9592274678112 466
1968 21.3837953091684 469
1969 20.9380753138075 478
1970 22.2475308641975 486
1971 21.3024291497976 494
1972 20.5340248962656 482
1973 22.2576612903226 496
1974 21.3495088408644 509
1975 23.2423762376238 505
1976 21.261554192229 489
1977 21.1826446280992 484
1978 21.59670781893 486
1979 22.6260504201681 476
1980 21.1364224137931 464
1981 22.1787685774947 471
1982 21.6768558951965 458
1983 21.675550660793 454
1984 22.2848758465011 443
1985 22.2145833333333 432
1986 20.4527186761229 423
1987 21.9730046948357 426
1988 22.3857142857143 427
1989 23.2185096153846 416
1990 23.2041095890411 73
1995 19.9 28
1996 19.2171428571429 35
1997 19.4028571428571 35
1998 20.5628571428571 35
1999 19.08 35
2000 19.8942857142857 35
2001 19.88 35
2002 17.2333333333333 24
2003 18.2217391304348 23
2004 19.2066666666667 15
2005 18.7833333333333 12
2006 18.5 2
2007 18.38 10
2008 21.66 5
2009 20.0206896551724 29
2010 0
drj,
I was remiss in omitting a reference to CCC’s work, though you guys are a tad different from other efforts, being more a replication than a reconstruction (and you don’t fare well on spaghetti graphs, being indistinguishable from GISTemp 😛 )
Mosh should have his code polished, commented, and posted soon.
Geoff Sherrington says:
July 14, 2010 at 5:27 am
There has to be a large component of guessing in all these reconstructions…..
__________________________________________
A. J. Strata did an analysis of the error in the temperature “product” we are fed.
“I am going to focus this post on two key documents that became public with the recent whistle blowing at CRU. The first document concerns the accuracy of the land based temperature measurements, which make up the core of the climate alarmists claims about warming. When we look at the CRU error budget and error margins we find a glimmer of reality setting in, in that there is no way to detect the claimed warming trend with the claimed accuracy….”
http://strata-sphere.com/blog/index.php/archives/11420
“UHI is a tough one, simply because it can depend so much on micro-site effects that are difficult to quantify.”
Actually it is quite easy. Simply measure the temperature near the site, then measure the temperature at the nearest genuinely rural site. The difference is the UHI.
What is difficult is not UHI. UHI is easy. What is difficult is estimating how much UHI there was 100 years ago compared to today. Those sites that are clearly affected by UHI > 0.1 Celsius as detected by the method above should be dismissed for this reason.
tallbloke says:
July 14, 2010 at 5:29 am
“Mosh says:
underlying mechanism. Well, the results are consistent with and confirm the theory of GHG warming, espoused BEFORE this data was collected.”
Which of these two graphs suggests the better correlation Mosh?
http://tallbloke.files.wordpress.com/2010/07/soon-arctic-tsi.jpg
Take your time…”
So tallbloke, even though Mosh hasn’t replied as yet, do you now agree with me that Mosh is a ‘true believer’? I’ve known this for a long time now, but recently on a different thread on tAV you seemed surprised about this fact.
Because Mosh has written a book on Climategate, most people wrongly assume that Mosh is an AGW skeptic. He isn’t and never has been an AGW skeptic, but is in fact a ‘wolf in sheep’s clothing’ CAGW promoter.
His latest collaborations with Ron B, Zeke H, Nick s etc although laudable are IMO nonetheless something of a waste of effort as it’s clear that the whole concept of a so-called global mean surface temperature (GMST) is flawed.
So what’s the point in ‘creating the tools’, as Mosh puts it, so that other issues like land use and UHI can be investigated, if the whole exercise is flawed? Answer: because Mosh & Co somehow think that GMST means something when it doesn’t, and if they continue their work that somehow we’ll all be convinced that the possibility of CAGW is real and that we should therefore do something to avoid it.
IMO it’s a concept/construction created primarily so that those who seek to promote CAGW can attempt to claim that the planet has towards the latter part of the 20th century somehow warmed in an unnatural (claimed by them to be unprecedented) way due to man’s emissions of GHGs.
They aren’t in the slightest bit interested in the poor correlation between 20th century temperature cooling/warming trends and CO2 emissions, and certainly not in much more plausible alternative explanations of these multi-centennial historic trends. What matters to them is the message, i.e. that man is having an effect on the planet and so we must ‘act now’, change how our society is organised and agree to enrich them at our expense.
Please observe the 1940 to present land record was a 0.6 C increase while the global (including ocean) was 0.4 C. This means the ocean alone was close to 0.3 C. When this is combined with land records that were taken away from cities (which have some argument as being less biased by UHI effects), it appears the actual global total should be closer to 0.3 C. This is not even taking into account less reliable sea data in the early record from a different process. Since the rise to 1940 is admitted to be mainly natural, it appears that an increase of about 0.3 C since, with possibly still some continuing recovery from the LIA occurring, is not such a big deal. The current projections from many, including some AGW supporters, that the next 20 years are likely to be cooling, seems to drive the whole AGW band wagon off the track.
Re:
george e. smith
graham g
anna v
beng
and others…
In my own mind I am trying to move beyond the numerology of the temp data and toward looking at the energy and energy flows.
But, as thermo was my worst performance in undergrad, and psychrometrics is the most arcane field known to man, can someone please point me to a cogent discussion of the right way to “average” the energy of two different locations. Is it the energy of the air? What about radiation, thermal “inertia,” etc.?
I have mussed about with some basic excel sheets based on black/grey body energy equivalencies (where the T^4 makes the average energy diff from the “usual” average) but want to expand into latent heat, etc.
I’m sure this is basic stuff that is covered elsewhere, not just a thermo book (I have those about) but hopefully a real-world applied research article.
It would be interesting to take a 5km x 5km x 30,000ft profile as follows:
(1) humidity
(1.1) humidity regime as f(altitude)
(1.2) humidity regime as f(time)
(1.3) humidity as f(location)
(2) temperature
(2.1) temp regime as f(altitude)
(2.2) temp regime as f(time)
(2.3) tem as f(location)
(3) GHG conc
(3.1) total GHG potential as f(time)
(3.2) total GHG potential minus water vapour as f(time)
(4) land use
(4.1) urban as %area
(4.2) suburban as %area
(4.3) rural as % area
(4.x etc) forest, swamp, arid, etc as %area
(5 etc) barometric pressure, wind, cloud cover, precip, sunlight, etc
Now fully instrument four contiguous 5km x 5km blocks in a representative 20km x 20 km area. No, let’s do two of these 20km^2 areas – one around Madison, WI, and one around Huntsville, AL – for comparison. And arrange the 5km^2 boxes to have different %’s of lakes, urban, forest, etc. Cost is clearly f(resolution).
What would we find for energy budget over a multi-year period?
I guess the root question is “Has this already been done?”
Cheers,
BillN
“”” Leonard Weinstein says:
July 14, 2010 at 10:09 am
Please observe the 1940 to present land record was a 0.6 C increase while the global (including ocean) was 0.4 C. This means the ocean alone was close to 0.3 C. When this is combined with land records that were taken away from cities (which have some argument as being less biased by UHI effects), it appears the actual global total should be closer to 0.3 C. This is not even taking into account less reliable sea data in the early record from a different process. Since the rise to 1940 is admitted to be mainly natural, it appears that an increase of about 0.3 C since, with possibly still some continuing recovery from the LIA occurring, is not such a big deal. The current projections from many, including some AGW supporters, that the next 20 years are likely to be cooling, seems to drive the whole AGW band wagon off the track. “””
Leonard, when I see these stories; such as the brief note contained in this post of yours; I always find myself asking the same question; are these “ancient” (1940s) reports still talking about “anomalies” or are they claiming actual Global Temperature measurements; such as a real scientist might report, and actually referenced to some internationally recognised standard; such as the Kelvin Temperature scale for example.
And then that invariably leads me to note the paper by John Christy; et al, in I believe it is “Geophysical Research Letters” for January 2001; surely a reputable peer reviewed Journal.
In that paper, Christy, et al report on about 20 years of simultaneous records of oceanic water near surface (-1 m) Temperatures, and near surface (+3 m) lower tropospheric atmospheric Temperatures; which atmospheric Temperatures should surely fit in most appropriately with the data obtained from the standard Stevenson Screen “owl boxes” and whatever that other mini owl box is called; maybe it’s for bats.
As you know; prior to that time (1980ish); which would include your 1940s; global records of air temperatures over the ocean; which is only about 73% of the entire planet surface; were inferred by actually measuring; not the temperature of that air; but measuring the temperature of a bucket of water hauled over the side from some quite unknown water depth, and then measured on deck (with winds). Since about 1929 (apparently), the water bucket started to be replaced by measurements in a hot engine room, of the intake water picked up by the ship, and evidently used for engine cooling and the like. Once again; such water being gathered from some quite non standard water depth that depended on the specific vessel that reported the data.
For some reason; totally unfathomable to me (pun intended); it was assumed back evidently to the 1850s that the water and air temperatures would be in equilibrium. This would seem almost self evident; given that ocean currents are of the order of a few knots; and meander all over the place; while air speeds over the ocean can be upwards of 100 kph or higher at times; so naturally one would expect them to reach equilibrium. Well to be pedantic, I would not expect that, but evidently many generations of Climate Scientists have believed that.
So what Christy et al reported in Jan 2001, was that the simultaneous water and air temperature data, taken from a fixed water depth of one metre, and a fixed air height over the same buoy of 3 metres, showed that the increase in temperature over that 20 year interval for the air measurements was about 40% less than what the water temperature measurements recorded. I’m ad libbing here about the 40%; maybe they said the water temps inflated the rise by 40% or some other relation; but you get the idea; the air temperatures recorded a sizeable reduction in temperature increase compared to what the water temperatures claimed for the same period and location.
OK; simple problem; so now you have to take all the previous 150 years of oceanic temperature data; well all of that that Phil Jones hasn’t lost or thrown away; and you have to scale it back to 60% of what it was (growth wise) to get equivalent air data that can be properly merged with land based owl box temperatures measured up those poles by the barbecues.
Well NOT SO FAST !! The key result that Christy et al reported; for just that limited 20 years of observation; was not the 60% factor; but they found that the water and air Temperature ARE NOT CORRELATED !!
Not only are they not the same as had been historically believed for 150 years; affecting climate data for 73% of the global surface area; but they are not even correlated; which means that it is inherently impossible to go back and correct those erroneous water temperature data sets from the 1850s on; to arrive at comparable lower troposphere near surface air temperatures. The true air temperatures are NOT recoverable.
So excuse me if I take a jaundiced view of world climate data from the 1940s; I don’t believe any temperature measurement data for the earth’s oceans; and by inference for the whole planet; that precedes 1980; when those first oceanic buoy measurements were begun; well when Christy et al, began their collection.
I did actually e-mail Professor Christy, about that paper; and my recollection is that he said they found some correlations in some parts of the ocean. But he clearly would not have reported a lack of correlation, unless he had found some statistically significant lack of correlation somewhere. Well maybe the best way to put that would be statistically insignificant; rather than significant. I’m not going to Put words IN Prof Christy’s mouth. I heartily recommend that people read HIS paper and see what HE said.
An excerpt of a report from the National Academy of Sciences as listed on a popular warmist website…
Although our knowledge of climate change may be partial, we can be certain that our climate is changing and that human CO2 emissions are responsible. The US National Academy of Sciences issued a clear statement just a month ago which reads: “Some scientific conclusions … have been so thoroughly examined and tested, and supported by so many independent observations … that their likelihood of … being … wrong is vanishingly small. Such conclusions … are … regarded as settled facts. This is the case for the conclusion that the Earth … is warming and that much of this warming is very likely due to human activities.”
(My comments on this topic at the Warmist blog)
Studentskeptic says:
Regarded as settled facts. This is the case for the conclusion that the Earth … is warming and that much of this warming is “very likely due to human activities.”
1. I love the circular logic here, regarded as settled fact, the consensus, the science is settled….. “very likely”…
Picture a Doctor… We are absolutely sure that you have a health issue (although there are some who would disagree)… We are relatively sure that it is likely to be your male reproductive system that’s causing, it so we’ll have to remove it. It will take 33 years to remove your reproductive system piece by piece and there’s only .06% change in your situation by removing your reproductive organs, but we’re very likely sure that is the cause.
Who’s gonna sign up for that doctor? I’m monitoring that particular blog for the first volunteer. (snicker)