Systematic Error in Climate Measurements: The surface air temperature record

Guest essay by Pat Frank

Presented at World Federation of Scientists, Erice, Sicily, 20 August 2015

This is a version of the talk I gave about uncertainty in the global average air temperature record at the 48th Conference of the World Federation of Scientists on “Planetary Emergences and Other Events,” at Erice, Sicily, in August of 2015.

It was a very interesting conference and, as an aside, for me the take home message was that the short-term emergency is Islamic violence while the long-term emergency is some large-scale bolide coming down. Please, however, do not distract conversation into these topics.

Abstract: I had a longer abstract, but here’s the short form. Those compiling the global averaged surface air temperature record have not only ignored systematic measurement error, but have even neglected the detection limits of the instruments themselves. Since at least 1860, thermometer accuracy has been magicked out of thin air. Also since then, and at the 95% confidence interval, the rate or magnitude of the global rise in surface air temperature is unknowable. Current arguments about air temperature and its unprecedentedness are speculative theology.

1. Introduction: systematic error

Systematic error enters into experimental or observational results through uncontrolled and often cryptic deterministic processes. [1] These can be as simple as a consistent operator error. More typically, error emerges from an uncontrolled experimental variable or instrumental inaccuracy. Instrumental inaccuracy arises from malfunction or lack of calibration. Uncontrolled variables can impact the magnitude of a measurement and/or change the course of an experiment. Figure 1 shows the impact of an uncontrolled variable, taken from my own published work. [2, 3]


Figure 1: Left, titration of dissolved ferrous iron under conditions that allowed an unplanned trace of air to enter the experiment. Inset: the incorrect data precisely followed equilibrium thermodynamics. Right, the same experiment but with the appropriately strict exclusion of air. The data are completely different. Inset: the correct data reflect distinctly different thermodynamics.

Figure 1 shows that the inadvertent entry of a trace of air was enough to completely change the course of the experiment. Nevertheless, the erroneous data display coherent behavior and follow a trajectory completely consistent with equilibrium thermodynamics. To all appearances, the experiment was completely valid. In isolation, the data are convincing. However, they are completely wrong because the intruded air chemically modified the iron.

Figure 1 exemplifies the danger of systematic error. Contaminated experimental or observational results can look and behave just like good data, and can rigorously follow valid physical theory. Without care, such data invite erroneous conclusions.

By its nature, systematic error is difficult to detect and remove. Methods of elimination include careful instrumental calibration under conditions identical to the observation or experiment. Methodologically independent experiments that access the same phenomena provide a check on the results. Careful attention to these practices is standard in the experimental physical sciences.

The recent development of a new and highly accurate atomic clock illustrates the extreme care physicists take to eliminate systematic error. Critical to achieving its 10⁻¹⁸ instability was removal of the systematic error produced by the black-body radiation of the instrument itself. [4]


Figure 2: Close-up picture of the new atomic clock. The timing element is a cluster of fluorescing strontium atoms trapped in an optical lattice. Thermal noise is removed using data provided by a sensor that measures the black-body temperature of the instrument.

As a final word, systematic error does not average away with repeated measurements. Repetition can even increase error. When systematic error cannot be eliminated and is known to be present, uncertainty statements must be reported along with the data. In graphical presentations of measurement or calculational data, systematic error is represented using uncertainty bars. [1] Those uncertainty bars communicate the reliability of the result.
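To make that concrete, here is a minimal numerical sketch (not from the original talk) in Python with NumPy. The 15.0 C true value, 0.3 C bias, and 0.2 C random noise are arbitrary illustrative numbers: averaging more readings shrinks the random part of the error toward zero, but the systematic bias passes through the mean untouched.

```python
import numpy as np

rng = np.random.default_rng(0)

true_temp = 15.0   # hypothetical true air temperature, C
bias = 0.3         # illustrative systematic (warm) bias, C
noise_sd = 0.2     # illustrative random measurement noise, C (1 sigma)

for n in (10, 1_000, 100_000):
    readings = true_temp + bias + rng.normal(0.0, noise_sd, size=n)
    mean_error = readings.mean() - true_temp
    # The random component shrinks roughly as noise_sd / sqrt(n);
    # the 0.3 C systematic bias never does.
    print(f"n = {n:>6}: error of the mean = {mean_error:+.3f} C")
```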

2. Systematic Error in Surface Temperature Measurements

2.1. Land Surface Air Temperature

During most of the 20th century, land surface air temperatures were measured using a liquid-in-glass (LiG) thermometer housed in a box-like louvered shield (Stevenson screen or Cotton Region Shelter (CRS)). [5, 6] After about 1985, thermistors or platinum resistance thermometers (PRT) housed in an unaspirated cylindrical plastic shield replaced the CRS/LiG sensors in Europe, the Anglo-Pacific countries, and the US. Beginning in 2000, the US Climate Reference Network deployed sensors consisting of a trio of PRTs in an aspirated shield. [5, 7-9] An aspirated shield includes a small fan or impeller that ventilates the interior of the shield with outside air.

Unaspirated sensors rely on prevailing wind for ventilation. Solar radiance can heat the sensor shield, warming the interior atmosphere around the sensor. In the winter, upward radiance from the albedo of a snow-covered surface can also produce a warm bias. [10] Significant systematic measurement error occurs when air movement is less than 5 m/sec. [9, 11]


Figure 3: Alpine Plaine Morte Glacier, Switzerland, showing the air temperature sensor calibration experiment carried out by Huwald, et al., in 2007 and 2008. [12] Insets: close-ups of the PRT and the sonic anemometer sensors. Photo credit: Bou-Zeid, Martinet, Huwald, Couach, 2.2006 EPFL-ENAC.

In 2007 and 2008, calibration experiments carried out on the Plaine Morte Glacier (Figure 3) tested the field accuracy of the RM Young PRT housed in an unaspirated louvered shield, situated over a snow-covered surface. In a laboratory setting, the RM Young sensor is capable of ±0.1 C accuracy. Field accuracy was determined by comparison with air temperatures measured using a sonic anemometer, which takes advantage of the effect of temperature on the speed of sound in air and is insensitive to irradiance and wind speed.


Figure 4: Temperature trends recorded simultaneously on Plaine Morte Glacier during February – April 2007 by the sonic anemometer and by the RM Young PRT probe.

Figure 4 shows that under identical environmental conditions, the RM Young probe recorded significantly warmer winter air temperatures than the sonic anemometer. The slope of the RM Young temperature trend is also more than 3 times greater. Referenced against a common mean, the RM Young error would enter a spurious warming trend into a global temperature average. The larger significance of this result is that the RM Young probe is very similar in design and response to the more advanced temperature probes in use world-wide since about 1985.

Figure 5 shows a histogram of the systematic temperature error exhibited by the RM Young probe.


Figure 5. RM Young probe systematic error on Plaine Morte Glacier. Daytime error averages 2.0±1.4 C; night-time error averages 0.03±0.32 C.

The RM Young systematic errors mean that, absent an independent calibration instrument, any given daily mean temperature has an associated 1σ uncertainty of 1±1.4 C. Figure 5 shows this uncertainty is neither randomly distributed nor constant. It cannot be removed by averaging individual measurements or by taking anomalies. Subtracting the average bias will not remove the non-normal 1σ uncertainty. Entry of the RM Young station temperature record into a global average will carry that average error along with it.
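As a hedged illustration of why subtracting an average bias does not discharge this uncertainty, the sketch below (an addition, not part of the original essay) draws daytime errors from a right-skewed gamma distribution, chosen only so that its mean and spread roughly match the 2.0±1.4 C quoted above. Subtracting the long-run mean bias re-centres the errors but leaves the skewed ±1.4 C spread in every individual "corrected" value.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative daytime errors: right-skewed, mean ~2.0 C, sd ~1.4 C,
# loosely matching the Plaine Morte daytime statistics quoted above.
day_err = rng.gamma(shape=2.0, scale=1.0, size=50_000)

bias = day_err.mean()
residual = day_err - bias   # error left after subtracting the average bias

skew = ((residual / residual.std(ddof=1)) ** 3).mean()
print(f"average bias removed      : {bias:+.2f} C")                    # ~ +2.0 C
print(f"residual uncertainty (1s) : +/-{residual.std(ddof=1):.2f} C")  # still ~1.4 C
print(f"residual skewness         : {skew:+.2f}")                      # non-zero: not normal
```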

Before inclusion in a global average, temperature series from individual meteorological stations are subjected to statistical tests for data quality. [13] Air temperatures are known to show correlation R = 0.5 over distances of about 1200 km. [14, 15] The first quality control test for any given station record includes a statistical check for correlation with temperature series among near-by stations. Figure 6 shows that the RM Young error-contaminated temperature series will pass this most basic quality control test. Further, the erroneous RM Young record will pass every single statistical test used for the quality control of meteorological station temperature records worldwide. [16, 17]
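The following sketch (an illustration added here, not part of the talk) shows why a contaminated record sails through such a correlation check: if two stations track the same weather, adding a skewed, weather-independent systematic error to one of them barely lowers the Pearson correlation, so a threshold such as R ≥ 0.5 cannot flag it. All numbers are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2_000

# A shared "weather" signal seen by two neighbouring stations.
weather = 10.0 * np.sin(np.linspace(0.0, 20.0 * np.pi, n)) + rng.normal(0.0, 2.0, n)

good_station = weather + rng.normal(0.0, 0.2, n)   # well-behaved neighbour
bad_station = weather + rng.gamma(2.0, 1.0, n)     # skewed warm systematic error added

r = np.corrcoef(good_station, bad_station)[0, 1]
offset = np.mean(bad_station - good_station)
print(f"correlation with neighbour: R = {r:.3f}  (far above a 0.5 screen)")
print(f"warm offset hidden inside : {offset:+.2f} C")
```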


Figure 6: Correlation of the RM Young PRT temperature measurements with those of the sonic anemometer. Inset: Figure 1a from [14] showing correlation of temperature records from meteorological stations in the terrestrial 65-70° N, 0-5° E grid. The 0.5 correlation length is 1.4×10³ km.


Figure 7: Calibration experiment at the University of Nebraska, Lincoln (ref. [11], Figure 1), showing the MMTS shield, the CRS shield, and the aspirated RM Young reference.

Figure 7 shows the screen-type calibration experiment at the University of Nebraska, Lincoln. Each screen contained the identical HMP45C PRT sensor. [11] The calibration reference temperatures were provided by an aspirated RM Young PRT probe, rated as accurate to within ±0.2 C below 1100 W m⁻² solar irradiance.

These independent calibration experiments tested the impact of a variety of commonly used screens on the fidelity of air temperature measurements from PRT probes. [10, 11, 18] Screens included the traditional Cotton Region Shelter (CRS, Stevenson screen) and the MMTS screen now in common use in the US Historical Climatology Network, among others.


Figure 8: Average systematic measurement error of an HMP45C PRT probe within an MMTS shelter over a grass (top) or snow-covered (bottom) surface. [10, 11]

Figure 8, top, shows the average systematic measurement error an MMTS shield imposed on a PRT temperature probe, found during the calibration experiment displayed in Figure 7. [11] Figure 8, bottom, shows the results of an independent PRT/MMTS calibration over a snow-covered surface. [10] The average annual systematic uncertainty produced by the MMTS shield can be estimated from these data as 1σ = 0.32±0.23 C. The skewed warm-bias distribution of error over snow is similar in magnitude to that of the unaspirated RM Young shield in the Plaine Morte experiment (Figure 5).

Figure 9 shows the average systematic measurement error produced by a PRT probe inside a traditional CRS shield. [11]


Figure 9. Average day-night 1σ = 0.44 ± 0.41 C systematic measurement error produced by a PRT temperature probe within a traditional CRS shelter.

The warm bias in the data is apparent, as is the non-normal distribution of error. The systematic uncertainty from the CRS shelter was 1σ = 0.44 ± 0.41 C. The HMP45C PRT probe is at least as accurate as the traditional LiG thermometers housed within the CRS shield. [19, 20] The PRT/CRS experiment therefore provides an estimated lower limit of the systematic measurement uncertainty present in the land-surface temperature record covering all of the 19th and most of the 20th century.

2.2 Sea-Surface Temperature

Although considerable effort has been expended to understand sea-surface temperatures (SSTs), [21-28] there have been very few field calibration experiments of sea-surface temperature sensors. Bucket- and steamship engine cooling-water intake thermometers provided the bulk of early and mid-20th century SST measurements. Sensors mounted on drifting and moored buoys have come into increasing use since about 1980, and now dominate SST measurements. [29] Attention is focused on calibration studies of these instruments.

The series of experiments reported by Charles Brooks in 1926 are by far the most comprehensive field calibrations of bucket and engine-intake thermometer SST measurements carried out by any individual scientist. [30] Figure 10 presents typical examples of the systematic error in bucket and engine intake SSTs that Brooks found.


Figure 10: Systematic measurement error in one set of engine-intake (left) and bucket (right) sea-surface temperatures reported by Brooks. [30]

Brooks also recruited an officer to monitor the ship-board measurements after he concluded his experiments and disembarked. The errors after he had departed the ship were about twice as large as they were when he was aboard. The simplest explanation is that care deteriorated, perhaps back to normal, when no one was looking. This result violates the standard assumption in the field that temperature sensor errors are constant for each ship.

In 1963 Saur reported the largest field calibration experiment of engine-intake thermometers, carried out by volunteers aboard twelve US military transport ships engaged off the US central Pacific coast. [31] The experiment included 6826 pairs of observations. Figure 11 shows the experimental results from one voyage of one ship.


Figure 11: Systematic error in recorded engine intake temperatures aboard one military transport ship operating June-July, 1959. The mean systematic bias and uncertainty represented by these data are 1σ = 0.9±0.6 C.

Saur reported Figure 11 as, “a typical distribution of the differences” reported from the various ships. The ±0.6 C uncertainty about the mean systematic error is comparable to the values reported by Brooks, shown in Figure 10.

Saur concluded his report by noting that, “The average bias of reported sea water temperatures as compared to sea surface temperatures, with 95 percent confidence limits, is estimated to be 1.2±0.6 F [0.67±0.33 C] on the basis of a sample of 12 ships. The standard deviation of differences [between ships] is estimated to be 1.6 F [0.9 C]. Thus, without improved quality control the sea temperature data reported currently and in the past are for the most part adequate only for general climatological studies. [bracketed conversions added]” Saur’s caution is instructive, but has apparently been mislaid by consensus scientists.

Measurements from bathythermograph (BT) and expendable bathythermograph (XBT) instruments have also made significant contributions to the SST record. [32] Extensive BT and XBT calibration experiments revealed multiple sources of systematic error, principally stemming from mechanical problems and calibration errors. [33-35] Relative to a reversing thermometer standard, field BT measurements exhibited a mean error of 0.34±0.43 C (1σ). [35] This standard deviation is more than twice as large as the manufacturer-stated accuracy of ±0.2 C and reflects the impact of uncontrolled field variables.

The SST sensors in deployed floating and moored buoys were never field-calibrated during the 20th century, allowing no general estimate of systematic measurement error.

However, Emery estimated a 1σ = ±0.3 C error by comparison of SSTs from floating buoys co-located to within 5 km of each other. [28] SST measurements separated by less than 10 km are considered coincident.

A similar ±0.26 C buoy error magnitude was found relative to SSTs retrieved from the Advanced Along-Track Scanning Radiometer (AATSR) satellite. [36] The error distributions were non-normal.

More recently, Argo buoys were field calibrated against very accurate CTD (conductivity-temperature-depth) measurements and exhibited average RMS errors of ±0.56 C. [37] This is similar in magnitude to the reported average ±0.58 C buoy-Advanced Microwave Scanning Radiometer (AMSR) satellite SST difference. [38]

3. Discussion

Until recently, [39, 40] systematic temperature sensor measurement errors were neither mentioned in reports communicating the origin, assessment, and calculation of the global averaged surface air temperature record, nor were they included in error analysis. [15, 16, 39-46] Even after the recent appearance of systematic errors in the published literature, however, the Central Limit Theorem is adduced to assert that they average to zero. [36] Yet systematic temperature sensor errors are neither randomly distributed nor constant over time, space, or instrument. There is no theoretical reason to expect that these errors follow the Central Limit Theorem, [47, 48] or that they are reduced or removed by averaging multiple measurements, even when the measurements number in the millions. A complete inventory of contributions to uncertainty in the surface air temperature record must include, indeed must start with, the systematic measurement error of the temperature sensor itself. [39]

The World Meteorological Organization (WMO) offers useful advice regarding systematic error. [20]

“Section 1.6.4.2.3 Estimating the true value – additional remarks.

In practice, observations contain both random and systematic errors. In every case, the observed mean value has to be corrected for the systematic error insofar as it is known. When doing this, the estimate of the true value remains inaccurate because of the random errors as indicated by the expressions and because of any unknown component of the systematic error. Limits should be set to the uncertainty of the systematic error and should be added to those for random errors to obtain the overall uncertainty. However, unless the uncertainty of the systematic error can be expressed in probability terms and combined suitably with the random error, the level of confidence is not known. It is desirable, therefore, that the systematic error be fully determined.”

Thus far, in production of the global averaged surface air temperature record, the WMO advice concerning systematic error has been followed primarily in the breach.

Systematic sensor error in air and sea-surface temperature measurements has been woefully under-explored and field calibrations are few. Nevertheless, the reported cases make it clear that the surface air temperature record is contaminated with a very significant level of systematic measurement error. The non-normality of systematic error means that subtracting an average bias will not discharge the measurement uncertainty about the global temperature mean.

Further, the magnitude of the systematic error bias in surface air temperature and SST measurements is apparently as variable in time and space as the magnitude of the standard deviation of systematic uncertainty about the mean error bias. For example, the mean systematic bias was 2 C over snow on the Plaine Morte Glacier, Switzerland, but 0.4 C over snow at Lincoln, Nebraska. Similar differences accrue to the engine-intake systematic error means reported by Brooks and Saur. Therefore, removing an estimated mean bias will always leave a residual uncertainty in that bias of ambiguous magnitude. In any complete evaluation of error, this residual bias uncertainty combines with the 1σ standard deviation of measurement uncertainty in the uncertainty total.

A complete evaluation of systematic error is beyond the analysis presented here. However, to the extent that the above errors are representative, a set of estimated uncertainty bars due to systematic error in the global averaged surface air temperature record can be calculated (Figure 12).

The uncertainty bars in Figure 12 (right) reflect a 0.7:0.3 SST:land-surface ratio of systematic errors. Combined in quadrature, bucket and engine-intake errors constitute the SST uncertainty prior to 1990. Over the same time interval the systematic error of the PRT/CRS sensor [39, 49] constituted the uncertainty in land-surface temperatures. Floating buoys made a partial contribution (0.25 fraction) to the SST uncertainty between 1980 and 1990. After 1990 the uncertainty bars are steadily reduced further, reflecting the increasing contribution and smaller errors of MMTS (land) and floating buoy (sea-surface) sensors.
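Here is a hedged sketch of that kind of combination. The 1σ values are illustrative placeholders loosely based on the calibration results quoted earlier, not the exact figures behind Figure 12, and treating the 0.7:0.3 split as weights inside a quadrature sum is one plausible reading of the description, not a statement of the published method.

```python
import numpy as np

# Illustrative 1-sigma systematic uncertainties (C); placeholders only,
# loosely based on the calibration results quoted above.
sigma_bucket = 0.6
sigma_intake = 0.6
sigma_land_crs = 0.4

# Bucket and engine-intake errors combined in quadrature give the SST term.
sigma_sst = np.sqrt(sigma_bucket**2 + sigma_intake**2)

# Weight sea surface and land 0.7 : 0.3 and combine in quadrature
# (an assumed reading of the 0.7:0.3 SST:land ratio).
w_sst, w_land = 0.7, 0.3
sigma_global = np.sqrt((w_sst * sigma_sst)**2 + (w_land * sigma_land_crs)**2)

print(f"SST term          : +/-{sigma_sst:.2f} C")
print(f"global 1-sigma    : +/-{sigma_global:.2f} C")
print(f"global 2-sigma bar: +/-{2.0 * sigma_global:.2f} C")
```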


Figure 12: The 2010 global average surface air temperature record obtained from website of the Climate Research Unit (CRU), University of East Anglia, UK. http://www.cru.uea.ac.uk/cru/data/temperature/. Left, error bars following the description provided at the CRU website. Right, error bars reflecting the uncertainty width due to estimated systematic sensor measurement errors within the land and sea surface records. See the text for further discussion.

Figure 12 (right) is very likely a more accurate representation of the state of knowledge than is Figure 12 (left), concerning the rate or magnitude of change in the global averaged surface air temperature since 1850. The revised uncertainty bars represent non-normal systematic error. Therefore the air temperature mean trend loses any status as the most probable trend.

Finally, Figure 13 pays attention to the instrumental resolution of the historical meteorological thermometers.

Figure 13 caused some angry shouts from the audience at Erice, followed by some very rude approaches after the talk, and a lovely debate by email. The argument presented here prevailed.

Instrumental resolution defines the measurement detection limit. For example, the best-case historical 19th to mid-20th century liquid-in-glass (LiG) meteorological thermometers included 1 C graduations. The best-case laboratory-conditions reportable temperature resolution is therefore ±0.25 C. There can be no dispute about that.

The standard SST bucket LiG thermometers from the Challenger voyage on through the 20th century also had 1 C graduations. The same resolution limit applies.

The very best American ship-board engine-intake thermometers included 2 F (~1 C) graduations; on British ships they were 2 C. The very best resolution is then about ±(0.25 – 0.5) C. These are known quantities. Resolution uncertainty, like systematic error, does not average away. Knowing the detection limits of the classes of instruments allows us to estimate the limit of resolution uncertainty in any compiled historical surface air temperature record.

Figure 13 shows this limit of resolution. It compares the historical instrumental ±2σ resolution with the ±2σ uncertainty in the published Berkeley Earth air temperature compilation. The analysis applies equally well to the published surface air temperature compilations of GISS or CRU/UKMet, which feature the same uncertainty limits.


Figure 13: The Berkeley Earth global averaged air temperature trend with the published ±2σ uncertainty limits in grey. The time-wise ±2σ instrumental resolution is in red. On the right in blue is a compilation of the best resolution limits of the historical temperature sensors, from which the global resolution limits were calculated.

The globally combined instrumental resolution was calculated using the same fractional contributions as were noted above for the lower-limit estimate of systematic measurement error. That is, a 0.30:0.70 land : sea-surface split, together with the published historical fractional use of each sort of instrument (land: CRS vs. MMTS; sea surface: buckets vs. engine intakes vs. buoys).
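The sketch below (added here for illustration) shows one way such a combination could be carried out. The readable resolutions are the ones quoted above (±0.25 C for 1 C graduations, ±0.5 C for the coarser engine-intake thermometers); the within-sea usage split is a made-up placeholder, not the published historical fractions, and the quadrature weighting is an assumption.

```python
import numpy as np

# Best-case readable resolutions quoted above (C).
res_land_lig = 0.25   # land LiG in CRS: 1 C graduations -> +/-0.25 C
res_bucket = 0.25     # bucket LiG: 1 C graduations
res_intake = 0.50     # engine intake: ~2 F / 2 C graduations

# Hypothetical within-sea usage split for one illustrative early-20th-century
# year -- a placeholder, NOT the published historical fractions.
f_bucket, f_intake = 0.5, 0.5

# Resolution uncertainty does not average away, so combine in quadrature.
sea_res = np.sqrt((f_bucket * res_bucket)**2 + (f_intake * res_intake)**2)
land_res = res_land_lig

# Land : sea-surface weighting of 0.30 : 0.70, as in the text.
global_res = np.sqrt((0.30 * land_res)**2 + (0.70 * sea_res)**2)

print(f"sea-surface resolution term : +/-{sea_res:.2f} C")
print(f"global resolution (1 sigma) : +/-{global_res:.2f} C")
print(f"global resolution (2 sigma) : +/-{2.0 * global_res:.2f} C")
```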

The record shows that during the years 1800-1860, the published global uncertainty limits of field meteorological temperatures equal the accuracy of the best possible laboratory-conditions measurements.

After about 1860 through 2000, the published resolution is smaller than the detection limits — the resolution limits — of the instruments themselves. From at least 1860, accuracy has been magicked out of thin air.

Does anyone find the published uncertainties credible?

All you engineers and experimental scientists out there may go into shock after reading this. I was certainly shocked by the realization. Espresso helps.

The people compiling the global instrumental record have neglected an experimental limit even more basic than systematic measurement error: the detection limits of their instruments. They have paid no attention to it.

Resolution limits and systematic measurement error produced by the instrument itself constitute lower limits of uncertainty. The scientists engaged in consensus climatology have neglected both of them.

It’s almost as though none of them have ever made a measurement or struggled with an instrument. There is no other rational explanation for that sort of negligence than a profound ignorance of experimental methods.

The uncertainty estimate developed here shows that the rate or magnitude of change in global air temperature since 1850 cannot be known within ±1 C prior to 1980 or within ±0.6 C after 1990, at the 95% confidence interval.

The rate and magnitude of temperature change since 1850 is literally unknowable. There is no support at all for any claim of “unprecedented” warming in the surface air temperature record.

Claims of highest air temperature ever, based on even 0.5 C differences, are utterly insupportable and without any meaning.

All of the debates about highest air temperature are no better than theological arguments about the ineffable. They are, as William F. Buckley called them, “Tedious speculations about the inherently unknowable.”

There is no support in the temperature record for any emergency concerning climate. Except, perhaps an emergency in the apparent competence of AGW-consensus climate scientists.

4. Acknowledgements: Prof. Hendrik Huwald and Dr. Marc Parlange, Ecole Polytechnique Federale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland, are thanked for generously providing the Plaine Morte sensor calibration data entering into Figure 4, Figure 5, and Figure 6. This work was carried out without any external funding.

5. References

[1] JCGM, Evaluation of measurement data — Guide to the expression of uncertainty in measurement 100:2008, Bureau International des Poids et Mesures: Sevres, France.

[2] Frank, P., et al., Determination of ligand binding constants for the iron-molybdenum cofactor of nitrogenase: monomers, multimers, and cooperative behavior. J. Biol. Inorg. Chem., 2001. 6(7): p. 683-697.

[3] Frank, P. and K.O. Hodgson, Cooperativity and intermediates in the equilibrium reactions of Fe(II,III) with ethanethiolate in N-methylformamide solution. J. Biol. Inorg. Chem., 2005. 10(4): p. 373-382.

[4] Hinkley, N., et al., An Atomic Clock with 10⁻¹⁸ Instability. Science, 2013. 341: p. 1215-1218.

[5] Parker, D.E., et al., Interdecadal changes of surface temperature since the late nineteenth century. J. Geophys. Res., 1994. 99(D7): p. 14373-14399.

[6] Quayle, R.G., et al., Effects of Recent Thermometer Changes in the Cooperative Station Network. Bull. Amer. Met. Soc., 1991. 72(11): p. 1718-1723; doi: 10.1175/1520-0477(1991)072<1718:EORTCI>2.0.CO;2.

[7] Hubbard, K.G., X. Lin, and C.B. Baker, On the USCRN Temperature system. J. Atmos. Ocean. Technol., 2005. 22: p. 1095-1101.

[8] van der Meulen, J.P. and T. Brandsma, Thermometer screen intercomparison in De Bilt (The Netherlands), Part I: Understanding the weather-dependent temperature differences. International Journal of Climatology, 2008. 28(3): p. 371-387.

[9] Barnett, A., D.B. Hatton, and D.W. Jones, Recent Changes in Thermometer Screen Design and Their Impact, in Instruments and Observing Methods, WMO Report No. 66, J. Kruus, Editor. 1998, World Meteorological Organization: Geneva.

[10] Lin, X., K.G. Hubbard, and C.B. Baker, Surface Air Temperature Records Biased by Snow-Covered Surface. Int. J. Climatol., 2005. 25: p. 1223-1236; doi: 10.1002/joc.1184.

[11] Hubbard, K.G. and X. Lin, Realtime data filtering models for air temperature measurements. Geophys. Res. Lett., 2002. 29(10): p. 1425 1-4; doi: 10.1029/2001GL013191.

[12] Huwald, H., et al., Albedo effect on radiative errors in air temperature measurements. Water Resources Res., 2009. 45: p. W08431, 1-13.

[13] Menne, M.J. and C.N. Williams, Homogenization of Temperature Series via Pairwise Comparisons. J. Climate, 2009. 22(7): p. 1700-1717.

[14] Briffa, K.R. and P.D. Jones, Global surface air temperature variations during the twentieth century: Part 2, implications for large-scale high-frequency palaeoclimatic studies. The Holocene, 1993. 3(1): p. 77-88.

[15] Hansen, J. and S. Lebedeff, Global Trends of Measured Surface Air Temperature. J. Geophys. Res., 1987. 92(D11): p. 13345-13372.

[16] Brohan, P., et al., Uncertainty estimates in regional and global observed temperature changes: A new data set from 1850. J. Geophys. Res., 2006. 111: p. D12106, 1-21; doi:10.1029/2005JD006548; see http://www.cru.uea.ac.uk/cru/info/warming/.

[17] Karl, T.R., et al., The Recent Climate Record: What it Can and Cannot Tell Us. Rev. Geophys., 1989. 27(3): p. 405-430.

[18] Hubbard, K.G., X. Lin, and E.A. Walter-Shea, The Effectiveness of the ASOS, MMTS, Gill, and CRS Air Temperature Radiation Shields. J. Atmos. Oceanic Technol., 2001. 18(6): p. 851-864.

[19] MacHattie, L.B., Radiation Screens for Air Temperature Measurement. Ecology, 1965. 46(4): p. 533-538.

[20] Rüedi, I., WMO Guide to Meteorological Instruments and Methods of Observation: WMO-8 Part I: Measurement of Meteorological Variables, 7th Ed., Chapter 1. 2006, World Meteorological Organization: Geneva.

[21] Berry, D.I. and E.C. Kent, Air–Sea fluxes from ICOADS: the construction of a new gridded dataset with uncertainty estimates. International Journal of Climatology, 2011: p. 987-1001.

[22] Challenor, P.G. and D.J.T. Carter, On the Accuracy of Monthly Means. J. Atmos. Oceanic Technol., 1994. 11(5): p. 1425-1430.

[23] Kent, E.C. and D.I. Berry, Quantifying random measurement errors in Voluntary Observing Ships’ meteorological observations. Int. J. Climatol., 2005. 25(7): p. 843-856; doi: 10.1002/joc.1167.

[24] Kent, E.C. and P.G. Challenor, Toward Estimating Climatic Trends in SST. Part II: Random Errors. Journal of Atmospheric and Oceanic Technology, 2006. 23(3): p. 476-486.

[25] Kent, E.C., et al., The Accuracy of Voluntary Observing Ships’ Meteorological Observations-Results of the VSOP-NA. J. Atmos. Oceanic Technol., 1993. 10(4): p. 591-608.

[26] Rayner, N.A., et al., Global analyses of sea surface temperature, sea ice, and night marine air temperature since the late nineteenth century. Journal of Geophysical Research-Atmospheres, 2003. 108(D14).

[27] Emery, W.J. and D. Baldwin. In situ calibration of satellite sea surface temperature. in Geoscience and Remote Sensing Symposium, 1999. IGARSS ’99 Proceedings. IEEE 1999 International. 1999.

[28] Emery, W.J., et al., Accuracy of in situ sea surface temperatures used to calibrate infrared satellite measurements. J. Geophys. Res., 2001. 106(C2): p. 2387-2405.

[29] Woodruff, S.D., et al., The Evolving SST Record from ICOADS, in Climate Variability and Extremes during the Past 100 Years, S. Brönnimann, et al. eds, 2007, Springer: Netherlands, pp. 65-83.

[30] Brooks, C.F., Observing Water-Surface Temperatures at Sea. Monthly Weather Review, 1926. 54(6): p. 241-253.

[31] Saur, J.F.T., A Study of the Quality of Sea Water Temperatures Reported in Logs of Ships’ Weather Observations. J. Appl. Meteorol., 1963. 2(3): p. 417-425.

[32] Barnett, T.P., Long-Term Trends in Surface Temperature over the Oceans. Monthly Weather Review, 1984. 112(2): p. 303-312.

[33] Anderson, E.R., Expendable bathythermograph (XBT) accuracy studies; NOSC TR 550 1980, Naval Ocean Systems Center: San Diego, CA. p. 201.

[34] Bralove, A.L. and E.I. Williams Jr., A Study of the Errors of the Bathythermograph 1952, National Scientific Laboratories, Inc.: Washington, DC.

[35] Hazelworth, J.B., Quantitative Analysis of Some Bathythermograph Errors 1966, U.S. Naval Oceanographic Office Washington DC.

[36] Kennedy, J.J., R.O. Smith, and N.A. Rayner, Using AATSR data to assess the quality of in situ sea-surface temperature observations for climate studies. Remote Sensing of Environment, 2012. 116(0): p. 79-92.

[37] Hadfield, R.E., et al., On the accuracy of North Atlantic temperature and heat storage fields from Argo. J. Geophys. Res.: Oceans, 2007. 112(C1): p. C01009.

[38] Castro, S.L., G.A. Wick, and W.J. Emery, Evaluation of the relative performance of sea surface temperature measurements from different types of drifting and moored buoys using satellite-derived reference products. J. Geophys. Res.: Oceans, 2012. 117(C2): p. C02029.

[39] Frank, P., Uncertainty in the Global Average Surface Air Temperature Index: A Representative Lower Limit. Energy & Environment, 2010. 21(8): p. 969-989.

[40] Frank, P., Imposed and Neglected Uncertainty in the Global Average Surface Air Temperature Index. Energy & Environment, 2011. 22(4): p. 407-424.

[41] Hansen, J., et al., GISS analysis of surface temperature change. J. Geophys. Res., 1999. 104(D24): p. 30997–31022.

[42] Hansen, J., et al., Global Surface Temperature Change. Rev. Geophys., 2010. 48(4): p. RG4004 1-29.

[43] Jones, P.D., et al., Surface Air Temperature and its Changes Over the Past 150 Years. Rev. Geophys., 1999. 37(2): p. 173-199.

[44] Jones, P.D. and T.M.L. Wigley, Corrections to pre-1941 SST measurements for studies of long-term changes in SSTs, in Proc. Int. COADS Workshop, H.F. Diaz, K. Wolter, and S.D. Woodruff, Editors. 1992, NOAA Environmental Research Laboratories: Boulder, CO. p. 227–237.

[45] Jones, P.D. and T.M.L. Wigley, Estimation of global temperature trends: what’s important and what isn’t. Climatic Change, 2010. 100(1): p. 59-69.

[46] Jones, P.D., T.M.L. Wigley, and P.B. Wright, Global temperature variations between 1861 and 1984. Nature, 1986. 322(6078): p. 430-434.

[47] Emery, W.J. and R.E. Thomson, Data Analysis Methods in Physical Oceanography. 2nd ed. 2004, Amsterdam: Elsevier.

[48] Frank, P., Negligence, Non-Science, and Consensus Climatology. Energy & Environment, 2015. 26(3): p. 391-416.

[49] Folland, C.K., et al., Global Temperature Change and its Uncertainties Since 1861. Geophys. Res. Lett., 2001. 28(13): p. 2621-2624.

300 Comments
Rich Lambert
April 19, 2016 5:16 am

I’ve always wondered about those magical air temperature thermometers.

Reply to  Rich Lambert
April 19, 2016 8:57 pm

Thanks for your interest, Rich.
And if the moderator doesn’t mind, I’ll use this reply to say, Thanks, Anthony for posting my essay! 🙂
Thanks also to everyone for offering your thoughts and responses. It’s very appreciated. I promise to respond to the various questions and challenges but this is evening and weekend work for me, so it may take awhile to work down the thread.
But I’ll get there. Thanks again to everyone, and especially to you, Anthony. Especially for just being there, doing the work you do, and contributing so much to everyone’s sanity.

Reply to  Pat Frank
April 20, 2016 1:01 am

This is entirely *brilliant*. It is quite literally *fundamental* science! The kind of thing I was taught in Physics 1 labs too many decades ago, but with up-to-date numbers about a topic of public concern. (And of course we must inspect systematic error in any proxy until it has been measured, if that’s even possible.) From now on, the touchstone for whether any climate scientist is competent to express an opinion will be “have they addressed this issue.” Thank you very much for a clear explanation.

Drcrinum
April 19, 2016 5:31 am

I have been saying for years that the surface temperature data record would be laughed out of existence by any other physical science outside of Climatology because of lack of standardization, quality control and calibration.

Reply to  Drcrinum
April 19, 2016 5:51 am

I could not agree more. But to add on to your comment, I don’t understand why we are not talking about the energy content of the atmosphere. We only talk temperature. And we only talk temperature at the surface, where the temperature is the hottest due to gravity and other factors. And we don’t take humidity into account here on water world.
And then there is the fact that for most of the record we could only measure to about plus/minus half a degree, but we claim accuracy to the 2nd decimal place. What is up with that?
Getting an honest temperature data set out of the government “scientists” is like trying to get a fair loan from a Mafia Don.
~ Mark

Editor
Reply to  markstoval
April 19, 2016 6:27 am

Sometimes those surface measurements are not the warmest in the air column. On clear, windless nights radiational cooling in New Hampshire valleys occasionally results in daytime low temps that are below the temperature atop Mt Washington. Once the inversion breaks, the valleys wind up some 30°F warmer than Mt Washington.
Satellite readings of the lower (which is not so low) troposphere are a lot more meaningful when considering energy budgets.
Every day we beat Mt Washington or there’s frost on my car I observe that CO2 (and H2O!) levels haven’t stopped all the longwave IR from leaving Earth.

ShrNfr
Reply to  markstoval
April 19, 2016 6:31 am

Exactly. If CO2 is causing anything to be retained, it is enthalpy. Temperature at the surface is a very, very poor estimate of retained enthalpy. Most especially with the contamination due to human activity that changes the albedo, hydrology, etc. in large areas and injects enthalpy in the areas of the sensors.

David S
Reply to  markstoval
April 19, 2016 7:39 am

I think you’re overestimating the crookedness of Mafia Dons 🙂

Greg
Reply to  markstoval
April 19, 2016 2:42 pm

This article raises a very important point. What Judith Curry calls the uncertainty monster.
One of the key problems here is that the officially quoted error estimations are usually taken from the variance of the data, i.e. a certain number of std deviations is taken to give an x% uncertainty range for the result.
This again assumes constant, normally distributed causes of error. It does not account for systematic errors.
For example, HadSST2 made a clunky 0.5 deg C “correction” to SST in 1946. This was obviously clumsy and wrong. So HadSST3 decided it was “right for the wrong reason” and muddled together a system which, instead of a step change, phased in the same thing exponentially over 20 or so years. It was less shocking to the eye but gave about the same long-term tendency, so no one had to redo all their climate science or rewrite the carefully tuned models to produce a different result. IPCC is saved (phew!).
However, some of the data from 1946-1960, which had changed by up to 0.5 deg C, was stated as having an uncertainty of 0.1 deg C just as it had done before the changes.
Well both can’t be correct. One set of data must have an uncertainty of +/-0.1 +/-0.5 deg C. or the error assessment technique is being done in a fundamentally incorrect manner.
You cannot seriously put forward two sets of data with a declared 0.1 deg C uncertainty that differ by 0.5 deg C.
The sole purpose of these blatantly optimistic uncertainty claims is to determine what can be called “statistically significant” and “unprecedented”.

Greg
Reply to  markstoval
April 19, 2016 2:50 pm

Oh, and BTW adding land and sea temps is not legit in the first place since they are different physical media.
https://judithcurry.com/2016/02/10/are-land-sea-temperature-averages-meaningful/

Reply to  markstoval
April 19, 2016 9:09 pm

Greg, you’re on the right track. Systematic uncertainty is roundly neglected throughout consensus climatology.
Random measurement error is the unvarying assumption among the air temperature compilers. The assumption allows them to discharge all the error when taking averages.
That assumption is also unwarranted.

MarkW
Reply to  Drcrinum
April 19, 2016 7:09 am

Climatology is also the only field where you can intensively measure 3 to 5% of the whole, and then declare you have perfect knowledge of the whole.

CaligulaJones
Reply to  Drcrinum
April 19, 2016 7:10 am

…and when they splice that surface temperature record to an even dodgier tree-ring proxy…big yucks, that.

John W. Garrett
Reply to  Drcrinum
April 19, 2016 7:29 am

Bingo.
That’s the real elephant in the room.
The historic temperature record is a mess. It is completely unreliable. For large swathes of the planet, there aren’t any records at all; for other large swathes what is being palmed off as temperature records are a joke.
How can anybody seriously believe accurate temperatures were being recorded in China from, say 1910-1960, or in Russia from, say 1917- 1945?

CaligulaJones
Reply to  John W. Garrett
April 19, 2016 11:47 am

“How can anybody seriously believe accurate temperatures were being recorded in China from, say 1910-1960, or in Russia from, say 1917- 1945?”
Well, I’ve read that with central planning, the colder it was, the more coal your city got. Nope, can’t see any potential gaming there at all.
I really need to find the article I read years ago in a “real” sciency magazine. Basically, the author admitted that he really, really hated to say that tree rings were, at best, accurate to within 2 degrees F because it would be used by “deniers”. Seriously, that’s the mindset of these folks: use inaccurate data, but blame someone for pointing out you are using it. Like a drunk who gets upset when you count his drinks I guess.

William Handler
April 19, 2016 5:34 am

Not saying I like their measurements, or their science, but if you take a series of measurements with a known accuracy you can extract a mean value with higher accuracy than the accuracy of the individual measurements. The standard error on the mean value falls as you add more data.
I did a simple test to make sure I was right: I made a million Gaussian-distributed numbers offset to the value of 10.215, rounded the series to the nearest 0.1, averaged, and got numbers close to 10.215, closer than the 0.1 accuracy of each number.
I still do not think that they understand the systematics of their temperature experiments very well, and they do a poor job on data reduction, which lacks transparency and simplicity.
Will
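[For concreteness, here is a minimal Python/NumPy version of the test Will describes (the 0.5 spread of the Gaussian is an assumption he doesn’t state), with one extra line adding an arbitrary fixed 0.3 bias to show the contrast the article draws: rounding plus averaging recovers the offset value, but a systematic bias survives the averaging intact.]

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 10.215

# A million Gaussian numbers centred on 10.215 (assumed spread 0.5),
# quantised to the nearest 0.1 as in Will's test.
samples = rng.normal(true_value, 0.5, size=1_000_000)
rounded = np.round(samples, 1)
print(f"mean of rounded samples : {rounded.mean():.4f}")   # close to 10.215

# The same exercise with an arbitrary fixed +0.3 bias in every reading:
biased = np.round(samples + 0.3, 1)
print(f"mean with +0.3 bias     : {biased.mean():.4f}")    # close to 10.515, not 10.215
```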

wsbriggs
Reply to  William Handler
April 19, 2016 6:02 am

As the paper says, and as they teach experimental physicists, do the groundwork on your systematic errors. If you don’t, you wind up with an experiment which shows nothing but that you didn’t do the critical calibration work to show what the systematic error was. There is no magic statistical bullet which eliminates that error. Dr. Frank’s first graphic showed that conclusively.
To make it simple: you have a highly accurate gun with extreme precision, it places the projectile in the same hole on the target every time, but it’s pointed at the shooting bay next to yours, not your target. That is systematic error.

Reply to  wsbriggs
April 19, 2016 9:25 pm

William, you’re dead-on right. And we can augment your great shooter analogy with systematic error by adding the notion that serious hand tremors (uncontrolled variables) strongly compound the targeting error.

Bob Ryan
Reply to  William Handler
April 19, 2016 6:11 am

Will – your result is only valid for unsystematic error. Systematic error or bias cannot be averaged out – it needs to be corrected for and that is where the uncertainty lies.

Reply to  Bob Ryan
April 19, 2016 9:28 pm

Exactly right, Bob Ryan.
And given that the air temperatures are historical, none of the contaminating systematic error in the past temperature record can be known or corrected out.

Wayne
Reply to  William Handler
April 19, 2016 6:11 am

Will, my key takeaway from the article is that the errors of thermometers are being treated as in your Gaussian-distribution experiment: as symmetrical, “random” noise that will cancel out more and more as we collect more and more samples. To the extent that I understand the article, he shows that the errors are not symmetrical and are not “random”, but are in fact systematically biased.
So you are correct that many measurements can be used to more accurately determine a value than any individual measurement can. Under the right conditions, and given that the measurements/equipment/environment are consistent with a particular kind of error. The problem is, outdoor temperature measurements aren’t consistent with that kind of error.
For example, look at Figure 10. The bucket measurements are fairly symmetrical. The engine intake measurements are not symmetrical, so your nice gaussian simulation isn’t applicable.

Reply to  William Handler
April 19, 2016 6:16 am

Did you notice Figure 5? The instrument error there is not Gaussian.
You can’t average out these failings.
If you could, then just taking an anomaly would fix the problem.

Owen in GA
Reply to  M Courtney
April 19, 2016 6:39 am

That is the biggest error I have seen: assuming error is normal without testing it to prove the assumption. Non-normal errors do not average out but add a systematic error to the process every time! The errors shown in this report are all skewed errors with long right tails, meaning they all add a warm bias that is still there after a million readings.

Reply to  M Courtney
April 19, 2016 9:31 pm

Wayne, M Courtney, Owen, you’ve all got it exactly right. And given the publication record, and the invariable assumption therein of random measurement error, you all understand something that is apparently completely lost to the professionals in the field.

Reply to  William Handler
April 19, 2016 6:18 am

Measuring the same thing 100 times with the same, accurate, measuring device is not the same as measuring it with one hundred different, accurate, devices.

Reply to  usurbrain
April 19, 2016 7:57 am

Yes indeed, measure the diameter of a drilled hole with calipers, a coordinate measuring machine, dedicated bore scope, a set of gauge pins, and a yard stick and you will get five different answers.
And now, just for fun, think about the fact that there are at least 1250 tide gauges around the world and we are treated to scientists telling us they know the rate of sea level rise to a tenth of a millimeter per year.

D. J. Hawkins
Reply to  William Handler
April 19, 2016 6:46 am

This sounds similar to the law of large numbers, where averaging allows you to narrow the error band below the inherent accuracy of the instrument. However that doesn’t apply here. In order to apply, you have to measure the same thing in space and time, not different things in different spaces and different times. Proper application: go to the hardware store and look at the display of outdoor thermometers. Quickly record all 200 readings. Now you can use the law of large numbers. Improper application: go to the hardware store and look at the display of outdoor thermometers. Record ONE reading. Repeat over the course of a year. Now you CAN’T use the law of large numbers. Which situation is more like the attempt to measure “global” temperature?
In addition, your experiment ignores the finding that the error distributions are NOT Gaussian as your test assumes. Look at the Winter errors for MMTS and the daytime errors for the Young PRT above.
Finally, your test assumes that there are no complex systemic errors, only a simple offset error.

Tom T
Reply to  William Handler
April 19, 2016 10:25 am

Uh look at the data.
The systematic error is not Gaussian around the mean.

Curious George
Reply to  William Handler
April 19, 2016 10:41 am

Oh – let’s take a thermometer with a 1 degree C accuracy. Let’s take 10,000 measurements – voila, we know the temperature to 1/100 degree C accuracy.
Not on my planet.

James Schrumpf
Reply to  William Handler
April 19, 2016 10:43 am

I could see this argument if you made the measurements of the exact same thing at the exact same time, as if one took a thousand thermometers and measured the temperature in one place at the same time. That is not what we have here. They’re taking thousands of measurements from different places on the earth and claiming they can use the same technique to get a very accurate value for the Earth’s temperature, but what they have is thousands of individual measurements of different things, which they claim are of the same thing.

Reply to  James Schrumpf
April 20, 2016 8:48 pm

James Schrumpf, I’ve published a paper on systematic error in the temperature record here (869.8 KB), that discusses the cases you raise.
It’s open access.
In short, so long as thermometer error is random, measurement error will decrease in the mean of measurements from different thermometers. But as soon as systematic error enters, all bets are off.

john harmsworth
Reply to  William Handler
April 19, 2016 12:18 pm

I would accept that your experiment is valid for random errors. I believe it is important to understand that temperature monitoring includes a natural bias toward higher readings, due to the fact that all extraneous heat must be excluded from the sensor to get a good reading. One cannot remove more than all of the extraneous heat, so error in that direction is eliminated. Beyond instrument error and calibration error, this fact is the constant inherent bias in temperature readings that makes them different from most other readings and causes readings to show upside error preferentially.

Reply to  William Handler
April 19, 2016 9:21 pm

William Handler, first, your analysis is correct only if error is randomly distributed.
Second, accuracy is determined by calibration experiments. The random error approach to a better mean is strictly true for precision, but not for accuracy.
That is, repetition of inaccurate measurements that are also contaminated with random error will converge to a true mean. But that mean will remain inaccurate.
The systematic error discussed in my essay derives from uncontrolled environmental variables, mostly wind speed and irradiance. The induced error is variable in time and in space, is not randomly distributed, and does not average away. Systematic error may even get larger with repeated measurements.

William Handler
Reply to  Pat Frank
April 20, 2016 5:00 pm

Thanks Pat, makes sense. I read all the other replies as well!

Reply to  Pat Frank
April 20, 2016 8:49 pm

Thanks right back, William.

Robert B
Reply to  William Handler
April 20, 2016 12:21 am

I had trouble getting this one across to people who bow to the law of large numbers. You can get the centre of a dart board to the nearest millimetre if you throw a million darts perfectly randomly and measure them to the nearest millimetre. If you are measuring where the darts hit to the nearest metre then it’s pointless. You can’t tell if it’s perfectly random or whether there is an mm-scale systematic error, and you most likely have a systematic error because the resolution is so poor.

Robert B
Reply to  Robert B
April 20, 2016 12:23 am

Excuse the systematic spelling errors.

Reply to  Robert B
April 20, 2016 1:10 am

Suppose you can measure to an accuracy of +/-0.1 degree (random) PLUS a bias of one degree high. No amount of averaging will do anything to that bias. How could it?

Reply to  Richard A. O'Keefe
April 20, 2016 4:31 am

” Suppose you can measure to an accuracy of +/-0.1 degree (random) PLUS a bias of one degree high. No amount of averaging will do anything to that bias. How could it.”
The day-to-day change in min temp removes that bias, but you don’t get an absolute value. But for this issue (CO2 warming) you don’t need the absolute value, you really want the change. And if the absolute value is required, you can get that from an accurate station and use that value to start tracking the day-to-day change from.

Tom T
Reply to  Robert B
April 20, 2016 8:59 am

“But for this issue (co2 warming) you don’t need absolute value, you really want the change.and if absolute is required, you can get that from an accurate station and use that value to start tracking the day to day change from.”
You are assuming that the systematic error is stationary and/or Gaussian. Since, as the article explains, the systematic error is caused primarily by environmental conditions, do we expect those conditions to stay the same, get better, or get worse with time?

Reply to  Tom T
April 20, 2016 9:38 am

You are assuming that the systematic error is stationary and/or Gaussian. Since, as the article explains, the systematic error is caused primarily by environmental conditions, do we expect those conditions to stay the same, get better, or get worse with time?

I came up with my own method that reduces these kinds of systematics to the smallest value possible.
I was interested in how fast it cools. How fast does it cool at sunset, through the blanket of CO2 that was supposed to be our death? I knew, after spending many evenings setting up my telescope, that it cools fast. I theorized that if CO2 was causing warming, and we couldn’t tell that it had already slowed the fastest cooling rates the planet sees every day, then it was ineffective, no matter what a jar in a lab or a made-up global temperature series says.
So I look at the difference between today’s min temp and this afternoon’s max temp, and between that same max and tomorrow morning’s min temp.
Now, what kinds of systematics will a station see in one 24-hour cycle?
1) Slow changes in everything: grass growing, trees growing, things getting dirty. Things on a slow cycle that eventually go away will be removed from future 24-hour cycles by an opposite-signed event of likely the same magnitude, and over a year they will be removed from the overall average.
2) Sharp events: the parking lot gets paved with asphalt, which will be a shift. But in a week, what is the difference in the cycles? I’m not looking at the temp, I look at how much it changed. So two weeks ago it changed about 18F per day on average, and this week it’s going to change near 18F per day; it will be at a higher temp, and that 24-hour cycle will have a bump in its derivative (because that is basically what I’m calculating, since I have a constant cycle time). That bump will go into the average for a year’s worth of that station. But you know what, by winter that asphalt is as cold as everything else. Same with an instrument error: if it goes bad it’s removed, and if it doesn’t produce enough days per year I don’t include it. If it reads high, it still drops the excess error by the next morning, so the rate of change of the station is higher than it should be, but so is its cooling rate; it averages out. Then I can also take advantage of the slow change as the length of day changes; for each station I track how fast the temp changes as the amount of energy applied changes.
If I was designing an experiment to understand the thermal response of a complex system, I’d apply a certain amount of energy and measure its response, and then the next period I’d increase it a little and measure that.
You might be able to determine its dynamic response, and whether CO2 was altering it.
And the answer is: if it is doing anything, it’s barely visible, and there are other processes altering the regional climates that are orders of magnitude larger than what CO2 is capable of doing.
All of the kvetching of the last 30 years is from the oceans, land use, and the Sun, not CO2. It’s obvious.
But you’ll never see it when you look at GAT.
https://micro6500blog.wordpress.com/2015/11/18/evidence-against-warming-from-carbon-dioxide/

Reply to  Robert B
April 20, 2016 8:52 pm

micro6500, taking anomalies accomplishes nothing when the systematic error is non-normal and varies with every measurement.

Reply to  Pat Frank
April 21, 2016 1:52 am

” , taking anomalies accomplishes nothing when the systematic error is non-normal and varies with every measurement.”
Can you provide an example of this type of error?
Are you referring to measurement uncertainty? That I agree is not removed in the method I use.
But that’s why I ask for an example, so I understand.

Reply to  Robert B
April 21, 2016 8:47 pm

micro6500, all of the land surface temperature errors in my article are examples of that sort of error.
The errors vary with wind speed and irradiance, which vary in time and space.

Reply to  Pat Frank
April 21, 2016 9:23 pm

” The errors vary with wind speed and irradiance, which vary in time and space.”
Ok, thanks.
So, with a methodology trying to resolve a unique value, I agree with you: solving for an anomaly or averaging imprints the error into the data.
But I’m not solving for a temperature, I’m solving for a derivative based on the min and max temps. When trying to resolve the mean temp to a hundredth or thousandth of a degree, the wind makes an important difference, depressing say the max temp on one day, which alters the daily mean; but the way I use it, it would depress both the rate of warming and the rate of cooling equally, and when you subtract the 2 rates, the wind didn’t change anything.
In other cases, for any stack-up of errors, as best as I can think of, maybe I get a low difference-rate value, but it has to return back to normal, so I get the reverse sign, and it averages away.

Reply to  Robert B
April 23, 2016 1:44 pm

micro6500, assuming the wind effect always changes sign identically in warming and cooling is exactly the same assumption as that all measurement error is random.
That is, you’re making the same assumption as the compilers of the published temperature record.
There is no physical reason to think the wind effect auto-cancels. Nor the effect of irradiance.

Reply to  Pat Frank
April 23, 2016 5:35 pm

Pat, it cancels because the sample is at the min or max.
Look, max can be artificially high or low, same with the following min, but if it is, it’s no different than weather; it has to return at some point to the local macro climate for that time of year.
And you can see that if you take the derivative of daily max, its annual average is a few hundredths of a degree; it averages out over a year. Min, however, doesn’t. Go look if you haven’t, and then look at the daily rate of change during the year; this is a strong signal based on the change in length of day.
https://micro6500blog.wordpress.com/2015/11/18/evidence-against-warming-from-carbon-dioxide/

skeohane
April 19, 2016 5:39 am

Wow, what a great exposition. I spent a few years gathering data and studying the effects of temperature on linewidth control in ICs during the 80s, and beat the process into ±0.1°F control to achieve a linewidth of 3 sigma = ±0.1 micron.
As a sarcastic remark regarding the long-term temperature records and the modern ‘adjustments’ to them, I once observed that people are getting taller. Therefore, a fixed-height thermometer would have been consistently under-read in the past and over-read today, spuriously explaining the ‘adjusting’ that lowers past records and raises modern ones.
[The opposite though? People are taller now, and so reading the fixed height thermometer from a higher relative point of the top of the mercury column. .mod]

skeohane
Reply to  skeohane
April 19, 2016 7:54 am

You’re right, not enough coffee yet!!

Pamela Gray
Reply to  skeohane
April 20, 2016 8:00 pm

With one or two exceptions. I have to get a ladder just to see the bulb, let alone where the top of the mercury is in the tube.

Sal Minella
April 19, 2016 5:40 am

Beautiful piece of work!! Instrument accuracy, measurement precision, siting biases and retroactive “fixing” of the data have always left me feeling that the instrumental record tells us nothing about historic global average temperature.

Andrew
April 19, 2016 5:43 am

I think I understand why you say the central limit theorem doesn’t apply, but that doesn’t rule out that the average is more accurate than the individual measurements, just that you can’t prove it with that. Could you explain please?

Don K
Reply to  Andrew
April 19, 2016 8:04 am

The average (mean) is more precise than the individual measurements, not necessarily more accurate. For some purposes — and measuring “global temperature change” may be one of them — precision is good enough.
For others like evaluating basic physical equations that have temperature terms, you need accuracy and the fact that you know an incorrect T value to several decimal places isn’t going to improve your (probably wrong) answer.

Steve from Rockwood
Reply to  Andrew
April 19, 2016 10:47 am

Every measurement includes a systematic and random error component. The random errors from many measurements will form a normal Gaussian distribution and the systematic error will shift (or offset) that distribution away from the true value. Averaging would only move the measurement closer to the true value plus the systematic error component (i.e. reduce the magnitude of the random error). This may sound beneficial at first but systematic error is often a few orders of magnitude greater than random error. You may use repeatability of measurement as a measure of accuracy but you are only confirming the precision of the measurement and not its absolute accuracy.
What is described here is not really systematic error, which is constant. This should mean that the trend of world temperatures is correct but the absolute value of the baseline is not. This would still confirm global warming. However, if temperature sensors have a tendency to drift higher over time, then the measurement bias would be toward higher temperatures. But this article doesn’t really address this.
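A small simulation of that point, with made-up numbers: averaging many readings shrinks the random component of the error, but the mean still converges to the true value plus the systematic offset.

import numpy as np

rng = np.random.default_rng(0)
true_value = 20.0        # true temperature, deg C
systematic_offset = 0.5  # constant bias, deg C
random_sigma = 0.2       # standard deviation of the random error, deg C

for n in (1, 100, 10000):
    readings = true_value + systematic_offset + rng.normal(0.0, random_sigma, n)
    print(f"N={n:6d}  mean={readings.mean():.3f}  error of mean={readings.mean() - true_value:+.3f}")
# the error of the mean converges toward +0.5 (the systematic offset), not toward zero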

Reply to  Andrew
April 19, 2016 12:23 pm

Precision and accuracy are very different. The average could be more precise, but not more accurate than the individual measurements. However, precision would only increase with more measurements if the distribution of the measurement values were a Gaussian distribution.

Reply to  isthatright
April 19, 2016 9:40 pm

Exactly right, isthatright. 🙂

Reply to  Andrew
April 19, 2016 9:39 pm

Andrew, given accurate measurements with only random error, then the mean of many measurements has improved accuracy.
Systematic error from uncontrolled variables does not reduce in a mean, because it’s not normally distributed.

MikeC
April 19, 2016 5:49 am

I am stunned. As an engineering student, I learned that measurement and the error associated with it were very important in all our experimental reports. It always seemed to me that the published temp records had error estimates that were too small. But I never delved into how the measurements were made or processed. Thank you for a clear and compelling analysis!

Reply to  MikeC
April 19, 2016 9:44 pm

You’re well-trained, MikeC, and in a position to speak out about the total neglect of error in consensus climatology.

JohnWho
April 19, 2016 6:01 am

Maybe as a layman I just don’t understand, but (and I asked about this in a recent Bob Tisdale topic) why is it that over land we measure the air temperature about 1 meter above the surface, while over the oceans we are measuring the temperature of the ocean surface? Is there an exact correlation between the ocean surface temperature and the air temperature 1 meter directly above it? If it isn’t exact, then it would seem to me that this would be another area where an uncertainty bias is being recorded and ignored.
Added to what is being discussed, the “Global Average Surface Temperature” becomes even more uncertain.

Reply to  JohnWho
April 19, 2016 7:43 am

The ocean ‘temperature’ is a real problem. The surface skin temperature can remain incredibly stable while the temperature 6 inches down can easily vary 20˚ depending on the solar insolation or lack of. Pretty much the opposite problem of measuring the actual surface temperature of a driveway compared to the air.

Reply to  jinghis
April 19, 2016 2:38 pm

Toneb – “No.”
Did you read your reference? “While the diurnal thermocline vanishes by sunrise next morning, the skin layer usually exists in both the daytime and nighttime, even in windy conditions.”
Diurnal sea surface temperature variation and its impact on the atmosphere and ocean: A review. Available from: https://www.researchgate.net/publication/225649603_Diurnal_sea_surface_temperature_variation_and_its_impact_on_the_atmosphere_and_ocean_A_Review [accessed Apr 19, 2016].

Toneb
Reply to  jinghis
April 19, 2016 2:49 pm

Jinghis:
Yes I did read it thanks.
Try reading the post I replied to.
The “no” was in reference to his absurd 20 deg variance in temp 6 ins down.

Reply to  jinghis
April 19, 2016 3:22 pm

Toneb
Ahh sorry, I see we are on the same page.

Reply to  JohnWho
April 19, 2016 10:43 am

JohnWho,
When I was serving on a weather observing cargo ship the sea temperature was taken anything from 15 to 30 feet below the sea surface, because that is where the engine cooling water intake was.

JohnWho
Reply to  JohnWho
April 19, 2016 12:40 pm

My question still remains:
Is there a direct relationship between the SST and the air temperature 1 meter above the sea surface?

Reply to  JohnWho
April 20, 2016 8:55 pm

“Oddly” is right, Bob. And yet, it passed peer review. And so it goes, these days.

Reply to  JohnWho
April 19, 2016 9:49 pm

JohnWho, the marine air temperatures are not used in part because they represent different (unknown) ship heights, wind speeds, and irradiance.
So, SSTs are used on the pretty good assumption that the air immediately above the sea surface is at pretty much the same temperature as the sea surface.
The problem is, of course, as jinghis has noted, that SSTs were not necessarily measured at the sea surface.

Editor
Reply to  Pat Frank
April 20, 2016 2:33 am

Pat Frank says: “JohnWho, the marine air temperatures are not used in part because they represent different (unknown) ship heights, wind speeds, and irradiance.”
I’m commenting on the “…marine air temperatures are not used…” part of that sentence. Oddly, NOAA used night marine air temperatures (an inferior dataset based on the quantity of observations) to bias adjust sea surface temperature data.

JohnWho
Reply to  Pat Frank
April 20, 2016 6:52 am

“Pat Frank
April 19, 2016 at 9:49 pm
JohnWho, the marine air temperatures are not used in part because they represent different (unknown) ship heights, wind speeds, and irradiance.
So, SSTs are used on the pretty good assumption that the air immediately above the sea surface is at pretty much the same temperature as the sea surface.”

“Pretty much the same temperature” plus/minus how much?
Seems this is just one more of the systematic errors not being addressed.
Especially if a GASTA is being determined by mixing surface station air temperatures with sea surface temperatures.

seaice1
April 19, 2016 6:04 am

“In graphical presentations of measurement or calculational data, systematic error is represented using uncertainty bars. [1]” Is this correct? I usually think of error bars as representing random rather than systematic error. If the systematic error were known, it would be corrected.

D. J. Hawkins
Reply to  seaice1
April 19, 2016 6:51 am

Systematic error does not refer only to simple offset error. Look at the results for the Young PRT again. Notice that the daytime errors are a range, and non-Gaussian to boot. The size of the error is different for each measurement. You’d have to construct an algorithm that takes into account the time of measurement and the distribution of the error at that time of day, assuming that based on TOD the errors were, in fact, normally distributed.

Reply to  seaice1
April 20, 2016 8:59 pm

seaice1, typically systematic error is estimated by doing calibration experiments under the experimental (observational) conditions. The average of the error is calculated as the usual root-mean-square and then the “±average uncertainty” is appended to every experimental (observational) result.
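A minimal sketch of that procedure, with invented calibration residuals:

import math

# sensor minus reference from a hypothetical calibration run, deg C
residuals = [0.21, -0.35, 0.10, 0.44, -0.28]
rms = math.sqrt(sum(r * r for r in residuals) / len(residuals))
print(f"append +/-{rms:.2f} C to each field measurement")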

paullinsay
April 19, 2016 6:13 am

“Resolution limits and systematic measurement error produced by the instrument itself constitute lower limits of uncertainty. The scientists engaged in consensus climatology have neglected both of them.”
This sums up all of experimental climate science. I have yet to see a data plot with error bars of any kind in this field unlike say, physics, where every data plot includes them.
Mean sea level and sea ice extent are also prime candidates for a serious examination of measurement errors. Anyone who’s spent an hour at the ocean and seen how the wind, the currents, and the tides affect the surface, has to wonder how they get 1mm or better precision for sea level.

Reply to  paullinsay
April 19, 2016 8:00 am

Yep, over 1250 tide gauges around the world, and satellites that measure the radar reflections from the bottom of wave troughs.
Boggles the mind it does.

April 19, 2016 6:18 am

In some ways this provides justification for the treatment of seawater temperatures in Karl et al 2015.
The whole field is rife with meaningless measurements so one more won’t matter.

Owen in GA
April 19, 2016 6:34 am

Man, I missed a good conference. I can imagine the acrimony flying about at this presentation. I am surprised the speaker made it out of the venue to publish this as those who truly believe tend to get a bit violent in their rhetoric (if not in an actual physical sense – though with the AGs prosecuting “heretics” it is getting there). All the conferences I attend are fairly staid with nary a raised voice and lots of pats on the back.

Janice Moore
Reply to  Owen in GA
April 19, 2016 6:44 am

Indeed, Owen (in GA). My first thought upon seeing what group it was that Dr. Frank presented his excellent-as-usual findings to was, “that took guts.” The spirit of Dr. William Gray (and John Daly and Bob Carter and Hal Lewis, et. al.) is alive and well. Well done, Dr. Frank! We are proud of you.

Reply to  Janice Moore
April 19, 2016 9:57 pm

Thank-you, Janice. 🙂 If I may reassure you, though, the World Federation of Scientists is not at all like the Union of Concerned Scientists. The WFS is, well, actually concerned with dispassionate, objective science.

Reply to  Owen in GA
April 19, 2016 9:55 pm

Owen, most of the audience members were polite and interested, some were very accepting of the points made, and only a few were upset. So, it wasn’t dangerous.
The email debate was a bit fraught, though, and could have had an unpleasant personal fallout if it had gone the other way.

GTL
April 19, 2016 6:45 am

Compare that to the previous 30 years without that error, or with a different error, and you have artificially created an anomaly.

GTL
Reply to  GTL
April 19, 2016 7:19 am

But here we are discussing thousands of instruments subject to numerous changes over a long time period, often with instrument changes at various locations. I do not think the systematic error can be so easily dismissed.

Bill Illis
April 19, 2016 6:46 am

Now we see why the NOAA NCEI and Karl et al 2015 were so eager to adjust the buoy and Argo float SST record up to match the ship engine intakes. There is a +0.5C average error in the ship engine intakes.

April 19, 2016 6:48 am

Here are the problems I have found with the ARGO buoy system – actually, unintended or ignored biases.
1. The ARGO Sales Brochure and calibration sheet (Dec 8 handout) claim it is calibrated against a TTS (Temperature Transfer Standard). It is not calibrated against an actual Triple Point Water (TPW) bath and a Gallium Melt Point (GPW) bath, or even boiling water at STD T/P. A TTS is an ultra-high accuracy resistor that can be used to provide the EXACT resistance of various Temperature sensors (e.g. RTD, PRT, etc.). It does not “magically” make the exact temperature (think oven or refrigerator) specified. The TTS is hooked up to the electronics with wires (calibrated leads), thus the temperature probe is not in the loop.
2. All of the data about accuracy are for the electronics only. PERIOD. They do not include, or even state, the accuracy of the sensor, or how it is affected by pressure (depth) or temperature. They just claim the sensors are highly accurate and fast. (See #1)
3. Also not included are the effects of ambient temperature on the electronic equipment. All of the SBE equipment that I have seen data sheets for will repeatedly and flawlessly provide the performance they profess on the data sheets when, and only WHEN, operated in a laboratory environment. They do not provide any data as to what happens to the equipment or electronics under different ambient conditions. Read them yourself – it is not there. Real scientific instruments provide this data. Look through a Fisher Scientific, Omega Engineering, or other scientific instrument catalog and the ambient data is given for the better equipment (or will be with a phone call). Why isn’t SBE providing this data?
4. Where is the data for the PRT sensor? Are they telling us that every one is exactly the same and provides exactly the same resistance as every other PRT they make, with exactly the same curve and readings at each of the ONLY two reference points against which they have calibrated this super-expensive boondoggle? I have only one word for that claim – B…… Again, why is this data missing?
5. Also missing are the effects of the probe (the enclosure surrounding the “highly accurate” PRT temperature sensor). It is designed to withstand test depth plus some unknown margin (not specified in their “Sales Brochure”) or it would leak. That means there is a need to transfer the temperature of the ocean to the PRT, and that means there is a gap between the two surfaces. That gap causes four (readily apparent) things: 1. It decreases the speed. 2. It causes latency, because the probe must also change temperature. 3. The biggest problem: the gap causes a difference in temperature. 4. With cyclical pressure increases/decreases the gap will increase, aggravating this condition. (I have seen it happen many times.) This is the reason that you calibrate the equipment with a real triple point bath, etc. But this is EXPENSIVE, VERY EXPENSIVE. I have done it. You could buy a house, or at least a very nice car, for the price of calibrating the entire set of sensors in each probe. That is why they use a TTS, etc.
6. As explained earlier, all of the “Sales Brochures” indicate performance and accuracy that would be obtainable only in laboratory conditions (an environmentally controlled area, with at least temperature, humidity, and pressure conditions equal to the conditions of calibration, +/- a few degrees). Equipment like this (this expensive) can show a difference in displayed value of over 1-2% when subjected to a temperature 100 degrees different from the factory calibration ambient. The ARGO probes are, from my understanding, subject to about a 50 degree F change from bottom to top of travel. That tells me that you will get about a 1% error that they neither discuss nor deny.
7. Most of the above is also applicable to the surface thermometers. The electronics are stuck in a small shelter that will be at the same temperature as the “measured” temperature. This means that the electronics are no longer at the same temperature they were at when calibrated.

Quinn the Eskimo
Reply to  usurbrain
April 19, 2016 11:03 am

Wow.

bit chilly
Reply to  usurbrain
April 19, 2016 11:04 am

Having personally experienced the difficulties of weighing industrial ceramic components in the green state to four decimal places, I can appreciate what you are saying. If the same measured conditions were not maintained in the lab, it played havoc with the finished product.

Reply to  usurbrain
April 19, 2016 12:34 pm

Excellent description of errors and calibration. In addition, almost any instrument will drift over time. Users should be aware of the magnitude of the drift and use that data to determine calibration frequencies.

Reply to  usurbrain
April 19, 2016 10:07 pm

usurbrain, you’re right. The accuracy and precision statements given in sensor brochures are only for ideal laboratory conditions.
Hence the need for field calibrations, as you so thoroughly explain. Field calibrations of meteorological air and SS temperature sensors are rare birds. This is almost criminal negligence, given the huge value of the decisions that are based on them.
Thanks for the insights about TTS calibration. I wasn’t aware of that.

Reply to  Pat Frank
April 20, 2016 12:42 pm

Pat Frank, Thanks.
Not saying they do not exist; however, with the wealth of information I found on the internet I never found a “Calibration Curve” for the RTD (high-accuracy resistor) cited or offered. This, after spending more than a week looking (I am retired and spent more than 8-hour days looking). I worked with precision RTDs (0.1%) and all came with a calibration curve providing the exact resistance at 5 specified points. I would use these to adjust the function curve for the electronics, rather than using the standard “Alpha” curve. [“Alpha” is the slope of the resistance between 0°C and 100°C. This is also referred to as the temperature coefficient of resistance, with the most common being 0.00385 Ω/Ω/°C.] I then “field calibrated” them at five points, two of which were the triple point and the boiling point at STP/elevation for water; the other three usually were “within tolerance.” The only thing I ever used the TTS for was an “is it broke” test. These were also three-wire RTDs, so that the lead/termination/junction resistance temperature effects could be compensated for. I could find no evidence of this being done. Again, if used in a laboratory with a controlled environment, there is no need for that; the lead junctions are not going to see a temperature change. Look at any precision calibration facility: on the wall you will see a thermometer, barometer, humidity gauge, and other important environmental gauges, usually recording or connected to a computer nowadays.
In a nutshell, this all looks like an expensive scam using expensive “laboratory” equipment for a purpose and in an environment it was not designed for – electronically. Or, at least, done by some graduate assistants who had the technical/book knowledge but none of the real-world practical experience.
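For what it is worth, a minimal Python sketch of the difference between relying on the nominal “Alpha” curve and using a sensor’s own calibration points; the resistances and calibration values below are invented for illustration:

R0 = 100.0       # nominal PT100 resistance at 0 deg C, ohms
ALPHA = 0.00385  # standard temperature coefficient, ohm/ohm/deg C

def temp_from_alpha(r_ohms):
    # linear alpha-curve approximation: T = (R/R0 - 1) / alpha
    return (r_ohms / R0 - 1.0) / ALPHA

# hypothetical two-point field calibration (ice point and boiling point at local pressure)
cal_points = [(100.05, 0.0), (138.40, 99.6)]  # (measured ohms, reference deg C)
slope = (cal_points[1][1] - cal_points[0][1]) / (cal_points[1][0] - cal_points[0][0])

def temp_from_calibration(r_ohms):
    # interpolate against the sensor's own calibration points instead of the nominal curve
    return cal_points[0][1] + slope * (r_ohms - cal_points[0][0])

r = 119.40  # an example field reading, ohms
print(temp_from_alpha(r), temp_from_calibration(r))

The two conversions disagree by more than a tenth of a degree even in this idealized two-point case, which is the sort of per-sensor spread a real calibration curve is meant to remove.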

Reply to  Pat Frank
April 20, 2016 9:04 pm

usurbrain, those are all extremely salient criticisms, and seem centrally important.
I’d wonder if there was a way you could write them up analytically, and submit that to an instrumentation journal.

Owen in GA
April 19, 2016 6:51 am

The problem is that the error isn’t always 1 degree too high. It has a right-skewed distribution that is not normal about some consistent high point. Your accuracy will never be better than the error bars in the skew. The law of averages ONLY applies to NORMAL DISTRIBUTIONS.

April 19, 2016 6:51 am

Only problem is that we no longer use computer systems for 10 years (how old is your PC?); over 100 years even the NWS will replace equipment several times.

TA
April 19, 2016 6:51 am

This is pitiful. The Alarmists completely ignore their own surface temperature measuring instruments’ shortcomings, but bug the poor purveyors of satellite temperature data to death over their potential instrument errors.
At least we have the satellites. We should get more of them up there. The more, the merrier.

April 19, 2016 6:55 am

Pat
Many thanks for putting this together. On the occasions that I comment here and on Bishop Hill it seems like I’m a broken record when something so obvious and normal to experimenters and engineers is ignored in favour of blind theory.
Your Figure 12 (right) could be captioned “I’ll see your global warming and raise you a London Bus”, such is the size of those error bars.

Reply to  mickyhcorbett75
April 19, 2016 10:10 pm

Thanks, mickyh. Investigating the whole AGW thing has been like entering quicksand. The deeper I got into it, the more it sucked me in. That essay is part of the result of the struggle.

indefatigablefrog
April 19, 2016 6:56 am

Re: “More recently, Argo buoys were field calibrated against very accurate CTD (conductivity-temperature-depth) measurements and exhibited average RMS errors of ±0.56 C.”
I thought that only the cool argo results were the ones that needed tossing. (sarc)
At least that is what I learned from the words of the master argo data tosser:
“First, I identified some new Argo floats that were giving bad data; they were too cool compared to other sources of data during the time period. It wasn’t a large number of floats, but the data were bad enough, so that when I tossed them, most of the cooling went away. But there was still a little bit, so I kept digging and digging.”
http://earthobservatory.nasa.gov/Features/OceanCooling/

indefatigablefrog
Reply to  indefatigablefrog
April 19, 2016 7:05 am

Obviously, being a scientist, Willis must have then also conducted the same shambolic hand cherry-picking exercise but assuming that he had a preferential bias to discover argo floats that were “too WARM compared to other sources of data during the time period” – and tossing them instead.
Oh, no wait a minute – nope, he obviously ran out of time and then forgot what it was that he was supposed to be doing. Never mind.

Reply to  indefatigablefrog
April 19, 2016 10:31 am

preferential bias ?
I’d call it a political bias, but then what do I know.

Reply to  indefatigablefrog
April 19, 2016 6:12 pm

Ad hominems against Willis?
No reason? Just your typical assumed errors?

indefatigablefrog
Reply to  ATheoK
April 20, 2016 12:49 am

For the sake of clarification. My comment refers to Josh Willis.
The character at NASA, who is charitably depicted in my link.
Not Willis E. – the author of the O.P.
I think that may explain where our wires got crossed.
Since, I am certainly not presenting an ad-hom.

Reply to  indefatigablefrog
April 20, 2016 1:23 am

indefatigablefrog:

“…Willis must have then also conducted the same shambolic hand cherry-picking exercise but assuming that he had a preferential bias to discover argo floats that were “too WARM compared to other sources of data during the time period” – and tossing them instead.
Oh, no wait a minute – nope, he obviously ran out of time and then forgot what it was that he was supposed to be doing. Never mind.”

This comment refers to Josh? You mean that the sentence should start with “Josh must have…”, not Willis?

Reply to  indefatigablefrog
April 20, 2016 1:34 am

indefatigablefrog:
OK, I’ve got it now. You mean Josh Willis, keeper of the earth observatory.
Whenever I read Willis, I think Willis Eschenbach. Implicit versus explicit logic drives me crazy.
You have my apology for misreading and confusing your post.
I actually like Earthobservatory and have had a few discussions back and forth. To me it is a beautiful programming artifact that someday will also represent accurate data.
Right now, they push the NASA/NOAA line that the world is on the edge of climate change disaster.
I do wish that Earthobservatory would actually identify what is modeled versus what is real data visualization, e.g. Earthobservatory uses the NASA/NOAA modeled CO2 data instead of pulling actual data right from the satellite dbs.
Temperature, Sea surface temperatures, etc are all the NOAA highly processed muck, not actual data.

Tom T
Reply to  indefatigablefrog
April 19, 2016 10:14 am

OMG, I can’t believe NOAA would even publish such a thing. It’s scientific rape. Willis basically went out and looked for any reason he could think up to warm the record.

Tom T
Reply to  Tom T
April 19, 2016 10:16 am

Sorry NASA, but that is even worse. That NASA could think that such an analysis is valid is baffling.

MarkW
April 19, 2016 7:01 am

I’ve been saying for years that when you add together the resolution limits of the instruments, the lack of basic site and sensor maintenance, and the lack of spatial coverage, the true error bars for the ground-based system are around 5C for modern measurements, and they increase as you go backwards in time.

April 19, 2016 7:11 am

“The first quality control test for any given station record includes a statistical check for correlation with temperature series among near-by stations.”
Wrong.
Depending on the source there are other QC tests that are performed first. Many in fact.
And depending on the processing correlation can be performed on a variety of metrics.
And in some cases ( to test the importance of this decision ) you can drop this testing
altogether.
QC or no QC the answer is the same: Its warming. There was an LIA

Reply to  Steven Mosher
April 19, 2016 10:02 am

Mosher,
Figure 12 says you are wrong. We don’t actually know what the temperatures were, so we don’t know how they have changed. This is why the phrase “Climate Scientists” should always be inside quotation marks…

Reply to  Steven Mosher
April 19, 2016 10:41 am

A tour de force presentation. The author brings a Big Bertha cannon argument and all Mosher can do is attempt to use a spit clogged pea shooter. That is the wimpiest non-rebuttal rebuttal ever by the Oz. This tells me the post deserves an A+ . I enjoyed it very much.
Bookmarked under the ever growing list of reasons to, at the very least, have some doubts about the consensus view.

Reply to  Steven Mosher
April 19, 2016 12:39 pm

Steven Mosher April 19, 2016 at 7:11 am

“The first quality control test for any given station record includes a statistical check for correlation with temperature series among near-by stations.”

Wrong.
Depending on the source there are other QC tests that are performed first. Many in fact.

Steven, always good to hear from you. However, I think you might have missed the word “includes” in the statement that “the first test … includes a statistical check for correlation ….”.
I know that in my own work, if I’m looking at a new dataset, comparing it to other nearby measurements is included in my list of early tests.

And depending on the processing correlation can be performed on a variety of metrics.
And in some cases ( to test the importance of this decision ) you can drop this testing
altogether.
QC or no QC the answer is the same: Its warming. There was an LIA

Steven, “QC” is a red herring, it has little to do with what Pat is saying. Pat’s claim is that you have greatly underestimated the true error in your Berkeley Earth results. As a response to Pat’s claim, your comment is … well … unresponsive.
However, given that you never did reply to my pointing out other issues with the Berkeley Earth methods in my post Problems With The Scalpel Method, while your response is unresponsive, it is not unrepresentative.
Zeke Hausfather did reply, he’s good about that, saying:
Zeke Hausfather June 29, 2014 at 12:01 pm

Hi Willis,
Sorry for not getting back to you earlier; just landed back in SF after a flight from NYC.
The performance of homogenization methods in the presence of saw-tooth inhomogenities is certainly something that could be tested better using synthetic data. However, as Victor mentioned, relative homogenization methods look at the time-evolution of differences from surrounding stations. If the gradual part of the sawtooth was being ignored, the station in question would diverge further and further away from its neighbors over time and trigger a breakpoint.

So he says that
a) they haven’t actually tested their algorithm by e.g. adding synthetic sawtooth anomalies to real data, and
b) even if the error is there, it would be removed because the station would diverge from its neighbors over time.
Perhaps … and perhaps not. The point of relevance to this discussion is that there is additional uncertainty introduced by the use of the scalpel method, and I don’t see that being accounted for in the Berkeley Earth error estimates.
Best regards,
w.

Reply to  Willis Eschenbach
April 19, 2016 10:19 pm

Thank-you, Willis. 🙂

JohnWho
Reply to  Steven Mosher
April 19, 2016 2:10 pm


Steven Mosher
April 19, 2016 at 7:11 am

Its warming. There was an LIA”

Better might be “It has probably warmed. There was an LIA”,
leaving the question regarding the amount of warming and whether it continues
both scientifically debatable and, based on the quality of the historical atmospheric temperature data,
probably unknowable.

Reply to  Steven Mosher
April 19, 2016 6:17 pm

“…QC or no QC the answer is the same: Its warming. There was an LIA.”

Well, the last half of that is correct. Normal warming since the last LIA.

Scottish Sceptic
April 19, 2016 7:12 am

“It’s almost as though none of them have ever made a measurement or struggled with an instrument. There is no other rational explanation for that sort of negligence than a profound ignorance of experimental methods.”
I put an FOI request into the UEA to ask them what quality standard they used (e.g. ISO9000). As expected, they didn’t have any quality standard. Likewise, I doubt any of them have ever actually made any serious measurements in real-life situations – let alone had thousands of sensors that all needed calibrating.

bit chilly
Reply to  Scottish Sceptic
April 19, 2016 11:07 am

That is an utterly astounding admission.

Mike
April 19, 2016 7:19 am

Anybody know if ARGO has fixed its digital “trust” problem? Based on the technical information that was published after the start of this program, there is no digital proof that a given set of measurements came from a given probe, at a given place, at a given time. They instead trust the person(s) who collect the raw data from the buoys — and publish it. This failure to secure the real world to digital world interface would be totally unacceptable in any trusted system. The methods and technology to do this were well understood at the time this system was designed (e.g. your cell phone/network does it every time you make a call) so this is pretty inexcusable.

Reply to  Mike
April 20, 2016 1:30 am

Any digital reporting protocol I designed would include who I am, where I think I am, what time I think it is, how long since I last reported, my internal state ( battery level, temperature, humidity, vibration ) and so on before I got as far as sending the data. And that’s before putting serious thought into it. Where was the ARGO technical information published?
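A minimal sketch of such a self-describing report record (field names are illustrative only, not the actual ARGO telemetry format):

from dataclasses import dataclass
import hashlib, json

@dataclass
class ProfileReport:
    float_id: str
    position: tuple          # (lat, lon) the float believes it is at
    timestamp_utc: str
    seconds_since_last: int
    battery_volts: float
    internal_temp_c: float
    samples: list            # (pressure_dbar, temperature_c, salinity_psu) tuples

    def digest(self) -> str:
        # a checksum tying the data to the reporting float and its stated state
        payload = json.dumps(self.__dict__, sort_keys=True, default=str)
        return hashlib.sha256(payload.encode()).hexdigest()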

Tom Halla
April 19, 2016 7:24 am

Good discussion of instrument errors. The existence of non-Gaussian (non-random) errors seems to be an example of the old parable about the drunk looking under the streetlight for the item he dropped in the dark area farther down the block–treating errors as random is so much easier to deal with mathematically.

Reply to  Tom Halla
April 19, 2016 10:21 pm

You’re right, Tom. Accepting that the measurement error is non-random and systematic would leave them with nothing to talk about. Where’s the fun in that? 🙂

Joe Crawford
April 19, 2016 7:28 am

There is no other rational explanation for that sort of negligence than a profound ignorance of experimental methods.

Add this to a total lack of understanding of the scientific method and you have fully defined the current state of consensus climate research.

Reply to  Joe Crawford
April 19, 2016 2:54 pm

That is a great point, Joe. Everything I have seen out of “climate scientists”, especially the government-funded ones, indicates they do not understand the scientific method, or at least refuse to honor it.

ralfellis
April 19, 2016 7:32 am

Am I missing something? I don’t see the problem.
This is why anomalies are used, instead of absolute temperatures. As long as the errors remain constant over many years, the anomaly plot will give the correct information. The absolute temperature may be wrong on many occasions, but it will remain consistently wrong and predictably wrong, and so the anomaly differential will remain constant over time. And consistent with other stations, if they have the same instrument giving the same constant and predictable errors.
Systematic errors are only introduced into the system when something changes – like increasing UHI effects; like aircraft taxiing past; like someone placing an air conditioning unit next to the sensor; or the sensor being replaced by something with a different set of constant errors, etc., etc. Or, if the scientists systematically adjust historic data down, and recent data up.
So why is a consistent instrument error a problem, in this particular case?
R

Toneb
Reply to  ralfellis
April 19, 2016 7:56 am

Ralf:
You’re correct. It isn’t
The errors are cancelled given that the same instruments are used and they do not develop a fault. LIG thermometers won’t.
Also, I seem to recall a fuss made about the correction for the TOBs systematic error in the U.S. GISS dataset.
Ah well.

Reply to  Toneb
April 19, 2016 9:45 am

ralfellis and ToneB, it’s not immediately clear to me that either using anomalies or making a large number of measurements will alleviate issues associated with the resolution of an instrument.
If we do a simple thought experiment and consider a thermometer with a resolution of 1K. If an observer records the temperature as 283K it means that the temperature lies with equal probability at all values between 282.5 and 283.5K. Now let’s say the actual temperature is 282.7K. No matter how many times we measure it and average the results we will get 283K. This is different from the true temperature. Similarly the anomaly will also be incorrect and limited by the resolution of the thermometer.
If one were to do a full budget of the errors, then the resolution contribution could be determined by dividing the resolution by the square root of 12, i.e. approximately 1K/3.5, or a little under 0.3K. Interestingly, if we had a digital instrument based on A-D conversion, then the error would be 1K/sqrt 3, or about 0.6K. This is because of the differential non-linearity of A-D systems.
The results are not very intuitive and it takes a considerable amount of thinking to fully grasp the implications of low resolution. I only know because I have designed a measuring instrument, in this case a mass spectrometer, and needed to understand the accuracy and precision of the detection systems. Ironically, the mass spectrometer is to help measure paleo-temperatures!
I have not read Pat Frank’s article in full but have sympathy with the frustration of people using data without an understanding of the measurement system.
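A quick numerical illustration of the thought experiment above, assuming a steady 282.7 K signal, measurement noise well below the 1 K resolution step, and simple rounding to whole kelvins:

import numpy as np

rng = np.random.default_rng(1)
true_temp = 282.7
for n in (10, 1000, 100000):
    readings = np.round(true_temp + rng.normal(0.0, 0.05, n))  # 1 K quantisation
    print(f"N={n:6d}  mean of readings = {readings.mean():.3f} K  (true value {true_temp} K)")
print("rectangular resolution uncertainty 1/sqrt(12) =", 1 / np.sqrt(12))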

Tom Dayton
Reply to  Toneb
April 19, 2016 9:47 am

Paul Dennis: Resolution is a different issue than systematic error. Use of anomalies compensates for the latter. Taking a large number of measurements compensates for the former (read up on the Law of Large Numbers).

Paul Dennis
Reply to  Toneb
April 19, 2016 11:03 am

Tom Dayton, I think you are wrong. If one’s resolution is high compared to the measurement precision, then one can repeat the measurement many times and take the standard error of the mean. However, if the resolution is poor, then no amount of repeated measurements allows one to recover a precision that isn’t there.
The reductio ad absurdum of your statement is that with a thermometer of 1K resolution, if I make a thousand measurements of a signal with random noise varying between 282.5 and 283.1K and a mean of 282.8K, I measure it as 283K +/- 0K! That is neither accurate nor a representation of the noise of the measurement.
Please read Pat Frank’s article because he is not just discussing systematic errors. Figure 13 and the accompanying discussion is all about instrument resolution and its impact on measurement precision.

Wagen
Reply to  Toneb
April 19, 2016 2:45 pm

“If we do a simple thought experiment and consider a thermometer with a resolution of 1K.”
This simply means the thermometer gives an output that is constrained to whole degrees. It says nothing about its precision.
“If an observer records the temperature as 283K it means that the temperature lies with equal probability at all values between 282.5 and 283.5K.”
Only if the thermometer can differentiate between 282.49 and 282.51 reliably, which would mean it has a very high precision relative to the whole-degree output.
“Now let’s say the actual temperature is 282.7K. No matter how many times we measure it and average the results we will get 283K.”
A more realistic scenario would be that the (whole-degree-output) thermometer says 283 in 7 out of 10 cases and 282 in 3 out of 10 cases. And even this is oversimplified. However, I hope you will start looking into your misconceptions.
And yes, anomalies filter out systematic biases (as long as the bias is constant and does not itself depend on the absolute, for instance when a temperature measurement bias gets higher when the absolute temperature is higher).
” This is different from the true temperature. Similarly the anomaly will also be incorrect and limited by the resolution of the thermometer.”
As shown above precision of a temperature measuring device is different from its resolution.

Toneb
Reply to  Toneb
April 19, 2016 3:15 pm

Met instruments are not required to be accurate to better than 0.1C. In climate we would like them to be.
However it does not matter that they may be inaccurate because they are consistent.
MIG/alcohol-in-glass will be wrong by a consistent amount – and in addition the UKMO uses platinum resistance thermometers, for instance, which have excellent stability and require calibration only every 8 years.
http://www.metoffice.gov.uk/guide/weather/observations-guide/how-we-measure-temperature
In addition, there are tens of thousands of thermometers around the world used in GMT datasets, and I would suggest that they do not all err in the same direction. Just as with tossing a coin or rolling a die, the probability of a bias toward heads or toward 6s converges towards zero.
This is a non-issue.

Reply to  Toneb
April 19, 2016 10:26 pm

Wagen,
again I am sorry you are wrong here. I suggest you read some texts on measurement theory. Take your example of a precise thermometer with resolution of 1 degree. As it’s precise it will always read 283K for a temperature of 282.7K and not 7 times out of 10!
Where did I say precision was the same as resolution? My example was to highlight that it is not. I’m sorry your example seems to confuse them.

Reply to  Toneb
April 19, 2016 10:33 pm

The errors are not constant, Toneb. Not in single instruments, not across instruments. That’s the impact of uncontrolled environmental variables.

Reply to  Toneb
April 19, 2016 10:42 pm

pauldennis, nice to see a post from you here. 🙂
I believe the square root of 12 adjustment comes of treating resolution as triangular, rather than rectangular. I’ve never been happy with that, because it presumes a higher weight on the center of the range.
I’ve never seen a good rationale for that, and have always taken the rectangular resolution to be a more forthright statement of uncertainty.

Reply to  Toneb
April 19, 2016 10:47 pm

Tom Dayton, taking anomalies does not remove the varying systematic error stemming from uncontrolled variables.
No large number of measurements improves the limits of instrumental detection. If the data are not there in one measurement, they’re not there in a sum of such measurements, no matter the number.

Reply to  Toneb
April 19, 2016 10:51 pm

Toneb, it appears the meaning of the graphical calibration displays of non-normal, time-and-space-varying systematic errors is lost on you.

Reply to  Toneb
April 19, 2016 11:46 pm

Pat, thank you for the welcome. I think you are right that the sqrt 12 does derive from a triangular distribution. I agree this weights for the centre of the range and I’m not sure that this is a best estimate of the uncertainty. My inclination would be that to use a rectangular distribution would be more robust. The only reason why an observer would weight for the centre of the range is that towards the limit they may bin data in the lower or upper range. However this is a supposition and unquantifiable so it seems that using sqrt 3, as you say, is more forthright and a better assessment of the uncertainty.

Reply to  Toneb
April 20, 2016 12:25 am

“Now let’s say the actual temperature is 282.7K. No matter how many times we measure it and average the results we will get 283K.”
I don’t see what situation in climate science this refers to. It is very unlikely that the exact same temperature would be repeatedly measured, especially with min/max thermometers. A much more reasonable thing to look at is a month average of varying temperatures by that thermometer. Then the number of times this 1° thermometer will round up is about equal to the number of times it will round down, cancelling in the sum. In fact, you can easily emulate this situation. Average 100 random numbers between 0 and 1. You’ll get close to 0.5. Then average 100 integer-rounded such numbers. You’ll get a mean with binomial distribution, as if tossing a coin 100 times: 0.5 +- 0.05. Effectively, your thermometer does this modulo 1.
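A short script that runs the emulation described above many times, so the spread of the two kinds of mean can be compared:

import numpy as np

rng = np.random.default_rng(2)
means_raw, means_rounded = [], []
for _ in range(10000):
    x = rng.uniform(0.0, 1.0, 100)
    means_raw.append(x.mean())
    means_rounded.append(np.round(x).mean())

print("sd of raw means:    ", np.std(means_raw))      # about 1/sqrt(12*100) = 0.029
print("sd of rounded means:", np.std(means_rounded))  # about 0.5/sqrt(100)  = 0.050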

Reply to  Toneb
April 20, 2016 2:16 am

Nick, the average may or may not be close to 0.5. I have just tried and my first attempt with 100 numbers at 0.01 and 0.1 resolution gave 0.5289 (0.01) and 0.5572 (0.1). These means differ by nearly 0.03 and are at the 1 sigma uncertainty given by the resolution (0.1/sqrt 12). Of course a month is just 30 days, with every possibility that the rounding up or down is not truly random and that differences between the true mean and the estimated mean could be greater.

Reply to  Toneb
April 20, 2016 4:16 am

Paul,
I presume you are rounding to respectively two figures and one. The problem there is that the resolution is too good; the se of the mean with perfect resolution is .1/sqrt(12)= .0289, and degrading the resolution to .01 makes virtually no difference, and .1 not much more. But resolution 1 (integer rounding) shows a difference. I got in my first three tries, 0.58, 0.38 and 0.51. Obviously poorer se than the hi-res se of 0.0289 (the theoretical is .05). But a lot better than resolution 1.

Reply to  Toneb
April 20, 2016 4:38 am

Paul,
Just expanding on the math there, I think it is that the expected standard error of the mean of 100 numbers, randomly distributed between 0 and 1 but measured to resolution r, is
sqrt((1+r)(1+r/2)/12)
So here is a table (hope format is OK)

Res r:  		0     0.01    0.1      1
S.E. of mean: 0.0289  0.0291  0.0310  0.0500

So at resolution 0.01, the se far exceeds the res. But at 0.1, there is a definite improvement, and for 1 the improvement is very great. Of course, if you average 10000, the SE’s divide by 10, so then even at res 0.1, the SE is far more accurate at 0.0031.

Reply to  Toneb
April 20, 2016 5:43 am

sqrt((1+r)(1+r/2)/12)
should be sqrt((1+r)(1+r/2)/12/N) where N is number of readings (actually, probably should strictly be N-2)

Reply to  Nick Stokes
April 20, 2016 6:00 am

“sqrt((1+r)(1+r/2)/12/N) where N is number of readings (actually, probably should strictly be N-2)”
What’s r, the reading?

Reply to  Nick Stokes
April 20, 2016 6:07 am

Never mind Nick.
I see up thread it’s resolution. I read these through email, so it was only after posting I saw the answer.

Reply to  Toneb
April 20, 2016 9:12 pm

Nick Stokes, the issue is resolution not random offsets. Resolution means the instrument is not sensitive to magnitude differences within the limits of that resolution.
It’s not that the data are noisy. It’s that the data are nonexistent. There is no way to recover missing data by taking averages of measurements with missing data.

Reply to  Toneb
April 20, 2016 9:48 pm

Nick Stokes, here’s what JCGM Guide to Uncertainty in Measurement (2 MB pdf) says about instrumental resolution (transposing their argument into temperature):
If the resolution of the indicating device is δx, the value of the stimulus that produces a given indication X can lie with equal probability anywhere in the interval X − δx/2 to X + δx/2. The stimulus is thus described by a rectangular probability distribution of width δx with variance u^2 = (δx)^2/12, implying a standard uncertainty of u = 0.29δx for any indication.
Thus a thermometer whose smallest significant digit is 1 C has a variance due to the resolution of the device of u^2 = (1/12)1C^2 and a standard uncertainty of u = [1/sqrt(12)]1C = 0.29 C.

This is typical of the standard applied to reading LiG thermometers, where the resolution is taken to be 0.25 of the smallest division.
The 1/12 above represents the result of an a priori triangular resolution; a relaxation of conservative rigor.
No average of any number of ±0.29 C resolution temperatures will reduce that uncertainty.
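The arithmetic in the quoted JCGM passage, for reference:

import math

dx = 1.0                 # smallest significant digit of the thermometer, deg C
u = dx / math.sqrt(12)   # standard uncertainty of a rectangular distribution of width dx
print(f"u = {u:.3f} C")  # about 0.29 C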

Reply to  Toneb
April 21, 2016 12:15 am

Pat Frank,
You say “the issue is resolution not random offsets” but then quote JCGM saying:
“the value of the stimulus that produces a given indication X can lie with equal probability anywhere in the interval X − δx/2 to X + δx/2. The stimulus is thus described by a rectangular probability distribution of width δx with variance u^2 = (δx)^2/12,”
Sounds exactly like specifying random offsets. In fact, it describes exactly the arithmetic I was doing. You read one day, offset σ 0.289. Read the next day, that’s an independent offset. The variances sum, so the sum has variance 1/6, or σ sqrt(1/6). But the average has half that σ, or sqrt(1/24)=0.204. And so on.
Here is a full example. On pages like this, BoM shows the daily max for each recent month in Melbourne, to one decimal place. Here is last month (Mar):

33.7 34.7 23.9 33.0 23.7 25.2 24.9 38.9 28.5 22.1 26.1 22.3 23.2 21.3 26.8
31.4 32.5 19.5 18.8 23.3 23.5 24.3 28.8 21.2 20.4 20.2 19.9 19.2 17.9 18.7 22.7

Suppose we had a thermometer reading to only 1°C – so all these were rounded, as in the JCGM description. For the last 13 months, here are the means for the BoM and for that thermometer:

      Mar   Apr   May   Jun   Jul   Aug   Sep   Oct   Nov   dec   Jan   Feb   Mar
1 dp: 22.72 19.24 17.13 14.43 13.29 13.85 17.26 24.33 22.73 27.45 25.98 25.1  24.86
0 dp: 22.77 19.27 17.13 14.37 13.29 13.84 17.33 24.35 22.67 27.48 26    25.17 24.84
diff:  0.05 0.03    0   -0.06    0  -0.01  0.08  0.03 -0.06  0.03  0.02  0.08 -0.02

The middle row, measured by day to 1°C, has a far more accurate mean than that resolution. As a check, the sd of the difference (bottom row) is expected to be sqrt(1/12/31) (slight approx for days in month), which is 0.052. The sd of the diffs shown is 0.045.
You can check.
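For anyone who wants to check it, a short script using the March daily maxima listed above:

import numpy as np

march = np.array([
    33.7, 34.7, 23.9, 33.0, 23.7, 25.2, 24.9, 38.9, 28.5, 22.1, 26.1, 22.3, 23.2,
    21.3, 26.8, 31.4, 32.5, 19.5, 18.8, 23.3, 23.5, 24.3, 28.8, 21.2, 20.4, 20.2,
    19.9, 19.2, 17.9, 18.7, 22.7,
])
mean_1dp = march.mean()            # mean of the published 0.1 C values
mean_0dp = np.round(march).mean()  # mean as if read with a 1 C thermometer (halves round to even)
print(f"1 dp mean: {mean_1dp:.2f}   0 dp mean: {mean_0dp:.2f}   diff: {mean_0dp - mean_1dp:+.3f}")
print("expected sd of the diff:", 1 / np.sqrt(12 * len(march)))  # sqrt(1/12/31) = 0.052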

Reply to  Toneb
April 21, 2016 9:41 pm

Nick Stokes, you wrote, “You read one day, offset σ 0.289. Read the next day, that’s an independent offset.”
Not correct. The meaning of the ±0.289 resolution is that one does not know where the true temperature lies within ±0.289 of the reading.
It’s not an offset. It’s a complete absence of data within that ± range.
The next day’s reading includes an identical ignorance width. The resolution uncertainties are 100% correlated. The mean of two readings has an uncertainty of ±0.41 C.
In your BOM example, given a rectangular resolution of ±0.289 C, all those daily readings would be appended with ±0.3 C. The monthly readings would be immediately tagged as having one too many significant figures.
The resolution propagated as a rectangular uncertainty into a 30-day monthly mean is sqrt[(0.289)^2*(30/29)] = ±0.3 C.
Your 1 C rounding example includes a subtle but fatal error. That is, you assume that the significant digits are truly accurate to 0.1 C and that their rounding is therefore consistent with true accuracy.
However, limited resolution means that the temperatures are not accurate to that limit. They are accurate only to ±0.289 C. This means the recorded numbers have a cryptic error relative to the true air temperature.
So, even though the rounded values average to near the mean of the unrounded values, both means include the cryptic error due to the ±0.289 C resolution detection limit.
There’s no getting around it, Nick. Resolution is a knowledge limit, and nothing gets it back.
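The propagation arithmetic stated above, written out:

import math

u_day = 1 / math.sqrt(12)                # +/-0.289 C rectangular resolution uncertainty per reading
u_month = math.sqrt(u_day**2 * 30 / 29)  # about +/-0.29 C, rounded to +/-0.3 C above
print(round(u_month, 3))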

Reply to  Toneb
April 22, 2016 12:17 am

“The resolution uncertainties are 100% correlated.”
100% wrong. They are independent. Take the March maxima I showed above. The rounding residues, multiplied by 10 for display, are:
-3 -3 -1 0 -3 2 -1 -1 5 1 1 3 2 3 -2 4 5 -5 -2 3 -5 3 -2 2 4 2 -1 2 -1 -3 -3
The lag 1 autocorrelation coefficient is -0.057. That small value is consistent with zero and certainly not with 1. And the sd (without x10) is 0.289; coincidentally almost exactly the theoretical sqrt(1/12).
“However, limited resolution means that the temperatures are not accurate to that limit. They are accurate only to ±0.289 C. This means the recorded numbers have a cryptic error relative to the true air temperature.”
Where on Earth do you get this from? There is no reason to believe the BoM numbers are limited to 1° resolution. But in any case, there will be some other set of accurate underlying values which will give similar arithmetic. The point of my example is that degrading any similar set of T to integer resolution creates a 0.05 (sqrt(1/12/31)) uncertainty in the mean of 31.
“both means include the cryptic error due to the ±0.289 C resolution”
Again, how on Earth can you assign such an error to the BoM values? But there are no “cryptic errors” here. I have just formed the means directly and shown their observed distribution.
I have written a post on this stuff here.
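A short check of the residue figures above, again using the March maxima (np.round rounds halves to even, consistent with the residues as listed):

import numpy as np

march = np.array([
    33.7, 34.7, 23.9, 33.0, 23.7, 25.2, 24.9, 38.9, 28.5, 22.1, 26.1, 22.3, 23.2,
    21.3, 26.8, 31.4, 32.5, 19.5, 18.8, 23.3, 23.5, 24.3, 28.8, 21.2, 20.4, 20.2,
    19.9, 19.2, 17.9, 18.7, 22.7,
])
resid = march - np.round(march)                 # rounding residues
r = resid - resid.mean()
lag1 = np.sum(r[:-1] * r[1:]) / np.sum(r * r)   # lag-1 autocorrelation coefficient
print("lag-1 autocorrelation:", round(lag1, 3))
print("sd of residues:       ", round(resid.std(ddof=1), 3))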

Reply to  Toneb
April 22, 2016 9:31 am

Nick, the resolution uncertainties are 100% correlated because every single one of your instruments comes with an identical limit of resolution.
Your analysis compares the recorded temperatures. You’re treating them as though they are physically accurate. You’re looking at their lags and difference SDs and proceeding as though those quantities told you something about accuracy. They don’t. Internal comparisons tell you nothing about accuracy. They tell you only about precision.
Resolution — the limit of detection — tells us that all the BOM numbers are erroneous with respect to the physically correct temperatures. But we do not know the physically correct temperature, so that the true magnitude of resolution-limited measurement error is forever unknown.
You wrote, “There is no reason to believe the BoM numbers are limited to 1° resolution.” Except there is every reason to believe that, Nick, because in your example the instrumental detection limit is invariably ±0.289 C. There’s no getting away from that fact of grim instrumental reality.
The point of your example is that it misses the meaning of instrumental resolution.
You wrote, “Again, how on Earth can you assign such an error to the BoM values? But there are no ‘cryptic errors’ here.” Cryptic errors arise from the limits of detection, Nick. Resolution limits mean there is an unknown difference between the instrumental reading and the physically true magnitude. That difference is the error that follows from the instrumental detection limit — the resolution.
Every single BOM temperature includes that error within its recorded value. The mean of those erroneous temperatures will include the mean resolution error, whatever that is. We can only estimate the resolution error in the mean as the resolution uncertainty, ±0.289 C.
All you’ve demonstrated is that perfect rounding of an erroneous series yields a mean with about the same error as the mean of the unrounded series.
Your demonstration has nothing whatever to do with the physical accuracy of the temperatures or of their mean.
Their “observed distribution” is about precision, Nick, not about accuracy.

Reply to  Toneb
April 22, 2016 12:39 pm

“Except there is every reason to believe that, Nick, because in your example the instrumental detection limit is invariably ±0.289 C.”
This is just nuts. No, in my example I took the actual BoM data and a hypothetical thermometer which could be (or was) read to 1°C accuracy. The latter might have that limit; there is no reason to say that it applies to the BoM instruments.

Reply to  Toneb
April 23, 2016 2:01 pm

Nick, it’s not “just nuts.”
Rather, it’s that you apparently do not understand limits of resolution. Data — information — ceases at the detection limit, Nick.
The BOM values include both systematic error and resolution error with respect to the unknown true air temperatures.
For a 1 C thermometer, resolution error means that the true air temperature can be anywhere within ±0.289 C of the recorded temperature. The recorded temperature will include some magnitude of error that will forever remain unknown.
The reliability of the measurement can be given only as the ±uncertainty of the known instrumental resolution (assuming systematic error is zero).
Absent a knowledge of the true air temperatures, we (you) do not know the true magnitude of the error within the BOM values.
No amount of internal analysis will reveal that error.
When you rounded to 1 C, you perfectly rounded erroneous values. Of course the rounded means will correspond to the unrounded means. But both means will include the original resolution-limited measurement error.
There’s no getting around it.

Reply to  Toneb
April 24, 2016 12:53 pm

I tried posting a reply at Nick Stokes’ site, where he posted his full BOM temperature analysis that tries to show instrumental resolution doesn’t matter in averages, and which he advertised at WUWT here.
However, my reply always vanished into the ether, no matter whether I posted as “Anonymous” or under my OpenID URL.
So, I’ve decided to post the reply here at WUWT, where valid comments always find a home. One hopes the track-back notice will show up on Nick’s site.
If you visit there to take a look, notice the generous personal opinions expressed by the company.
Here’s the reply:
There seems to be little apparent understanding of instrumental resolution — limits of detection — within the comments here [at Nick’s site — P], and certainly so in Nick’s analysis.
Also, Eli’s sampling rejoinder misses the point.
Look, Nick’s BOM example claims to show that measurements limited to 1 C divisions yield means of much higher accuracy.
However, resolution uncertainty means, e.g., a LiG thermometer with 1 C graduations cannot be read to temperature differences smaller than its detection limit of ±0.289 C (LiG = liquid-in-glass).
In such a case, all we can know is that the physically real air temperature is somewhere within ±0.289 C of the temperature readout on the thermometer.
Applying this concept to the BOM temperatures: every BOM temperature measurement has some unknown error hidden within it because of instrumental resolution. Limited resolution — the detection limit — means there is always a divergence of unknown magnitude between the readout value and the true air temperature.
Nick has shown that rounding those temperature measurements to 1 C yields a mean value very close to the mean of the unrounded values.
However, when the recorded temperatures include a cryptic error arising from limited resolution, the total set of rounded values will include erroneous roundings. These encode the error into the set of rounded values. The cryptic error is then propagated into the mean of the rounded values. The rounded temperatures then converge to a mean with the same error as the mean of the unrounded temperatures.
That is, rounding erroneous values means the rounding itself is erroneous. There is no cause to think the rounding errors are randomly distributed because there is no cause to think the resolution errors are randomly distributed.
The thermometer can not resolve temperature differences smaller than its resolution. That puts errors into the measurements, and we have no idea of the true error magnitudes or of their true distribution.
The fact that Nick’s two means are close in value doesn’t imply that resolution doesn’t matter. It implies that the mean of large numbers of perfectly rounded values converges to the mean of the original unrounded values.
That is, Nick’s example merely shows that perfectly rounded erroneous temperatures converge to the same incorrect mean as the original unrounded erroneous temperatures.
Rounding does not remove the resolution-derived measurement error present in the unrounded temperature record. The rounded mean converges to the same error as the unrounded mean.
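For readers who want to see the arithmetic, here is a minimal Python sketch of the scenario described above. The temperatures, the error magnitudes, and the skewed error distribution are all hypothetical, chosen only for illustration: the rounded mean tracks the recorded mean, but both carry the same hidden error relative to the true mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" air temperatures (unknown to the observer)
true_temps = rng.uniform(5.0, 25.0, size=10_000)

# Hypothetical measurement error: a skewed, non-zero-mean contribution
# standing in for unresolved systematic and resolution effects
error = rng.gamma(shape=2.0, scale=0.15, size=true_temps.size)  # mean ~ 0.3 C

recorded = true_temps + error          # what the instrument reports
rounded = np.round(recorded)           # "perfect" rounding to 1 C marks

print("true mean:     ", true_temps.mean())
print("recorded mean: ", recorded.mean())   # roughly true mean + 0.3 C
print("rounded mean:  ", rounded.mean())    # tracks the recorded mean...
print("hidden error:  ", rounded.mean() - true_temps.mean())  # ...the error remains
```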
The true error distribution of those measurements can not be appraised because the true air temperatures are not known. This is obviously true for a LiG thermometer where the sensor detection limit is evident by inspection.
The same is true for digital thermometers as well, though. Suppose, for example, one has a digital thermometer of resolution ±1 C. That resolution does not refer to the readout, which can have as many digits as one likes. Resolution instead refers to the sensitivity of the instrumental electronics to changes in temperature, where “electronics” includes the sensing element.
When the instrument itself is not sensitive to temperature changes of less than 1 C, no amount of dithering will improve the ±1 C resolution because no higher accuracy information about the true air temperature is resident in the instrument.
Now suppose we have a digital sensor of ±0.1 C precision but of unknown accuracy, and so carry out a calibration experiment using a higher-accuracy thermometer and a water bath. We construct a calibration curve for our less accurate instrument — water bath temperature from the high-accuracy instrument vs. low-accuracy readout.
The calibration curve need not be linear; merely smooth, univariate, and replicable. We can use the calibration curve to correct the measured temperatures. We are now able to measure temperatures in our lab to ±0.1 C, by reference to the calibration curve.
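As a rough sketch of the lab calibration step just described (the bath temperatures, the sensor's non-linearity, and the quadratic fit are all invented for illustration): pair the low-accuracy readout against the reference thermometer in the bath, fit a smooth single-valued curve, and use it to correct later readings taken under the same lab conditions.

```python
import numpy as np

# Hypothetical calibration run in a 25 C lab:
# reference water-bath temperatures from the high-accuracy thermometer (C)
ref = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 35.0, 40.0])
# corresponding readouts from the lower-accuracy sensor: offset plus mild non-linearity
readout = ref + 0.4 - 0.0005 * (ref - 25.0) ** 2

# Fit a smooth, single-valued calibration curve: reference temperature = f(readout)
calibrate = np.poly1d(np.polyfit(readout, ref, deg=2))

# Correct a later reading; the correction is valid only near the calibration conditions
raw = 22.7
print("raw readout:      ", raw)
print("calibrated value: ", round(float(calibrate(raw)), 2))
```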
Now the instrument goes outside, where the air temperature is variable. What happens to the response function of the instrument when the electronics themselves are exposed to different temperatures?
They no longer are bathed in 25 C air. Does the calibration curve constructed at 25 C — the lab air temperature — also apply at -10 C, at 0 C, at 10 C, or at 35 C? We don’t know because it’s not been determined. What do we do with our prior lab calibration curve? It no longer applies, except at outside air temperatures near 25 C.
This question has been investigated. X. Lin and K.G. Hubbard (2004), “Sensor and Electronic Biases/Errors in Air Temperature Measurements in Common Weather Station Networks,” J. Atmos. Ocean. Technol. 21, 1025-1032, show that significant errors creep into MMTS measurements from the temperature sensitivity of the electronics.
Here’s what they say about laboratory calibrations: “It is not generally possible to detect and remove temperature-dependent bias and sensor nonlinearity with static calibration.” Any experimental scientist or engineer will agree with their caution.
From electronics limitations alone, the best-case detection limit of the MMTS sensor is ±0.2 C. This means the state of the sensor electronics does not register or convey any information about air temperature more accurate than ±0.2 C.
Following from their analysis, Lin and Hubbard noted that, “Only under yearly replacement of the MMTS thermistor with the calibrated MMTS readout can errors be constrained within ±0.2 C under the temperature range from -40 C to +40 C.” Below -40 C and above +40 C, errors are greater.
Their findings mean that under the best of circumstances, MMTS sensors cannot distinguish temperatures that differ by ±0.2 C or less. No amount of averaging will improve that condition because the recorded temperatures lack any information about magnitudes inside that limit.
And there is no greater sensitivity to temperature changes available from within the instrument itself. That is, the electronic state of the instrument does not follow the air temperature to better than ±0.2 C. Dithering the readout gets you nothing.
Instrumental resolution — detection limits — means that averaging large numbers of resolution-limited temperature readings, each of which has an unresolved error and all of which errors conform to a distribution of unknown shape, can not result in a mean with a lesser uncertainty than an individual reading.
Supposing so is to magic data out of thin air.
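Here is a minimal sketch of the limiting case argued above, under two explicit assumptions: the true temperature sits steadily between graduations, and the instrument has no genuine sub-resolution noise to dither the readout. Under those assumptions, averaging any number of readings cannot recover anything finer than the resolution step (whether such dither exists inside the instrument is precisely what this thread disputes).

```python
import numpy as np

true_temp = 21.3            # hypothetical steady true air temperature (C)
n_readings = 1_000_000

# A sensor whose internal state does not respond to changes smaller than 1 C:
# with a steady input and no sub-degree noise to dither the readout,
# every reading lands on the same 1 C graduation.
readings = np.full(n_readings, np.round(true_temp))   # all 21.0

mean_reading = readings.mean()
print("true temperature:      ", true_temp)
print("mean of 10^6 readings: ", mean_reading)               # still 21.0
print("error of the mean:     ", mean_reading - true_temp)   # -0.3 C, not reduced
```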

Reply to  Toneb
May 3, 2016 10:26 pm

My friend, Carl W. (Ph.D. Stanford University) found an excellent explanation of the differences among instrumental resolution (detection limits), precision (repeatability), and accuracy (correspondence with physical reality).
It’s at Phidgets here.
For all who may still be reading, and care. 🙂

Alex
Reply to  ralfellis
April 19, 2016 7:59 am

I think the point of the post is that they aren’t consistent error problems.

Tom Dayton
Reply to  Alex
April 19, 2016 8:09 am

Alex, the author of this post claimed to be focusing on “systematic” error, which indeed does mean “consistent.”

Reply to  Alex
April 19, 2016 10:57 pm

You’re right, Alex.
Tom Dayton, systematic error means the error derives from some systematic cause, rather than from a source of random noise. It does not mean ‘constant offset.’
Uncontrolled systematic impacts can skew a measurement in all sorts of ways. When the error derives from uncontrolled variables (note: not uncontrolled constants), the error varies with the magnitude and structure of the causal forces.
That’s the case with unaspirated surface temperature sensors. Measurement error varies principally with variable wind speed and irradiance.

Owen in GA
Reply to  ralfellis
April 19, 2016 9:34 am

The errors he is talking about are not removed by using an anomaly because the errors do not “average out”. That only works for normally distributed errors. These errors have a right-skewed distribution, so they do not average out. The anomaly argument uses the law of large numbers to justify averaging out the errors at a site. The first assumption of the law of large numbers is that the errors are normally distributed. If the distributions are right-skewed, the first assumption fails and the law does not apply.

Reply to  Owen in GA
April 19, 2016 11:41 am

Richard molineux April 19, 2016 at 9:43 am

” The first assumption of the law of large numbers is that the errors are normally distributed.”

….This is not true. The actual law makes no mention of “normal distribution”
The law states: The average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.

Thanks for the link, Richard. Actually, that is a very bad statement of the law of large numbers (LoLN), no surprise given that it is wikipedia … here’s a better one, which starts:

The law of large numbers formulated in modern mathematical language reads as follows: assume that X1, X2, . . . is a sequence of uncorrelated and identically distributed random variables having finite mean μ …

From this we see that the law of large numbers (LoLN) only works on uncorrelated “i.i.d.” distributions, where “i.i.d.” stands for “independent identically distributed”. If the variables are NOT uncorrelated i.i.d., then we cannot assume that the law of large numbers will work. It might … but we can’t be sure of that. And even if the LoLN does work in general, in that the average of the numbers approaches the true average, it may approach the average much more slowly than the approach rate calculated by the LoLN.
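A quick numerical sketch of that slower convergence (the AR(1) process and its 0.9 correlation are hypothetical stand-ins for correlated data): the spread of the sample mean of the correlated series comes out several times wider than the sigma/sqrt(N) value the i.i.d. assumption would give.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_trials, phi = 1_000, 2_000, 0.9   # sample size, repeats, AR(1) correlation

def ar1_series(size: int) -> np.ndarray:
    """Generate a correlated AR(1) series: x[i] = phi * x[i-1] + e[i]."""
    e = rng.normal(size=size)
    x = np.empty(size)
    x[0] = e[0]
    for i in range(1, size):
        x[i] = phi * x[i - 1] + e[i]
    return x

means = np.array([ar1_series(n).mean() for _ in range(n_trials)])
sigma_x = ar1_series(200_000).std()     # empirical SD of the correlated data

print("naive i.i.d. SE of mean (sigma/sqrt(N)):", sigma_x / np.sqrt(n))
print("actual spread of the sample means:      ", means.std())  # several times larger
```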
And of course, a bunch of temperature measurements from some area are NOT uncorrelated, and NOT independent, and are most likely NOT identically distributed. As a result, the error values given by the Law of Large Numbers will underestimate the true error.
w.

Reply to  Owen in GA
April 19, 2016 10:59 pm

Owen, you’re obviously unqualified to be a climate scientist. 🙂
More seriously, your understanding appears nowhere in the published literature.

Reply to  ralfellis
April 19, 2016 9:56 am

Try building a house with no measuring instrument finer than 1-inch resolution. That is, you can not measure or cut a piece of wood to any length other than exact 1-inch increments, short of guessing the fractions. Yes, every wall stud can be cut to exactly 84 inches, and the wall can be exactly 16 feet long, but what happens when you nail on the sheathing, then the siding, plasterboard, molding, etc.?

Tom T
Reply to  ralfellis
April 19, 2016 10:28 am

Did you look at the figures? The systematic error is not constant, nor is it Gaussian, so taking an anomaly will not filter it out.

Reply to  ralfellis
April 19, 2016 2:43 pm

A big part of the problem is that we assume the thermometers are measuring air temperature; reading this article, I've realised that's a false assumption. The thermometers actually measure the temperature of the thermometer, which is influenced by the air temperature, the wind speed, whether and how the enclosure is ventilated (passively or forcibly), and whether, and how much, the enclosure is exposed to sunlight. Because all of these variables change, the thermal equilibrium that is reached, and how long the thermometer takes to reach it, constantly change. In short, the instrumental error is inconsistent and unpredictable.

Reply to  Paul Jackson
April 19, 2016 11:06 pm

Exactly right, Paul Jackson. Air temperature sensors measure the temperature inside their enclosure. When the enclosure is well-aspirated, the sensor can approach the outside temperature pretty well.
Without aspiration, significant, non-normal, and variable errors are produced.
The entire land-surface historical temperature record up to about year 2000 was obtained using unaspirated sensors. Today, world wide, that’s still mostly true.

Reply to  ralfellis
April 19, 2016 10:31 pm

ralfellis, you’re missing the point that the systematic errors in air temperature measurements are produced by uncontrolled environmental variables.
That means the errors are not constant in time or space. They are not mere offsets, they are not removed at all by taking anomalies.
The only way to deal with persistent and variable systematic errors is to evaluate their average magnitude by a series of calibration experiments, and then report that average as an uncertainty attached to every field measurement.
In fact, taking an anomaly by subtracting a measurement contaminated with systematic error, u1, from a mean that also has a systematic error contamination, u2, produces a greater uncertainty in the anomaly, u3 = sqrt[(u1)^2+(u2)^2].
That is, with non-normal and variable systematic errors, taking differences produces an anomaly with increased uncertainty.
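A one-line numerical check of that propagation formula, using hypothetical uncertainties u1 = u2 = 0.5 C purely for illustration:

```python
import math

u1 = 0.5   # hypothetical uncertainty in the individual measurement (C)
u2 = 0.5   # hypothetical uncertainty in the reference-period mean (C)

u3 = math.sqrt(u1**2 + u2**2)                  # uncertainty of the anomaly (the difference)
print(f"anomaly uncertainty: +/-{u3:.3f} C")   # +/-0.707 C, larger than either input
```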

Reply to  ralfellis
April 20, 2016 1:35 am

This article showed that the systematic errors AREN'T consistent in the required sense. They vary. They don't vary equally around zero, so averaging doesn't help, but since they DO vary with time, place, and individual instrument, anomalies don't help either. It really was quite clear from the graphs.

Alex
April 19, 2016 7:33 am

Good luck with calibrating LIG thermometers. A calibration service will probably not calibrate one that is over 5 years old. You get a piece of paper that tells you where on the thermometer the inaccuracies are and what the correction factor is. Lab techs file the paper away, use the thermometer as-is, and then say it is calibrated.
I used to supply scientific equipment and visited labs. I have a reasonable idea of what goes on compared to what is supposed to happen.

Reply to  Alex
April 19, 2016 9:37 am

Agreed. And, in addition, from my experience in classified government contracting, it's standard procedure to record measurements that are smaller than the instrument manufacturer's stated measurement precision.

john harmsworth
Reply to  Alex
April 19, 2016 12:34 pm

I have no doubt that this is true and have seen many ways to get an incorrect temperature reading. I am but a humble refrigeration tech, but I would never trust a single temperature reading for something that matters without first calibrating my instrument. For refrigeration temperature readings I would calibrate a pressure gauge against atmospheric pressure and then use the observed pressure of the saturated gas to derive a temperature, which is always more accurate. The idea of using ship intake temps as accurate measurements is laughable to me. Are all the intake pipes insulated? Is the intake at the same depth for all readings? Daytime or nighttime? How far is the pipe run through the ship, and at what ambient temp? Were any of the thermometers or sensors calibrated? How far does the well stick up? What is the depth of the well? Is the inlet water filter clean? Ridiculous!

jsuther2013
April 19, 2016 7:34 am

Very good paper. Now render it down to the ten main points for those whose attention spans are less than two minutes.
Unjustified claims of precision and accuracy are much more common than anyone thinks.

David L. Hagen
April 19, 2016 7:43 am

Type B errors ignored
Thanks Pat for a superb presentation and clear discussion.
It appears the “Climate Consensus” willfully ignores the international guidelines for evaluating uncertainties formally codified under true scientific consensus among the national standards labs.
See:
Evaluation of measurement data – Guide to the expression of uncertainty in measurement. JCGM 100: 2008 BIPM (GUM 1995 with minor corrections) Corrected version 20100
This details the Type A and Type B uncertainty errors.
Type A. those which are evaluated by statistical methods,
Type B. those which are evaluated by other means.
See the diagram on p53 D-2 Graphical illustration of values, error, and uncertainty.
Type B errors are most often overlooked. E.g.

3.3.2 In practice, there are many possible sources of uncertainty in a measurement, including:
a) incomplete definition of the measurand;
b) imperfect realization of the definition of the measurand;
c) nonrepresentative sampling — the sample measured may not represent the defined measurand;
d) inadequate knowledge of the effects of environmental conditions on the measurement or imperfect measurement of environmental conditions;
e) personal bias in reading analogue instruments;
f) finite instrument resolution or discrimination threshold;
g) inexact values of measurement standards and reference materials;
h) inexact values of constants and other parameters obtained from external sources and used in the data-reduction algorithm;

See NIST’s web page Uncertainty of Measurement Results
International and US Perspectives on measurement uncertainty
Barry N. Taylor and Chris E. Kuyatt, Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results, NIST TN1297 PDF

Reply to  David L. Hagen
April 19, 2016 11:50 am

Thanks for the link on “Type B” errors. I’ve been beating this drum for a long time, saying “Yes, but that is just the STATISTICAL ERROR, not the total error.”
w.

David L. Hagen
Reply to  Willis Eschenbach
April 19, 2016 1:49 pm

Thanks Willis. Speaking of which: Errata typo “2010” not “20100”
Evaluation of measurement data – Guide to the expression of uncertainty in measurement. JCGM 100: 2008 BIPM (GUM 1995 with minor corrections) Corrected version 2010
From: JCGM Working Group on the Expression of Uncertainty in Measurement (GUM)
Hosted by BIPM Bureau International des Poids et Mesures
News JCGM
20th anniversary of the GUM Metrologia, Volume 51, Number 4, August 2014
Revision of the ‘Guide to the Expression of Uncertainty in Measurement’ Metrologia, Volume 49, Number 6 , 5 October 2012
News: Standards for monitoring atmospheric carbon dioxide

Measurements of the amount fraction of CO2 in air standards, performed with cryogenic trapping, have been achieved at the BIPM. This represents an important milestone towards the establishment of the BIPM CO2-PVT measurement system, which will play a role in the CCQM-K120 key comparison for CO2 in air in 2016.

Reply to  David L. Hagen
April 20, 2016 9:51 pm

Thanks, David. I’ve been using those references and have found them very very helpful. I think it may have been you who referred me to them originally, lo, these many years ago.

April 19, 2016 7:44 am

Pat, this is an excellent analysis of measurement error associated with estimation of global temperatures and anomalies. However, as you probably realize, this aspect is only part of the overall uncertainty and there are other important parts. Perhaps the most important part of the uncertainty is representativeness of the measurements, which includes microscale and mesoscale influences on the measurement site relative to the general characteristics of the typically large spatial area represented by a monitor and changes in those influences over time. I have provided a more detailed overview here, mainly pertaining to land based measurements:
https://oz4caster.wordpress.com/2015/02/16/uncertainty-in-global-temperature-assessments/
One other thought on the temperature measurement uncertainty is what happens when the temperature shield is covered in ice or snow? I have been looking at 3-hour temperature reports in the Arctic area north of North America provided by NOAA, including an archive of past data plots:
http://www.wpc.ncep.noaa.gov/html/sfc-zoom.php
On these plots, I have noticed that some of the buoy observations in the Arctic sea show little variation over time, as might be expected if the sensor shield was covered in ice or snow. And yet these observations are being reported as if they are accurate and the data may be used in weather forecast modeling and weather forecast model reanalyses for climatic assessment. One particular buoy has been reporting temperatures nearly constant at 13-15F while other buoys in the area show temperatures well below 0F and show much larger variations over time. I also have noticed a Canadian automated weather station on an island at the edge of the Arctic Ocean near Alaska reporting temperatures nearly constant at about 30F all winter while all other reports in the area are much lower. This type of problem adds considerable uncertainty to measurements over snow and ice, especially in remote locations with automated monitors.
Since the Arctic and Antarctic areas are critical to evaluating climate change, it is very important that accurate measurements are made. I would much rather see the huge amounts of money being funneled into redundant and misleading GCMs be diverted into obtaining better and more accurate coverage of the Arctic and Antarctic areas, and including better data validation and reporting. A lot of bad data are being reported as if they are good data and coverage is relatively poor compared to most populated areas.
Deserts and very complex terrain are additional examples of areas with poor coverage globally and with serious challenges to accurate and representative measurements.

Don K
Reply to  oz4caster
April 19, 2016 9:08 am

> also have noticed a Canadian automated weather station on an island at the edge of the Arctic Ocean near Alaska reporting temperatures nearly constant at about 30F all winter
With just a bit more atmospheric CO2, they’ll be able to grow bananas there.

john harmsworth
Reply to  oz4caster
April 19, 2016 1:23 pm

This is interesting but you may have missed the cause. I would suspect from what you have found that these sensors are often reading around the dewpoint of the air. If this is so they probably have condensation on them and if they are reading below freezing then they may have ice accumulation as well. If they continue to acquire condensation on top of the ice they will show a temperature skewed warmer by the heat released by the atmospheric moisture as it condenses. This will be the case for all sensors reading air temps around dew point unless some method has been adopted to correct for that fact. Makes me wonder about NOAA buoys at colder latitudes.

Reply to  john harmsworth
April 19, 2016 3:59 pm

john, I managed to track down data from the buoy in question. It has the WMO ID 48507 and it frequently shows long stretches of constant temperature with only small variations over time. The last 13 observations available for today showed a constant -8.46C spanning a five hour period from 1200 to 1700 UTC. The closest buoy station, WMO ID 48731, showed temperatures ranging from -23.52C to -21.05C over this same period and every one of the 20 observations had a slightly different temperature. You can find the data here:
http://iabp.apl.washington.edu/maps_daily_table.html
So, I’m not sure what is causing the bad data at 48507, but it certainly does not look like valid temperature data and yet it is still being reported for use in weather analyses. As usual, data users beware.

Reply to  john harmsworth
April 19, 2016 5:44 pm

john, I forgot to mention that the Canadian weather station with bad temperature data is CWND at Pelly Island. You can see data from that station on WunderGround here:
https://www.wunderground.com/history/airport/CWND/2016/04/19/DailyHistory.html
It has been reporting a high of 29F or 30F and low of 29F every day for the last 30 days. Seems highly unlikely to be real temperature measurements, especially considering all the nearby stations were around 14F to 17F at last report. Whoever is responsible for data quality control for this weather station is not doing their job to take that data offline until the problem is fixed.

Reply to  oz4caster
April 20, 2016 9:55 pm

oz4caster, you're quite right about other sources of error, and weirdness in some sensors. To keep things simple, I've restricted myself to estimating the lower limit of error — that at the instrument itself.
As you rightly pointed out, there are many other sources of error that will not average away, and that must be included in any complete estimate.
We can expect that any full accounting of total error will reveal it to be very large, rendering most of the last 150 years of the air temperature record useless for unprecedentedness studies.

Steve Fraser
April 19, 2016 7:51 am

Outstanding presentation. Anyone motivated to put together a calibration suite, and go to randomly-sampled sensor locations to do comparisons? I’ve been toying with the idea of researching printed (i.e., newspaper) temperature records for a major city to see how they compare with the adjusted values.

Reply to  Steve Fraser
April 20, 2016 9:57 pm

Steve Fraser, it might be possible to do that by using the USCRN sensors as calibration thermometers for any nearby COOP stations.

April 19, 2016 8:05 am

One of the money quotes: [there were a few more]
The people compiling the global instrumental record have neglected an experimental limit even more basic than systematic measurement error: the detection limits of their instruments. They have paid no attention to it.
Resolution limits and systematic measurement error produced by the instrument itself constitute lower limits of uncertainty. The scientists engaged in consensus climatology have neglected both of them.
It’s almost as though none of them have ever made a measurement or struggled with an instrument. There is no other rational explanation for that sort of negligence than a profound ignorance of experimental methods.

pbweather
April 19, 2016 8:06 am

I would like to hear Steve Moshers view on this data or Nick Stokes.

April 19, 2016 8:12 am

When you add in the uncertainties introduced by the procedure of calculating a “global average”, you have to wonder if there is any meaning in published trends. Not to mention of course the “homogenization” and “adjustments” that occur between the readings and the average.
The gridding process used to calculate an average of randomly distributed points with huge variations in sample density is fraught with potential error. Does anyone know anything about the gridding programs they use in Climate Science? If they are anything like what we use in geophysics/geochemistry, they will contain user-defined variables that, if you change them, can substantially change the result you get. You have to wonder if there isn't a bit of tweaking involved there too, to produce the desired trends. Stuff like that can even be unconscious. Or conscious, but the perception of the conscious knowledge is suppressed by a sub-conscious desire to see the hoped-for outcome. That may sound a bit vague, but I'm trying to express tendencies I've seen in myself when playing with data sets. If you're honest, you have to sit up, take a deep breath and remind yourself that you're supposed to be rigorous.
“Confirmation Bias” is a good term for what I'm incoherently trying to describe. It's so very, very tempting when you realise that you can please the people you're working for by just a little bit of tweaking. In large organizations like those involved in Climate Science, there should be strict QA/QC protocols in place to keep it under control; I wonder if there are. After reading this post and the very illuminating comments, it looks like they don't even have QA/QC for their raw data.

Reply to  Smart Rock
April 19, 2016 12:11 pm

If GISS had strict QA/QC protocols, Jim Hansen could not have overwritten existing temperature data with his “adjusted” data. To do so, you must assume that your “adjusted” data is not only an improvement, but also that it is so close to perfection that you will never have to revisit the older data.

Don K
April 19, 2016 8:20 am

Some systematic errors can be dealt with by using anomalies. However, suppose your thermometer isn't one degree too high but reads 0.3 percent of the absolute (Kelvin) temperature too high. At -25 C it reads -24.256, and at 30 C it reads 30.909.
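A quick check of those numbers (taking the 0.3 percent to apply to the absolute, i.e. Kelvin, temperature, which reproduces the figures quoted): the offset grows with the reading, so subtracting a fixed baseline does not remove it.

```python
def reads(true_c: float, frac: float = 0.003) -> float:
    """Hypothetical thermometer that reads frac (0.3%) of the absolute temperature too high."""
    return true_c + frac * (true_c + 273.15)

for t in (-25.0, 0.0, 30.0):
    print(f"true {t:6.1f} C -> reads {reads(t):8.3f} C  (error {reads(t) - t:+.3f} C)")
```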

April 19, 2016 8:34 am

“The RM Young systematic errors mean that, absent an independent calibration instrument, any given daily mean temperature has an associated 1s uncertainty of 1±1.4 C. Figure 5 shows this uncertainty is neither randomly distributed nor constant. It cannot be removed by averaging individual measurements or by taking anomalies. Subtracting the average bias will not remove the non-normal 1s uncertainty. Entry of the RM Young station temperature record into a global average will carry that average error along with it.”
This is why I chose the method I did of looking at the surface data. Obviously, just averaging temperatures and making up what you didn't have all came up with the same basic answer.
I decided the only way to do anything about the systematic errors is to never reference any single station directly against any other station, but only against its own day-to-day and morning-to-night/night-to-morning changes (that is, the day-to-day derivative of change at that station), and then average the stations' derivatives for an area together, from 1×1 cells up to the entire world.
But the point is, if there's a warm bias during the day, there should be the opposite bias that night, not the night before, which is what you get if you split the data by clock. Each day the planet gets a dose of energy, and at night it cools off.
In the extra-tropics, the length of day changes throughout the year. So we have a daily test where the length of the input changes; did nobody ever think to look at the effect of that on the response? But I digress.
It’s the best bias removal possible for the data we have.
https://micro6500blog.wordpress.com/2015/11/18/evidence-against-warming-from-carbon-dioxide/

Reply to  micro6500
April 19, 2016 8:51 am

I forgot to add that what this method allows is to not really care what the actual temps are, because it doesn't matter; the change in temp is the only really decent number we have.
If the sensor reads +1 C high, min and max are both +1, and the systematic error is removed in the difference (equal to (Tmx day 0 - Tmn day 0) - (Tmx day 0 - Tmn day 1) = Tmn day 1 - Tmn day 0).
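A minimal sketch of that cancellation (the daily minima and the error model are hypothetical): a constant station offset drops out of the day-to-day differences exactly, while an error that varies from day to day does not.

```python
import numpy as np

rng = np.random.default_rng(2)
true_tmin = np.array([10.2, 11.0, 9.5, 12.1, 11.7])   # hypothetical true daily minima (C)

constant_offset = 1.0                                        # fixed station bias
varying_error = rng.normal(0.0, 0.5, size=true_tmin.size)    # day-to-day varying error

biased = true_tmin + constant_offset
messy = true_tmin + constant_offset + varying_error

print("true day-to-day change:   ", np.diff(true_tmin))
print("with constant offset only:", np.diff(biased))   # identical: the offset cancels
print("with varying error:       ", np.diff(messy))    # the differences are corrupted
```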
If there's a solar warm bias, as soon as the sun stops shining on something it's losing heat to space. It loses heat to space all the time; it's just overwhelmed by the Sun. I never realized how cold the optical window is. And remember, sure there's a notch, but surface temperatures are almost all in the window. The other clear night it was still in the 50s F here, the grass in my front yard was in the upper 30s, and the sky measured around -45 F.
You can take the measured temp, use Stefan-Boltzmann to turn it into a flux, then add the CO2 feedback (I think I saw an estimate of 22 W/m2, with a 3.7 W/m2 change), and then turn it back into a temp; it should be close.
And surface temps dump heat to space all the time, unless it's cloudy, and then the sky reading can swing 70-80 F warmer. Oh, and at -40 F, 3.7 W/m2 is about 1.8 F, so -40 with the change in CO2 forcing is still -37 F or -38 F, and with the full 22 W/m2 it's still 0 F or -10 F (it was a while back).
Again I digress ……

April 19, 2016 9:00 am

What is the error in CO2 measurements? As discussed by Greg Goodman at Judith Curry's Climate Etc. (March 9, 2016), having errors in both the independent and dependent variables causes major problems in creating and using predictive models. https://judithcurry.com/2016/03/09/on-inappropriate-use-of-least-squares-regression. In the case of the IPCC deterministic models used to forecast out on millennial time scales, aside from the cherry-picking of the data used to fit the model, the uncertainty in both the X and Y variables renders the model basically meaningless.

NanooGeek
April 19, 2016 9:39 am

Though there may not be that many Stevenson screens (CRS) still in use, I'm wondering if a different coating approach might reduce thermal emissivity more and reduce the need to recoat. The enclosure's interior would seem to be less subject to weathering, e.g. 'interior' coatings such as: http://www.solec.org/lomit-radiant-barrier-coating/lomit-technical-specifications/

Tom T
April 19, 2016 10:27 am

Wrong. Look at the figures. The systematic error of these instruments is not fixed, nor is it Gaussian.

April 19, 2016 10:43 am

I have been saying it once a week on here 😀 I agree with Lindzen: global average temp is, first of all, unknown, and secondly it is a residue, not a metric of anything.
Residues are not drivers of the system they are a remnant of, because that is not possible without temporal adjustments 🙂
I agree with the article: arguing over global temperature is akin to discussing theology.

Reply to  Mark
April 20, 2016 1:49 am

At least with theology you don’t have government funded agencies rewriting the bible/Koran/book of Mormon/whatever to support a political agenda, so I would say theology is in much better intellectual shape.

April 19, 2016 10:44 am

Global Average Temperature is a distraction nothing more, watch what the other hand is doing!

April 19, 2016 10:53 am

The uncertainty estimate developed here shows that the rate or magnitude of change in global air temperature since 1850 cannot be known within ±1 C prior to 1980 or within ±0.6 C after 1990, at the 95% confidence interval.
Is it your contention that, regarding the change in global mean temperature since 1880, 0C and +2C are reasonably and equally supported by the data?
Compared to the amount of effort and the level of detail with which the BEST team has addressed these issues, your presentation is not an improvement over them.

Janice Moore
Reply to  matthewrmarler
April 19, 2016 12:57 pm

Apparently, BEST has a lot of room for improvement…

There is a lot to like about the BEST project. …
What is less good is their mindset, which needs changing. They are using their own notions of temperature trends and consistency to fill in missing temperature measurements, and to adjust temperature measurements, which are subsequently used as if they were real temperature measurements. This is a very dangerous approach, because of its circular nature: if they adjust measured temperatures to match their pre-conceived notions of how temperatures work, then their final results are very likely to match their pre-conceived notions. …
… a change in mindset is needed, to get us away from the idea that temperatures must be adjusted, homogenised and mucked around with in order to make them match some idea of what we think they should be. Instead, we need to work with all temperature measurements unchanged. …
richardscourtney said of my original proposed system, “That is what is done by ALL the existing teams who use station data to compute global temperature.” Maybe so, but I think not. For example, as demonstrated above, my system prevents UHE from being notionally spread into non-urban areas. I am not aware of any existing system that does that. …

(Source: https://wattsupwiththat.com/2016/01/28/a-way-forward-for-best-et-al-and-their-surface-temperature-record/ )
Given Mr. Jonas’ (and others’) analysis and, moreover, given BEST man, S. M0sher’s, non-science remarks in the threads of WUWT, there is good reason to doubt the quality of the BEST product.
Re: “amount of effort and level of detail” — the key is quality of analysis, not quantity. If BEST is biased and reckless in its underlying approach, no amount of finessing the details will save it from being wrong. Further, are you bearing in mind that this is not the full version of Dr. Frank's paper? “This is a version of the talk I gave …”
Mr. Marler,
I spoke a bit sharply and I must apologize for my tone. Your rather harsh and, in my view, unfair, criticism struck a nerve. I hope that all is well with YOUR excellent data analysis. Whenever you share here what you have been working on, I am always impressed with your careful attention to detail and your scientific integrity.
Janice

Reply to  Janice Moore
April 19, 2016 1:54 pm

Janice Moore: If BEST is biased and reckless in its underlying approach, no amount of finessing the details will save it from being wrong.
BEST is not biased or reckless in its underlying approach.

Reply to  matthewrmarler
April 20, 2016 10:03 pm

matthewrmarler, it’s my contention that no one knows where the correct temperature is within the uncertainty limits.
As there are many possible temperatures, both 0 C and 2 C are equally low-probability choices.
Nothing in the BEST method removes systematic error or obviates the detection limits determined by thermometer resolution.

April 19, 2016 11:25 am

I’ve been reading about global warming since 1997.
This is one of the best articles since then, mainly because too few people write about data error — too boring I suppose?
I’m not convinced average temperature is important to know.
I’ve also never found any compelling evidence to prove the average temperature from 1880 to 2016 has a margin of error less than +/- one degree C.
Thermometers from the 1800s tend to read low = exaggerating the warming.
Most of the world was not measured then, and is still not measured.
The accuracy of the sailors throwing buckets over the side of ships to measure 70% of the planet’s surface (if that vision alone is not enough to disqualify the data!) depends mainly on whether or not the sailor hauls up the bucket and stops to have a cigarette before he measures the water temperature.
On top of huge changes in the number of measurements, changes in weather station equipment and locations, failure to send people out in the field to carefully verify station accuracy at least once a year… the same people who make the glorious confuser model climate predictions … are also responsible for the actuals … which they repeatedly change … to show more warming and better match their predictions.
And if I have not already created enough doubt about data quality, NASA reports the average temperature of our planet in hundredths of a degree C. … while saying their margin of error is +/- 0.1 degrees C. !
They must pull their margin of error numbers out of a hat … just like the IPCC does for their 95% confidence claim!
You don’t have to be a scientist to know the temperature data are a joke.
CO2 data since 1958 are probably accurate — before that I don’t know.
The political influence is so strong I predict if the surface starts cooling in the future, the books will be “cooked” to show warming is still in progress. I am not convinced leftist bureaucrats will EVER allow global cooling to be reflected in “their” average temperature data.
Remember how the “pause” was eliminated from surface data with “adjustments” … and the satellite data are now under attack ?
Apparently there are only two kinds of data for warmunists: Good data that support the climate models, and bad data to be ignored or attacked.
Leftists treat climate change as a religion — they will lie, cheat and steal to keep their CO2 boogeyman and “green” industry alive. They seize more political power to “save the planet” (i.e., tell everyone else how to live).
The climate today is the best it has been for humans and plants in at least 500 years.
More CO2 in the air is great news.
Slight warming of nighttime lows is great news.
Compiling the average temperature is a complete waste of money.
The fact that leftist politicians have destroyed the integrity of science, and the reputations of good scientists, will be a long-term problem. A problem with potentially very serious consequences for two reasons:
(1) The unjustified attack on fossil fuels is really an attack on economic growth and prosperity, and
(2) Some time in the future scientists may discover a real problem, unlike CO2, but they will be ignored because too many greedy scientists in the past took government money in return for false predictions of a coming environmental catastrophe … that will never come ( I’ve been waiting for 40 years so far … but the climate keeps getting better … and better ! ).
Climate blog for non-scientists
No ads/ No money for me.
A public service
http://www.elOnionBloggle.Blogspot.com

Reply to  Richard Greene
April 19, 2016 12:52 pm

Richard Greene – 11:25 am
Excellent summary. Many of the same things could be said for the sea level boogeyman.

JohnKnight
Reply to  Richard Greene
April 19, 2016 6:57 pm

I agree, a fine overview/comment, Richard.

Reply to  Richard Greene
April 19, 2016 10:52 pm

Nice rant Richard. Well said!

April 19, 2016 11:57 am

Thank you, Pat, for an excellent presentation on measurement error. I have had the impression that the proponents of AGW had not performed error analyses on their measurements, which you confirmed. Over a dozen years ago, in postings on Climate Audit, I was discussing measurement error with an AGW proponent. I asked him how he dealt with measurement error. He responded that they just averaged many measurements together and that reduced the error. I realized that this person had never studied error analysis. You have pointed out that things have not improved.
The warm bias you pointed out in the older SST data further emphasizes the scientific bankruptcy of Karl’s pause buster “adjustments.” The most frightening fact that you discussed is that field calibration is the exception rather than the rule. I keep wondering why people in climate science seem to ignore error analysis of their data. Do universities no longer teach error analysis? Do those in the climate science field assume that all of their instrumentation is without error? They use the presence of errors as the rationale for adjusting data, but they do not seem to understand the errors with which they are dealing.

Reply to  isthatright
April 20, 2016 10:11 pm

isthatright, thanks for your thoughtful comments. Like you, I’m totally puzzled by the choices made by the record compilers to assume all measurement error is random.
Whether they are ever taught how to assess physical error or instrumental/measurement error is a very good question.
I have yet to encounter a single climate modeler, either, who understands the first thing about it. I very much doubt they are taught anything of it.
The negligence problem perfuses consensus climatology, and I’ve published about that.

Thomas
April 19, 2016 12:01 pm

Excellent article Dr. Frank.

Reply to  Thomas
April 20, 2016 10:11 pm

Thanks, Thomas.

April 19, 2016 12:07 pm

Pat
Many thanks for posting this very interesting and informative article.

Reply to  niclewis
April 20, 2016 10:13 pm

Thanks, nic. I’m glad you passed it. 🙂

April 19, 2016 12:32 pm

“It’s almost as though none of them have ever made a measurement or struggled with an instrument. There is no other rational explanation for that sort of negligence than a profound ignorance of experimental methods….”
Nay, it is almost as though every single one of them were driven by ideology to seek justification for something outside of science: an ideological goal. That goal is socialistic in nature, to reduce and destroy liberty, to reduce and destroy prosperity. That is the only goal that is consistent with all the demands that are repeatedly made, no matter what the science says.

JohnKnight
Reply to  buckwheaton
April 19, 2016 7:10 pm

Buckwheaton,
It seems so to me as well . . I see virtually no chance that what has been going on is due to mere incompetence.

Reply to  JohnKnight
April 19, 2016 11:02 pm

I don't know, John. I feel the same as you since, to me, it seems so obvious. How could anyone be as stupid as the climate alarmists? We are, as our host so eloquently demonstrates, perfectly capable of admitting the limits of our knowledge, but for some reason we don't? Or, at the very least, for some reason some don't.
But what is that reason? A friend and mentor of mine once passed on the wisdom of his father to me. He said “Never assume malice when ignorance will suffice”. In this example, my conversations with AGW advocates have, almost across the board, led me to believe I’m in the presence of stubborn stupidity, aka “arrogance”. It has caused me despair. Arrogance and stubborn stupidity are deadly; they can’t be overcome by reason.

David A
Reply to  JohnKnight
April 20, 2016 5:22 am

John says,
“It seems so to me as well . . I see virtually no chance that what has been going on is due to mere incompetence.”
===========
Confirmation bias is a systematic error of a different breed altogether.

April 19, 2016 12:33 pm

From the World Federation of Scientists web page on Climatology:
“Climatology
Members of the Panel:
Chairman:
Christopher Essex (CANADA)
Members:
Associate Panel Members:
(Associate Panel Members are a community of scientists who provide support and expertise for the working of the Permanent Monitoring Panel.)
Summary of the Emergency
Being revised.
Priorities in dealing with the Emergency
Being revised.”
Looks like your presentation may have had some results!

Reply to  Michael Moon
April 20, 2016 10:16 pm

Michael Moon, thanks. Chris Essex has been head of that panel for some time. Apparently, at the end of the WFS 2013 meeting, he declared there is no climate emergency. So, that position has been in flux for some time. But I can hope to have helped it along! 🙂

April 19, 2016 12:55 pm

Many years ago, instrument remote readouts were analog. The display was a needle which rotated through a scale which you could read to obtain a numerical value. Anyone who has used such an analog display will quickly realize that the needle is almost never stationary over one value. The needle swings back and forth depending upon its response to the amplified analog signal that it is receiving from the sensor. This swing of the needle is an indication of the error inherent in that particular instrument.
Today, most readouts are digital. Typically they will show a number which users will dutifully record as an indication of the sensor response. The amplified sensor response may be sending the identical signal to the readout, but in the case of a digital display, the signal passes through an A/D converter and is typically damped to provide a number. The number may vary slightly, but damping will reduce the observed variation. The use of digital displays can give the impression of enhanced precision even though the sensor is producing the same signal.
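A small sketch of that damping effect (the noise level and the smoothing constant are hypothetical): an exponentially smoothed digital display shows nearly constant digits even while the underlying signal swings by half a degree, which is easily mistaken for instrumental precision.

```python
import numpy as np

rng = np.random.default_rng(3)
raw = 20.0 + rng.normal(0.0, 0.5, size=200)   # noisy analog-equivalent signal (C)

alpha = 0.02                                   # heavy exponential smoothing (damping)
display = np.empty_like(raw)
display[0] = raw[0]
for i in range(1, raw.size):
    display[i] = (1 - alpha) * display[i - 1] + alpha * raw[i]

print("raw signal spread (SD):  ", raw.std())       # ~0.5 C, the real variation
print("displayed digits spread: ", display.std())   # much smaller: looks 'precise'
```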

Bruce of Newcastle
April 19, 2016 1:03 pm

When doing work with hot aqueous solutions in our lab I’d grab half a dozen glass scientific thermometers and put them into a beaker of boiling water. Then choose the one reading 100 C. Often they would be out by a couple degrees high or low.
We’ve moved on from glass thermometers mostly, but instrument error has not been repealed.

john harmsworth
Reply to  Bruce of Newcastle
April 19, 2016 1:38 pm

Digital sensors can very easily carry errors at least as great as well-calibrated glass thermometers. Digital sensors mostly rely on circuit resistance, and circuit resistance can be affected by the wire gauge, the length of the wiring, and the quality of any connections in the circuit. All of these can be affected by the ambient temperature as well. Precisely accurate data is very hard to get. These guys aren't really trying. It's lousy science.

TDBraun
April 19, 2016 1:19 pm

What answer have the AGW supporters given to this charge?
At the conference, what counter-arguments were given?

Reply to  TDBraun
April 20, 2016 10:19 pm

TDBraun, no one disputed the systematic error part of the talk. But several people said that resolution didn’t apply in the averages of large numbers of measurements. Hence the email debate I mentioned. At the end of that, instrumental resolution applied.

April 19, 2016 1:40 pm

With all of the above taking place, they still report the hottest year/month/day etc. to 0.01 resolution. They are so smart, these people…
https://www.ncdc.noaa.gov/sotc/global/2015/8/supplemental/page-1

1sky1
April 19, 2016 1:49 pm

Systematic measurement error is indeed a highly under-appreciated problem in climate studies. Unlike random errors, which have a reasonable body of proven theory to provide error estimates, systematic errors often require great practical experience to identify properly and estimate closely. The present post provides a sobering step toward that goal, but only in the case of INDIVIDUAL measurements. Inasmuch as basic climate data consist of various AVERAGES of measurements, the large uncertainty ranges indicated herein greatly overstate the problem, denying the strong ameliorative effects of averaging sizable samples.
Contrary to the presumption here, there is no requirement that the distributions of all variables be Gaussian or identical for the Central Limit Theorem to apply to the composite mean measurement. It suffices that they be statistically independent. With station temperatures, coherent variability is dominated by diurnal and seasonal cycles, leaving a chaotic residual that is effectively a normal random variable that is virtually independent month to month. Nor is a systematic bias in mean measurement an obstacle to resolving temperature CHANGES at any given station. Absolute accuracy is not necessary; only uniformity of measurement is required to gain strong improvement of resolution in large samples.
The upshot is that average annual data at a vetted station will typically allow a Celsius resolution of a tenth or two. The caveat, however, lies in the vetting: not only for all the usual instrumentation issues, but for the general location away from UHIs that bias the apparent long-term “trend” enormously. That is the systematic error that most afflicts an overwhelmingly urban global data base.

Reply to  1sky1
April 19, 2016 6:18 pm

“Contrary to the presumption here, there is no requirement that the distributions of all variables be Gaussian or identical for the Central Limit Theorem to apply to the composite mean measurement.”
The Central Limit Theorem itself is not needed. All that is required is cancellation; mean effect of errors tends to zero, but doesn’t have to be normally distributed.
I generally agree with much of this comment. The key matter usually is how bias affects changes in temperature. Steady bias subtracts out with anomaly. Contrary to what is said, unsteady bias is a huge concern to climate scientists. It’s what homogenisation and the whole adjustment thing is about. UHI is a big concern; GISS at least tries directly to quantify it. NOAA relies more on homogenisation – also a legitimate approach. Varying bias is why people adjust for TOBS, ship-buoy differences etc.

Reply to  Nick Stokes
April 19, 2016 7:34 pm

“Contrary to what is said, unsteady bias is a huge concern to climate scientists. It’s what homogenisation and the whole adjustment thing is about. UHI is a big concern; GISS at least tries directly to quantify it. NOAA relies more on homogenisation – also a legitimate approach. Varying bias is why people adjust for TOBS, ship-buoy differences etc.”
Then why doesn’t anyone use a method that makes it irrelevant?
As opposed to using methods that are overly complex and subject to expectation bias.

Reply to  Nick Stokes
April 19, 2016 9:38 pm

To the All-Knowing Stokes,
The Central Limit Theorem, also known as the Theory of Large Numbers, only applies to repeated measurements of One Variable!!! But then you knew that.
The temperature in Kalispell is not the same variable as the temperature in Belgrade, nor the temperature in Fairbanks, nor the temperature in Bangkok.
Your grandfather would be ashamed…

Reply to  Nick Stokes
April 19, 2016 11:59 pm

“Then why doesn’t anyone use a method that makes it irrelevant?”
It’s real and it’s relevant.
“The Central Limit Theorem, also knows as the Theory of Large Numbers, only applies to repeated measurements of One Variable!!! But then you knew that.”
No, if it were so restricted, it would not be of much use. But as I said, here the Central Limit Theorem, however restricted you may think it is, is not needed.

Reply to  Nick Stokes
April 20, 2016 4:24 am

” It’s real and it’s relevant.”
Then alter your methods.

Reply to  Nick Stokes
April 20, 2016 5:37 am

““Then why doesn’t anyone use a method that makes it irrelevant?”
It’s real and it’s relevant.”
Yes, but you can get the change in temp at a station with minimal impact from it; you just have to take the anomaly of that station against itself. Instead, you blend all of these errors into both your data and your baseline, so you end up with a near-worthless mess.
And you have no attribution.
While the reality is that most of the attribution is ocean heat moving from one place to another, and the residual is so small it doesn’t even show up.
It's almost like you guys don't want to find out that the temp changes over the last 100 years were almost all natural, other than land-use changes (which have far more forcing than CO2 does, many times over).

1sky1
Reply to  Nick Stokes
April 20, 2016 5:44 pm

While there’s much lip service about UHI being “a big concern,” none of the index makers tackles that time-variable-bias problem with well-founded signal analysis methods. On the contrary, simplistic adjustments are made to foster the impression that the degree of bias is not only easily recognizable, and if not negligible, then reliably correctible. Meanwhile, a plethora of ad hoc adjustments keep pushing the apparent century-long “trend” forever upward.

Reply to  Nick Stokes
April 20, 2016 9:43 pm

Once again, to the All-Knowing Stokes,
No technical professional, none, believes that a reading given by an instrument can somehow be improved by Error Analysis. Create improved data that did not exist before by your amazingly excellent prescience, but do not claim that there is any proof whatsoever, except that you are far superior to the manufacturer of the instrument and the people who read the instrument.
No respect for the data? Just exactly what job do you have now? Noisy, great, but really???
Political bias, Bueller, Bueller, anyone? Anyone??
If you continue down this path, you will richly deserve the Limbo to which you belong.
Data? Do you know what “data” and “datum(s)” is/are?
If the instrument is wrong, deal with it as best you can, and then, wait for it, Get a Better Instrument!!!
Alter the “Data,” lose the respect of every single professional world-wide who has been trained in the fundamentals of Data.
But you are past that now, into the Media.
Congratulations, and once again, your grandfather is spinning in his grave…

Reply to  Nick Stokes
April 20, 2016 10:28 pm

Lots of hand-waving there, Nick, but no solution. The historical systematic error distributions cannot be known. There's no way to avoid large uncertainty widths in the historical record.

Reply to  Nick Stokes
April 20, 2016 10:34 pm

Nick, you claim the CLT “is not needed,” and yet it is invariably invoked as the error-removal tool in the published literature.
I had an email conversation awhile back with William Emery about error in ARGO temperatures. He referred me to a graduate student who said the Law of Large Numbers and the CLT made errors irrelevant in temperature means.
They need you, Nick.

Reply to  1sky1
April 20, 2016 10:26 pm

1sky1, the central limit theorem only allows one to accurately determine the true mean of any distribution, given a large number of estimates.
Knowing the mean does not normalize a non-normal error distribution, however.
Even if the mean of a non-normal error distribution is found and subtracted out, the error distribution retains its non-normality. The measurement uncertainty does not average away.

1sky1
Reply to  Pat Frank
April 21, 2016 2:46 pm

The major point of CLT is that summing INDEPENDENT random variables leads to a normal distribution for the sum, IRRESPECTIVE of the distribution of the individual variables. That is an essential point when considering large aggregates of instruments, or lengthy time-averages, because normality of the joint distribution of individual measurements then implies independence of individual measurements when they are uncorrelated. Lack of correlation is usually obtained when temperature measurements are suitably “anomalized.”
While I much appreciate your endeavors to explicate the vagaries of INDIVIDUAL measurements, the issue at hand in climate studies is anomaly AVERAGES, either spatial or temporal. The r.m.s. values of the latter are invariably much smaller than those of INDIVIDUAL measurements, just as the error of estimating someone's weight from a large sample of coarse measurements is much smaller than the least-count unit.
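For the weight analogy, here is a minimal sketch of the favourable case it assumes: a fixed quantity, a scale with a 1 kg least count, and independent random noise of about a kilogram jostling each reading across the graduations. Under those assumptions the mean of many coarse readings does land well inside the least-count unit; the dispute upthread is over whether air-temperature errors satisfy those assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
true_weight = 71.37      # hypothetical true value (kg)
n = 10_000

# Coarse scale: readings rounded to the nearest 1 kg, but each reading is
# perturbed beforehand by independent random noise comparable to the least count.
noise = rng.normal(0.0, 1.0, size=n)
readings = np.round(true_weight + noise)

print("single-reading least count: 1 kg")
print("mean of readings:          ", readings.mean())                 # close to 71.37
print("error of the mean:         ", readings.mean() - true_weight)   # a few hundredths
```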

Reply to  Pat Frank
April 21, 2016 9:52 pm

1sky1, your analysis includes the standard assumption that measurement error is normally distributed around a systematic offset.
The evidence presented above shows that measurement errors have non-normal distributions. There is no reason whatever to assume that combining non-normal error distributions produces a normal error distribution.
Even if one could derive a valid offset for historical global averages (a wildly optimistic idea), the uncertainty due to a non-normal error distribution would remain. The historical error distributions are unknown. The CLT does not apply to unknown error distributions; especially when available evidence shows non-normality.
The entire standard approach to global air temperature errors is promiscuous in the extreme.

1sky1
Reply to  Pat Frank
April 22, 2016 4:08 pm

Pat Frank:
Once again you overlook the important proviso that in climate studies we do not deal with individual errors, but with aggregate errors. The distribution of individual errors is immaterial to the question of average errors in a large sample, to which the CLT indeed does apply.

Reply to  Pat Frank
April 23, 2016 2:09 pm

1sky1, once again you overlook the fact that there is no reason to assume that aggregating systematically non-normal error distributions produces a normal distribution.
The known error distributions violate the assumptions of the CLT. The compilers all just go ahead and apply its formalism anyway. It’s negligence, pure and simple.

1sky1
Reply to  Pat Frank
April 23, 2016 4:16 pm

Pat Frank:
On the contrary, CLT provides a rigorous reason to expect the aggregated distribution of non-Gaussian variables to be Gaussian. That is the essential point of the theorem. The unmistakable application of it to averages of measurements, the clear issue in climate studies, is discussed here: http://www.statisticalengineering.com/central_limit_theorem.htm

1sky1
Reply to  Pat Frank
April 23, 2016 4:30 pm

Additional insight is provided here: http://davidmlane.com/hyperstat/A14043.html

Reply to  Pat Frank
April 23, 2016 6:19 pm

1sky1, thanks for the link, which says the following:

Central Limit Theorem
The central limit theorem states that given a distribution with a mean μ and variance σ², the sampling distribution of the mean approaches a normal distribution with a mean (μ) and a variance σ²/N as N, the sample size, increases.

As you know, this is equivalent to saying that there is a standard deviation of the mean equal to
σ / N^0.5 (Equation 1)
However, this is NOT the case for datasets with high Hurst Exponents, which are common in the climate world. In those datasets the standard deviation of the mean varies as
σ / N^(1 – H) (Equation 2)
where H is the Hurst Exponent. This was first observed by Koutsoyiannis, and I later derived it independently without knowing of his discovery, as discussed in my post A Way To Calculate Effective N.
So we already know that the “Central Limit Theorem” as defined above is a special case. Note that normal datasets have a Hurst Exponent of ~ 0.5. Plug that into Equation 2 above and you get equation 1 …
Note that this effect of the Hurst Exponent can make a very, very large difference in the estimation of statistical significance, I mean orders of magnitude …
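As a minimal numerical illustration of Equations 1 and 2 (a Python sketch; the values of σ, N, and H below are assumed for the example, not taken from any temperature dataset):

```python
sigma = 0.5   # assumed standard deviation of a single value (deg C)
N = 10_000    # assumed number of values going into the average
H = 0.9       # assumed Hurst Exponent; ~0.5 for uncorrelated data

# Equation 1: classical standard deviation of the mean, sigma / N^0.5
se_classical = sigma / N ** 0.5

# Equation 2: Hurst-adjusted standard deviation of the mean, sigma / N^(1 - H)
se_hurst = sigma / N ** (1.0 - H)

print(f"classical sigma/N^0.5   = {se_classical:.4f}")
print(f"Hurst     sigma/N^(1-H) = {se_hurst:.4f}")
print(f"ratio Hurst/classical   = {se_hurst / se_classical:.1f}x")
```

With those assumed numbers the Hurst-adjusted uncertainty comes out roughly forty times the classical value.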
Next, let me see if I can clarify the discussion between you and Pat Frank by means of an example.
Suppose we have 1000 measurements, each of which has an inherent error of say +1.60 / – 0.01. In other words, the measuring instrument often reads higher than the actual value, but almost never reads lower than the actual value.
What is the error if we average all of the measurements?
I believe (if I understand you) that you say that the error of the average will be symmetrical because of the CLT.
Pat (if I understand him) says that the CLT gives bounds on the error of the mean … but that doesn’t mean that the error is symmetrical.
IF that is the question under discussion, I agree with Pat. If we average all of those measurements, the chance that the calculated average is below the true average is much smaller than the chance that it is high. In fact, with that kind of measuring instrument, the resulting mean could be considered an estimate of the maximum possible value of the true mean.
And of course, that still doesn’t include the Hurst Exponent …
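A quick simulation of that thought experiment (a Python sketch; the +1.60 / −0.01 instrument is modeled, purely as an assumption, by an exponential error that almost never reads low):

```python
import numpy as np

rng = np.random.default_rng(0)

true_value = 20.0   # assumed true temperature (deg C)
n = 1000            # number of measurements, as in the example above

# Assumed error model: reads low by at most 0.01, often reads high,
# with an exponential tail averaging ~0.4 deg C (an illustrative choice).
errors = rng.exponential(scale=0.4, size=n) - 0.01

measurements = true_value + errors
mean_est = measurements.mean()
sem = measurements.std(ddof=1) / np.sqrt(n)

print(f"true value          : {true_value:.3f}")
print(f"mean of measurements: {mean_est:.3f} +/- {sem:.3f} (standard error)")
print(f"bias of the mean    : {mean_est - true_value:+.3f}")
```

The standard error shrinks as n grows, but the mean stays roughly 0.4 high: averaging tightens the estimate without pulling it toward the true value.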
Best to both of you,
w.

Reply to  Pat Frank
April 23, 2016 8:56 pm

1Sky1
You’re continuing to overlook the critical point.
Here’s what your own link says: “The CLT is responsible for this remarkable result:
The distribution of an average tends to be Normal, even when the distribution from which the average is computed is decidedly non-Normal.

The CLT says that the distribution of the average is normal. Not that the distribution itself is normal.
That misunderstanding is rife in climate science. The CLT does not say that non-normal distributions themselves become normal.
To illustrate, let’s suppose a measured magnitude, “X” with a non-normal error distribution “E.” The total error “E” includes some offset plus the distribution. Initially we don’t know the magnitude of the offset.
The initial result is written as X±e, where “e” is the empirical standard deviation of the non-normal error distribution “E.”
You sample a set of estimates of the average of “E” and plot the distribution of the estimates. The estimates of the average of “E” form a normal distribution that gives you a good estimate of the average “A_E” of the non-normal distribution, “E.”
You can now correct “X” by subtracting the error offset, “A_E,” so that corrected value of X is X’ = X-A_E.
The new error distribution around X’ is E’ = E-A_E, which is still a non-normal error distribution. The empirical standard deviation of the non-normal error distribution E’ is still “e.”
The corrected value is now X’±e, where “e” is still non-normal. That is, the uncertainty in the value of X’ is still the empirical standard deviation of a non-normal distribution of error, “e.”
That non-normal distribution “e” does not average away.
When X’ is averaged with other corrected X2’, X3’… all of which have non-normal error distributions, ±e2, ±e3 … the combined non-normal error distributions do necessarily produce a normal distribution of error. They cannot be assumed to combine into a normal distribution. They do not necessarily average to zero, and cannot be assumed to do so.
The uncertainty in the mean, X_mu, is the root-mean-square of the SDs of all the non-normal error distributions, e_mu = ±sqrt[sum over (e1^2 + e2^2 + e3^2 …)/(N-1)].
The CLT performs no magic. It does nothing to normalize non-normal distributions of error.
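A small sketch of that bookkeeping (Python; the skewed error distribution and the e1, e2, … values are invented purely for illustration): sampling pins down the offset A_E, subtracting it leaves the skew intact, and the per-record uncertainties still combine in quadrature as above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed non-normal error distribution "E": a skewed (lognormal) error
E = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)

A_E = E.mean()        # the offset: its sampled mean is well determined
E_prime = E - A_E     # corrected errors, E' = E - A_E

def skewness(x):
    # third standardized moment; 0 for a normal distribution
    return np.mean(((x - x.mean()) / x.std(ddof=1)) ** 3)

print(f"offset A_E           : {A_E:.3f}")
print(f"skew of E            : {skewness(E):.2f}")
print(f"skew of E' = E - A_E : {skewness(E_prime):.2f}  (unchanged, still non-normal)")

# Combining the standard deviations e1, e2, ... of several such distributions
# in quadrature, per the formula in the comment above (values assumed):
e = np.array([0.4, 0.5, 0.6, 0.45])
e_mu = np.sqrt(np.sum(e ** 2) / (len(e) - 1))
print(f"combined uncertainty e_mu: +/- {e_mu:.2f}")
```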

Reply to  Pat Frank
April 23, 2016 11:32 pm

Bit of a missing but important negative here.
The clause, “±e2, ±e3 … the combined non-normal error distributions do necessarily produce a normal distribution of error.” …
should be, ‘±e2, ±e3 … the combined non-normal error distributions do not necessarily produce a normal distribution of error.
Apologies if I misled or confused anyone.

1sky1
Reply to  Pat Frank
April 26, 2016 5:14 pm

A fundamental point is continually being missed here. In climate, as opposed to real-time physical-weather statistics we are always dealing with averages of averages. The monthly mean is the average of daily means at a particular station. The yearly temperature average is the average of the monthly means, whether they pertain to a single station or any aggregate spatial average thereof. Neither the Hurst exponent of the underlying of individual variables, nor the shape their ordinate-error distributions is material to the determination of such climatic means, which is the only context in which I invoked the CLT here. It’s applicability to the issues at hand should be all the more evident, when the fact that in many cases the sampling, say for May 1945, is EXHAUSTIVE, with zero sampling error. Nowhere do I claim that this changes the shape of any error-distribution.
I’m traveling and will not divert valuable time to respond to any further out-of-context castings of proven statistical ideas or my own words.

Reply to  1sky1
April 26, 2016 6:31 pm

Actually, I work directly with the NCDC-supplied daily min and max temperature records.
But you’re right that daily mean temp allows a lot of different min and max values to average to the same value.

Reply to  Pat Frank
April 29, 2016 10:04 am

1sky1, now that you have admitted, concerning taking averages, that, “ Nowhere do I claim that this changes the shape of any error-distribution. “, you have implicitly admitted that the CLT provides no grounds to assume systematic measurement error is reduced in an average.
Let me also remind you of what you wrote here, to wit, “On the contrary, CLT provides a rigorous reason to expect the aggregated distribution of non-Gaussian variables to be Gaussian. ” This earlier statement is a complete contradiction of your later claim noted immediately above.
So, you’ve diametrically changed your position while asserting it has been constant.
The error distributions of historical air temperature measurements are unknown. The best you can do with the CLT is find the true mean of the set of recorded temperatures.
However, those recorded temperatures are erroneous to an unknown degree, and additionally have an unknown uncertainty distribution. The CLT then gives you the mean of the set of erroneous temperatures. That mean is in error with respect to the unknown true mean temperature. And it has an uncertainty envelope of unknown SD and unknown shape.
The negligently few calibration experiments available provide a lower limit estimate of the unknown SD as ±0.5 C.
So, at the end, you have an incorrect mean temperature with an unknown uncertainty distribution of estimated SD so large that nothing can be said of the rate or magnitude of the 20th century air temperature change.
That’s your global average air temperature.

1sky1
Reply to  Pat Frank
April 30, 2016 3:49 pm

When words are read without understanding the underlying context and concepts, confusion prevails. There is nothing contradictory in what I have maintained throughout.
As is well-known to those who have seriously studied mathematical statistics, there is no indispensable requirement that the random variables in a sum or aggregate be identically distributed for the CLT to apply. Convergence in the mean to the normal distribution occurs for non-identical or even non-independent variables under certain conditions (Lyapunov). The essential requirement is a large number of member variables in the aggregate. Any SYSTEMATIC measurement error in any member irretrievably biases the mean of that member in a FIXED manner, without otherwise affecting its distribution.
But if a large number of such members with INDEPENDENT biases are aggregated, as in a regional or global average, then the variously fixed biases themselves will tend toward a normal distribution. Only if many members of the aggregate have a commonality of bias, such as UHI, will the aggregate bias fail to be sharply reduced by simple averaging. The effect of measurement error per se,–which is almost invariably independent, instrument to instrument and procedure to procedure–becomes miniscule in practice with large enough aggregates. And since the sample description space of monthly climatic averages is very much finite, exhaustive measurements of such leave the nearly uncorrelated year-to-year and longer variability of monthly means as intrinsic random variables of real significance in the climate signal.

Reply to  Pat Frank
May 1, 2016 12:01 pm

1sky1, you wrote, “There is nothing contradictory in what I have maintained throughout.
Let’s see: you wrote, “CLT provides a rigorous reason to expect the aggregated distribution of non-Gaussian variables to be Gaussian.
Followed by, “the shape their ordinate-error distributions is [not] material to the determination of such climatic means, which is the only context in which I invoked the CLT here. … Nowhere do I claim that this changes the shape of any error-distribution.
So you first claimed the CLT proves non-normal error distributions normalize in an aggregate, and later admit the CLT has nothing to say about error distributions.
Your own words contradict you.
You wrote, “Any SYSTEMATIC measurement error in any member irretrievably biases the mean of that member in a FIXED manner, without otherwise affecting its distribution.
That is not correct when the systematic error is due to uncontrolled variables. In such cases, the error goes as the changing variables. The error distribution then and necessarily also changes with the variables.
This is exactly the case for surface air temperature measurements. The land surface variables are wind speed and irradiance. These vary in both time and space. Therefore the error mean and the error distribution also vary in both time and space, both for single instruments over time, and for multiple instruments across space.
Real-time filtering has been introduced in order to remove this error. Real-time filtering would not be necessary at all if your claim was true, because all error would merely average away after removal of your supposedly constant error mean.
Your supposition of constant error offsets has already been demonstrated wrong in the published literature, and is just part of the tendentious assumptions made in the field that allow practitioners to discount the unknowable systematic error in the historical record.
You wrote, “But if a large number of such members with INDEPENDENT biases are aggregated, as in a regional or global average, then the variously fixed biases themselves will tend toward a normal distribution.
Also not correct. We already know the error biases are not fixed. Only the sampled distribution of the mean of the biases will tend toward a normal distribution. The biases themselves need not at all have a normal distribution.
That is, when the biases themselves are the result of varying systematic effects, the distribution of the error biases need not be normal. The fact that the CLT allows one to find the mean bias does nothing to normalize the distribution of the biases.
It’s quite clear that by “INDEPENDENT” you mean tends toward iid, e.g., “tend toward a normal distribution.” You suppose that each instrumental error distribution is iid and the aggregate of error biases also converges to iid.
Everything is iid. Isn’t that just so convenient a supposition!
That’s your assumption in a nutshell, iid über alles, and it’s unjustifiable.

1sky1
Reply to  Pat Frank
May 2, 2016 3:56 pm

Pat Frank:
On the one hand, you seem to question the central point of CLT, i.e., the asymptotic convergence in the MEAN (or aggregate) of independent random variables to the normal distribution, irrespective of their underlying ordinate distributions. On the other, you view my statement that the SHAPE of ordinate-error distributions is immaterial to the [empirical] determination of climatic [data] means as a direct contradiction. Inasmuch as the mean is a location–not a shape–parameter, your view is wholly illogical. All the more so when one recognizes that monthly means are usually determined exhaustively at any location, rather than by random sampling, leaving no sampling error.
Your appeal to ever-changing uncontrolled variables is simply a red herring, because they no longer constitute systematic, but SPORADIC errors. While bad data may be often encountered, thoroughly vetted data do not manifest such errors in practice.
Your reference to filtering confuses the issue even further, because AGGREGATE averaging of time-series is a wholly separate issue from TIME-DOMAIN averaging of physical variables with their distinctive stochastic structures, which are almost never i.i.d.
Ultimately, the proof that you severely overestimate the effect of systematic measurement errors in practice is provided by the high repeatability when comparing totally disjoint aggregates of regionally representative time series and by the high coherence found throughout all frequencies in cross-spectral analysis with satellite measurements.

Reply to  Pat Frank
May 2, 2016 9:23 pm

1sky1, you wrote, “On the one hand, you seem to question the central point of CLT, i.e., the asymptotic convergence in the MEAN (or aggregate) of independent random variables to the normal distribution, irrespective of their underlying ordinate distributions.
Really? Let’s see:
April 20, 2016 at 10:26 pm (my first comment addressed to you): “the central limit theorem only allows one to accurately determine the true mean of any distribution, given a large number of estimates.”
April 23, 2016 at 8:56 pm: “The CLT says that the distribution of the average is normal. Not that the distribution itself is normal.”
May 1, 2016 at 12:01 pm: “Only the sampled distribution of the mean of the biases will tend toward a normal distribution. … the CLT allows one to find the mean bias …”
Evidence has it that your report of my position is 180 degrees away from my actual position. Are you that careless everywhere, or just here?
You wrote, “On the other, you view my statement that the SHAPE of ordinate-error distributions is immaterial to the [empirical] determination of climatic [data] means as a direct contradiction.
Fortunately, the same set of quotes above fully refutes your second statement as well. The evidence is entirely clear that from the start I viewed the CLT as showing that sampling can produce a normal distribution about the mean of a distribution of any shape.
Re-iterating my April 23, 2016 at 8:56 pm: “The CLT says that the distribution of the average is normal. Not that the distribution itself is normal.”
That’s twice you’ve diametrically misstated my view.
More explicitly, you shifted your ground about the CLT, at first claiming it turned non-normal distributions into normal distributions, and only later correcting yourself (and then denying you did so).
And so, after shifting your own ground, you’ve attempted to shift mine. Does that seem forthright to you?
You went on to write, “your view is wholly illogical.” Given the easily verifiable evidence above of your tendentious inversions of position, one of us certainly is illogical but it’s not me.
You then wrote (following from your entirely erroneous judgements), “All the more so when one recognizes that monthly means are usually determined exhaustively at any location, rather than by random sampling, leaving no sampling error.
As we have already established, accurately determining a mean, monthly or otherwise, does nothing to remove the systematic error in the mean. Thus your point here is irrelevant.
You wrote, “Your appeal to ever-changing uncontrolled variables is simply a red herring, because they no longer constitute systematic, but SPORADIC errors.
On the contrary, calibration experiments reveal persistent, not sporadic, systematic measurement errors.
By the way, how does “SPORADIC” obviate “systematic?” Systematic errors can easily be sporadic, if the impositional variable is episodic.
While bad data may be often encountered, thoroughly vetted data do not manifest such errors in practice.
How would you know? Data contaminated with systematic error can behave just like good data. That was the very point of the opening discussion in the head-post.
I.e., “Figure 1 exemplifies the danger of systematic error. Contaminated experimental or observational results can look and behave just like good data, and can rigorously follow valid physical theory. Without care, such data invite erroneous conclusions. By its nature, systematic error is difficult to detect and remove.
Remember?
And none of the data you consider holy comes from field-calibrated instruments, so that no one has any idea of the magnitude of systematic error contamination in the record.
You wrote, “Your reference to filtering confuses the issue even further, because AGGREGATE averaging of time-series is a wholly separate issue from TIME-DOMAIN averaging of physical variables with their distinctive stochastic structures, which are almost never i.i.d.
An irrelevance again. Averaging a time series consisting of individual measured magnitudes, e.g., temperatures across a month, must take notice of the structure of the error in each point. Non-normal error in the individual points propagates into the mean and conditions that mean with an uncertainty.
Averaging an aggregate data set, in other words, is not independent of the error in the elements of the set. Your “wholly separate” is obviously wrong. I know of no area of physical science where it would hold that individual non-normal systematic measurement errors do not enter into an aggregate average.
And you just had to throw in “stochastic structures” didn’t you. You just can’t break that assumption addiction, can you. And why would you? Your career depends upon it.
Let’s note as well, that one does not average “physical variables” as you have it, but rather physical measurements. Variables, obviously, are the experimental or observational conditionals that influence measured magnitudes.
You wrote, “Ultimately, the proof that you severely overestimate the effect of systematic measurement errors in practice is provided by the high repeatability when comparing totally disjoint aggregates of regionally representative time series and by the high coherence found throughout all frequencies in cross-spectral analysis with satellite measurements.
Systematic error can never be appraised by internal comparisons. How do you know, by the way, that regionally representative time series are disjoint? According to Hansen and Lebedeff (1987) JGR 92(D11), 13,345-13,372, regional time series are highly correlated.
Satellite temperature measurements are not accurate to better than about ±0.3 C. Comparisons of satellite measurements with the surface air temperature record are a worthless indication of physical fidelity, given the extensive manipulations entered into the latter.

1sky1
Reply to  Pat Frank
May 3, 2016 4:59 pm

Pat Frank:
Sadly, you continue to pretend that I once claimed that individual non-normal error-distributions are turned into normal. Meanwhile you notably fail to cite the money quote of April 21 that prompted my extensive comments in the first place: “There is no reason whatever to assume that combining non-normal error distributions produces a normal error distribution.” You then conclude: “The CLT does not apply to unknown error distributions.” So much for your putative comprehension of convergence in the aggregate mean. The only ground I ever shifted is the pedagogical one, pointing out that climate data typically consists of averages of averages, thereby hoping that it would help clarify the issue.
The various errors of in situ thermometry are far better known than you seem aware of. Starting decades ago, numerous technical reports have thoroughly explored those errors based upon measurement schemes nearly an order of magnitude better than those found at typical stations. They invariably show fixed calibration biases and normal error distributions. Historically, Gauss’ very formulation of his distribution is empirically rooted in measurement errors. The major time-varying deterministic component is usually due to shelter deterioration over many years. The 60-day measurement comparison over a Swiss glacier that you show is pitifully short and made in a highly atypical setting; the apparent “trends” are extremely tenuous. It provides no scientific basis for any general conclusions.
Strangely, you refer to the physical effects of winds and insolation upon temperature as uncontrolled variables, ostensibly contributing to the measurement error. Inasmuch as these factors are entirely natural, common sense tells us they produce intrinsic features of the in situ temperature signal, not measurement error. What also totally escapes you is the fact that the signal variance necessarily has to rise well above the noise level (total variable error) for nearby stations to show highly correlated time-variations. Inasmuch as your claimed uncertainty levels for station records and satellite measurements roughly equal their respective r.m.s. values, the observed high correlations would be mathematically impossible.
Given the complexities of real-world systems, geophysical data acquisition, analysis and interpretation is not for novices or amateurs. The enormous strides made in our understanding of geophysical processes since WWII have relied extensively upon modern methods of signal and system analysis as an investigative tool. The very fact that you mockingly dismiss the whole concept of “stochastic structure”–which is by no means limited to the simple i.i.d. of classical statistics–speaks volumes. Your muddled notions of independent and/or disjoint sampling and measurement only multiply that volume. And your risibly cheap ad hominem about my “whole career depends upon it” convinces me that further discussion here is fruitless.

Reply to  Pat Frank
May 3, 2016 10:17 pm

1sky1, you wrote, “Sadly, you continue to pretend that I once claimed that individual non-normal error-distributions are turned into normal….
There you go shifting my ground again. Here’s what I pointed out about your claim: “Let’s see, you wrote, “CLT provides a rigorous reason to expect the aggregated distribution of non-Gaussian variables to be Gaussian. (bold added)”
Very clever of you to shift your position yet again, from claims about aggregated distributions to “individual non-normal error-distributions”, and then falsely assign it to me.
That’s two falsehoods in one sentence.
You’re displaying evidence of pathological thinking; I’ll leave the causal diagnosis to others.
You wrote, “Meanwhile you notably fail to cite the money quote of April 21 that prompted my extensive comments in the first place: “There is no reason whatever to assume that combining non-normal error distributions produces a normal error distribution.” You then conclude: “The CLT does not apply to unknown error distributions.” So much for your putative comprehension of convergence in the aggregate mean.
So, once again you claim that aggregation of non-normal error distributions converges to a normal error distribution. Wrong again.
Here’s a bit of authoritative literature for you, from V. R. Vasquez and W. R. Whiting (2006) Accounting for Both Random Errors and Systematic Errors in Uncertainty Propagation Analysis … Risk Analysis 25(6) 1669-1681 doi: 10.1111/j.1539-6924.2005.00704.x:
Experimentalists have paid significant attention to the effect of random errors on uncertainty propagation in chemical and physical property estimation. However, even though the concept of systematic error is clear, there is a surprising paucity of methodologies to deal with the propagation analysis of systematic errors. The effect of the latter can be more significant than usually expected. … as pointed out by Shlyakhter (1994), the presence of this type of error violates the assumptions necessary for the use of the central limit theorem, making the use of normal distributions for characterizing errors inappropriate. (bold added)”
You wrote, “Starting decades ago, … blah, blah, blah, … The major time-varying deterministic component is usually due to shelter deterioration over many years.
Refuted by post references 7, 8, 9, 10, 11, 12, 18, and 19. The main time-varying deterministic components of error are insolation and wind speed.
You wrote, “The 60-day measurement comparison over a Swiss glacier that you show is pitifully short …
It was a two-year experiment, representing thousands of air temperature measurements. I showed a representative part of it.
…and made in a highly atypical setting;” Right. Over snow. In the Alps. Very atypical. Corroborated in reference 10.
It provides no scientific basis for any general conclusions.” Right. Extended calibration experiments of different instruments carried out over years in multiple locales all yielding cross-verified data have no general meaning. Great thinking. You’d chuck the fundamental meaning of repeatability in science in order to save your assumptions about random error.
You wrote, “Inasmuch as [winds and insolation] are entirely natural, common sense tells us they produce intrinsic features of the in situ temperature signal, not measurement error.
Refuted by post references 7, 8, 9, 10, 11, 12, 18, and 19. Just to let you know, the sensor shields are heated by insolation and rely upon wind to exchange the atmosphere inside the shield. When wind is less than about 10 m s^-1, the inner atmosphere heats up, inducing a temperature error.
Your comment, 1sky1, shows you know nothing of meteorological air temperature measurement.
You wrote, “What also totally escapes you is the fact that the signal variance necessarily has to rise well above the noise level (total variable error) for nearby stations to show highly correlated time-variations.
That hasn’t escaped me at all. It also is not true by inspection because the same weather variables that play into air temperature also cause systematic sensor error.
…the observed high correlations would be mathematically impossible.” But not physically impossible. I already have analyses demonstrating this. I hope to publish it.
Given the complexities of real-world systems, geophysical data acquisition, analysis and interpretation is not for novices or amateurs.
This from a guy who has shown no understanding of the instruments under question or of their sources of error. Pretty rich, 1sky1.
The very fact that you mockingly dismiss the whole concept of “stochastic structure”–…
Dismissed the whole concept, did I? Peculiar, I thought the dismissal was about your tendentious assignment of stochastic to air temperature measurement errors. That was the context, wasn’t it.
You wrote, “And your risibly cheap ad hominem about my “whole career depends upon it” convinces me that further discussion here is fruitless.
Where’s the ad hominem in referring to career? I always thought “ad hominem” was ‘to the man.’ Let’s see . . . yup. Wrong again, 1sky1.
Tell me — without the assumption of random error throughout, what cosmic meaning is left in the global air temperature record? Whose careers would fall if that assumption is disproved?
That’s the difference between science and philosophy, by the way. In science, assumptions are provisional upon data. In philosophy, assumptions are invariantly retained. Your approach to the assumption of random error is in the realm of philosophy.

Reply to  1sky1
April 21, 2016 4:36 am

Pat,
“Nick, you claim the CLT “is not needed,” and yet it is invariably invoked”
OK, can you explain what it is used for?
People here are getting CLT and LOLN mixed up. The LOLN says in various ways that averaging larger samples gets you closer to a population mean. That is what we need. The CLT says that a sum or average of variables tends toward a normal distribution, even if the components are not normal. That may be nice to know, and probably holds here, but why do you need it?
As to averaging reducing uncertainty in means, I’ve worked out a real data case upthread.
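For what it is worth, the two theorems can be separated in a few lines (a Python sketch; a skewed toy population stands in for the data, purely as an assumption):

```python
import numpy as np

rng = np.random.default_rng(2)
true_mean = 1.0   # mean of the assumed skewed (exponential) population

# Law of Large Numbers: larger samples give means closer to the population mean
for n in (10, 1_000, 100_000):
    sample_mean = rng.exponential(scale=true_mean, size=n).mean()
    print(f"n={n:>7}: sample mean = {sample_mean:.4f} (population mean = {true_mean:.4f})")

# Central Limit Theorem: the distribution OF the sample means tends toward normal,
# even though the exponential population itself is strongly skewed (skew = 2).
means = rng.exponential(scale=true_mean, size=(5_000, 1_000)).mean(axis=1)
skew = np.mean(((means - means.mean()) / means.std(ddof=1)) ** 3)
print(f"skew of 5000 sample means: {skew:.3f} (near 0, i.e. near-normal)")
```

The loop is the LOLN point: bigger samples land closer to the population mean. The last block is the CLT: it describes the distribution of the sample means, not the underlying distribution.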

Reply to  Nick Stokes
April 21, 2016 9:59 pm

Nick, The CLT is used to dispense with measurement error.
The universally applied assumption is that all historical measurement errors are normally distributed with a constant offset. All sorts of pendent theorizing is made to estimate that offset. The estimate is removed, and the CLT makes all the rest of the error disappear.
Except, of course, that the assumption is unwarranted and unwarrantable.
My reply to your dispensation of resolution is also upthread.

April 19, 2016 2:13 pm

Pat,
You don’t seem to mention recording accuracy. Temperatures are now measured electronically but were previously recorded to plus or minus 0.5 degrees
http://www.srh.noaa.gov/ohx/dad/coop/EQUIPMENT.pdf page 11

Reply to  dradb
April 20, 2016 10:39 pm

Thanks, dradb. You’re right, there are lots of other sources of error.
As mentioned to oz4caster above, who made the same point, I’m just trying to estimate a lower limit of uncertainty by looking at the errors from the instruments themselves.
Everything else including recording accuracy just adds on to that.

Christopher Hanley
April 19, 2016 2:29 pm

Anyone with even a passing knowledge of world history, European exploration of Africa, Australia, the Arctic and Antarctic for instance, the vast area of Siberia, not to mention the 70% area of oceans, knows that the notion of a global average temperature anomaly to fractions of one degree C back to 1840, 1860 or even 1880 is patently ridiculous.

Reply to  Christopher Hanley
April 19, 2016 3:03 pm

Many readers might have a thermometer on their porch. If it isn’t a digital one, it will have a liquid that moves up and down in a tube with markings on the glass.
That’s all they back than.
How many of them were marked in fractions of a degree?
Yet those are the readings they are saying we are hotter than by point somethingsomething of a degree…and another point somethingsomething will be the doom of us all!
Lots of theory and speculation in the foundations of “CAGW” theory. Also lots of cracks.

Reply to  Gunga Din
April 19, 2016 3:09 pm

Sorry. Skipped a few words there but I think the readers can plug them in for themselves.
I.e., “That’s all they had back then.”
(I’d starve if I made my living as a typist!)

David A
Reply to  Christopher Hanley
April 20, 2016 5:29 am

Christopher you are correct, and it is not easy now. I drive up and down the 99 in California often. The T varies constantly, often within minutes due to micro climates EVERYWHERE.

April 19, 2016 4:19 pm

“That’s not to say the satellite measurements don’t provide some value, but it is an indication why the surface temperature data analyzed and reported by NASA, NOAA and others is viewed as the gold standard.”
Gavin Schmidt.
http://www.climatecentral.org/news/what-to-know-februarys-satellite-temp-record-20091

Christopher Hanley
Reply to  Mark M
April 19, 2016 5:58 pm

Well he would say that wouldn’t he — but he didn’t.
That’s a quote from Brian Kahn author of the article.

April 19, 2016 7:42 pm

Simply by looking at the data, and especially at how the data has been altered over the years, it is easy to determine that GISTEMP’s asserted confidence intervals (95%) are fantasy.
http://www.elcore.net/ClimateSanity/GISTEMPsOverconfidenceIntervals.html
http://www.elcore.net/ClimateSanity/GISS%20LOTI%20Changes%202007%20to%202015%20ELCore%20small.jpg

April 19, 2016 10:45 pm

Nice work Pat. I’ve been on that bandwagon myself for the past 10 years. I haven’t even approached your rigor, which I expect will be lost on non-scientists and particularly on people that don’t do experiment design.
My favorite way to communicate the problem is to ask if folks can imagine an elderly fellow wearing bi-focals and a bathrobe, reading a thermometer he got mail order from Washington D.C., in South Dakota, at 6pm, in February, during a blizzard, in 1895.
That seems to communicate the point pretty well.

Reply to  Bartleby
April 20, 2016 10:43 pm

Thanks, Bartleby. Mine is looking through distorting glasses and claiming a dark fuzzy blob is really a house with a cat in the window. 🙂

Editor
April 20, 2016 3:35 am

Pat Frank, John Kennedy has an excellent paper on the uncertainties of sea surface temperature data:
http://onlinelibrary.wiley.com/doi/10.1002/2013RG000434/full

Reply to  Bob Tisdale
April 20, 2016 10:48 pm

Thanks, Bob. 🙂 I have that paper, and it’s very useful.

garymount
April 20, 2016 4:30 am

Pat Frank, if I were to plot two separate trend lines using the data of figure 12 on the right, one of which creates the steepest positive slope possible and another that selects data within the error ranges to create the steepest negative slope, would I be correct to claim that either of these slopes/trends is just as likely as the other?

Reply to  garymount
April 20, 2016 10:53 pm

garymount, yes, but each of them would be very low probability because of the very large number of equally likely possibilities.
Drawing possible trend lines would not be done by selecting data points, though, because the points have the displayed uncertainties.
It would be done by drawing some sort of physically-real-seeming line through the uncertainty envelope. And then having to contend with the fact that the drawn line is no better than any other line.

Bindidion
April 20, 2016 7:34 am

Maybe Mr Frank is willing not only to show us a chart visible to anybody on Berkeley Earth’s web site, but also to spend some more time in a deep reading of the following document:
Berkeley Earth Temperature Averaging Process
http://www.scitechnol.com/berkeley-earth-temperature-averaging-process-IpUG.pdf
Maybe he then understands that, while all his remarks look perfect, what he pinpoints nevertheless adds up to no more than a small part of the problems encountered by surface temperature measurement groups.
Wouldn’t the trends show far greater disparities among the different institutions if his claims about errors and measurement quality were so relevant? I really don’t know; maybe Mr Frank has the appropriate answer…
1. Trends since 1850 – all in °C / decade with 2σ, by Kevin Cowtan’s trend computer
– Berkeley Earth: 0.058 ± 0.006
– HadCRUT4: 0.049 ± 0.006
2. Trends since 1880
– Berkeley Earth: 0.074 ± 0.007
– HadCRUT4: 0.065 ± 0.008
– GISSTEMP: 0.070 ± 0.008
– NOAA: 0.068 ± 0.008
3. Trends since 1891
– Berkeley Earth: 0.079 ± 0.007
– HadCRUT4: 0.073 ± 0.008
– GISSTEMP: 0.078 ± 0.009
– JMA: 0.073 ± 0.001 (*)
– NOAA: 0.077 ± 0.008
(*) not available at York; LINEST used instead, since its trends are the same as Cowtan’s ± 0.001 °C for the 4 others when starting in 1891; only the 2σ differ, due to LINEST not accurately considering matters like white noise etc.
We have trend differences below 0.01 °C / decade, even though these institutions don’t share all raw data and perform highly different computations on that data…
A look at a highly scalable plot of 5 surface temperature datasets (common baseline: 1981-2010, anomalies in °C) shows us, evidently, pretty much the same:
http://fs5.directupload.net/images/160420/2sxqhvwh.pdf

Reply to  Bindidion
April 20, 2016 7:48 am

“Maybe he then understands that, while all his remarks look perfect, what he pinpoints nevertheless adds up to no more than a small part of the problems encountered by surface temperature measurement groups.
Wouldn’t the trends show far greater disparities among the different institutions if his claims about errors and measurement quality were so relevant? I really don’t know; maybe Mr Frank has the appropriate answer…”
Or maybe it shows they all use the same basis methodology and biases.
But even at this, a temperature increases does not mean it is attributable to Co2. And they are regional increases, not global.
How do you explain a well mixed gas only warming part of the planet? They hide this by making the narrative global temperature.

Bindidon
Reply to  micro6500
April 20, 2016 9:02 am

1. “Or maybe it shows they all use the same basis methodology and biases.”
Is that the ‘average skeptic answer’? Why don’t you inspect the documents these people produce, instead of supposing a priori that they do it wrong?
Bob Tisdale has published a link to Kennedy’s methodology paper; I did so for Berkeley. You are kindly invited to read the stuff, but my little finger tells me you probably won’t. It’s so easy to write a critique, isn’t it?
2. “But even at this, a temperature increases does not mean it is attributable to Co2. And they are regional increases, not global. ”
Within less than a week, you are the 3rd person who replies to one of my comments by mentioning this poor CO2 guy, although I did NOT… Slowly but surely it gets really hilarious.
Just a little hint for you: the average surface temperature increase since 1850 (about 0.55 °C/century, i.e. 0.91 °C) is still below the logarithm of CO2’s atmospheric concentration. Understood?
3. “And they are regional increases, not global. ”
Aha. That’s quite interesting: the 5 surface records I mentioned are all global records…

Reply to  Bindidon
April 20, 2016 10:59 am

Is that the ‘average skeptic answer’? Why don’t you inspect the documents these people produce, instead of supposing a priori that they do it wrong?
I decided to go look at the data myself, and the came up with my own method.
https://micro6500blog.wordpress.com/2015/11/18/evidence-against-warming-from-carbon-dioxide/

Within less than a week, you are the 3rd person who replies to one of my comments by mentioning this poor CO2 guy, although I did NOT… Slowly but surely it gets really hilarious.

Okay, you didn’t mention Co2, but those graphs are the common example of proof.

“And they are regional increases, not global. ”
Aha. That’s quite interesting: the 5 surface records I mentioned are all global records…

You didn’t mention a regional series, but I was trying to point out that global series are a good way to hide the real source of the change in climate.

Bindidon
Reply to  micro6500
April 20, 2016 9:11 am

I mean of course (still for short) the logarithm of its atmospheric concentration’s delta since that year.

Bindidon
Reply to  micro6500
April 20, 2016 1:14 pm

micro6500 April 20, 2016 at 10:59 am
“I decided to go look at the data myself…”
“Okay, you didn’t mention Co2, but…”
“You didn’t mention a regional series, but…”
Many thanks in advance to avoid such meaningless replies in the future.
Sorry: you are incredibly inexperienced.

Reply to  Bindidon
April 20, 2016 1:34 pm

Many thanks in advance to avoid such meaningless replies in the future.

You’re welcome.

Sorry: you are incredibly inexperienced.

Some things sure, others not so much.

Reply to  Bindidion
April 20, 2016 10:56 pm

Bindidion, they’re all working with the same set of numbers. Why shouldn’t they get the same result?
The fact that the numbers have large ± uncertainties associated with them does not change their recorded values.

April 20, 2016 8:41 am

Pat. Great post. It would be a great contribution if you could produce a similar analysis of the satellite data sets.

Bindidon
Reply to  Dr Norman Page
April 20, 2016 9:23 am

YES! That indeed would be welcome.
Especially when considering
– the huge differences between UAH5.6 and UAH6.0beta5 (especially in the North Pole region);
– the differences to be expected between RSS3.3 TLT and RSS 4.0 TLT;
– Kevin Cowtan’s comparison of the accuracy of surface vs. satellite temp measurements (reviewed by Carl Mears): http://www.skepticalscience.com/surface_temperature_or_satellite_brightness.html
(I know: Kevin’s paper is at SkS, and I’m by far not Nuccitelli’s greatest fan. But it’s worth reading.)

Reply to  Dr Norman Page
April 20, 2016 10:59 pm

Thanks, Norman. One thing at a time. After 3 years of steady effort, I’ve still not managed to publish my critical analysis of climate model projections.
As here in temperature-record-land, no one in climate-modeling-land is capable of evaluating physical error. Whatever they don’t understand is ipso facto wrong.

Michael Carter
April 20, 2016 1:03 pm

One of the most important posts on this site IMO
What I see so often lacking in this debate is simple common sense. Aside from events of extreme heat or cold why would anyone care about a few degrees here or there while reading and recording a mercury thermometer during the era 1850-1950? Take a look at the scales in F on the typical thermometer of the era. Add to this the influence of shrouds and wind and I doubt that readings were accurate within 3 C. Back then it did not matter!
I am not a statistician, but my gut feeling is that if all readings right up to the present day were read and recorded to the nearest 1 C we would end up with something just as accurate and useful as what is being reported: “A new global record by over 0.3 C, and over 1 C above the 19th century average!”. Yeah, right!

Bindidon
Reply to  Michael Carter
April 20, 2016 2:26 pm

“Add to this the influence of shrouds and wind and I doubt that readings were accurate within 3 C”
Maybe, but… how did so many datasets, constructed out of sometimes quite different raw data, processed by totally different algorithms, manage to be so similar?
Global Conspiracy?

Reply to  Bindidon
April 20, 2016 7:11 pm

” Maybe, but… how did so many datasets, constructed out of sometimes quite different raw data, processed by totally different algorithms, manage to be so similar?”
Because they are mostly made up of infilled and homogenized data, which overwhelm the measurements. It may even actually be warming, but little if any of it is from CO2.
So, that’s my answer to the same question as earlier, pretty much the same answer.

Reply to  Bindidon
April 20, 2016 7:28 pm

Actually, I’ve changed my mind: I think there’s been little if any warming, and when the surface stations are aggregated without infilling and homogenization, what they have recorded is indistinguishable from 0.0 °F ± 0.1 °F.

Michael Carter
Reply to  Bindidon
April 20, 2016 9:36 pm

I cannot see that there would be many data sets during the period 1850-1950. Each country had only one official met service where continuous records were kept. Modern analysis comes out similar because they consider the historical data to be accurate enough to establish the + 1 C/ century. How foolish. How could a thesis supervisor approve a work based on such fragile evidence?
As has been pointed out many times before here, the other great flaw relates to the limited distribution of data locations
Based on this very important topic (accuracy) we may even have had cooling over the last 150 years and would not know! – ya can’t measure mean global temperature change throughout the last century to within 1 C! Over the next century, maybe.

Bindidon
Reply to  Bindidon
April 21, 2016 4:25 am

Michael Carter, I have the feeling that any attempt to convince you will fail. My reaction in such cases is to do exactly the inverse 🙂
http://www.cao-rhms.ru/krut/OAO2007_12E_03.pdf
You will enjoy.

Michael Carter
Reply to  Bindidon
April 21, 2016 1:00 pm

Bindidon – convince me of what? I don’t follow. I was talking about accuracy within 1 C throughout the period 1850-1950. I thought we were on the same page 🙂
Actually, to get to the bottom of this subject what is required is a historian:
Re historical weather stations: What, where, when, why and how? Show me historical records recording 0.1 of a degree. 95+% of the time, reading the temperature was a mundane daily affair where a degree here or there did not matter to the objective – to record everyday weather. I know that in my country readings were often made by volunteers from the general community, e.g. farmers. Did they take readings at exactly the same time each day?
Add to this the physical variables relating to shrouding and there is a huge question mark over accuracy within 3 C. To calculate this one would have to build the historical shrouds at their original locations and conduct experiments. Take, for example, the latent heat effect in a windy location where brief showers are common.

Reply to  Bindidon
April 28, 2016 6:12 am

Bindidon writes:

Michael Carter, I have the feeling that any attempt to convince you will fail. My reaction in such cases is to do exactly the inverse :-)”

and cites “Global temperature: Potential measurement accuracy, stochastic disturbances, and long-term variations”, which on the surface would appear to agree in large part with Mr. Carter’s criticism and with Pat Frank’s analysis, leaving me with no understanding of what “convincing” needs to occur. It seems the three parties involved in the discussion largely agree: the purported accuracy of the historical written records is highly questionable, and conclusions concerning the effects of rising carbon dioxide in Earth’s atmosphere are without foundation.
Are we all arguing with each other for the sake of argument?

Bindidon
April 20, 2016 2:55 pm

In his message posted April 20, 2016 at 8:41 am, Dr Norman Page wrote:
It would be a great contribution if you could produce a similar analysis of the satellite data sets.
Here is a plot comparing, within the satellite era, the currently ‘coolest’ surface temperature measurement record (JMA, Tokyo Climate Center) with the currently ‘coolest’ lower troposphere brightness measurement record (UAH6.0beta5, University of Alabama at Huntsville):
http://fs5.directupload.net/images/160420/ujlplpla.jpg
I’m anything but a statistician. So I can only intuitively imagine that it might be some hard work to isolate here what’s good, what’s bad and what’s ugly in such a comparison.
Anyway: it’s amazing to see that, though
– distant in absolute temperature by about 24 °C
– recorded by completely different hardware
– processed by completely different software
the surface and the lower troposphere column show, within a 37-year-long time series of their deltas (so unluckily named “anomalies”), such a degree of similarity (deltas shown here in °C).

Bindidon
Reply to  Bindidon
April 20, 2016 3:01 pm

The two thick lines in red (UAH) and blue (JMA) are the 37 month running means of the corresponding plotted monthly data.

Reply to  Bindidon
April 20, 2016 3:32 pm

See Fig 5 at
http://climatesense-norpag.blogspot.com/2016/03/the-imminent-collapse-of-cagw-delusion.html
to see what is going on based on the RSS data.
Just add the two trends in Fig 5 to your UAH data, showing the millennial peak at 2004, and cut off at 2014. The current El Nino is a temporary aberration which obscures the trend.

Reply to  Bindidon
April 28, 2016 6:28 am

I would be tempted to say your chart gives good evidence that there is a high degree of agreement between the JMA record and the UAH satellite record, such a degree that it makes very good sense to consider the satellite record sound and systematically superior to the ground-based measurement method simply because:
a) It is regularly calibrated and so, internally consistent.
b) it has true global coverage with no necessity to infill or otherwise approximate.
c) the exact same instrument is used to collect all data, removing error and uncertainty.
For all of these reasons (and certainly more), it would seem the argument against standardizing on these data is very weak?

Bindidon
Reply to  Bindidon
April 20, 2016 4:15 pm

Oops?! Where is my reply to Dr Norman Page? Evaporated?

Bindidon
Reply to  Bindidon
April 21, 2016 2:38 am

My experience shows rather the other way round: many people persist in explaining nearly everything with millennial, centennial and other cycles, or even combinations of them (Loehle & Scafetta are a pretty good example of that).
And these observations, based on nothing more accurate than other observations, are imho exactly what “obscures the trend”.
Moreover, it has long been known that trends over time periods as short as the one you presented (2004-2014) have little significance: the standard error associated with these trends is too high.

Bindidon
April 20, 2016 4:12 pm

Pat Frank
I can understand your dissatisfaction with the potential inaccuracy of surface temperature measurements.
But as we know, everything has its counterpart. What about you having a look at
http://www.moyhu.blogspot.de/2016/04/march-giss-down-006-hottest-march-in.html
where you can see within the comment thread what really happens with unusually hot temperatures actually recorded in the Arctic…
Look at this amazing discussion between Nick Stokes and commenters Olof R and Kevin Cowtan!

Reply to  Bindidon
April 23, 2016 2:17 pm

There’s the temperature record, and then there’s the cause of that record, Bindidon. It’s the cause that powers the debate about AGW. Absent any evidence that air temperatures are influenced by human GHG emissions, what’s the cosmic importance of a couple tenths of a degree?
Apart from that, look again at the plot at your site. Where are the physical error bars? How do you or anyone else know that any of the differences are physically significant?

Reply to  Bindidon
May 1, 2016 5:12 pm

bindidon,
Cowtan & Way have been so thoroughly deconstructed that you’re only being amusing by mentioning them. Put their names in the search box, and get educated.
And I noticed that you claimed Antarctic temperatures have been rising. That’s about as accurate as your other comments.

basicstats
April 21, 2016 6:18 am

“Central Limit Theorem is adduced to assert that they average to zero.”
There seems to be confusion over the CLT. It provides assumptions under which a sample mean has an approximately normal (gaussian) probability distribution. This is more than is required to justify using averages of large samples to eliminate random error. A Law of Large Numbers will suffice for that. Basically, LLNs say the sample mean converges to the population mean with increasing sample size. Nothing about probability distributions.
The key features required for LLN are independent (or comparable) observations having the same expected value. As with other kinds of error in temperature measurement, these assumptions obviously do not apply to systematic error of the kind described in the post. This is not to say that large sample averages (plus anomalies) will not knock out a lot of measurement error. Just not to the claimed uncertainty limits.
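A minimal sketch of that last caveat (Python; the noise level and the bias are assumed numbers, not station data): a large-sample average removes the independent random error but leaves a shared systematic error untouched:

```python
import numpy as np

rng = np.random.default_rng(3)

true_temp = 15.0   # assumed true value (deg C)
n = 50_000         # number of readings averaged

random_error = rng.normal(0.0, 0.5, size=n)   # independent, zero-mean noise
systematic_error = 0.3                        # an assumed shared bias (deg C)

readings = true_temp + random_error + systematic_error
print(f"average of readings: {readings.mean():.3f}")
print(f"true value         : {true_temp:.3f}")
print(f"residual error     : {readings.mean() - true_temp:+.3f}  (~ the 0.3 bias)")
```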

April 21, 2016 7:42 pm

It is a very common technique to use oversampling and decimation to improve the resolution of analog to digital converters, that is to use a converter with maybe 10 bit resolution to achieve 16 bit precision. That is quite similar to what is being discussed here.
http://www.atmel.com/images/doc8003.pdf
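A toy version of that technique (a Python sketch; the signal, the quantization step, and the noise level are all assumed) shows both what oversampling buys and what it cannot buy:

```python
import numpy as np

rng = np.random.default_rng(4)

true_signal = 3.137   # assumed slowly varying quantity
lsb = 0.1             # assumed quantization step of the coarse converter
n_oversample = 4096   # raw samples averaged per output value

# Noise (dither) of roughly one LSB is the standard precondition for this to work.
noisy = true_signal + rng.normal(0.0, lsb, size=n_oversample)
quantized = np.round(noisy / lsb) * lsb   # coarse ADC readings
decimated = quantized.mean()              # oversample-and-average

print(f"single coarse reading: {quantized[0]:.1f}")
print(f"decimated estimate   : {decimated:.4f}  (true value {true_signal})")

# What averaging cannot fix: an unknown calibration offset in the front end.
offset = 0.25   # assumed uncorrected offset
biased = np.round((noisy + offset) / lsb) * lsb
print(f"with a 0.25 offset   : {biased.mean():.4f}  (still ~0.25 high)")
```

The averaging recovers resolution well below one quantization step only because the input carries about a step’s worth of random noise, and it does nothing about the assumed calibration offset, which is the distinction drawn in the replies that follow.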

Reply to  Eli Rabett
April 22, 2016 9:04 am

Not correct, Eli. The point at issue is the accuracy of the waveform itself as a representation of physical reality, not whether oversampling the waveform can reproduce the waveform.

Reply to  Eli Rabett
April 24, 2016 5:56 pm

Eli Rabett April 21, 2016 at 7:42 pm

It is a very common technique to use oversampling and decimation to improve the resolution of analog to digital converters, that is to use a converter with maybe 10 bit resolution to achieve 16 bit precision. That is quite similar to what is being discussed here.

Let me see if I can clarify this with a simple example of measurement error. Note that we have a variety of possible situations, and I list some of them below in order of what is generally increasing uncertainty.
1. Unvarying data, same instrument, repeated measurements. This is the best case we can hope for. An example would be repeated measurements of the length of a board using the same tape measure.
2. Unvarying data, different instruments, repeated measurements. An example would be repeated measurements of the length of a board using different tape measures.
3. Varying data, same instrument. An example would be measuring the lengths of a string of different boards as they come off some cutting machine using the same tape measure.
4. Varying data, different instruments. An example would be measuring the lengths of a string of different boards as they come off the cutting machine using different tape measures.
5. Different data, different instruments. An example would be measuring the lengths of a string of different boards as they come off identical cutting machines in a number of factories using different tape measures.
Let me discuss the first situation, as it is the simplest. Let’s say we have an extremely accurately marked tape measure that reads to the nearest millimetre, and we’re reading it by eye alone, no magnification. Now, by squinting, I could possibly estimate another decimal point.
Let’s say for the discussion that the board is 314.159625 millimetres long.
So I measure the board, and I estimate it at 314 and a quarter mm. I hand the tape measure to the next person, they say 314.1 mm. The next person says 314 and a half mm. I repeat the experiment until 10,000 people have read the length. Many of them round it off to one half simply because it is between two marks on the tape. Others estimate it to the nearest quarter, others give a variety of different decimal answers.
Then I take a look at the mean (average) of the estimates of the length. Since lots of folks said 314 and a half, the estimate is high. I get say 314.246152, with a standard error which can be calculated as the standard deviation of the estimates (which will be on the order of 0.25 mm) divided by the square root of the number of trials. This gives us an answer of 314.246152 mm, with an uncertainty of ± 0.0025 mm.
Here is the important point. This tiny uncertainty is NOT the uncertainty in the actual length of the board. What we have measured is the uncertainty of our ESTIMATE of the length of the board. And in fact, in this example the true length of the board is far, far outside the indicated and correctly calculated uncertainty in the estimate.
I have simplified my example to highlight the difference between the uncertainty in our ESTIMATES, and the uncertainty in the ACTUAL LENGTH.
The uncertainty in our ESTIMATES can be reduced by making more measurements … but even with 10,000 repeated measurements we can’t measure to two thousandths of a millimetre by eye.
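A quick simulation of the board example (Python; the rounding habits of the 10,000 readers are modeled with assumed probabilities) makes the two different uncertainties visible:

```python
import numpy as np

rng = np.random.default_rng(5)

true_length = 314.159625   # mm, as in the example
n = 10_000                 # number of readers

# Assumed reader behaviour: a quarter simply call it "314 and a half" because
# the end of the board sits between two marks; the rest squint and report a
# value near the true length to the nearest 0.1 mm.
says_half = rng.random(n) < 0.25
squint = np.round((true_length + rng.normal(0.0, 0.1, size=n)) * 10) / 10
estimates = np.where(says_half, 314.5, squint)

mean_est = estimates.mean()
sem = estimates.std(ddof=1) / np.sqrt(n)
print(f"mean of estimates: {mean_est:.4f} +/- {sem:.4f} mm")
print(f"true length      : {true_length:.4f} mm")
print(f"difference       : {mean_est - true_length:+.4f} mm (far outside the +/-)")
```

With these assumed numbers the mean of the estimates comes out near 314.25 mm with a formal uncertainty of a few thousandths of a millimetre, yet it sits nearly a tenth of a millimetre from the true length: the tight interval belongs to the estimate, not to the board.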
Finally, let me repeat that I’m discussing the simplest situation, repeated measurement of an unvarying length with the same instrument. As we add in complications, it can only increase the uncertainty, never decrease it. So moving to say measuring an unvarying length with different instruments, we have to add in instrumental uncertainty. Then when we go to measuring a varying length with different instruments, yet more uncertainty comes in.
This leads to my own personal rule of thumb, which is that I’m extremely suspicious of anything more than a one-decimal-point increase in accuracy from repeated measurements. If the tape measure is graduated in 1 mm increments, on my planet the best we can hope for is that we can get within a tenth of a millimetre by repeated measurements … and we may not even be able to get that. Here’s why:
You may recall the concept of “significant digits”. From Wolfram Mathworld:
Significant Digits

When a number is expressed in scientific notation, the number of significant digits (or significant figures) is the number of digits needed to express the number to within the uncertainty of calculation. For example, if a quantity is known to be 1.234+/-0.002, four figures would be significant
The number of significant figures of a multiplication or division of two or more quantities is equal to the smallest number of significant figures for the quantities involved. For addition or subtraction, the number of significant figures is determined with the smallest significant figure of all the quantities involved. For example, the sum 10.234+5.2+100.3234 is 115.7574, but should be written 115.8 (with rounding), since the quantity 5.2 is significant only to +/-0.1.

Note that this means that when we consider significant digits, the average of say a week’s worth of whole-degree maximum temperatures of say (14,17,18,16,18,14,14) is not 15.857 ± 0.704 …
It is 16.
Now, like I said, my rule of thumb might allow a value of 15.9 … but I’d have to check the Hurst Exponent first.
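The arithmetic, worked out (a Python sketch; nothing here beyond the seven values above):

```python
import math

temps = [14, 17, 18, 16, 18, 14, 14]   # whole-degree daily maxima

n = len(temps)
mean = sum(temps) / n
sd = math.sqrt(sum((t - mean) ** 2 for t in temps) / (n - 1))
sem = sd / math.sqrt(n)

print(f"naive mean +/- standard error          : {mean:.3f} +/- {sem:.3f}")
print(f"reported to the precision of the data  : {round(mean)}")       # 16
print(f"one extra digit at most (rule of thumb): {round(mean, 1)}")    # 15.9
```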
Best to all,
w.

Reply to  Willis Eschenbach
April 24, 2016 8:00 pm

” Note that this means that when we consider significant digits, the average of say a week’s worth of whole-degree maximum temperatures of say (14,17,18,16,18,14,14) is not 15.857 ± 0 .704 …
It is 16.”
What would you define the average difference as?
0 +/- 1
or
0 +/- 0.378 (i.e., 1/7^0.5)
or something else?
And then, to make it more specific (to my method), imagine you are measuring the growth rate of, say, a bamboo stalk, where you are not actually making a cut, but you measure the entire length every day to, say, calculate the monthly growth rate. Are the subsequent measurements more accurate because they are correlated?

Reply to  Willis Eschenbach
April 24, 2016 8:11 pm

” (14,17,18,16,18,14,14) ”
To make sure I explained the above in a cogent manner: based on your numbers, each day you measure
14
31
49
65
83
97
111
So while 49 is +/- 1 on the day you measure 49, the 31 from the prior day can’t still have its own +/- 1 error while 49 is +/- 1.

Editor
April 26, 2016 6:57 pm

1sky1 April 26, 2016 at 5:14 pm

A fundamental point is continually being missed here. In climate, as opposed to real-time physical-weather statistics we are always dealing with averages of averages. The monthly mean is the average of daily means at a particular station. The yearly temperature average is the average of the monthly means, whether they pertain to a single station or any aggregate spatial average thereof.

While that is generally true, I don’t see people missing it … this is why I ask people to quote what they object to.

Neither the Hurst exponent of the underlying of individual variables, nor the shape their ordinate-error distributions is material to the determination of such climatic means, which is the only context in which I invoked the CLT here.

As the person who brought up the Hurst Exponent, it is not “material to the determination of such climatic means”. However, it is applicable to the CLT, as I demonstrated and linked to above.

It’s applicability to the issues at hand should be all the more evident, when the fact that in many cases the sampling, say for May 1945, is EXHAUSTIVE, with zero sampling error. Nowhere do I claim that this changes the shape of any error-distribution.

I have no clue what you are talking about. The sampling of WHAT was exhaustive in May 1945? Again, a quote or link might make your claim understandable … or not.

I’m traveling and will not divert valuable time to respond to any further out-of-context castings of proven statistical ideas or my own words.

You say that as though we care about your oh-so “valuable time” … in my opinion, the most valuable use of your time would be to never post here again. Your vague claims are tedious and boring and you seem unable to respond directly to anyone’s direct points. Instead you wave your hands, tell us once again how smart and important you are, make meaningless uncited unreferenced statements, whine that we’re wasting your valuable time, and contribute nothing in the process.
w.