Can Temperature Adjustments Right a Wrong?

Guest Post by John Goetz

Adjustments to temperature data continue to receive attention in the mainstream media and science blogs. Zeke Hausfather wrote an instructive post on the Climate Etc. blog last month explaining the rationale behind the Time of Observation (TOBS) adjustment. Mr. Hausfather pointed to the U.S. Climate Reference Network (CRN) as a source of fairly pristine data that can be used to analyze TOBS. Examination of the CRN data leaves no doubt that the time of observation affects the minimum, maximum, and average temperature recorded on a given day. Changing the TOBS one or more times during a station’s history can also affect the station’s temperature trend.

Temperature adjustments have bothered me not because they are made, but because there is a broad assumption that they skillfully fix a problem. Somehow, climate scientists are capable of adjusting oranges into apples. However, when adjustments are made to temperature data – whether to correct for TOBS, missing data entries, incorrect data logging, etc. – we are no longer left with data. We are left instead with a model of the original data. As with all models, there is a question of how accurately that model reflects reality.

After reading Mr. Hausfather’s post, I wondered how well the TOBS adjustments corrected the presumably flawed raw temperature data. In the process of searching for an answer, I came to the (preliminary) conclusion that TOBS and other adjustments are doing nothing to bring temperature data into clearer focus so that global temperature trends can be calculated with the certainty needed to round the results to the nearest hundredth of a degree C.

The CRN station in Kingston, RI is a good place to examine the efficacy of the TOBS adjustment. This is because it is one of several CRN pairs around the country. Kingston 1 NW and Kingston 1 W are CRN stations located in Rhode Island and separated by just under 1400 meters. Also, a USHCN station that NOAA adjusts for TOBS and later homogenizes is located about 50 meters from Kingston 1 NW. The locations of the stations can be seen on the following Google Earth image. Photos of the two CRN sites follow – Kingston 1 W on top and Kingston 1 NW on the bottom (both courtesy NCDC).

Kingston_Stations

Locations of Kingston, RI USHCN and CRN Stations

Slide 1

Kingston CRN 1 W

Slide 1

Kingston CRN 1 NW

The following images are of the Kingston USHCN site from the Surface Stations Project. The project assigned the station a class 2 rating for the time period in question, 2003 – 2014. Stations with a class 1 or class 2 rating are regarded as producing reliable data (see the Climate Reference Network Rating Guide – adopted from NCDC Climate Reference Network Handbook, 2002, specifications for siting (section 2.2.1) of NOAA’s new Climate Reference Network). Only 11% of the stations surveyed by the project received a class 1 or 2 rating, so the Kingston USHCN site is one of the few regarded as producing reliable data. Ground level images by Gary Boden, aerial images captured by Evan Jones.

KINGSTON_ RI_ Proximity

Google Earth image showing locations of USHCN station monitoring equipment.

KINGSTON_ RI_ Measurement_001

Expanded Google Earth image of USHCN station. Note the location of the Kingston 1 NW CRN station in the upper right-center.

Kingston_looking_south

Kingston USHCN station facing south.

Kingston_looking_north

Kingston USHCN station facing north. The Kingston 1 NW CRN station can be seen in the background.

CRN data can be downloaded here. The download is cumbersome because each year of data is stored in a separate directory, and each file within a directory represents a different station. Fortunately, the file names are descriptive, showing the state and station name, so locating the two stations used in this analysis is straightforward. After downloading each year’s worth of data for a given station, the files must be concatenated into a single file for analysis.
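As a rough sketch of that concatenation step (the directory layout of one folder per year is taken from the description above; the station-name substring and file extension are assumptions, not the exact NCDC naming scheme):

```python
from pathlib import Path

def concatenate_station_files(base_dir, station_substring, out_path):
    """Collect one station's yearly CRN files (one directory per year)
    and concatenate them into a single file for analysis."""
    base = Path(base_dir)
    # Yearly subdirectories (e.g. 2003, 2004, ...) each hold one file
    # per station; sorting keeps the years in chronological order.
    yearly_files = sorted(
        f for f in base.glob("*/*.txt") if station_substring in f.name
    )
    with open(out_path, "w") as out:
        for f in yearly_files:
            out.write(f.read_text())
    return len(yearly_files)
```

Something like `concatenate_station_files("crn_data", "Kingston_1_NW", "kingston_1nw_all.txt")` would then produce the single per-station file used in the rest of the analysis.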

USHCN data can be downloaded here. The raw, TOBS, and homogenized (52i) files must be downloaded and unzipped into their own directories. All data for a station is found in a single file in the unzipped directories. The Kingston USHCN data has a file name that begins with USH00374266.

Comparison of Kingston 1 NW and Kingston 1 W Temperatures

Both Kingston CRN stations began recording data in December, 2001. However, the records that month were incomplete (more than 20% of possible data missing). In 2002, Kingston 1 NW reported incomplete information for May, October, and November while Kingston 1 W had incomplete information for July. Because of this, CRN data from 2001 and 2002 are not included in the analysis.

The following chart shows the difference in temperature measurements between Kingston 1 NW and Kingston 1 W. The temperatures were determined by taking the average of the prior 24-hour minimum and maximum temperatures recorded at midnight. The y-axis is shown in degrees C times 100. The gray range shown centered at 0 degrees C is 1 degree F tall (+/- 0.5 degrees F). I put this range in all of the charts because it is a familiar measure to US readers and helps put the magnitude of differences in perspective.
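The averaging described above can be sketched in a few lines (the station inputs here are hypothetical lists of each day's prior-24-hour hourly readings; this is an illustration of the method, not the actual processing code):

```python
def daily_minmax_average(hourly_temps):
    """Midnight min/max average: the mean of the minimum and maximum
    of the prior 24 hours' temperature readings (degrees C)."""
    return (min(hourly_temps) + max(hourly_temps)) / 2.0

def monthly_difference_c_x100(station_a_days, station_b_days):
    """Monthly mean of the daily min/max averages for each station,
    differenced and scaled to degrees C times 100 (the chart units)."""
    mean_a = sum(daily_minmax_average(d) for d in station_a_days) / len(station_a_days)
    mean_b = sum(daily_minmax_average(d) for d in station_b_days) / len(station_b_days)
    return round((mean_a - mean_b) * 100.0, 1)
```

In the chart units, the gray band then corresponds to values between -28 and +28 (i.e. +/- 0.28 degrees C, or half a degree F either side of zero).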

Figure_1

The y-axis units are degrees C times 100. The gray band has a y-dimension of 1 degree F, centered on 0.

Given the tight proximity of the two stations, I expected their records to track closely. I found it somewhat surprising that 22 of the months – or 15% – differed by the equivalent of half a degree F or more. This makes me wonder how meaningful (not to say accurate) homogenization algorithms are, particularly ones that make adjustments using stations up to 1,200 km away. With this kind of variability occurring less than a mile apart, does it make sense to homogenize using a station 50 or 100 miles away?

Comparison of Kingston 1 NW and Kingston 1 W Data Logging

A partial cause of the difference is interruption in data collection. Despite the high-tech equipment deployed at the two sites, interruptions occurred. Referring to the previous figure, the red dots indicate months when 24 or more data hours were not collected. The interruptions were not continuous, representing a few hours of missing data here and a few there. The two temperature outliers appear to be largely due to 79 and 68 hours of missing data, respectively. However, not all differences can be attributed to missing data.

In the period from 2003 through 2014, the two stations recorded temperatures during a minimum of 89% of the monthly hours, and most months had more than 95% of the hours logged. The chart above shows that calculating a monthly average while missing 10-11% of the data can produce a result of questionable accuracy. However, NOAA will calculate a monthly average for GHCN stations missing up to nine days’ worth of data (see the DMFLAG description in ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2.5/readme.txt). Depending on the month’s length, GHCN averages will be calculated despite missing up to a third of the data.
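The nine-missing-days rule can be sketched as a simple completeness gate (a minimal illustration of the DMFLAG threshold described above, not NOAA's actual code; the function name and return convention are my own):

```python
def monthly_mean(daily_values, days_in_month, max_missing_days=9):
    """Compute a monthly mean only if no more than max_missing_days
    of daily data are absent; otherwise flag the month as incomplete.
    The nine-day default follows the DMFLAG rule in the USHCN readme."""
    missing = days_in_month - len(daily_values)
    if missing > max_missing_days:
        return None  # month rejected as incomplete
    # Note: the mean is taken over only the days that exist, so a
    # 21-day February average and a 21-day March average carry very
    # different fractions of the month's actual data.
    return sum(daily_values) / len(daily_values)
```

For a 28-day month, nine missing days means averaging over barely two-thirds of the data, which is the point made above.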

Comparison of Kingston USHCN and CRN Data

To test the skill of the TOBS adjustment NOAA applied to the Kingston USHCN site, a synthetic TOBS adjustment for the CRN site was calculated. The B91 forms for Kingston USHCN during 2003-2014 show a 4:30 PM observation time. Therefore, a synthetic 4:30 PM CRN observation was created by averaging the 4:00 PM and 5:00 PM observation data. The difference between the USHCN raw data and the synthetic CRN 4:30 PM observation is shown in the following figure. Despite a separation of approximately 50 meters, the two stations produce very different results. Note that 2014 data is not included, because the 2014 USHCN data was incomplete at the time it was downloaded.
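A minimal sketch of the synthetic 4:30 PM observation, assuming an hourly series indexed so that a given index corresponds to the 4:00 PM reading (the indexing convention is an assumption for illustration; the 24-hour lookback matches the author's description in the comments below):

```python
def trailing_minmax_avg(temps_by_hour, obs_index):
    """Min/max average over the 24 hourly readings ending at obs_index,
    i.e. looking backward 24 hours from the observation time."""
    window = temps_by_hour[obs_index - 23 : obs_index + 1]
    return (min(window) + max(window)) / 2.0

def synthetic_430pm(temps_by_hour, idx_4pm):
    """Synthetic 4:30 PM observation: the mean of the 4:00 PM and
    5:00 PM trailing 24-hour min/max averages."""
    return (trailing_minmax_avg(temps_by_hour, idx_4pm)
            + trailing_minmax_avg(temps_by_hour, idx_4pm + 1)) / 2.0
```

This assumes at least 24 hours of data precede the 4:00 PM index; a production version would need to handle the missing-hour gaps discussed earlier.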

Figure_2

The y-axis units are degrees C times 100. The gray band has a y-dimension of 1 degree F, centered on 0.

Although the data at the time of observation is very different, perhaps the adjustment from midnight (TOBS) is similar. The following figure shows the TOBS adjustment amount for the Kingston USHCN station minus the TOBS adjustment for the synthetic CRN 4:30 PM data. The USHCN TOBS adjustment amount was calculated by subtracting the USHCN raw data from the USHCN TOBS data. The CRN TOBS adjustment amount was calculated by subtracting the synthetic CRN 4:30 PM data from the CRN midnight observations. As can be seen in the following figure, TOBS adjustments to the USHCN data are very different from what would be warranted by the CRN data.
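The two subtractions described above, combined into one differencing step, can be written as follows (a sketch of the comparison for a single month; the function and argument names are my own):

```python
def tobs_adjustment_difference(ushcn_raw, ushcn_tobs, crn_midnight, crn_synth_430):
    """Difference between the TOBS adjustment NOAA applied to the USHCN
    station and the adjustment the nearby CRN data would warrant,
    returned in degrees C times 100 (the chart units).
    USHCN adjustment = TOBS-adjusted value minus raw value.
    CRN adjustment   = midnight observation minus synthetic 4:30 PM value."""
    ushcn_adj = ushcn_tobs - ushcn_raw
    crn_adj = crn_midnight - crn_synth_430
    return round((ushcn_adj - crn_adj) * 100.0, 1)
```

A value near zero would mean the applied adjustment matched what the pristine CRN record implies; the figure shows it generally does not.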

Figure_3

The y-axis units are degrees C times 100. The gray band has a y-dimension of 1 degree F, centered on 0.

Adjustment Skill

The best test of adjustment skill is to take the homogenized data for the Kingston USHCN station and compare it to the midnight minimum / maximum temperature data collected from the Kingston CRN 1 NW station located approximately 50 meters away. This is shown in the following figure. Given the differences between the homogenized data from the USHCN station and the measured data from the nearby CRN station, it does not appear that the combined TOBS and homogenization adjustments produced a result that reflected real temperature data at this location.

Figure_4

The y-axis units are degrees C times 100. The gray band has a y-dimension of 1 degree F, centered on 0.

Accuracy of Midnight TOBS

Whether the minimum and maximum temperatures are read at midnight or some other time, they represent just two samples used to calculate a daily average. The most accurate method of calculating the daily average temperature would be to sample continuously and average over all samples collected during the day. The CRN data used here reports one temperature per hour, so 24 samples are available each day. Averaging the 24 samples collected during the day will give a more accurate measure of the day’s average temperature than simply looking at the minimum and maximum for the past 24 hours. This topic was covered in great detail by Lance Wallace in a guest post two and a half years ago. It is well worth another read.

The following chart shows the difference between using the CRN hourly temperatures to calculate the daily average, and the midnight minimum / maximum temperature. The chart tends to show that the hourly temperatures would produce a higher daily average at this station.
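The comparison behind this chart reduces to one small function (a sketch for a single day's 24 hourly readings; the scaling matches the chart units):

```python
def hourly_vs_minmax_difference(hourly_temps):
    """Difference between the mean of all hourly samples and the
    midnight min/max average, in degrees C times 100. A positive value
    means the full hourly average runs warmer than the min/max midpoint."""
    hourly_mean = sum(hourly_temps) / len(hourly_temps)
    minmax_mean = (min(hourly_temps) + max(hourly_temps)) / 2.0
    return round((hourly_mean - minmax_mean) * 100.0, 1)
```

The min/max midpoint weights the day's single coldest and warmest hours equally, so any skew in the diurnal curve (a short cold snap before dawn, say) pulls it away from the true mean, which is what the chart illustrates.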

Figure_5

The y-axis units are degrees C times 100. The gray band has a y-dimension of 1 degree F, centered on 0.

Discussion

Automated methods to adjust raw temperature data collected from USHCN stations (and by extension, GHCN stations) are intended to improve the accuracy of regional and global temperature calculations to, in part, better monitor trends in temperature change. However, such adjustments show questionable skill in correcting the presumed flaws in the raw data. When comparing the raw data and adjustments from a USHCN station to a nearby CRN station, no improvement is apparent. It could be argued that the adjustments degraded the results. Furthermore, additional uncertainty is introduced when monthly averages are computed from incomplete data. This uncertainty is propagated when adjustments are later made to the data.

A Note on Cherry-Picking

Some will undoubtedly claim that I cherry-picked the data to make a point, and they will be correct. I specifically looked for the closest-possible CRN and USHCN station pairs, with the USHCN station having a class 1 or 2 rating. My assumption was that their differences would be minimized. The fact that a second CRN station was located less than a mile away cemented the decision to analyze this location. If anyone is able to locate a CRN and USHCN pair closer than 50 meters, I will gladly analyze their differences.

================================

Edited to add clarification on y-axis units and meaning of the gray band to the description of each figure.

Edited to add links to the CRN and USHCN source data on the NCDC FTP site.


276 thoughts on “Can Temperature Adjustments Right a Wrong?”

    • Whether the minimum and maximum temperatures are read at midnight or some other time, they represent just two samples used to calculate a daily average.
      ============
      Calculating min and max temperatures at midnight or some other time of day is nonsense. Six’s min/max logging thermometer was invented in 1782. Min and max temperatures do not happen at any specific, consistent time during the day or night.
      http://en.wikipedia.org/wiki/Six%27s_thermometer

      • In any case, a min_max record meets the required Nyquist criterion for validity in sampled data systems ONLY if the cyclic data function is a pure sinusoidal function.
        If not, then there must be at least a second harmonic spectral component in a truly periodic function, and in that case a min_max record will not even give a believable average value for the function, since the aliasing noise spectrum folds back all the way to zero frequency (which is the average value).
        As to changing the value of a previously recorded observation, that is permissible ONLY in one situation. That situation wherein altering a previously recorded value is permissible, is when the typist makes a typo and enters a number different from that registered on the instrument being reported. In that case, replacing that number with the correct number that was read by the instrument, is allowed, but only by the observer that actually read the instrument, if (s)he wrote it down wrong, or by an aid who converted the written record to a machine record of the exact same data.
        Any other instance of altering an instrumental record is blatant fraud in my book, and should call for instant dismissal.
        Look at the photos of those sites above showing that the station parameters have been altered from time to time.
        How is someone going to be able to re-assess the credibility of that station in light of its alterations, if somebody goes and changes the numbers.
        Eventually, nobody knows just what numbers go along with what station conditions.
        I don’t believe ANY global oceanic Temperature data taken before about 1980, because such data was oceanic WATER Temperature at some uncontrolled depth, and it was blithely assumed that the air Temperature was the same.
        From buoy measurements of both water and air Temperatures simultaneously, starting circa 1980, John Christy et al reported in 2001 (I believe) that they are not the same; and they are not correlated, so you cannot recover the oceanic air Temperatures for that old obsolete water Temperature data.
        For the 20 odd years of dual data that they recorded, the water Temps were something in the range of 40% higher than the air temps. (ONLY for that 20 years of data; no idea what the relation is for other times because of the lack of correlation.)
        So I don’t buy any lame excuse for why they cook the books. That is scientific fraud of the worst kind in my book.
        Is it that hard to grasp the concept, that later users of recorded data, can themselves take into account recorded information about site differences, when making use of data from that site.
        It is very common to re-measure data, when improved instrumentation comes along. But you don’t change the earlier records done with older instruments.
        Look at the history of attempts to measure the velocity of electromagnetic waves. But we don’t go back and “adjust” what Roemer, or Michelson, or anyone else actually measured at the time they measured it.

    • Reply to M Moon ==> “The y-axis is shown in degrees C times 100” — thus 0.01°C would show as “1” and 0.6 would show as “60”.
      (Don’t ask me — not my graph….)

      • The GHCN data collected by NCDC is recorded in degrees C times 100. This is propagated through the GISS analysis, and results are presented by GISS in this manner. This form of presentation should be familiar to those who follow the temperature data.

      • Some Physicists use systems of units wherein they put …. c = h = k =1 and they seem to keep it straight somehow.
        In those systems, Energy (E) is identical to mass (m) and also to frequency (nu) and Temperature (K).
        Makes life simpler for a particle physicist I guess.
        g

      • Which seems to be a propagation of BS. If someone presents temperatures to 1/100th °C one might ask how they preserved their reference ice baths and/or calibration for more than one day. Whereas if they present their data modified by multiplying it by 100 a casual reader might not note the level of precision they are claiming.
        Or maybe it’s all an effort to preserve ink/storage by getting rid of decimal points.

  1. I am awaiting a Zeke/Mosher demolition job on this – or is Goddard/Heller perhaps closer to the “facts”? This is an interesting analysis. I am in favour of Inhofe starting an enquiry into this whole surface temperature adjustment “carry-on”. Additionally, the USA should follow Australia in setting up a commission filled with independent experts in statistics, without an axe to grind, to look at TOBS, UHI, homogenisation and other “adjustments” to US and global surface temperatures.

    • The real question is why the TOB adjustments for the past 100 years are Hockey Stick shaped.
      I get that TOB might create a bias, and that adjustments might be warranted, but why does the adjustment so consistently cool the past and warm the present?
      My recollection is that something like 40% of the claimed global warming over the past 120 years is due to TOB adjustments. Why should worldwide data collectors in China, Russia, Paraguay, Kenya, New Guinea, etc. all decide to shift their observations from afternoon to morning at just the right decades to warrant a Hockey Stick shaped “correction” to the observations?

      • Because the great majority of the changes in observation time were from evening to morning. That introduces a cooling bias and it has to be corrected by increasing the trend.
        The problem arises when there is missing metadata. The pairwise algorithm infers it, and I strongly disapprove.

  2. These adjustments have always seemed misguided, illogical and not based upon good scientific theory. I have experience with analog/digital conversion techniques and what they are doing just does not follow A/D conversion techniques. PERIOD. In essence their methods are analogous to assigning a test grade to a student that is out sick by averaging the grades of those to either side of his seat or by his last two grades – it is simply a WAG disguised as a good technique. Many times during the winter the coldest time of the day (0000 – 2400) is at 0001 and a few days later the warmest time of the day is at 0001. And the same in the summer. It all depends on the movement of the weather fronts. How do they factor this in their adjustments? How are they even aware of it?

    • Rich March 6, 2015 at 7:08 am
      I have experience with analog/digital conversion techniques and what they are doing just does not follow A/D conversion techniques. PERIOD.

      My experience is in data acquisition and processing in aircraft flight test where manipulation of data such as is done in Climate Science is strictly forbidden. Climate scientists seem to think it is OK to make their own rules.

  4. Could you clarify what you meant by your 4:30 CRN synthetic measurement? The USHCN measurement would be based on both min and max thermometers recorded at that time, would it not? Just using the actual CRN temperature at the time of observation would not give you the same value. Did your synthetic calculation go from 4:30 of the previous day until 4:30 that day, and take the min and max between those two times to match the period during which the USHCN station would have made those measurements?

    • The CRN recorded temperatures on the hour. So 4:30 was the average of the min/max reading from 4:00 and 5:00 PM. By min/max reading, at 4:00 PM, what I mean is I looked backward 24 hours in time and used the minimum and maximum temperatures over those 24 hours. The same was done at 5:00 PM. I did not just use the min/max recorded during the 4:00 PM hour.

      • John,
        Actually, the CRN temperature measurements are made every 5 minutes. Hourly average, min, max, etc. are derived from them.

      • Gary–
        OMG! So you’re saying that 12 measurements are taken each hour, which are then used to produce a single hourly “average”. Then, of the 24 hourly numbers, 22 are discarded to retain only the min and max. These two numbers are then “averaged” again to produce the daily “average” temp at that location, which must be “averaged” again to produce the “average” monthly or annual temperature, which is then compared to the historical “average” to determine how much colder or warmer we are. Not to mention further “averaging” across the state, region, nation and world so that we know how close to ultimate catastrophe we are. Remembering, of course, that the physical process that concerns us is the capture and retention of radiative energy, which varies as the fourth power of temperature and has nothing at all to do with “average” temperature! OK, I think I’ve got it.

      • skorrent1-
        Nope, not even close. Please don’t confuse comparisons of the USCRN hourly data with the historic data. The USCRN hourly data was reduced to daily min/max values as a convenience, to compare it with the historic daily min/max data. USCRN data is retained online at the 5-minute resolution, but most folks prefer to use the calculated hourly versions. By the way, even the 5-minute values are produced from multiple reads. The algorithms are available on the USCRN site. They are fairly simple and straightforward. No TOBS correction or homogenization is used with this data.

  4. This is all about whether or not “it” is getting warmer. It begs the question of what does CO2 have to do with it?
    The sea ice/sheets/caps on Antarctica/the Arctic/Greenland/Iceland are shrinking/growing yes/no/maybe depends on who’s counting.
    Polar bears and penguins are endangered/having a hard time/pretty much as usual yes/no/maybe depends on who’s counting.
    The sea levels are rising, land is subsiding yes/no/maybe depends on who’s counting.
    The global temperatures are rising/falling/flat lining based on satellite/tropospheric/sea surface/land surface with or without UHI/TOB/homogenization/adjustments/bald faced lying yes/no/maybe depends on who’s counting.
    Nothing but sound and fury, tales told by people missing the point, signifying nothing. The only meaningful question is what does CO2 have to do with any of this? How are these contentious topics connected to CO2?
    IPCC’s dire predictions for the earth’s climate are based entirely on computer models, models which have yet to match reality. The projections began with a 4 C increase by 2100 which has since been adjusted down to 1.5 C without stilling those shrill Cassandras. The heated discussions mentioned above attempt to validate or refute those models, models driven by the radiative forcing/feedback of CO2 and other GHGs. IPCC AR5 TS.6 says that the magnitude of the radiative forcing/feedback of CO2 “…remains uncertain.” (Google “Climate Change in 12 Minutes.”) implying that IPCC was also uncertain in AR4, 3, 2, 1.
    IPCC is not uncertain about one issue, though, redistribution of wealth and energy from developed countries to the underdeveloped ones to achieve IPCC’s goal of all countries enjoying above average standards of living.
    Besides, the greatest threat to mankind isn’t CO2, it’s lead.

    • ” IPCC’s goal of all countries enjoying above average standards of living.”
      If this is what the IPCC wants, they will never achieve it. Whatever the standards of living are, half will always be at or below average.

      • To be fair, they’ve shown themselves to be consistently rubbish at statistics, so of course they cannot understand why everyone cannot be on the ‘average’.

      • Well, now you’re assuming that average is the [median] mean in that calculation. (Sorry, given the statistical discussion, I couldn’t resist. : ) )

      • Ugh, median not mean.
        [On average, most of us think we know what you mean about the medium modes. The large and extra-large modes? Not so much. .mod]

    • “IPCC is not uncertain about one issue, though, redistribution of wealth and energy from developed countries to the underdeveloped ones to achieve IPCC’s goal of all countries enjoying above average standards of living.”
      We should refer to this IPCC goal in future as the “Lake Wobegon Goal”. 😉

      • That the goal of the entire “progressive” movement everywhere. They attempt to tell the “have nots” that if one just votes for them, they can improve their standard of living. Trouble is, in every case of which I am aware, it never works out that way in practice. If you look at places where they have been in control the longest, you find the greatest disparity of income and the most poverty. But this actually HELPS the “progressives” because then they have more people to whom they can sell their line of baloney.

    • IPCC is not uncertain about one issue, though, redistribution of wealth and energy from developed countries to the underdeveloped ones to achieve IPCC’s goal of all countries suffering with sub-standard living conditions while bureaucrats, politicians, and other special individuals have excellent living conditions.

      Fixed that for you.

      • well, truth be told, they are planning the standard of living primarily to be improved by tossing a significant portion of humanity out of the lifeboat for the greater good.

  5. Thanks John for an excellent analysis. The biggest problem with such adjustments is not that they exist and are applied, but are applied with a broad brush without determining the state of the data to begin with. That’s the lack of skill that John refers to. The “cure” is often arbitrarily applied and far worse than the symptoms.

  6. Am I missing something on your chart labels? Your +/- 0.5 degF gray area spans 28 to -28 on the y-axis. That makes no sense to me.
    One thing I’ve never understood about TOBS bias is how it introduces a trend. I get how time of observation can bias the measured temperature, but I’ve no clue how that bias introduces a trend that must be corrected for. Sure, a *change* in TOBS could introduce a one time step change in bias, and if, somehow, many stations, at different times made the same shift in time of observation, in the same direction, introducing a bias that was higher than the previous bias, I can see how the overall average might trend upwards.
    But such an orchestrated change seems, well, improbable. It seems that a change in time of observation would be just as likely to introduce a lower bias as it would be to introduce a higher bias.

    • I was hoping the gray band would not be confusing. Its y-dimension is 1 degree F, or 0.56 degree C. Since the y-axis units are in degrees C times 100, I center that band from -28 to 28. I personally find the band useful for seeing the significance of the variation, because 1 degree F is something I can relate to.
      TOBS introduces bias when a station changes the standard time observations are made. If a station operator has been recording temperatures at 5:00 PM for 30 years, and then a new operator comes in and records temperatures at 7 AM for the next 30 years, a cooling bias is introduced by the change from afternoon to morning. I was always skeptical of that until I downloaded the CRN data after reading Zeke’s blog post.

      • Well, it would help if the axis were labelled in the same units you describe the gray band with.
        I do get that TOBS changes can introduce a bias, but what evidence is there for a system shift that would impact average temperature trends? I mean in your example, if the shift is in the opposite direction there is a warming bias.

      • After reading Zeke’s blog post, I could nowhere find the effect on the monthly data that you are posting here. Zeke says:
        At first glance, it would seem that the time of observation wouldn’t matter at all. After all, the instrument is recording the minimum and maximum temperatures for a 24-hour period no matter what time of day you reset it. The reason that it matters, however, is that depending on the time of observation you will end up occasionally double counting either high or low days more than you should. For example, say that today is unusually warm, and that the temperature drops, say, 10 degrees F tomorrow. If you observe the temperature at 5 PM and reset the instrument, the temperature at 5:01 PM might be higher than any readings during the next day, but would still end up being counted as the high of the next day. Similarly, if you observe the temperature in the early morning, you end up occasionally double counting low temperatures. If you keep the time of observation constant over time, this won’t make any different to the long-term station trends. If you change the observations times from afternoons to mornings, as occurred in the U.S., you change from occasionally double counting highs to occasionally double counting lows, resulting in a measurable bias.
        To show the effect of time of observation on the resulting temperature, I analyzed all the hourly temperatures between 2004 and 2014 in the newly created and pristinely sited U.S. Climate Reference Network (CRN). I looked at all possible different 24 hour periods (midnight to midnight, 1 AM to 1 AM, etc.), and calculated the maximum, minimum, and mean temperatures for all of the 24 hours periods in the CRN data. The results are shown in Figure 4, and are nearly identical to Figure 3 published in Vose et al 2003 (which used a similar approach on a different hourly dataset).

        If the TOA does not trend in the monthly average, then a single change would affect no more than the month in which the time change was made. His study only looked at the daily temperature changes based on TOA. He never shows how much it would change the monthly average, most likely because it doesn’t.

    • The same thought occurred to me. Over past decades there may well have been systematic TOBS changes globally. If they occurred at different times then, yes, it could produce an overall trend that would need to be corrected.
      Recently there was a quite extraordinary post about data adjustments. It showed a set of raw and adjusted data sets for a number of stations. Incredibly, every station had been adjusted from a negative trend to a strong positive trend. Even if the adjustments were correct, it seems to me that if such enormous adjustments are required, then the original data is worthless. Hopefully this kind of industrial-scale adjustment only occurs in climate science. If engineers designing an airliner use this kind of adjustment then we have a big problem.
      Looking at those graphs, I did think about TOBS. There was no sign of a step change that could have been caused by TOBS. It was an overall positive trend that came exclusively from the adjustments.
      And here’s the point that you made:
      TOBS could cause a trend in the global average. But there’s no way it could cause a trend in the data for a single station. It would have to be a step change when the change occurred. I severely doubt that the TOBS was being changed every year for a single station – and always in the direction that produced a positive trend.
      The whole thing screams scientific fraud on a massive scale.

      • It seems a step change detection algorithm, at the station level, would correct TOBS issues as well as issues related to siting and instrumentation changes. If there is no step change, I see no reason to adjust the measured temperature at all. It is what it is, measured whenever at whatever time of day it was measured, and the data is sufficient to measure the existence of any trend at that location.
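        A step-change detector of the kind described here can be sketched as follows (illustrative code with made-up numbers, not any agency's algorithm): scan candidate breakpoints, take the one with the largest difference of means, and flag it only if the jump stands out from the within-segment scatter.

```python
# Minimal step-change detector: largest jump in segment means, accepted
# only when it clearly exceeds the noise on either side of the break.
import statistics

def find_step(series, min_seg=3):
    """Return (index, jump) of the best breakpoint, or None if the jump
    is small compared with the within-segment scatter."""
    best = None
    for i in range(min_seg, len(series) - min_seg + 1):
        jump = statistics.mean(series[i:]) - statistics.mean(series[:i])
        if best is None or abs(jump) > abs(best[1]):
            best = (i, jump)
    i, jump = best
    scatter = statistics.pstdev(series[:i]) + statistics.pstdev(series[i:])
    return best if abs(jump) > 2 * max(scatter, 1e-9) else None

# A record with an abrupt 1.5 degree step (say, after a screen repaint):
record = [10.1, 9.9, 10.0, 10.2, 9.8, 11.6, 11.4, 11.5, 11.7, 11.3]
result = find_step(record)
print(result[0], round(result[1], 2))
```

If no breakpoint clears the scatter test, the function returns None – matching the commenter's point that a record with no detectable step should be left alone.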

  7. An experimental physicist’s bon mot:
    “It’s hard, if not impossible, to fix in software the mistakes one made in hardware.”

    • In this case, the weather stations are providing physical data.
      I have to laugh. Every experiment has limits on accuracy and precision, but instead of adding the error band, which is the correct way to handle this data – and this is NOT a model but the raw data – they try to tweak it to say it was 20.15426 degrees C instead of saying 20 +/- 0.5 deg C. Or they miss the fact that they aren’t measuring the same thing when the surrounding area changes – no two meadows are the same, and putting asphalt nearby affects the reading – which simply introduces more sources of error and widens the error band.
      The models are worse: http://wattsupwiththat.com/2015/02/24/are-climate-modelers-scientists/
      The stock market has a specific closing price each day, and you can create models that hit the curve (backtesting), but they don’t work going forward even though there is zero error in the input.
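      The point about error bands can be illustrated with a small helper (a sketch, not anyone's production code) that rounds a reading to the precision its instrument error actually supports and reports the band alongside it:

```python
import math

def report(value, instrument_error):
    """Round a reading to the number of decimal places the error band
    supports, and attach that band (sketch only)."""
    digits = max(0, -int(math.floor(math.log10(instrument_error))))
    return f"{round(value, digits)} +/- {instrument_error} deg C"

# A half-degree instrument does not support five decimal places:
print(report(20.15426, 0.5))   # 20.2 +/- 0.5 deg C
```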

      • “… they try to tweak it to say it was 20.15426 degrees C instead of saying 20 +/- 0.5 deg. C. …”
        They are not trying to “tweak” anything. They are simply changing the data to suit their needs and desires. Some would call that cheating. Some would call that “Climate Science”.

  8. How can you tell if an adjustment is ‘right’ or ‘wrong’? It’s like when they refer to the last 18+ years as The Pause. How do you know it’s just a pause? I guess they purchased a crystal ball with their last government grant.

    • And “pause” presumes it will resume going UP, when, in fact, if it goes down, it wasn’t a pause at all, it was (is?) a stop. Climalarmists control the language.

    • How can you tell if an adjustment is ‘right’ or ‘wrong’?

      You carry out validation testing. John has now done that on the TOBS idea and shown that the adjustment is wrong.
      The homogenization adjustments could be easily validated by taking this Rhode Island observation out of all the observations and then using the homogenization algorithm to ‘synthesize’ the reading from Rhode Island. Then compare the actual Rhode Island reports with the synthesized version from the homogenization process. Then set an acceptable error bound – say 0.05 degrees C (or whatever) and if the synthesized figure is outside those bounds then tell the programmer to try again, or more likely, say that synthesizing in that way is not possible with sufficient accuracy.
      This validation should be automatically carried out against every single reporting station. On every run, homogenize to produce a synthetic value for every reporting station and then compare it against the actual value. The same validation should be done for oceanic areas, where ship observations could be used to check synthesized values. It would rapidly become apparent that synthesizing Rhode Island by using reports from stations up to 1200 km distant (e.g. Indianapolis) doesn’t work.
      But this validation approach is too much like engineering, trying to get the correct value. Climate ‘scientists’ do not appear to worry about correct values they want values that support their hypothesis and their funding.
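      The leave-one-out validation proposed here can be sketched as follows; the station names, coordinates, and anomaly values are invented, and simple inverse-distance weighting stands in for whatever the real homogenization code does:

```python
# Sketch of leave-one-out validation: hold a station out, synthesize it
# from its neighbours, and check the error against an acceptance bound.
import math

# Hypothetical records: (name, (lat, lon), monthly anomaly in deg C).
stations = [
    ("Kingston",   (41.48, -71.54), 0.42),
    ("Providence", (41.82, -71.41), 0.51),
    ("Hartford",   (41.77, -72.68), 0.35),
    ("Boston",     (42.36, -71.06), 0.60),
]

def distance_km(a, b):
    """Rough great-circle distance (small-angle approximation)."""
    dlat = a[0] - b[0]
    dlon = (a[1] - b[1]) * math.cos(math.radians((a[0] + b[0]) / 2))
    return 111.0 * math.hypot(dlat, dlon)

def synthesize(stations, held_out):
    """Predict the held-out station from inverse-distance-weighted
    neighbours -- a stand-in for the homogenization step."""
    name, pos, actual = held_out
    num = den = 0.0
    for n, p, v in stations:
        if n != name:
            w = 1.0 / distance_km(pos, p)
            num += w * v
            den += w
    return num / den, actual

predicted, actual = synthesize(stations, stations[0])
error = abs(predicted - actual)
TOLERANCE = 0.05   # the acceptance bound suggested in the comment
print(f"predicted {predicted:.3f}, actual {actual:.3f}, error {error:.3f}")
print("PASS" if error <= TOLERANCE else "FAIL: synthesis outside bound")
```

With these invented numbers the synthesized value misses the actual one by more than the 0.05 degree bound, so the station would be flagged rather than silently overwritten.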

      • Well Ian, if I were you, I would enter what you just wrote above in the Bulwer-Lytton competition. It would be a sure winner.
        Now just what, in that logical sequence of adjustments you described, determines which of the garbage is the “actual value”? And which are the faux values?

  9. Once data are adjusted, they cease to be data. Rather, they are someone’s estimate of what the data might have been had they been collected timely from properly selected, calibrated, installed and maintained sensors and sites.
    Consensed climate scientists dealing with surface temperature data appear to believe that they are the modern day, much improved embodiment of Rumpelstiltskin, able not only to spin straw (bad data) into gold (good data), but also to spin nothing (missing data) into gold (good data). I suspect the Brothers Grimm would be much impressed; or, perhaps, much distressed. 😉

  10. I think the main problem is not when the measurements are made. Regardless of whether they’re made in the morning or in the afternoon, they’re always minima and maxima over the past 24 hours.
    The problem is how the average is calculated, because calculating it as the arithmetic average of the two values is wrong. When the measurement is taken in the morning, you calculate the average of yesterday’s maximum and today’s minimum. When the measurement is taken in the afternoon, you calculate the average of today’s maximum and minimum. But in both cases you’re omitting the other average, because there are two average temperatures every day: the average between a maximum and the following minimum, and the average between a minimum and the following maximum. The real average temperature should be calculated from both, not just one of them.
    By calculating just one average, you’re omitting half of temperature behavior.
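    The commenter's two-averages point can be shown with made-up numbers: the max-to-next-min (cooling) and min-to-next-max (warming) half-cycle averages can disagree, and using only one discards half the behaviour.

```python
# Illustrative numbers only: successive daily minima and maxima (deg C).
mins = [8.0, 6.0, 10.0]
maxs = [18.0, 20.0, 16.0]

# Afternoon reading pairs today's max with today's min:
afternoon_avg = [(mx + mn) / 2 for mx, mn in zip(maxs, mins)]

# Morning reading pairs yesterday's max with today's min:
morning_avg = [(maxs[i - 1] + mins[i]) / 2 for i in range(1, len(mins))]

# Combining both half-cycles uses all four extremes around each day:
combined = [(afternoon_avg[i] + morning_avg[i - 1]) / 2
            for i in range(1, len(mins))]

print(afternoon_avg)   # a perfectly flat 13, 13, 13 ...
print(morning_avg)     # ... while the other half-cycle moves
print(combined)
```

Here the afternoon-style average reads a flat 13 degrees every day, while the combined figure varies – exactly the "omitted half" the comment describes.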

  11. “does it make sense to homogenize a station 50 or 100 miles away?”
    No.
    I installed a Vantage Pro II weather station in 2009. Placed it 300 feet (called for 200 foot minimum) from my house in a clear area. I’ve been tracking since then. A rancher about 10 linear miles away (SW) also has a Vantage Pro II. His site is used by the Weather Channel to document “weather” data for our small town. We differ significantly on High Temp, Low Temp and rain fall on almost a daily basis. So no way 50 or 100 mile homogenizing makes any sense. If you go to 1200 km then it is just crazy.

    • I live in Ferntree Gully, Victoria, approximately halfway between the BOM sites of Ferny Creek and Scoresby. On average, the temperature at Ferny Creek is 4 C lower than Scoresby’s, with 50% more rainfall. At Ferntree Gully we normally get Scoresby’s temperature with Ferny Creek’s rainfall. Try homogenising that.

    • Have you ever encountered the situation where a day occurs when your station reads hotter than your previous day, but the rancher’s station reads cooler than his previous day?

  12. I don’t want to argue with you guys about science. I want to tell my observation. When I was a child I remember looking at all of the things in awe. Each day I could not wait to wake up to run outside to watch all of the things that were crawling or fluttering about making colours and shapes.
    Then they made me go to school and they said to me: 1 thing + 1 thing = 2 things. I said, “No, no, no, that is not correct,” but then I quickly understood that this was society’s way and I should shift my brain into that way of thinking, and a great sadness is always with me about relinquishing that childish understanding.

  13. Around 2% of the world is urban, but 27% of the weather stations are in these areas; that must skew the data.

    • That’ll be the “thirty acre pond” in the photo. & Right next door to the Cli-Sci Hundred Acre Wood.

  14. Goetz – thank you for this. I have been planning and gathering data for a major essay on TOB “correction” for some time. I’ll add your, and Lance Wallace’s, data to the resources file.
    BTW, my preliminary data show that the differences between averaging High/Low and averaging hourly temps are relatively huge, in both directions. The application of a single-directional numerical adjustment based on time alone is non-scientific.

  15. But, but I thought that there were statistical modeling methods that would eliminate all that noise and improve the accuracy to 0.01 degree over many stations?

    • 😉 I have pondered whether such methods render the result precisely inaccurate or inaccurately precise.

  16. TOB is indeed questionable, but those adjustments are not very large.
    BEST adjusted LA down 0.4 degrees for a century of warming due to UHI. How do they know that was not a large under-adjustment?
    In talking about Peterson 2003 – the data set that supposedly shows, “Contrary to generally accepted wisdom, no statistically significant impact of urbanization could be found in annual temperatures” – Steve McIntyre stated that actual cities have a very substantial trend of over 2 deg C per century relative to the rural network, and this assumes that there are no problems with the rural network – something that is obviously not true, since there are undoubtedly microsite and other problems. (That is five times the BEST LA UHI adjustment.)
    Still well above the BEST 0.4 adjustment to LA is this study done in South Korea: https://notalotofpeopleknowthat.wordpress.com/2015/02/14/uhi-in-south-korea-ignored-by-giss
    “the amount caused by urban warming is approximately 0.77 °C.”
    GISS adjustments, however, were essentially zero: “A crude average of the above adjustments is –0.05C, so, in net terms, no allowance has been made at all for UHI.”
    Surface temperature has risen much faster than the radiosondes: https://stevengoddard.files.wordpress.com/2014/12/screenhunter_5107-dec-10-21-45.gif
    Roy Spencer published this chart: https://stevengoddard.files.wordpress.com/2015/02/screenhunter_7153-feb-14-23-46.gif?w=640 Note how areas considered rural may well not be, so the real difference may not easily be determined.
    So when a significant under-adjustment is made to an urban area, and that area is used to infill rural areas up to 1200 km away, the downward adjustment is actually an upward adjustment to the record. For example:
    An urban station measures 7 where the true rural value is 5. The UHI adjustment of –0.4 changes the urban value to 6.6.
    The rural station is not used; the down-adjusted urban value is infilled, so the final rural number changes from 5 to 6.6.
    (The downward UHI adjustment creates 1.6 degrees of warming.)
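    The arithmetic in this scenario, spelled out with the same numbers:

```python
# The comment's worked example: a downward UHI adjustment that is too
# small, followed by infilling, still warms the rural record.
urban_measured = 7.0
true_rural = 5.0
uhi_adjustment = -0.4

urban_adjusted = urban_measured + uhi_adjustment   # 7.0 - 0.4 = 6.6
infilled_rural = urban_adjusted                    # rural cell takes the urban value
spurious_warming = infilled_rural - true_rural
print(round(spurious_warming, 1))                  # 1.6 degrees created
```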
    I have often wanted BEST to explain just the adjustments to one station. One station should be easy to explain, right? Just say the algorithm did this for these reasons and all is well. However, if that one station happens to match what they have done to the entire record, and the adjustments to that one station are clearly wrong, then there may be a systemic problem with the record.
    The meteorologist at the Icelandic Met Office is saying that the GHCN and GISS adjustments are unjustified. The adjusted data suppresses the earlier warming (1930s and ’40s) along with the cooling period to 1979, creating the impression of unprecedented recent warming and flattening the decline to the late 1970s.
    https://stevengoddard.files.wordpress.com/2014/12/reykjavikgiss2012-2013.gif
    The fact that this clearly wrong Icelandic adjustment happened to most of the Iceland stations, happened to the entire US record, and happened in other nations as well – far beyond the TOBS controversy – is alarming.
    From: Tom Wigley
    To: Phil Jones
    Subject: 1940s
    Date: Sun, 27 Sep 2009 23:25:38 -0600
    Cc: Ben Santer
    It would be good to remove at least part of the 1940s blip, but we are still left with “why the blip”.
    di2.nu/foia/1254108338.txt
    https://stevengoddard.wordpress.com/2015/01/24/before-they-could-create-the-hockey-stick-they-had-to-get-rid-of-the-post-1940-cooling/
    The adjustments greatly exceed reason, and largely explain why UAH and RSS both still show 1998 as considerably warmer than 2014. If you just used the satellite data sets, you would say with about 100 percent confidence that 1998 was warmer than 2014.

    • David A March 6, 2015 at 8:23 am
      TOB is indeed questionable, but those adjustments are not very large.

      NOAA declared 2014 the hottest year on record by 0.04 deg. Decisions are presumably made on that not very large number which they also later said had a 68% chance of being wrong. A very small number all of a sudden becomes a very big problem.

      • I agree. My point is that the historic record has been adjusted far more than the TOBS adjustments. Look at the Iceland adjustments in my post. They did the same thing to all of Iceland, to all of the US, to much of Australia, etc.

      • When your adjustments exceed the precision of the data, you can make no conclusions about the data. Using it to claim that 2014 was the hottest year on record is just a travesty.

  17. I have no problem with the concept of TOBS. I do have a problem that adjustments to the past are such an on-going process.

  18. The questions are pretty simple to answer.
    Why are computer model generated data used as “fact” when related to climate conditions?
    Does “GIGO” (Garbage In Garbage Out) still apply in this wonderful world of modeling?
    How can arbitrary adjustments to computer model calculations make them more accurate or meaningful?
    Why do we never hear the words Sun Cycles in any of the climate commentary?

    • The question is whether they are ‘arbitrary’ in the first place, given that the user has a ‘need’ and the ability to change the parameters in such a way as to ensure that ‘need’ is met, and given that even when reality proves them wrong there is no downside to making adjustments that reflect that ‘need’ rather than good science. Is it possible to rule out that the ‘arbitrary’ part is the problem, not the way these changes are made?

  19. Africa makes up one fifth of the world’s land mass and has practically no data whatsoever – according to the WMO it needs 5,000 new temperature stations for coverage. So Africa is mostly estimated!
    Most of the stations that do report are based at cities or airports. In a points system, the WMO gives urban stations a zero for quality.
    WMO-
    “This disparity and unevenness in national network coverage introduces bias to the data and, therefore, it must be used with caution in research and applications. It cannot be expected that the quality of global climate products is currently adequate to effectively address national scale issues in Africa”
    WMO
    “This shortage of data is exacerbated by the uneven distribution of stations, leaving vast areas of central Africa unmonitored and representing the lowest reporting rate of any region in the world”
    WMO-
    “At the same conference it was revealed that despite covering a fifth of the world’s total land area, Africa has the worst climate observing networks of all continents, and one that is still deteriorating”
    WMO-
    “The density and coverage of existing climate observations in Africa have been described in many
    publications as poor or sparse (Parker et al., 2011; Institute Water for Africa; Washington et al., 2006)”

    • Indeed, it’s back to the issue of ‘better than nothing’ that is seen time and again in climate ‘science’.
      In reality, if they were honest, they would say that the amount and quality of the data available is not enough to give an accurate value, but that they can give one which is ‘better than nothing’. That, however, will not keep the grant cash flowing in and will do nothing for political desires.

    • If the figures on that chart are temperatures – notice all the hot air round the university:)

      • LOL, yes, and they range from plus 8 to negative 3. Essentially you can create whatever graph you want using any given station. But clearly the urban areas should be adjusted to the true rural stations, and those are few and far between. BEST, GISS, BOM – they all do the opposite.

  20. This is lovely work, and far more like this needs to be done. This one example shows that it is entirely plausible, if not actually likely, that any sort of homogenization or TOBS correction is intrinsically biased compared to the “true time average of temperature” (which even your 24 point/day integration does not properly reveal, although it is doubtless a lot more accurate than a 2 point/day integration to form the average). Indeed, simply using Euler integration compared to e.g. splining the data and integrating the spline (or otherwise using a higher-order smooth curve fit to the hourly data) very likely introduces both integration error and bias, depending on things like the mean convexity of the actual data.
    It further shows that two sites located just over a kilometer apart, both (from the look of them) extremely well constructed and maintained, show quite different temperature readings. As you say, 0.01 C resolution is obviously impossible. 0.1 C resolution is dubious. I’d eyeball a guess that 0.2-0.3 C resolution is about right. One wonders what one would get if one scattered a collection of sites like these in a tight array spaced at (say) 1 km over 100 square kilometers, without regard to the suitability of the site — if a grid point drops a station in the middle of a parking lot, so be it. One wonders what one would get comparing the spatiotemporal average of these 100 sites to the results obtained from a glass thermometer read by a grumpy old man who refused to work on Sundays and perhaps was less than pristine in recording his observations, as might have been the case back in 1870. Such an individual might not have recorded his temperatures at a combined precision and accuracy of 1 whole degree C in the first place (given indifferent instrumentation), might have failed to get up at midnight every night to make an observation, and might have put the thermometer right up on his porch to avoid having to trudge two hundred meters to read, by lantern-light with his presbyopic eyes, a thermometer placed carefully in a box in the middle of a carefully mowed field.
    The point being that if this one triplet of stations cannot produce a representation of the mean temperature in a single ~1 km patch of the earth that is self-consistent to within 0.1 C now, and which has a clearly visible bias relative to both the attempt to apply “corrections” and to the actual mean daily temperature that one is trying to represent from only two measurements with what is likely a seasonally shifting systematic bias you aren’t even looking at (cooling rates from sundown will lead to different temperature drop by midnight in the winter and the summer, relative to the entire shift over the day, and a 4:30 observation could be mid-afternoon heat in the summer and after dark in the winter at many locations) then what chance is there that measurements from 1870 can be meaningfully compared to measurements in 1970 or 2015?
    The answer is slim to nil. Now imagine subtracting an imperfectly known base to create a temperature anomaly. This introduces another compounding of error. Now imagine taking the station data and kriging it to make it de facto cover the lake and the mountain range located immediately to the west and the desert to the east of a site.
    One doesn’t need to go fifty meters. If one located stations like this in my front and my back yard, one would get differences like this, or even larger ones. If one located the station in the trees of a nearby wood, one would get a different reading.
    And here’s a kicker. Look at these stations. They are beautiful! They are located in open, carefully mown fields, with a minimum distance to ANYTHING that might confound the measurements, painted white, and so on. All according to the rules.
    But how much of the Earth’s surface is, in fact, mown field within a 50 meter border of anything like trees or water or mountains or ocean? Would that be (in spite of human activity) almost none?
    It would.
    If I wanted to measure the improvement in the grade performance of high school students, I would not go into fifty high schools and only measure the grades of the best students in the class as the basis of the “anomaly”. It’s like nobody ever heard of Monte Carlo (unbiased) sampling. Bizarre.
    rgb
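    The integration point above can be made concrete with a synthetic day (made-up numbers, not the Kingston data): for any asymmetric diurnal cycle, the two-point (Tmax + Tmin) / 2 estimate and a trapezoidal time integral of hourly readings disagree.

```python
# Synthetic day with a short hot afternoon plateau: hours 0..24 inclusive.
temps = [10.0] * 25
temps[11], temps[17] = 15.0, 15.0      # ramp up, ramp down
for h in range(12, 17):
    temps[h] = 20.0                    # five hot afternoon hours

# Two-point estimate: midpoint of the daily extremes.
two_point = (max(temps) + min(temps)) / 2

# Trapezoidal time integral of the hourly readings over 24 hours.
trapezoid = sum((temps[i] + temps[i + 1]) / 2 for i in range(24)) / 24

print(two_point, trapezoid)
```

Here the midpoint of the extremes reads 15.0 while the time-weighted mean is 12.5; a 2.5 degree systematic offset from shape alone, before any siting or TOBS issue enters.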

    • An excellent analysis, and an anecdote speaking to the difficulty of obtaining “true” temperatures. I am staying on Hilton Head Island, SC this week. When we left our oceanward (1/4 mile off the beach) cottage, my car thermometer showed 62 F. Driving landward, in literally 4 minutes the temperature jumped to 74 before I got off the island. On such a small island, I am not sure what the official temperature will be for some historical analysis 50 years from now.

      • Adding to what you observed, MOST of the energy stored in the air around us is not accurately presented by just the temperature, but must also include the amount of energy stored in the water vapor/liquid/ice. Even if the temperature in any given area stays the same, if the humidity changes, then there has been a drastic change in the amount of energy contained in a cubic volume of air. Likewise, often temperature and humidity change in equal and opposite directions to create a CONSTANT energy condition. Just measuring the temperature gives a very inaccurate picture of energy transfer. Now add to that the fact that CO2 is also fluctuating in any given location and you have a very, very muddied picture that in no way can be reduced to the simple equations that chicken-little-climatologists have been clucking as gospel.

    • @rgbatduke: Monte Carlo sampling sounds good, but how do you get a random sample without randomly dropping (say) a few thousand radio-transmitting thermometer “white boxes” (wiffle balls?) at randomly selected spots on the earth’s land and water surfaces? And then you need to do it again the next day, (week? random period?) right? Well maybe if they were small, cheap, biodegradable, and delivered by a large fleet of cheap aerial drones?
      Or maybe satellite temperature measurements are the reliable method, even if they don’t take the atmosphere’s temperature within two meters of the surface.

  21. I’m rather confused by what you are trying to do here. The TOBs adjustment corrects for a change in time of observation. There was no change in time of observation at the Kingston USHCN station during the time in which the nearby USCRN station has been operating. In fact, there are effectively no adjustments at all done to the raw Kingston station during the years in which the USCRN station is active, either for TOBs or homogenization steps. You can see the raw and adjusted anomalies for Kingston USHCN here:
    http://i81.photobucket.com/albums/j237/hausfath/Kingston%20Raw%20Adj_zpszpr23ty9.png
    Given that there are literally zero adjustments during the time you are examining, all you are really doing is looking at absolute offsets between the stations. These are not of interest, as the stations are set to aligned baseline periods when converted into anomalies, which removes the impact of elevation differences and other factors that result in a (in this case slight) offset. Here are USHCN raw and adjusted data compared to the USCRN data:
    http://i81.photobucket.com/albums/j237/hausfath/Kingston%20Raw%20Adj%20CRN_zpsnw6vyq3j.png

  22. When it is being announced all over the media that 2014 was the “warmest year ever” by 0.01 C, then these adjustments are not trivial. And to get a true reading on the temperature, is an average of the high and low the best way to do it, however practical it might be? I have always surmised that hourly readings have to be integrated by time to give a true picture, since highs and lows are not necessarily 12 hours apart.

  23. Can Adjustments Right a Wrong?
    Not if you do not know what needs adjusting or how it needs adjusting.
    In the good old days we accepted that, by its nature, our knowledge about weather – what affects it and how well we can predict it – was a bit hit and miss, so although we may have moaned about weather forecasts being poor, it was mostly no big deal.
    With the creation of ‘settled science’ came demands for massive changes to society and the spending of vast amounts of money. Some people rightly asked what has actually changed from the ‘good old days’. In reality, not a lot; the failure of the climate ‘models’ is just one indication that there is still much we do not know, and therefore cannot allow for in building these models.
    The same is true for ‘adjustments’: for the most part we can take a guess at what adjustments may be needed, but that is it.
    On a micro scale, we can ask how we could know, and therefore adjust for, the environmental effects on a single weather station from 30 years ago. On a large scale, we see that problem across thousands of weather stations over hundreds of years.
    The idea that you can throw maths at it and fix it is rubbish, for the maths you use to ‘correct’ is based on guesses itself. It is models on models again, and like those models, the guesses can be highly subjective and subject to a degree of ‘personalization’.
    Oddly, it is here that computing power, rather than being useful, has become a hindrance: when you can change a few numbers and get the results in minutes, the temptation is to keep changing numbers until you get the ‘right results’, as it costs little effort and time.
    And even better, since you are speculating about the future – and that, in reality, is mostly what is being done – you always have the fallback of claiming that although it may not have happened as you said, it will ‘in the future’, meanwhile collecting the rewards in the here and now.
    In the good old days we accepted that there were things we didn’t know, and that was not necessarily a ‘bad thing’. With the advent of AGW and ‘settled science’, with its highly politicized nature, we have seen a rush to claim not only that we now know the unknown, but that we should reject the very possibility of there being ‘unknowns’ in the first place. An approach which, although common enough in religion, has no place at all within science.
    One of the many things climate ‘science’ has issues with is that often the data is, in reality, ‘better than nothing’ in quality – such as tree rings. Oddly, despite the vast amount of money being thrown at the area, little has gone to change that situation. Spending 100 million on a new computer does not in any way improve the quality of the data you feed into it in the first place. GIGO remains, no matter how many teraflops you have.

  24. Data adjustment after the fact is FRAUD. More so if your justification is not clearly defined, measurable and provable.

    • You’re right about needing a clearly defined, measurable and provable method, but weather stations, no matter how carefully maintained, will have gaps in data. If my station was offline for 12 hours and the data needed to be infilled, nobody was trying to be fraudulent on purpose, especially if there is a clearly defined method for infilling. And infilling the data is better than reporting 57 degrees for 23 hours during a heatwave…
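      A "clearly defined method for infilling" of the kind this comment asks for could be as simple as the sketch below (illustrative only): interior gaps in an hourly record are filled by linear interpolation between the last good reading and the next one, and every filled value is flagged as such.

```python
# Minimal infill rule: linear interpolation across interior gaps,
# with a parallel flag list so filled values are never mistaken for data.
def infill(readings):
    """readings: list of floats with None for missing hours (gaps must
    be interior -- not at the ends of the list).
    Returns (filled, flags) where flags marks interpolated values."""
    filled, flags = list(readings), [False] * len(readings)
    i = 0
    while i < len(filled):
        if filled[i] is None:
            j = i
            while j < len(filled) and filled[j] is None:
                j += 1
            left, right = filled[i - 1], filled[j]
            for k in range(i, j):
                frac = (k - i + 1) / (j - i + 1)
                filled[k] = left + frac * (right - left)
                flags[k] = True
            i = j
        i += 1
    return filled, flags

obs = [20.0, 21.0, None, None, 24.0]   # two missing hours
filled, flags = infill(obs)
print(filled)
print(flags)
```

Keeping the flags alongside the values preserves the distinction the thread keeps returning to: the filled numbers are a model of the data, not the data.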

  25. This reminds me of the guy with his head in an oven and his feet in a bucket of ice: on average, he felt quite comfortable

  26. Regarding the first plot, I’d have to disagree a bit. If you can get two temperature sensors to agree within .5°F, you’re doing pretty well. Even the fancy USCRN sensors are only good to about .3 F at the average temperature around Kingston, RI. I will say that the .25°F step increase starting at 2013 is a little fishy, though. I wouldn’t be surprised if that much could be caused by bad bearings in the aspirator fan.

  27. We have the technology to create an international space station, send probes to Mars and Jupiter, send up satellites for tv, radio, weather, and defense, build super computers to scan the phone calls and email of everyone around the globe, and this is the best we have to measure temperatures on land? I don’t get it.
    Maybe the billions spent on climate research should instead be spent on better technology in this area?

    • @NancyG22 — My thought exactly!!! All you are looking for in a 24-hour period is Tmax and Tmin. It should be easy easy easy to have a continuous temperature recording and at 2400 hours, only those two data points and times be written to a hard drive for record keeping. I find the methodology employed here to be ridiculously convoluted and unnecessarily complicated for what essentially are just two data points.
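      The scheme suggested here – record continuously, then write out only the two extremes and their times at day's end – is a few lines of code (a sketch with made-up readings):

```python
# At 2400 hours, reduce a day's continuous readings to Tmax and Tmin
# together with the times at which they occurred.
def daily_extremes(timestamps, temps):
    """Return ((time_of_max, Tmax), (time_of_min, Tmin)) for one day;
    ties go to the earliest occurrence."""
    hi = max(range(len(temps)), key=temps.__getitem__)
    lo = min(range(len(temps)), key=temps.__getitem__)
    return (timestamps[hi], temps[hi]), (timestamps[lo], temps[lo])

times = [f"{h:02d}:00" for h in range(24)]
temps = [12, 11, 10, 9, 9, 10, 12, 14, 16, 18, 19, 20,
         21, 22, 22, 21, 19, 17, 16, 15, 14, 13, 13, 12]
extremes = daily_extremes(times, temps)
print(extremes)
```

Recording the times as well as the values would also settle, for free, the TOBS question that the whole post is about.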

      • “Unfortunately, the millions we’ve spent on time travel research have yet to pay off by allowing us to travel back in time and install this sort of equipment back in 1900.”
        And so we will just make up whatever “data” we want to make up to support the mindless scare of cAGW.

      • Exactly Mark, and our new super duper computer will give us wrong answers far quicker.
        Zeke or Mosher never try to explain exactly how the algorithm made the adjustments for ONE station, and that station happens to match the exact pattern of the adjustments of the US data set, and others as well. If Zeke cannot explain ONE station, why would anyone trust the entire super duper expensive network?
        https://stevengoddard.files.wordpress.com/2014/12/reykjavikgiss2012-2013.gif?w=700

      • Brandon Gates, you have proved David A’s point exactly, as BEST shows that station should not be adjusted.
        The reason that he asks the question of Zeke or Mosh about GISS adjustments is because on at least 2 forums they have been defending those very adjustments using BEST techniques.

      • A C Osborn,

        … you have proved David A’s point exactly as BEST show that station should not be adjusted.

        Oh dear me, no I have not proved any such thing. And I’m not sure you want to go there, because if you’re going to stump for BEST’s methodology … well here, I’ll show you the can of worms you might not wish to open:
        http://climexp.knmi.nl/data/it2m_land_best.png
        http://climexp.knmi.nl/data/igiss_land.png

        The reason that he asks the question of Zeke or Mosh about GISS adjustments is because on at least 2 forums they have been defending those very adjustments using BEST techniques.

        Mmmmok … I don’t suppose you could spam a few links so I can read along for myself first hand?
        mpainter,

        Because it’s a good question.

        Let’s hope his answer contains a slightly better-developed argument than that.

      • As I’ve mentioned elsewhere:
        “Iceland is an interesting case. NCDC adjusts the mid-century warming down significantly, while Berkeley does not. As Kevin Cowtan has discussed, homogenization may make mistakes when there are geographically isolated areas with sharp localized climate changes (e.g. parts of the Arctic in recent years, and perhaps in Iceland back in the mid-century). For more see his discussion here: http://www-users.york.ac.uk/~kdc3/papers/coverage2013/update.140404.pdf

      • Gates:
        You could learn here, if you have any desire to learn.
        But perhaps you don’t realize how much you lack in understanding.

      • mpainter,

        But perhaps you don’t realize how much you lack in understanding.

        Any time you wish to demonstrate your own understanding — with specifics — would be a good one. Right now, all you’re showing is blah blah blah blah.

      • Brandon, it was you that brought up BEST; I have already destroyed its credibility to my own satisfaction.
        Especially the data that they provide to the public about countries and continents.
        Their treatment of individual stations would have some merit if it did not smear the comparison over such large distances.
        For instance, the Irish Valentia station is compared to London and Paris, which is absurd.
        Mosher’s description of BEST says it all: “if you want to know what the actual temperature was, use the raw data; if you want to know what the models think it should be, use BEST ‘Final’.”
        The 2 forum posts are
        http://judithcurry.com/2015/02/09/berkeley-earth-raw-versus-adjusted-temperature-data/
        http://judithcurry.com/2015/02/22/understanding-time-of-observation-bias/
        And one on SkS, to which I do not have a link.

      • Brandon Gates says…
        David A,
        Why would you ask Zeke or Mosh about GISS adjustments? And why not first go see for yourself what BEST does with Reykjavik?
        http://berkeleyearth.lbl.gov/stations/155459
        Why are you asking about Reykjavik at all when the topic of this post is Kingston?
        ==================================================================
        Let us start at the bottom, Brandon. (BTW, hope you are feeling better.) Brandon, I am giving you a real answer, but I am not certain you are prepared to take an honest look at it.
        1. The title of the post is, “Can Temperature adjustments right a wrong?”
        2. The cogent corollary is, “Can T adjustments create a wrong?” Making my comment relevant and pertinent.
        (I recall a recent post where a WUWT author showed how a media outlet which formerly had only talked about all the projected harms of CO2 had posted an article about the known benefits of CO2. And you, Brandon, decided to insult the man because the IPCC refers to some of the benefits of CO2, completely missing the point of the story; in addition, you were shown that the IPCC still ignores a large body of scientific work demonstrating the benefits to the biosphere.)
        3. There are many temperature adjustments that greatly exceed the tiny TOBS adjustments, and therefore these questionable adjustments are of greater importance. That is why, Brandon. Now please try to follow my answers.
        Brandon, as to “And why not first go see for yourself what BEST does with Reykjavik?”:
        1. Assumptions are not often productive. I have.
        2. The BEST analysis is deeply flawed. They apparently did not know the Iceland folk at the Met Office had already adjusted for station moves, missing data, and any TOBS problems. (Personally I think those adjustments were overboard, but that is not the aim of this post.)
        3. BEST always has this “Difference from Regional Expectation” in their “best” objective, beyond the fact that local changes can be as high as ten degrees ( http://wattsupwiththat.com/2015/03/06/can-adjustments-right-a-wrong/#comment-1876716 ).
        4. Let us take a look at the other Iceland stations, and we find the “Difference from Regional Expectation” was at one time not true. They changed ALL of Iceland to make this true!
        https://notalotofpeopleknowthat.wordpress.com/2015/02/20/the-official-iceland-temperature-series
        https://stevengoddard.files.wordpress.com/2011/08/chart_116.png
        5. and 6. They had to make the late 1930s–1940 warmth disappear…
        https://stevengoddard.wordpress.com/2014/02/18/making-the-1940s-warming-spike-disappear-in-iceland/
        Phil Jones talks about removing the decline to the late 1970s…
        “No satisfactory explanation for this cooling exists, and the cooling itself is perplexing because it is contrary to the trend expected from increasing atmospheric CO2 concentration. Changing Solar Activity and/or changes in explosive volcanic activity has been suggested as causes… but we suspect it may be an internal fluctuation possibly resulting from a change in North Atlantic deep water production rate.”
        So, Jones said in 1985 that “the cooling itself is perplexing” – but why not say so today? And why don’t we see a “perplexing” cooling after 1940 in the IPCC graphic today? Furthermore, back in the early 1980s Jones appears to accept data as is, at least to such an extent that here he is considering how nature has produced these “perplexing” cooling data – like a real scientist should.
        I’m especially fond of that “internal fluctuations”…“from a change in North Atlantic deep water”. Gee, maybe the planet has cycles all on its own. Guess he’d not been “in church” long enough at that time and was still wondering about the status of the Apocrypha… (Of course now the ENSO cycles can explain the pause, but still they cannot explain the warming!)
        6. The GISS homogeneity adj, which is supposed to remove the UHI effect (and nothing else), only allows 0.2C for the change in UHI since 1940 in Reykjavik. Surely this is not enough?
        Another common GISS explanation is “Moving stations out of town to avoid UHI explains warming corrections”
        Again, this is really Nonsense. Despite relocation of some temperature stations, UHI still induces far too much warming in temperature data in general world wide:
        http://hidethedecline.eu/pages/posts/urban-heat-island—uhi—a-world-tour-159.php
        Thus: Any correction in connection with UHI should overall be towards colder temperature trends. If you make a warm correction due to stations moved out of town, you should make a much larger cold correction for the much larger UHI effect. The UHI is generally very much larger than the effect of relocating globally.
        So it looks like Jones and friends DID believe in UHI back when they needed to do a “wrong way correction” to explain some cooling of the past, but now find it an inconvenient thing about the present…
        7. What they cannot erase are the recorded statements of scientist at the time. https://stevengoddard.wordpress.com/2013/08/12/latest-giss-data-tampering-in-iceland/
        8. They have completely turned the entire GLOBAL data set into FUBAR. They became so lost in their computer algorithms and anomalies, they did not know they had actually made their “Hottest year evar 2014” more than four degrees cooler than what they themselves said it was a couple of decades earlier. https://stevengoddard.wordpress.com/2015/02/23/noaa-caught-cooling-the-past/

      • David A,

        1. The title of the post is, :Can Temperature adjustments right a wrong?”
        2. The cogent corollary is, “Can T adjustments create a wrong.” Making my comment relevant and pertinent.

        I get this part of it …
        [snip comments relating to previous discussion which are not relevant here]
        … its that kind of thing I’m talking about. There’s no need to go to Reykjavic OR to CO2 as plant food (which I don’t dispute, if you read me properly) to discuss the topic of this post.

        3. There are many temperature adjustments that greatly exceed the tiny TOBS adjustments, and therefore these questionable adjustments are of greater importance.

        Ah. Well, adjustments on balance is something I think is interesting and appropriate to discuss. That can be illuminated by looking at specific cases, but eventually what matters to me is the net — which is typically where I’d start. Then I’d ask, “how is this net change happening” and drill into some examples.
        The rest of your comments are basically you asking questions and answering them for yourself by inferring motive (“They had to make the late 1930s-1940 warmth disappear…”) or arguing by fiat (“They have completely turned the entire GLOBAL data set into FUBAR.”) and a few other things that, honestly, are not honest. Which makes your statement …
        I am giving you a real answer. but not certain if you are prepared to take an honest look at it.
        … ring rather hollow in my ears. My read is you’ve offered little or nothing in terms of how to make the process better, and your critiques are too laden with opinion and speculation to be useful, so I’m quits for this round.
        Oh, just saw this:
        http://s2.postimg.org/eclux0yl5/GISS_Global_Adjustments_Feb_14_2015.png
        I have no clue whether those are “valid” or not, certainly not just from looking at an image from the data. I don’t run around making silly claims of omniscience when I’m anything but.

      • Brandon, as my post was full of historical accounts, data and graphs, and links to information, and statements from the proponents of CAGW as well, all logically leading to the analysis, your rebuttal is meaningless.
        As an example, you refused to comment on the graphic I copied, and you failed to follow the link to the Bill Illis comment. A sincere person interested in what was being presented would have clicked the link to verify these continued changes to the past, all still happening with ZERO explanation; see here…
        http://wattsupwiththat.com/2015/03/06/can-adjustments-right-a-wrong/#comment-1877500
        (All of which you could have done with a simple click, and an honest look at what was being shown to you. Your actions appear to consistently demonstrate a lack of real interest in understanding)

      • David A,

        Brandon, as my post was full of historical accounts, data and graphs, and links to information, and statements from the proponents of CAGW as well …

        … along with editorializing appeals to motive, such as: So it looks like Jones and friends DID believe in UHI back when they needed to do a “wrong way correction” to explain some cooling of the past, but now find it an inconvenient thing about the present…
        We’re discussing an evidence-based science. Stick to arguments about theory and observations and you’ll have my rapt attention. Put me on the hook to read Jones’ mind like you purport to be able to and my answer will often be: ______________________________
        I can’t “prove” that your personal opinion is wrong, and I’m disinclined to try.

        Your actions appear to consistently demonstrate a lack of real interest in understanding.

        Well yes, that line full of weasel-wording is the desired payoff behind the Gish Gallop, isn’t it. Ok. Prove it. Reach into my mind and demonstrate beyond a shadow of any doubt that I lack any real interest in understanding. You can’t do it, any more than I can “prove” you’re Gish Galloping or weasel-wording.
        Stick to the evidence itself, not the motives of the people behind it. I find that the science is far more interesting to talk about.

      • Poor Gates quotes my commentary about a climate scientist without including my quote from the climate scientist.
        Phil Jones talks about removing the decline to the late 1970s…
        “No satisfactory explanation for this cooling exists, and the cooling itself is perplexing because it is contrary to the trend expected from increasing atmospheric CO2 concentration. Changing Solar Activity and/or changes in explosive volcanic activity has been suggested as causes… but we suspect it may be an internal fluctuation possibly resulting from a change in North Atlantic deep water production rate.”
        So, Jones said in 1985 that “the cooling itself is perplexing” – but why not say so today? And why don’t we see a “perplexing” cooling after 1940 in the IPCC graphic today? Furthermore, back in the early 1980s Jones appears to accept data as is, at least to such an extent that here he is considering how nature has produced these “perplexing” cooling data – like a real scientist should.
        I’m especially fond of that “internal fluctuations”…“from a change in North Atlantic deep water”. Gee, maybe the planet has cycles all on its own. Guess he’d not been “in church” long enough at that time and was still wondering about the status of the Apocrypha… (Of course now the ENSO cycles can explain the pause, but still they cannot explain the warming!)
        ——————————————————————
        Brandon did you forget…
        In 2009, the world’s top climate scientists were looking for ways to make the 1940’s warming spike disappear.
        From: Tom Wigley
        To: Phil Jones
        Subject: 1940s
        Date: Sun, 27 Sep 2009 23:25:38 -0600
        Cc: Ben Santer
        It would be good to remove at least part of the 1940s blip, but we are still left with “why the blip”?================
        At one point in the 1980s the 1940s showed a .6 degree rise above the 1970s.
        I know you are closed-minded because of your biases. You skip facts, graphics and data points regularly. You ignore the clear intent of the author and ignore inconvenient facts; thus you quote the criticism without the historical basis of it as shown above, and as demonstrated in your refusal to respond to many other facts relayed to you, like this…
        http://wattsupwiththat.com/2015/03/06/can-adjustments-right-a-wrong/#comment-1877500
        You are an excellent example of your own criticisms. There are reasons folk call you a troll.

      • A C Osborn,

        Mosher’s description of BEST says it all: “if you want to know what the actual temperature was use the Raw data, if you want to know what the models think it should be use BEST ‘Final’.”

        Thanks for the linky links. I can’t find that actual quote in the two links you provided. I’ve seen him write things like that before as part of a larger point. Context is important.
        http://judithcurry.com/2015/02/09/berkeley-earth-raw-versus-adjusted-temperature-data/#comment-673083
        You blew it there, because it is worse than a rookie mistake to persist in thinking that this …
        http://rankexploits.com/musings/wp-content/uploads/2014/06/Averaged-Absolutes.png
        … is the way to figure out global temperature trends. Even after Zeke told you it wasn’t a viable way to do it, you still wrote in the followup: Those upward Steps were produced by something, it was not the change of a few stations, it could not possibly have that much effect. So your work has lost that valuable data.
        Yes, “something” DID cause those upward spikes. Here’s a clue:
        https://wattsupwiththat.files.wordpress.com/2010/08/ghcn_by_latitude.png
        Before you can say things like “it could not possibly have that much effect” you might actually wish to consider the well-known fact that absolute temps are hotter toward the equator than at higher latitudes. Then further consider that taking a simple arithmetic mean of absolute temperatures from the raw data will more likely than not generate spurious results, since the distribution of participating stations over the entire course of the history isn’t uniform from one year to the next.
        http://cdiac.ornl.gov/epubs/ndp/ndp041/graphics/ndp041.temp.gif
        First year stats class suggests that at the very least you need to take a weighted average of some sort to get a meaningful result, otherwise the results are going to be biased toward the regions where the sampling density is highest. That’s a basic, extremely basic, no-no.
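The weighting point above can be made concrete with a toy calculation (all numbers invented for illustration, not real station counts): an unweighted mean of absolute temperatures shifts when cold-region stations drop out of the network, even though neither region warms at all.

```python
# Two idealized regions with constant mean temperatures (invented numbers).
tropical, polar = 26.0, -5.0  # deg C

def simple_mean(n_trop, n_polar):
    """Unweighted arithmetic mean over all reporting stations."""
    return (n_trop * tropical + n_polar * polar) / (n_trop + n_polar)

def area_weighted_mean():
    """Equal-area weighting of the two regions (assumed 50/50 split)."""
    return 0.5 * tropical + 0.5 * polar

year1 = simple_mean(100, 100)  # balanced network -> 10.5
year2 = simple_mean(100, 40)   # polar stations drop out -> about 17.1
print(year1, year2, area_weighted_mean())
```

The six-degree jump between the two years is purely a sampling artifact, while the area-weighted mean is unchanged at 10.5, which is the point of gridding or area weighting before averaging.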

      • David A,

        So, Jones said in 1985 that: “the cooling itself is perplexing” – but why not say so today?

        You’re asking me, a non-expert, to comment on a working scientist’s field of expertise. You may as well ask me about the finer points of neurosurgery. Best I can do here is note that the first quote is from 1970, you’re again quoting him 16 years later, and asking me to comment on why he’s not saying something different 21 years on from that.
        Shucks. I mean, have you considered the possibility that the man has learned a few new things about how the planet works over the course of 45 years? The point of science is the progression of knowledge, is it not? If Jones knew everything there was to know about climate in 1970, he could have retired then and written his memoirs, could he not?

        There are reasons folk call you a troll.

        I’ve noticed that people who dispatch crap arguments with alacrity around here earn that label with despairing frequency.

  28. The comparison of CRN NW to W is quite revealing. They are 1.4 km apart; one might expect some slight microclimate offset, as there appears to be. But the random discordance within that offset looks to be about 0.3–0.4 F on average. That implies the uncertainty even in CRN is of that order. It would have to be worse given less pristine station siting. Strong evidence that the NASA GISS assertion of 0.1 C uncertainty is understated by a factor of two. That is important.
    Note the following moving goalposts concerning ‘pause’ model falsification. NOAA 2009: 15 years. Santer 2011: 17 years. Newly, Roberts: 20 years. McKitrick’s 2014 paper found no statistically significant warming in RSS for 26 years, and in UAH for 19 years. But for HadCRU it was only 16. (He did not analyze GISS or NCDC.) So now RSS chief scientist Mears is arguing that the land data sets are better than his own RSS series (to avoid the falsification bind). But if one more than doubles the intrinsic measurement uncertainty in the land series, then the period of no statistically significant warming increases notably (I have not redone McKitrick’s calculation), and once again the pause falsification exceeds the most recent goalpost move, even by Mears’s revised criterion.

  29. The results are counter-intuitive. “Intuitive” is what I think the supporters of homogenization and adjustments think is going on. Someone important (?) claimed that the adjustments would be net-zero based on this thinking; a quick test showed him to be wrong.
    Judith Curry supported Zeke’s conclusion that the TOBS adjustments were reasonable. But I think she did the intuitive thing also. If this work is consistent for other similarly close sites (there has to be more than one pair left, and/or pairs previously considered but no longer used), the value of the adjustments has to be questioned.
    Looking at some of the graphs, I got the feeling that the net effect was to warm the records. Is that so?

    • Mean USHCN Kingston TOBs adjustment: -0.74
      Correct USCRN Kingston TOBs adjustment: -0.70
      Looks like a nice independent validation of TOBs adjustments at this location to me…

      • John,
        We’d expect month-to-month variations, as the factor driving bias (double counting highs and lows) is highly stochastic and weather-dependent and won’t be captured perfectly by prescriptive models. As I’ve mentioned elsewhere in these comments, I see the fact that the mean and stdev of the adjustments to the USHCN station are quite similar to the mean and stdev of the “ground truth” adjustments based on the USCRN station as evidence that they are doing a reasonable job of correcting for tobs-associated biases in the mean (I haven’t looked at min/max separately yet).

  30. On Zeke’s TOBS thread I proposed a sanity test to try to see if TOBS was worth applying.
    As it is now they take the minimum temperature series and the maximum temperature series, apply TOBS to each, then average the adjusted daily values to come up with an average temperature, from which they report an anomaly.
    Assuming that “global warming” would impact both the min and max temps in a similar way (unlike UHI which would have a strong bias in the minimum temps), why not go back to the historical temperature readings taken in the afternoon and use only the unpolluted minimum temperatures to calculate the anomaly? And then after the station switches to a morning reading time, use the unpolluted maximum temps to calculate the temperature anomaly? The two anomalies could be spliced together one time when the reading time was changed for that station to produce a full anomaly record without needing any TOBS adjustment.
    I think this is another alternative approach to try to get at this question of whether these TOBS adjusted data are actually reflecting anything realistic, or are just being thrown around haphazardly without taking efforts to scrutinize them.
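The splice proposed above can be sketched in a few lines of Python. The monthly values, baselines, and single-point join here are all invented for illustration; matching an overlap period would be one of several more robust ways to do the join.

```python
def anomalies(series, baseline):
    """Departures of a monthly series from its baseline value."""
    return [t - baseline for t in series]

# Invented monthly means for one hypothetical station.
tmin_afternoon = [2.0, 2.1, 2.3]   # afternoon-reading era: Tmin unpolluted
tmax_morning = [12.4, 12.6, 12.5]  # morning-reading era: Tmax unpolluted

a_min = anomalies(tmin_afternoon, baseline=2.0)
a_max = anomalies(tmax_morning, baseline=12.4)

# Join the two anomaly series at the changeover, offsetting the second
# segment so it continues from the last pre-switch anomaly.
offset = a_min[-1] - a_max[0]
spliced = a_min + [a + offset for a in a_max]
print([round(a, 2) for a in spliced])  # [0.0, 0.1, 0.3, 0.3, 0.5, 0.4]
```

Even this crude version needs no TOBS model at all: each segment uses only the daily extreme that the reading time cannot double-count.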

  31. IMHO:
    – Surface temperature readings should be used for the nightly news weather reports, and deciding whether to wear a coat, or a jacket, or none at all.
    – Calibrated MW satellite soundings of lower troposphere temperatures should be the only source of data used to determine climate shifts, changes, baselines.

  32. John Goetz says: “In examining the CRN data, there is no doubt that the time of observation affects the minimum, maximum, and average temperature recorded on a given day.”
    ==============
    I doubt this. If the Tmax and Tmin are automatically recorded on a medium and I then go out and write down those readings, whether I write them down at 1700 on Wednesday or 0700 on Thursday is not material.
    If Tmax is set after 1700, I will record it the next day, and the same with Tmin. TOB adjustments are nonsense.

      • The problem with hourly data is that they don’t give you a true min and max temperature. According to Karl, the difference can be as large as -0.53 C and 0.58 C.

      • Zeke, that plot shows that average temperatures are the worst ones to try using, since they are the most affected by small changes in observation time. The minimum and maximum temps each have an 8+ hour plateau where the variation due to TOBS is within the range of the instrumental error of most USHCN installations. Why take two flawed measurements, adjust them, and combine them into an even more flawed product than you started with?
        If anomalies are the goal, why not just use the unadjusted min or max depending on which plateau the data were collected in?

      • Whoever [labeled] that CRN average temperature graph doesn’t know the difference between “Mean” and “Median”

      • Zeke, please hint to me how to interpret Karl’s Table 3, mean difference of hourly minimum and maximum temperatures from the true 24-hour maximum and minimum temperature at Bismarck, ND.

    • You cannot change the definition of a day. You can select different 24-hr periods; once you go past midnight a new day has started. But that is not the point: it is recording something that has already happened. Whether I note it on Wednesday or Thursday does not change the Tmax or the Tmin.

      • I agree mkelly – but as I don’t actually know how they ‘define’ or extract Tmax or Tmin from automated data-logging stations, I’m certainly not sure why they have to have a TOBS adjustment, especially for any of the traditional old ways of obtaining Tmax and Tmin during a 24-hour period. In the old days a small metal rod recorded the Tmax and Tmin on a mercury thermometer. It was reset every day (we did it as part of a Met Office school weather program some 40+ years ago), in readiness for the next 24-hour period.
        I struggle to grasp how they can use modern instruments and compare to such old methods. For example, an old instrument would have taken a small period of time to reach the actual air temperature after any change, perhaps as much as 30 minutes I guess? – hence, peak and min temps would actually be naturally ‘smoothed’ by the thermal inertia of the thermometer. I have no idea what the thermal inertia of modern instruments is, but I would presume it is much less, seconds or minutes perhaps? Add to that the data logging every few seconds or so, and clearly modern measured peak temps may be slightly higher than an old instrument’s. I guess min temps would be slightly lower too? So presumably, modern instruments introduce a possibly slightly wider temp range across any given day compared to an old thermometer?
        Obviously, using (Tmax+Tmin)/2 therefore gives a slightly higher reading for the standard ‘average’ temperature for modern instruments over the old ones? I’m sure this partly justifies some of the various adjustments, but I have never seen it fully explained as far as I can recall, and I certainly don’t understand why most adjustments are upwards (or seem to be!).

      • Conceptually you are correct. If you had a contiguous, uninterrupted record of daily minimum and maximum temperatures, then the averages would not need to be adjusted as the length of that record approaches infinity. The problem occurs when boundaries are inserted into the record-keeping. Daily records have historically had monthly boundaries inserted, and these boundaries cause discontinuities in the calculation of averages that produce the time-of-observation phenomenon.
        Here is a link to a small spreadsheet that shows the problem. I extracted August, 2010 data from Kingston 1 NW CRN. I selected that month because no records are missing and none were missing from the prior month. Columns H, I, and J represent the Max, Min, and Average temperature over the last 24 hours at the time shown in column D. Columns M, N, and O represent the average monthly Max, Min, and Average that would be reported by the station to the NCDC at two different observation times: 8AM and 5PM. The difference is just over 0.2C. I’ve seen bigger differences elsewhere but hopefully this illustrates the point.
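For readers without the spreadsheet, here is a rough Python version of the same calculation, using synthetic hourly data (an assumed diurnal cycle plus a small warming trend, not the Kingston record): the monthly mean of trailing-24-hour maxima comes out different for an 8AM observer than for a 5PM observer.

```python
import math

def monthly_mean_max(hourly, obs_hour):
    """Mean of the trailing-24-hour maxima, sampled once per day at obs_hour."""
    maxes = []
    for i in range(obs_hour + 24, len(hourly) + 1, 24):
        maxes.append(max(hourly[i - 24:i]))
    return sum(maxes) / len(maxes)

# Ten days of synthetic hourly temps: a diurnal cycle peaking at noon,
# plus a small warming trend (both assumed for illustration).
hourly = [10 + 0.002 * h + 8 * math.sin((h % 24 - 6) * math.pi / 12)
          for h in range(240)]

print(monthly_mean_max(hourly, 8), monthly_mean_max(hourly, 17))
```

With this smooth synthetic series the two observers differ by about 0.05 C simply because their windows capture the trend one day apart; real hourly data add day-to-day weather variability on top of that, which is where differences of the 0.2 C size mentioned above come from.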

    • As I understand it, the TOBS problem is like this: if the temperature at Tobs is higher than next day’s Tmax, then it will be recorded as next day’s Tmax instead of the real Tmax. So on some days only, Tmax will be too high. On all other days it will be correct, but overall there is a bias. Changing Tobs can change the bias.

      • Not sure I follow you, as this doesn’t really make much sense unless the Tmax for two days is exactly the same and at exactly the same time (i.e. TOBS). Take a time of 9.00am, for example (and ignoring the fact that Tmax is highly unlikely to occur at 9.00am in most locations). I cannot foresee a likelihood of Tmax being exactly the same at 9.00am for two consecutive days (24-hr periods) over a short measuring period (say 30 minutes) continuing or spanning ‘across’ the 9.00am TOBS point, can you? Moreover, on the extremely rare occasion this might occur, it has no real effect on the averages and would technically be ‘correct’ anyway if temperatures over the next 24 hours do not exceed the ‘initial’ value? yes/no? In other words, wtf is there to adjust?

      • Kevin, 9am is not the main issue. The afternoon readings are. Let me add one modification to Mike’s explanation.
        “If the temperature at TODAY’S AFTERNOON READING, WHICH RESETS THE 24-HOUR CLOCK FOR TOMORROW, is higher than next day’s Tmax, then THAT READING JUST MINUTES AFTER THE RESET will be recorded as next day’s Tmax instead of the real Tmax. So on some days only, Tmax will be too high. On all other days it will be correct, but overall there is a bias. Changing Tobs can change the bias.”
        This correction method assumes much, however, and is nothing more than guesswork. It also ignores the fact that some readers may have simply corrected this because they noticed the problem. By the way, this was not the officially recommended method to take T readings. The correct method avoided all TOBS issues.
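The carry-over mechanism can be demonstrated with a toy simulation (assumed parameters, not a real station). This models the worst case, where the reset happens right at the daily peak, so the temperature standing on the thermometer at reset equals that day's maximum.

```python
import random

random.seed(42)
# 365 days of "true" daily maxima: mean 25, sd 4 (assumed values).
true_tmax = [25 + random.gauss(0, 4) for _ in range(365)]

def observed_mean_tmax(tmaxes):
    """Afternoon observer, worst case: the max thermometer is reset at the
    daily peak, so each recorded max is at least yesterday's maximum."""
    recorded = [tmaxes[0]]
    for today, yesterday in zip(tmaxes[1:], tmaxes):
        recorded.append(max(today, yesterday))  # carry-over after reset
    return sum(recorded) / len(recorded)

true_mean = sum(true_tmax) / len(true_tmax)
bias = observed_mean_tmax(true_tmax) - true_mean
print(bias)  # positive: the recorded mean Tmax runs warm
```

For independent Gaussian daily maxima this worst case biases the mean warm by about sigma over the square root of pi (roughly 2.3 C here); a real afternoon reset only occasionally catches the peak, so actual TOBS biases are far smaller, which is why they have to be estimated empirically rather than read off a formula.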

  33. If you look at the second, third, fifth and sixth photos you may notice common grasses, groomed in varied ways. Who adjusts the adjustment for the fact that mowing the lawn can change the temperature in your yard? And who is responsible for ensuring that the maintenance regime for the grass is identical from year to year? Answer: some guy with a G.E.D. and 6 years of experience tinkering with small engines.

    • I was unable to find the data in my initial search, but found some useful data sets on http://www.john-daly.com/tob/TOBSUM.HTM. After looking at a few I noticed they were all taken from stations located at airports. I went back to the Karl paper and noticed a number of airports as well. It was about that time I ran into Zeke’s post on Climate Etc. and kicked myself for not thinking of the CRN sites. I think you can use the CRN data and follow Karl’s methodology to get a set of useful results.

  34. John,
    As the SurfaceStations Project participant who documented the Kingston USHCN site, I want to make some additional observations about the location. Since 2001 there has been continuing encroachment of parking lots to the east of the USHCN and CRN-1NW instrumentation. In fact, asphalt now covers the whole area between W. Alumni Ave and the curve of Flagg Rd. (aka Plains Rd.) with the final construction (not shown in the aerial view) occurring two years ago. None of that existed fourteen years ago. The site now is more likely to be influenced by heat acquired by the parking area than it was when the CRN site was established and the USHCN site was relocated in the 1970s. The encroachment brings into question the quality of the data in the last few years and going forward. Comparison with the CRN-1W data may show an artificial heating effect. It looks like there may be enough to analyze, particularly at the hourly level. I had wanted to do this myself but time has not permitted. At the very least, a review of the siting should be conducted to consider if moving the CRN-1NW farther away from artificial heat sources is warranted.
    Gary Boden

    • Gary, wouldn’t the lake near the CRN-1NW affect its temperature when the wind is blowing from that direction just as much as the heat from the asphalt does when the wind is blowing from that direction?
      If so then you need to record the wind direction as well as the temperature, as the 2 of them could cause bigger swings in temp.

      • AC,
        The pond is a natural component that was there when the station was established. The parking lots are artificial and recent. Wind direction typically is from the SW and SE in the summer, and since the station is only ten miles from the Atlantic coastline, there often is a cooling afternoon sea breeze. Prevailing winds in the winter are from the NW and NE. The issue is what heat is coming from the parking lots in the evening? Does it artificially boost the minimum temperature? Can we tell if the hourly data are matched to the other CRN station a mile to the south?

    • Thanks for this comment Gary. It would be an interesting analysis. To me, it isn’t about urban vs rural stations when talking about UHI; the key is a station that is being encroached on vs one that isn’t. I’m betting many stations (a majority) are in areas where they are being encroached on.

    • Thanks Gary. That is an interesting observation. I checked the trends of the two stations (using anomalies) and it does appear that the NW station has an ever-so-slight upward trend relative to the W station, especially since 2007. But it is pretty small and may be insignificant at this point, given the short time period in question. We may have to wait a number of years to see if there really is a divergence. Leaving the NW station in place as the surrounding development closes in would provide an interesting case study in UHI.

  35. A case for making adjustments to the temperature record can be made, but only if the criteria and methodology are well defined and withstand completely open public scrutiny. I would also add the requirement that all the raw data and adjustments be made completely open to public review.

  36. So is it warming everywhere or just CONUS? Based on the lack of data in Africa, 1/5th of the land mass, I presume global temperatures are pretty much a WAG.
    And how is any of this connected to CO2 concentrations? Temperatures aren’t rising, ice sheets aren’t melting, sea levels aren’t rising. What, did CO2 take a break? Or did CO2 never really mean much? Are GHG radiative forcings trivial compared to other climate-changing forces? (Answer: Yes.)
    All of this is interesting, but let’s keep eyes on the prize, killing the beast: useless, wasteful, CO2/GHGs, carbon free, anti-fossil fuel, dumb ass (what else?) government, economy trashing, freedom killing, solutions to a non-existent problem.

  37. Zeke has commented on this piece at SkepticalScience. As scientists, it is probably best to read what he says before rushing to conclusions. Perhaps the author of this piece will respond to Zeke on SkS.

    • Hi Ian,
      I’m happy to respond to folks here. I see the fact that the mean results of TOBs adjustment using CRN data are nearly identical to the actual TOBs adjustments as a validation of the approach. John, in the original post, highlights the difference between the two. While there is some difference, particularly month to month, this is expected. The TOBs algorithm provides constant monthly adjustments; in reality, the actual propensity of double-counting high or low days for a given time of observation will have a large stochastic component that will never be captured by a prescriptive model. As long as the mean and variance of the adjustment are comparable to that seen in the ground truth data (as is the case here), I see that as evidence that the adjustment is correct.
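The validation criterion described here can be sketched as a simple comparison of the first two moments of the two adjustment series (the adjustment values and tolerances below are made up for illustration, not the actual Kingston numbers):

```python
from statistics import mean, stdev

# Hypothetical monthly TOBs adjustments (deg C): model vs. CRN-derived truth.
model_adj = [-0.71, -0.80, -0.65, -0.74, -0.77, -0.69]
truth_adj = [-0.68, -0.82, -0.61, -0.70, -0.75, -0.66]

def comparable(a, b, tol_mean=0.1, tol_sd=0.1):
    """Crude two-moment check: do the series agree in mean and spread?"""
    return (abs(mean(a) - mean(b)) < tol_mean
            and abs(stdev(a) - stdev(b)) < tol_sd)

print(comparable(model_adj, truth_adj))  # True for these invented numbers
```

Zeke's actual comparison is of course against the real USHCN and CRN series; the tolerances here are arbitrary stand-ins for "comparable".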

      • Zeke, a few questions. Do you know exactly what percentage of stations changed observation time from late afternoon to morning? Also, do you have morning data for those stations from back then — or do you base your calculations strictly on 21st century data?

      • Zeke, thanks for the graph. Do you know what the data show at stations that did NOT change TOB? Hundreds of stations take their measurements at midnight, for example. How much has temperature changed at those stations? The same goes for the other stations that did not change, whether morning or evening readers.
        If there is a bias, it should show up in the data from those actual stations (assuming other factors did not enter in). We wouldn’t need a model to estimate them, would we? Just arithmetic?

  38. Instead of making a TOBS adjustment to the data, why didn’t they simply go back to measuring the temperature at the same time of day as the previous measurements? Then no adjustment would be required. Instead, it seems they PREFER to be able to make an adjustment; it gives climatists more “wiggle-room” to get the data to agree with the models.

  39. It’s my understanding that TOBS “adjustments” are only done on U.S. raw data. If this is correct, then TOBS is essentially a non-factor in global temperature anomaly trend estimates. That’s because all of the world wide data outside the U.S. were measured and recorded without any errors whatsoever…

  40. I have checked Boulder for 2014 to learn what this TOB is about. It really does make a difference: for this station, the gap can be 0.6C between the yearly average of all 24 measurements each day and the Max-Min average taken at the worst time, which happens to be 2pm. Boulder might be exceptional because of its very large swings in temperature.
    Nevertheless, I wonder whether it makes any sense to prefer one type of measurement over the other.
    The problem comes when climate is mixed in, with the endless search for a trend and for the warmest year/month/week/day.
    Compared to the daily and yearly variations, this 1C change in an average is blown out of proportion. The changes you feel and observe would be weather, if you were not constantly told that it is dangerous climate change. I should feel in danger every night when the temperature drops 10C, as I should during the day when it rises the same 10C.
    That said, if they are going to construct trends and so on, it must be done on a sound and replicable official basis.
    Especially because it has political implications, and is not just an academic discussion.

    • I did a similar analysis of Boulder data here.
      “Never the less i wonder if it gives any meaning to prefer one type of measurement for the other.”
      No one is saying that any TOB is better. The point of the TOBS adjustment is to fix what happens when the time (TOB) is changed.

      • The point is, DOES it fix what happens when a TOBS adjustment is applied?
        The adjustment is itself an estimate of an unknowable, untestable quantity. Any adjustment applied to a measurement must ADD to the uncertainty of that measurement.
        I see no evidence that this added uncertainty is properly accounted for. It is NOT random error but entirely systematic, and therefore cannot be diluted through the use of large sample sizes.
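        That last point, that averaging beats down random error but leaves a systematic bias untouched, is easy to demonstrate numerically. Below is a minimal Python sketch with purely synthetic numbers (not station data); the 0.2-degree bias is an arbitrary illustrative value.

```python
import random

random.seed(42)

def mean(xs):
    return sum(xs) / len(xs)

true_value = 15.0
n = 100_000

# Random error: zero-mean noise, so averaging drives the error toward 0
# (roughly as 1/sqrt(n)).
noisy = [true_value + random.gauss(0, 0.5) for _ in range(n)]

# Systematic error: a constant +0.2 bias; averaging cannot remove it,
# no matter how large the sample.
biased = [true_value + 0.2 + random.gauss(0, 0.5) for _ in range(n)]

print(abs(mean(noisy) - true_value))   # tiny
print(abs(mean(biased) - true_value))  # stays near 0.2
```

        No sample size rescues the biased series; only knowing the bias itself would.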

  41. You know, I’ve never gotten the Time-of-day adjustment. It makes absolutely no sense why it would apply at all when using a min-max thermometer.
    The reason is that you have the minimum and the maximum in each 24-hour period. Due to the commutative property of addition, you end up with an annual average of Min1/2 + Max1/2 + Min2/2 + Max2/2, etc. The max is normally in the early afternoon to evening and the min is normally in the hour before dawn. The time of reading might switch which max goes on which day, and it will occasionally bleed a heat or cold wave one day longer or shorter, but it should not have a significant effect on the total measurement or trend.
    If it non-trivially affects the annual average, then that means you have peaks so irregular that the min-max thermometer is a completely inappropriate measuring tool.
    Which, actually, I think is the correct conclusion.

    • Ben, the TOB logic goes like this: a true daily min/max temperature must be measured midnight to midnight – that is the definition of a day. If you read a min-max thermometer at 1 am, you are probably recording a temperature for the previous day, not for today, so you get a one-day shift.
      Now we have opened a Pandora’s box of which definition of a day to use. A day of local solar time? A day of the local time zone? A day of Universal Time?

      • I don’t see how it matters. Let’s say you sample min/max at some specific time for 10 years, and then change it to a different time and do that for another 10 years. This would cause you to possibly assign a max or min to the wrong day on that particular day only – you’d have a TOBS error for 1 day out of 10 years’ worth of data.
        I really don’t understand this “look at my data for examples” that everyone talks about. Why?
        This appears to be a fairly simple and straightforward issue that is known to anyone who has done any ADC conversion. What am I missing here?

      • Udar, there’s more to it than that. You can double-count cold waves or heat waves if they occur across the measurement time (for example, a cold wave lasting four hours, from 6AM to 10AM, would set the minimum on one day if you measure at midnight, or on two days if you measure at 8AM), so they won’t be precise one-day shifts. However, that should be a random occurrence for both mins and maxes, averaging out over time to no significant difference.

      • Ben, I too feel that the difference should average out. But Karl determined (I am trying to find out how) that reading min-max temperature at 6 pm increases an April average temperature by more than 1 degree C.

    • Coming back and answering my own question: double-counting a heat wave does actually occur a lot when you measure in the late afternoon, near the peak of the day’s temperature. If you measure in the very early morning (like the hour before dawn), you have a good chance of double-counting the coolest part of the day. This bias disappears when you take the reading at a non-peak time such as mid-morning or late evening.
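      That double-counting effect can be checked with a toy simulation. The sketch below is entirely synthetic (a cosine diurnal cycle peaking mid-afternoon plus AR(1) day-to-day weather, not real station data): it simulates a min-max thermometer read and reset at different hours and compares the resulting means against the true mean of the series.

```python
import math
import random

random.seed(0)

DAYS = 3650  # ten synthetic years, so the bias stands out from the noise

def hourly_temps():
    # Cosine diurnal cycle peaking at hour 15, plus persistent
    # day-to-day "weather" modelled as an AR(1) process.
    temps, weather = [], 0.0
    for _ in range(DAYS):
        weather = 0.5 * weather + random.gauss(0, 2)
        for h in range(24):
            temps.append(15 + weather + 5 * math.cos(2 * math.pi * (h - 15) / 24))
    return temps

def min_max_mean(temps, reset_hour):
    # Mean of daily (min+max)/2 from a min-max thermometer that is
    # read and reset once a day at reset_hour.
    mins, maxs, cur = [], [], []
    for i, t in enumerate(temps):
        cur.append(t)
        if i % 24 == reset_hour:
            mins.append(min(cur))
            maxs.append(max(cur))
            cur = [t]  # markers reset to the temperature at reading time
    return sum((lo + hi) / 2 for lo, hi in zip(mins, maxs)) / len(mins)

temps = hourly_temps()
true_mean = sum(temps) / len(temps)
for hour in (7, 10, 17, 21):
    # Positive offsets mean that reading schedule biases the record warm.
    print(hour, round(min_max_mean(temps, hour) - true_mean, 2))
```

      With an afternoon reset, the carried-over reading near the daily peak can become the next day’s maximum as well, so the mean comes out warm relative to a morning reset; this is exactly the effect a change of observation time injects into a trend.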

      • If you believe that temps for official records were measured near the high of the day, day after day, then you have to assume the recorder was either an idiot or someone who just didn’t care to do it right.
        My father kept his own min-max temperature records for 30-some years for his own account, recording on K&E 1 mm red graph vellum in the evening when he came home from work. But if a cold snap came through in the evening, he might reset the max the next morning, because he didn’t like to record crappy data. He wanted to know the max of that day, and he knew yesterday’s 6pm max probably wouldn’t be topped today.
        The logs may say at what Time of Observation the temperatures were recorded. But do we know that was always when the min and max markers were set?

  42. Does this analysis shed light on why it is that these adjustments always cause an upward change in the trend line compared to the unadjusted figures?

      • Far too vague, Nick. Taking the reading too early in the morning could lead to a missed low versus a high. Take a reading in the afternoon, and if the following day’s high is lower than the temperature at the start of that day’s 24-hour period, then the recorded high was actually set the day before, right after the previous day’s reading was taken.
        However, both before and after a certain afternoon or morning time frame, this effect diminishes. So simply saying “afternoon and evening” is nothing short of gross guesswork.

  43. One simple solution to the TOBS problem is to do nothing: start a new series with the time-of-observation change, and keep the data from before the change and after the change separate.

    • Yes, I completely agree, but that wouldn’t allow the Moshers of the world to put their imprint on what they think the data should be.

  44. Mr. Layman here.
    It always struck me that trying to arrive at a “Global” temperature from all these instruments that were installed years ago to give local conditions is like trying to squeeze blood out of a turnip. It just ain’t there.
    With or without a computer, we can’t project a rise in future “Global” temps if we don’t really know what they were in the past, let alone now.
    As much as we may want one, no reliable baseline exists for the globe. Fiddling with the numbers only complicates things.

  45. Temperature Adjustments
    NOAA will calculate a monthly average for GHCN stations missing up to nine days’ worth of data (see the DMFLAG description in ftp://ftp.ncdc.noaa.gov/pub/da…..readme.txt). Depending on the month’s length, GHCN averages will be calculated despite missing up to a third of the data.
    Even calculating a monthly average while missing 10–11% of the data can produce a result of questionable accuracy.
    If the reasons for missing data include problems accessing the station, which are most likely to occur on the coldest days at higher latitudes, is there not a bias present that warms the data?
    Given that the colder stations drop out more often, does this not also warm the result of homogenization?
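    The worry about which days go missing can be illustrated with a quick sketch. The numbers below are synthetic daily means (not GHCN data): if outages preferentially remove the coldest days, the monthly average computed from the surviving days is biased warm.

```python
import random

random.seed(1)

def mean(xs):
    return sum(xs) / len(xs)

# A synthetic month of 31 daily mean temperatures at a cold,
# high-variability site (hypothetical numbers).
month = [random.gauss(-10.0, 8.0) for _ in range(31)]
full_mean = mean(month)

# GHCN-style tolerance: a monthly mean is still computed with up to
# 9 days missing.  If the missing days are arbitrary, little harm; if
# the outages hit the coldest days, the result is biased warm.
arbitrary_kept = month[9:]            # drop the first 9 days
coldest_dropped = sorted(month)[9:]   # drop the 9 coldest days

print(round(full_mean, 2))
print(round(mean(arbitrary_kept), 2))
print(round(mean(coldest_dropped), 2))  # warmer than the full mean
```

    Dropping the nine coldest values necessarily raises the mean; whether real outages correlate with cold is the empirical question the comment raises.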

  46. Ben of Houston March 6, 2015 at 1:10 pm: “You know, I’ve never gotten the Time-of-day adjustment.”
    Me neither, but it is correct. Zeke has an article at Climate Etc. which sort of helps.
    It’s a bit like the three-door puzzle: you pick the door you hope hides the car, and the host shows you a goat behind one of the other two doors. Do you switch to the remaining door? Yes.
    Why? It’s very hard to explain, and it still doesn’t feel right, though it assuredly is.
    Does TOBS work both ways, like a cooling bias if you checked in the a.m. and then moved to a p.m. system?

  47. A fantastic amount of time and effort is spent analysing surface temperature adjustments and, while it may well be an interesting topic in itself, it has no real influence on the argument for or against global warming. UAH is totally independent of GISS (and Hadcrut) but the warming trends are of similar magnitude. Since 1990 the trends are:
    UAH +0.165 deg per decade
    GISS +0.15 deg per decade

  48. Adjustment is an excuse to jigger the data in favor of one’s thesis. Any changes to time of observation are unfortunate but are best dealt with by increasing the overall uncertainty of the data. Isn’t there a science of bias? The existence of double blind experiments in the medical field is evidence of how easy it is to introduce bias even with the best of intentions.

    • Yes, where is the C.B. adjustment? (Confirmation Bias.)
      https://chiefio.wordpress.com/2010/12/13/the-rewritten-past/
      Phil Jones, 1985, on the temperature decline after the 1930s:
      “No satisfactory explanation for this cooling exists, and the cooling itself is perplexing because it is contrary to the trend expected from increasing atmospheric CO2 concentration. Changing Solar Activity and/or changes in explosive volcanic activity has been suggested as causes… but we suspect it may be an internal fluctuation possibly resulting from a change in North Atlantic deep water production rate.”
      ==============================
      “So, Jones said in 1985 that “the cooling itself is perplexing” – but why not say so today? And why don’t we see a “perplexing” cooling after 1940 in the IPCC graphic today? And furthermore, back in the early 1980s Jones appears to accept the data as-is, at least to such an extent that he considers how nature has produced these “perplexing” cooling data – like a real scientist should.” (EM Smith)

  49. Here are the changes made to GISS temperatures on just one day this February. Yellow is the new temperature assumption and strikeout is the previous number. Almost every single monthly temperature record from 1880 to 1950 was adjusted down by 0.01C.
    I mean, every freaking month in history suddenly got 0.01C colder. What the heck changed that made the records in 1880 0.01C colder? Did the old thermometer readers screw up that badly?
    http://s2.postimg.org/eclux0yl5/GISS_Global_Adjustments_Feb_14_2015.png
    GISS’s data comes from the NCDC, so the NCDC carried out the same adjustments. They have been doing this every month since about 1999. So 16 years times 12 months/year times -0.01C of adjustment each month equals -1.92C of fake adjustments.
    Lots of opportunity to create a fake warming signal. In fact, by now it is clear that 1880 was so cold that all of the crops failed and all of the animals froze and all of the human race starved to death or froze to death and we went extinct. 135 years ago today.

    • GISS and GHCN adjust for apparent non-climate events, and adjust backward, so present day readings are not affected. For example, when MMTS came in, past readings were raised or lowered to remove a discrepancy with MMTS.
      There is nothing unusual about this. I looked here at how stock exchanges present historic stock prices. They adjust for dividends, share splits and issues etc. They are not the closing prices on the day, except recently. And when there is a dividend etc, it is all previous prices that are lowered.

      • Bad analogy, Nick. If a stock splits the effect is known and well defined.
        If I own one share of Telstra valued at $100, and it splits, I now own two shares of Telstra worth $50 each — my total investment remains the same, and change in valuation applies to every shareholder.
        If I own a weather station instead, the change in the temperature data is arbitrary and not well defined. My weather station may or may not be affected by the UHI effect or TOB adjustments. Regardless of the actual true effect on my data, my results (data) are adjusted and ultimately hidden.

      • Reg,
        It’s still adjusting the past, constantly, and in the same way: back from the event. And for the same reason: something changes the measured number but reflects an effect different from the one you are interested in.
        But the effects aren’t necessarily well known. What if Telstra hives off part of its business and gives shareholders a stake in that? The worth of that has to be estimated.

      • What a crock Nick. What a great post Bill.
        Data is data. ‘Adjustments’ are not data, any more than averaging.

      • Given the stock market’s ability to get things wrong, or to be caught out, it is hardly an example of an approach that works without problems.
        Bottom line: you cannot “adjust” worth a damn if you do not have a very good idea of what needs “adjusting” and in what way. You can guess at it, but that is all you are really doing.

      • Nick Stokes explains…”For example, when MMTS came in, past readings were raised or lowered to remove a discrepancy with MMTS”
        ==============================================
        So what “came in” to cause this change, hmm? …and the one before that? …and before that?
        (Prove me wrong, but I do not expect a real answer)

      • Basically, all Nick said (with a convoluted stock market analogy) is that there is a reason for the .01 changes still happening to the past.
        Nick, we know there is a reason. It could be a great reason; it could be fraud. So explain it, and the others that keep happening, month after month.

    • Here are the changes made to GISS temperatures on just one day this February. Yellow is the new temperature assumption and strikeout is the previous number. Almost every single monthly temperature record from 1880 to 1950 was adjusted down by 0.01C.

      Here’s a way to check what Bill Illis is saying using TheWayBackMachine.
      http://data.giss.nasa.gov/gistemp/
      (The basic site.)
      http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt
      (One of the data tables.)
      http://web.archive.org/web/20120104220939/http://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt
      (The “same” data table as it was in 2012)
      (To use http://archive.org/web/web.php enter the address of the page you’re looking for into the main search box. If old versions have been archived you’ll see a calendar with years above it. The 2012 page above is the oldest GISS I could find.)

  50. May I propose an experiment to evaluate the skill of temperature correction?
    The idea is that we take pairs of “relatively close” unadjusted stations and treat them as a single station for the purposes of our analysis. (This will only work if the stations did not move/change over the time period in question.)
    We can then create an artificial change-event for our meta-station by splicing its record from one station over to the other. This is now a “known wrong”. If the adjustment software works correctly, it will do a good job of identifying the event and tracking the “correct” version of the meta-station (i.e., the station we switched from).
    This gives us the ability to make “known wrongs” at will and to compare the “corrected” records with the “known right” records. (We can run almost unlimited versions of this experiment by choosing different times and directions for our event.)
    I’d call this a HASH analysis (Holdback Analysis with Synthetic Errors). Who has the data to do this?
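    A minimal version of the proposed holdback experiment can be sketched as follows. Everything here is synthetic and the change-point finder is a deliberately simple step detector, not the actual pairwise-homogenization code: two artificial neighbouring stations are spliced at a known month, and the detector is asked to recover the splice point from the difference against a reference neighbour.

```python
import random

random.seed(7)

N = 240       # twenty years of monthly values
SPLICE = 150  # the known, artificially introduced change-point

# Two nearby synthetic stations share regional weather but sit at
# different baselines (a 1.5-degree offset).
regional = [random.gauss(0.0, 1.0) for _ in range(N)]
station_a = [10.0 + r + random.gauss(0.0, 0.2) for r in regional]
station_b = [11.5 + r + random.gauss(0.0, 0.2) for r in regional]

# The meta-station: station A's record up to the splice, B's afterward.
meta = station_a[:SPLICE] + station_b[SPLICE:]

# Toy detector: difference the meta-station against a reference
# neighbour, then find the split that maximizes the step in the means.
diff = [m - a for m, a in zip(meta, station_a)]

def best_split(xs):
    best_score, best_k = -1.0, None
    for k in range(12, len(xs) - 12):
        left = sum(xs[:k]) / k
        right = sum(xs[k:]) / (len(xs) - k)
        if abs(right - left) > best_score:
            best_score, best_k = abs(right - left), k
    return best_k

print(best_split(diff))  # lands at or very near SPLICE
```

    Because the splice point and the pre-splice record are both known exactly, the detected break and the “corrected” series can be scored against ground truth, which is the heart of the proposal.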

  51. The problem with the TOBS adjustment is more subtle than most people seem to think. It is only a problem if the usual observations take place around the normal high or low temperature of the day, which can cause a double counting of either the highs or the lows, or if the location has temperature shifts that overpower the daily cycle.
    It also presupposes that the record keepers were idiots who blindly recorded the position of the metal bars instead of the actual temperature. In other words, if the recorder normally records the temperature at the hottest part of the day and the mercury is a few degrees below the marker, the recorder has to make a judgement call as to whether that really is the high temperature of the day.
    Personally, I think it is more accurate to trust the judgement of the original record keeper, who knows whether a cold front came through and whether the day’s min/max temperatures accurately reflect the day’s actual temps or are a residual of yesterday’s.

  52. I drove about 8 miles from home and saw a 5F swing in temps; the airport 20 miles away was 8F warmer still. So there was a 12–13F range over 30 miles. Which is the average?
    Plus I wonder about the difference between the temp taken in the middle of a field and the temp in those woods off to the side.

  53. Zeke,
    Would you explain what is happening in the above data Bill Illis presents using “elimination of outliers and homogeneity adjustment” and how often those adjustments are made. Thanks.

  54. Might be me, but it seems many are missing the forest for the trees here. Forget about TOBS and corrections. RGB is dead on. The big aha here is the variance in the observations among three stations in such close proximity. Where is Zeke’s and Mosher’s curiosity? Why aren’t they asking themselves, “what implications does this have on our BEST models?”

  55. Nice article. And real problems. Algorithms adjusting temperature will never work. Real actual temperatures can vary greatly based on so many micro factors. I savor (and collect) “original data”.

  56. Averaging temperature on a global scale is meaningless. For general climate classification we use temperatures from the global station network, but not for calculating a trend; for that purpose, extrapolation and interpolation distort the real picture. For different purposes we use different networks – for example, for studying micro-climate variation, or urban-to-rural temperature change as urban sprawl grows over time. The met network over the globe has also changed with time, from manual observations to sophisticated automatic recording, etc. The average of once-a-day readings of maximum and minimum thermometers may differ from the automatic data.
    Under the traditional system the time of observation will not affect the maximum, minimum, and average temperature recorded on a given day, but the temperature is measured to one decimal point only [like 30.3 oC]. When taking an average, depending upon whether the first decimal is odd or even, rounding a second decimal of 0.05 will change the result by 0.1 oC (after 1956) or 0.1 oF (before 1957).
    Dr. S. Jeevananda Reddy
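    The odd-or-even rounding rule described above is round-half-to-even (“banker’s rounding”). The short sketch below uses Python’s decimal module to show how a trailing 5 rounds differently depending on whether the preceding digit is odd or even, compared with the familiar round-half-up convention; the specific values are illustrative only.

```python
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP

# Round-half-even: a trailing 5 rounds toward the even neighbour, so
# the direction depends on the preceding digit.  Round-half-up always
# rounds the 5 away from zero.
for raw in ("30.25", "30.35"):
    d = Decimal(raw)
    print(raw,
          d.quantize(Decimal("0.1"), rounding=ROUND_HALF_EVEN),
          d.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP))
```

    Under half-even, 30.25 becomes 30.2 while 30.35 becomes 30.4, so which way a borderline reading moves depends on the digit before the 5.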

    • The issue with automated stations is that, as long as they churn out data, the temptation is to regard the data as valid without even knowing about any changes in the area of the station. After all, not using the mark-one eyeball is the very reason for automating in the first place; but without it, how do you know someone has not just planted a load of trees or built a driveway right next to the station?

  57. In an ideal situation, and if we were actually doing good science, how many measurements of temperature would we need to have a worthwhile “average”, and how far are we actually away from that number?
    That is a question I never managed to get an answer to. You have to start from the basics, and the basics are: do you have the ability to measure, in the first place, the very thing you are basing your claims on?
    If not, your claims are never going to be better than “a guess”, no matter how intelligent.

  58. I wish Zeke had explained and defended the “elimination of outliers and homogeneity adjustment” above. Maybe he still will? Zeke has twice presented guest posts at Climate Etc. about the adjustments, saying, among other things, that the adjustments have not changed the global temperature record significantly. To me, that’s one more reason not to make them. I am on record there as saying that the recorded data should be inviolate and that footnotes should be used to explain differences in measurement methodology.
    In the heated climate-war atmosphere, we all know persons on both sides who would, from noble cause corruption, change the recorded data. Much more likely – and I fear this is happening – confirmation bias might well play a part in creating those adjusting algorithms.
    Thank you, John Goetz, for the post. My takeaway is the substantial difficulty of accurate temperature measurement outside laboratory conditions, and the false precision of the claimed measurements. When NOAA and NASA use this false precision to claim record warmth for the year 2014, we have propaganda, not science. Zeke – where are you?

  59. we are no longer left with data. We are left instead with a model of the original data.

    I have been flummoxed as to what to call what is commonly referred to as the temperature data record. “Model of the original Data” is the most precise description I have seen to date and brings further into focus the gaps between climate science claims and reality.
    The “Model of the original Data” description points out that the temperature data sets are not data; they are an abstraction, a best-guess model of the data. I am OK with best guesses, since often that is all we have as a starting point when exploring new areas of knowledge. In this case, however, the best guesses rest on the presumption that the past has to be cooler because CO2 has increased. That is putting the cart before the horse – the old “my conclusion is supported by my conclusion”.
    Bottom line: in terms of past temperature data, we do not have reliable data. It simply does not exist; we don’t even have a consistent theoretical definition of what a day’s temperature is. Regardless, fix the unreliable data, then create a new modeled data-set, and feed it into more models so we can pretend we have solved the mystery of climate.
    In a court, anyone presenting this kind of evidence would be skewered under cross-examination; any credibility of the data as representative of global temperature would plummet. But this is science and not a court of law, and science often requires experimentation, exploration, and speculation to get to the next experiment or hypothesis, and so on. The sin of climate science is that they misrepresent where they are in the process of understanding global temperature and the mechanics of climate. It is the simple, age-old character flaw of dishonesty.

    • @Alx,
      I think there is usable data in the historical temp record; they just shouldn’t be adjusting it.
      If you pay attention to the data, what it has to say is something completely different from what they show it to be.

  60. I think there is enough evidence above to indicate that “outliers” should not be adjusted using stations miles away just because an “adjuster” thinks the station is showing “the wrong results” (from his perspective). This is particularly so if the stations used as the basis are affected by UHI, such as airport proximity, thus warming the outlier. How many “outliers” are in rural locations and affected by adjustments upwards? How many of the “E” stations, i.e. those no longer extant, are getting readings from UHI-affected stations even though they may be assumed to be located in rural locations?
    As I understand the issues, the main problem is not only TOBS, but really the “double adjustments” such as those to Iceland temperatures, the adjustments to non-existing stations, and the lack of realistic UHI adjustments for stations where encroaching construction retains heat in the evening/night and affects the min readings.
    I simply think that for as long as the surface stations show a divergence from satellite measurements, there is a problem to be investigated by independent statisticians, as is now happening in Australia. This is a statistical issue of enormous proportions. Maybe the suggestion that error bands should be increased, rather than numerous adjustments made, is a good one. It makes it more difficult to “cry wolf” but is perhaps more realistic. I don’t like the idea that all the adjustments increase current and decrease past temperatures. When I look at the current US temperatures from the thirties, I don’t recognize what I read about the dustbowl in Steinbeck’s novels when I was young.

  61. Hi John Goetz
    Excellent analysis! I posted the note below in the comments at
    http://judithcurry.com/2015/02/22/understanding-time-of-observation-bias/
    “Are you going to respond to this WUWT post? If you do, I suggest posting both there and on Judy’s weblog.”
    Roger Sr.
    P.S. Nolan Doesken at the Colorado Climate Center did a comparison of two sites in Fort Collins when they were building a transit center there that was very near the site of the long term Fort Collins climate station. It is not as ideal as the location you have looked at but it would be another near station comparison that could be informative. I will ask if he still has that report.

  62. About 5 years ago I visited a marine lab where their thermometer(s) measured temperature continuously and recorded in real time electronically. As a result the 24 hour Tmax, Tmin and Tmean required no daily readings, no adjustment and no TOBS. Incidentally Tmean did not necessarily equal (Tmax + Tmin)/2.
    I understood that the only part of the apparatus which required updating, on a regular bi-monthly basis, was the paper for printouts, if and when these were needed in paper form. I wondered why these devices were not used throughout, and Steven Mosher explained that this was so that current records could be better compared with previous ones made when such modern devices had not been available. Which made sense then. But is there any reason, today, why we should not have had at least 5 years of unadjusted daily records?
    Nick Stokes has explained on his website the need for TOBS, and I found his argument persuasive. But now that we are in an electronic age and there is no need to physically inspect the thermometers, I have never really understood why the climate world could not ensure that all readings are taken as at, say, 12.00 p.m. GMT worldwide (i.e. measuring the same 24-hour period worldwide).
    I am sure that there must be a good reason and perhaps someone will enlighten this layman.

  63. Average is not (max+min)/2. Average is sum(observations)/(number of observations). So no need for any TOBs adjustments.
    If you want a daily average, use all data points for one day.
    If you want a monthly average, use all data points for one month.
    If you want a yearly average, use all data points for one year.
    Why is this difficult?
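    The gap between the two definitions of “average” is easy to see with one synthetic day of hourly readings (an asymmetric profile: a sinusoidal daytime warm bump over a flat night; the numbers are illustrative only).

```python
import math

# One synthetic day of hourly readings: an 8-degree daytime bump over
# a flat 10-degree night.
temps = [10 + 8 * max(0.0, math.sin(math.pi * (h - 6) / 12)) for h in range(24)]

true_mean = sum(temps) / len(temps)            # sum of observations / count
minmax_mean = (max(temps) + min(temps)) / 2    # the min-max convention

print(round(true_mean, 2), round(minmax_mean, 2))  # they differ by ~1.5
```

    Because the warm spike is short and the night is long, (max+min)/2 overstates the integrated average by well over a degree, which is why the two conventions cannot simply be mixed in one record.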

  64. If you consult the original temperature records in meteorological year books, you will see that a TOBS change is a rare event. Those who believe that TOBS changes are justified here, there, and everywhere have no idea how rare actual TOBS changes are.
    Further, when someone “corrects” for TOBS in recent years, they are at the same time claiming that the versions made public just a few years earlier were published while forgetting TOBS …!
    Kind Regards, Frank Lansner

  65. Wow. I’ve been saying it for about 14 years now:
    The science just isn’t there. Not good enough.
    As to those who do this sort of stuff, who are these bozos, and what box of Cracker Jacks did they get their science degrees out of?
    I mean, this is really BAD science.
    I used to say that when they change over a station for TOBS, or a change of instruments, or a change of location, they should run a year with BOTH old and new and adjust the new one until it matches the old.
    Now it appears that even THAT won’t give continuous data that is good enough to pee on.
    And THEN add in the proxy data that is presumed to be a good fit. (And don’t forget the Divergence Problem in tree rings there…)
    GIGO. GIGO. GIGO. GIGO.

    • Steve Garcia,

      As to those who do this sort of stuff, who are these bozos, and what box of Cracker Jacks did they get their science degrees out of?

      Apparently not the box you got yours out of: the one with a time machine as the prize, by which you can travel back and retroactively do the changeover procedure you describe. I’m sorry that the laws of physics don’t conform to the GIGO principle, but here in the real world we must use the data we’ve got. There’s no other option.

      • Well, the “it’s better than nothing” approach does not change the fact that the data is often poor at best, especially when great claims, which are supposed to go unquestioned, and great demands are made on the back of this “better than nothing” data.
        Let’s take an example: you are trying to measure something that changes by 0.1, but you can only measure accurately to 1.0. On the basis of “it’s better than nothing” we make claims about the nature of the changes, but thinking about it for a second shows this is nonsense.
        You cannot magic garbage data into good data by saying it is all we have got. While it is garbage in, then yes, GIGO very much applies.
        Climate “science’s” very big problems with knowing the reality of past climate should be reason for concern and caution, not for claims of “settled science”.

      • Gates:

        I’m sorry that the laws of physics don’t conform to the GIGO principle, but here in the real world we must use the data we’ve got. There’s no other option.

        BUT! The laws of computer programming mean that approximate and inaccurate computer models of real-world physical processes, processes that are only approximated in the simulations, DO follow exactly the Garbage In, Garbage Out model. Precisely and every time. GIGO. No “science” in there at all.

      • There is an option, Gates: do not use questionable data. But this option does not appeal to the climatologist/climateers.
        No statistically significant warming for 25 years. AGW. RIP.

      • mpainter,

        There is an option, Gates: do not use questionable data.

        Somewhere along the way, someone has to determine whether data are questionable or not. How do you propose one look at the raw surface data from a single station and tell whether it is reliable or not?

      • RACookPE1978,

        … inaccurate computer models of real world physical processes …

        You’re changing the subject for some reason. Very well. Which physical processes have been incorrectly modelled, and how do you know they’re incorrect if the observational data themselves are suspect?

        • A valid question.
          We cannot. Which is why “climate change” is NOT a Catastrophic-we-must-harm-billions-now-by-restricting-energy-immediately!!!!! problem. We have centuries to figure out the right data, and what is “weather” …

      • knr,

        Well, the ‘it’s better than nothing’ approach does not change the fact that the data is often poor at best, especially when great claims, which are supposed to go unquestioned, and great demands are made on the back of this ‘better than nothing’ data.

        The guy typing this message never said anything about not questioning claims. On that note, who is it that is saying you should not question conclusions? Specific citation if you please.
        And yes, I will reiterate: I am not aware of any “better” data from the past. Are you? I can’t change that reality for the both of us as much as I’d like to. I’m perfectly happy to spend more money on better observation going forward, because, like you, I don’t “settle” for better than nothing if, and only if, it can be helped.

      • m p a i n t e r,
        My replies to you are consistently being eaten by WordPress, hence the creative spelling of your handle as a test to see whether that makes a difference.
        [visible, not in pending queue. .mod]

      • Looks like that might essplain it. Anywho, similar question to you: how do you propose to determine what are questionable data and what are not? “Don’t use questionable data” doesn’t mean anything unless someone defines what “questionable” means. See the problem yet?

      • ‘I am not aware of any “better” data from the past.’
        So in what way does that create the magic process by which poor data becomes good?
        If you had no data at all, would it be OK to just make it up and apply the same magic to say it’s valid?
        It’s really simple: to validly measure something you have to match certain parameters when taking measurements; if you cannot match them then you cannot produce a valid measurement.
        What you’ve got is ‘a guess’, no matter how much computing power you throw at it, and if you’re ‘guessing’ you’re in no position to make grand statements, especially about accuracy, based on the guess.
        Now that is not unusual; indeed, ideas like error bars are designed to cope with such issues.
        But within ‘settled’ climate ‘science’ we’ve seen an active movement against this very idea to meet the political demands of the area. And so time and again we’ve seen great claims for precision in data that are simply not justified by the means of collecting the data.
        The idea that we can and should spend trillions and make large changes to society on the back of ‘better than nothing’ data should be regarded as a joke.

    • knr,

      So in what way does that create the magic process by which poor data becomes good?

      You tell me, that’s your argument not mine. How do you know the data are “poor” to begin with, hmmm?

      If you had no data at all, would it be OK to just make it up and apply the same magic to say it’s valid?

      No, I would consider that fraudulent.

      It’s really simple: to validly measure something you have to match certain parameters when taking measurements; if you cannot match them then you cannot produce a valid measurement.

      When doing carpentry or lab bench chemistry, yes, it’s a cinch compared to the scope of the system we’re discussing here.

      What you’ve got is ‘a guess’, no matter how much computing power you throw at it …

      I prefer the word estimate myself because it appropriately describes the rigor and thought that went into the process, but why quibble over semantics.

      … and if you’re ‘guessing’ you’re in no position to make grand statements, especially about accuracy, based on the guess.

      That’s a big if … we wouldn’t want to be making bad assumptions here now, would we?

      Now that is not unusual; indeed, ideas like error bars are designed to cope with such issues.

      Mmm, error bars don’t fix anything either. Nor are they just magically drawn on the plot. All they do is communicate an estimate — or a guess if you will — about the level of uncertainty in the data. And those estimates are still only as “good” as the human beings doing the analysis. Your speech here has solved nothing. The data are still “bad”. The human bias has not been removed — they can still “cheat” on the estimates of the error.

      But within ‘settled’ climate ‘science’ we’ve seen an active movement against this very idea to meet the political demands of the area.

      ROFL!!! I thought the whole idea was to take the political motives OUT of the science, not demand that researchers accede to political agendas. Yet more evidence that my own assumptions about things are quite often wrong, I ‘spose.

      And so time and again we’ve seen great claims for precision in data that are simply not justified by the means of collecting the data.

      How do you propose improving data which were collected in the past?

      The idea that we can and should spend trillions and make large changes to society on the back of ‘better than nothing’ data should be regarded as a joke.

      By and large, the world is a funny place that way. “Shoot first, ask questions later” is about as pure a survival instinct as I can think of. Why mess with success?
      Perhaps you need to do some reevaluation about how the real world works, because unreasonable fantasies about pristine data — or failing that, slapping magically derived error bars around the cruddy stuff — are the quickest way to learn nothing about anything. One tends to wonder if that’s the whole point.
      As for me, I’m all for spending the money to improve our observations, and estimates derived from them. Better instruments, more of them, more coverage … you name it. What say you? A few more billion on instrumentation and the resources to process and analyze it sound right to you?

  66. You pick as many regional stations as you can find with long station histories, and you run with that. That’s your temperature history.
    All this nonsense trying to reconstruct temperature using some sort of model of temperature history is a problem not in need of a solution. We’re talking about fractions of 1 C. Step back. It’s unimportant. These people should not have jobs, and if one can’t find useful things for them to do, they should be let go.

    • Ditto
      If the data is questionable, discard it. Don’t infill, don’t homogenize, don’t adjust for tobs or whatever. Fabrication is fabrication, no matter how plausible the rationale.
      Why is that principle universally ignored by the temperature data keepers?

      • @mpainter
        “If the data is questionable, discard it.”
        No you don’t; that’s just another form of bias.
        You can remove a bad value, which for the temperature data I’ve used means an absolute value of 199 or larger. I think most would agree those are not likely actual temperatures on Earth. Other than that, the rest of your comment holds.

  67. Anybody still reading this thread?
    If so, please help me out with this (sorry if it was already discussed above and I missed it):
    In a 24 hour period, midnight to midnight local time at a station, we get a Tmax of 75 degrees F and a Tmin of 55 degrees F.
    What is the average temperature for that station on that day?
    Answer: somewhere between 55 and 75 degrees F. We do not have enough information to know what the daily average might have been. If, for example, a cold or warm front came through in the first or last 4 hours of the day it may have changed the actual daily average by 10 degrees or more.
    I can see where Tmax/Tmin can help provide long term trends, but how can they be used to obtain a global average surface temperature?

    • JohnWho commented

      If so, please help me out with this (sorry if it was already discussed above and I missed it):
      In a 24 hour period, midnight to midnight local time at a station, we get a Tmax of 75 degrees F and a Tmin of 55 degrees F.
      What is the average temperature for that station on that day?
      Answer: somewhere between 55 and 75 degrees F. We do not have enough information to know what the daily average might have been. If, for example, a cold or warm front came through in the first or last 4 hours of the day it may have changed the actual daily average by 10 degrees or more.
      I can see where Tmax/Tmin can help provide long term trends, but how can they be used to obtain a global average surface temperature?

      I can tell you that in the Global Summary of the Day data set I get from the NCDC data server, Temp is listed as the mean temp for that day. I have run an average of min and max temp and tested for differences between that result and the mean temp field, and there weren’t any.
      Mean temp = (Min + Max) / 2, at least in the GSoD data.
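      A quick sketch of the point JohnWho raises: the (Min + Max) / 2 convention can differ noticeably from the true mean of hourly readings. The hourly series below is synthetic (not GSoD data), chosen to mimic a brief warm spike late in the day.

```python
# Compare the min/max "daily mean" convention against the true mean of
# hourly readings. The hourly data here is synthetic, for illustration only.

def minmax_mean(hourly):
    """The conventional daily mean: (Tmin + Tmax) / 2."""
    return (min(hourly) + max(hourly)) / 2.0

def true_mean(hourly):
    """The arithmetic mean of all hourly readings."""
    return sum(hourly) / len(hourly)

# A day that sits near 55 F most of the time but spikes to 75 F briefly,
# e.g. a warm front passing in the late afternoon.
hourly = [55.0] * 20 + [65.0, 75.0, 70.0, 60.0]

print(minmax_mean(hourly))           # 65.0
print(round(true_mean(hourly), 2))   # 57.08
```

      The two "means" differ by almost 8 F for this day, which is the commenter's point: Tmin/Tmax alone cannot recover the true daily average, only a convention for estimating it.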

      • Thanks for the response, Mi Cro.
        If that is true, then in my opinion, “that ain’t right”.
        Maybe the comment I often see here, regarding the “global average temperatures” is correct.
        Oh, the comment:
        It’s worse than we thought!.
        Just sayin’…

    • Let me add one more comment.
      I think using min and max temp to compare daily warming to nightly cooling, and to calculate a daily rate of change at each station, and then looking at how the daily rate of change changes through the year, is very useful. While the data isn’t very good, I think the rate-of-change info can provide a unique view of the station data, one which, BTW, doesn’t show any loss of cooling since at least 1950.
      I have lots of data at the url in my name in this post.
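      The warming/cooling comparison described above can be sketched roughly as follows. This is a guess at the method from the comment's description, with hypothetical numbers: daily warming as the same day's Tmax minus Tmin, nightly cooling as the next morning's Tmin minus the prior afternoon's Tmax.

```python
# Sketch of a per-station daily warming vs. nightly cooling calculation.
# Input: (tmin, tmax) pairs in date order. Data values are hypothetical.

def daily_rates(days):
    """Return (warming, cooling) lists for a sequence of (tmin, tmax) days."""
    # Daytime warming: same-day rise from min to max.
    warming = [tmax - tmin for tmin, tmax in days]
    # Nighttime cooling: fall from one day's max to the next day's min.
    cooling = [days[i + 1][0] - days[i][1] for i in range(len(days) - 1)]
    return warming, cooling

days = [(55.0, 75.0), (58.0, 72.0), (50.0, 68.0)]
warming, cooling = daily_rates(days)
print(warming)  # [20.0, 14.0, 18.0]
print(cooling)  # [-17.0, -22.0]
```

      Averaging these two series by year, as the comment suggests, would then show whether nights are cooling less over time.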

      • Perhaps, as long as the station isn’t moved and TOBS remains the same at the station, and it isn’t “adjusted” to match other, uh, “nearby” stations, and the station is properly sited, and the proper siting has remained the same over the years, and it isn’t now nor ever was in a UHI, and …, well, you get the idea.

        • JohnWho commented on

          Perhaps, as long as the station isn’t moved and TOBS remains the same at the station, and it isn’t “adjusted” to match other, uh, “nearby” stations, and the station is properly sited, and the proper siting has remained the same over the years, and it isn’t now nor ever was in a UHI, and …, well, you get the idea.

          Of course, it just seems such a waste of good information when you filter out the dynamic response to the surface in the time frame that should matter; DWIR happens at the speed of light.
          Here is the annual average of the day-to-day change in both max and min temps. A station has to have both min and max to be included, and when you look at smaller areas you see the swings in min temps come from specific areas and at different dates. (degrees F)
          http://www.science20.com/sites/all/modules/author_gallery/uploads/1871094542-global.png
          Here’s the slope (rate) of temp change over the year, by year, from spring to fall and fall to spring.
          http://www.science20.com/sites/all/modules/author_gallery/uploads/543663916-global.png

  68. The discussion is on TOBS. Is anyone aware of what happened to the Watts et al. (2012) paper? It was withdrawn because of TOBS issues. Almost three years have passed and the final version is not yet out…

  69. The essence of the so-called “TOBS bias” in daily extreme readings is that the temperature AT TIME OF RESET of max/min thermometers is occasionally mistaken for the diurnal (midnight to midnight) extreme. This occurs only when Treset is either greater than the following diurnal Tmax (afternoon reset) or less than the following Tmin (morning reset). The empirical TOBS “adjustment” based on determining the extremes of HOURLY readings fails to address this problem.
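  The carry-over mechanism described in comment 69 can be illustrated with a small sketch (synthetic temperatures, not station data): a max/min thermometer's register starts at the temperature at reset time, so a warm-afternoon reset can exceed everything the following, cooler day actually produces.

```python
# Sketch of the TOBS carry-over: after an afternoon reset, the instrument's
# max register holds the reset-time temperature. If the next observation day
# never gets that warm, the reset value is recorded as that day's Tmax.
# All temperatures are synthetic, for illustration only.

def recorded_max(reset_temp, hourly_until_next_reset):
    """Max the instrument reports: the warmer of the reset-time temperature
    and every reading between resets."""
    return max([reset_temp] + hourly_until_next_reset)

# Reset at 5 pm on a warm day (74 F at reset); the next day peaks at 65 F.
carryover = recorded_max(74.0, [60.0, 63.0, 65.0, 62.0])
print(carryover)  # 74.0 -- warmer than the day's true max of 65.0
```

  A morning reset produces the mirror-image error on Tmin; when the reset temperature falls between the next day's extremes, no carry-over occurs, which is why the comment says the bias arises only "occasionally".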

Comments are closed.