“Thorough, not thoroughly fabricated: The truth about global temperature data”… Well, not *thoroughly* fabricated.

Guest post by David Middleton

Featured image borrowed from here.

Ars Technica

If you can set aside the author’s smug, snide remarks, this article does a fairly good job of explaining why the surface station temperature data have to be adjusted and homogenized.

There is just one huge problem…

US Adjusted

“In the US, a couple systematic changes to weather stations caused a cooling bias—most notably the time of observation bias corrected in the blue line.
Zeke Hausfather/Berkeley Earth”… I added the natural variability box and annotation.  All of the anomalous warming since 1960 is the result of the adjustments.


Without the adjustments and homogenization, the post-1960 US temperatures would be indistinguishable from the early 20th century.

I’m not saying that I know the adjustments are wrong; however, any time an anomaly is entirely due to data adjustments, it raises a red flag with me.  In my line of work, oil & gas exploration, we often have to homogenize seismic surveys that were shot and processed with different parameters.  This was particularly true in the “good old days” before 3D became the norm.  The mistie corrections could often be substantial.  However, if someone came to me with a prospect and the height of the structural closure wasn’t substantially larger than the mistie corrections used to “close the loop,” I would pass on that prospect.
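For readers who like the analogy made concrete, the red-flag test above can be sketched in a few lines of Python. The function name, the 2:1 threshold and the sample numbers are purely illustrative assumptions on my part, not actual exploration practice:

```python
def signal_vs_correction(signal_c, correction_c, min_ratio=2.0):
    """True when the claimed signal comfortably exceeds the net correction
    used to produce it; False is the 'red flag' case."""
    return abs(signal_c) >= min_ratio * abs(correction_c)

# A 0.4 degree anomaly built on 0.35 degrees of net adjustment fails the test:
print(signal_vs_correction(0.4, 0.35))   # False
# A 1.0 degree anomaly with only 0.2 degrees of adjustment passes:
print(signal_vs_correction(1.0, 0.2))    # True
```

The choice of threshold is arbitrary; the point is only that the signal should dwarf the corrections.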


Just for grins, I plotted the UAH and RSS satellite time series on top of the Hausfather graph…


US Adjusted_wSat

US raw, TOBs-adjusted and homogenized temperatures plotted along with UAH and RSS global satellite temperatures.  Apples and oranges? Sort of… But still very revealing.


I think I can see why the so-called consensus has become so obsessed recently with destroying the credibility of the satellite data.


Addendum

In light of some of the comments, particularly those from Zeke Hausfather, I downloaded the UAH v5.6 “USA48” temperature anomaly series and plotted it on Zeke’s graph of US raw, TOBs-adjusted and fully homogenized temperatures.  I shifted the UAH series up by about 0.6 °C to account for the different reference periods (datum differences)…

USA48

I used a centered 61-month average as a 5-yr running average.  Since there appears to be a time shift, I also shifted the UAH series ahead a few months to match the peaks and troughs…

USA48x.png

The UAH USA48 data barely exceed the pre-1960 natural variability box and track close to the TOBs-adjusted temperatures, but remain well below the fully homogenized temperatures.
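For anyone who wants to replicate the smoothing and alignment steps, here is a minimal Python sketch. The synthetic input series, the helper names and the 3-month lag are illustrative assumptions on my part; the 61-month centered window and the ~0.6 °C datum shift are the values described above:

```python
def centered_running_mean(series, window=61):
    """Centered 61-month running mean (~5 yr); None where the window
    would run off either end of the series."""
    half = window // 2
    out = []
    for i in range(len(series)):
        if i < half or i + half >= len(series):
            out.append(None)
        else:
            out.append(sum(series[i - half:i + half + 1]) / window)
    return out

def shift_series(series, offset_c=0.6, lag_months=0):
    """Shift a series up by a fixed offset (different anomaly baselines)
    and ahead by a whole number of months."""
    shifted = [x + offset_c for x in series]
    if lag_months:
        shifted = [None] * lag_months + shifted[:-lag_months]
    return shifted

# Toy example: 72 months of synthetic anomalies
uah = [0.01 * m for m in range(72)]
smoothed = centered_running_mean(uah)
aligned = shift_series(uah, offset_c=0.6, lag_months=3)
```

On real data the offset would be chosen from the difference in the two series’ reference-period means, and the lag by eyeballing (or cross-correlating) the peaks and troughs.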


214 thoughts on ““Thorough, not thoroughly fabricated: The truth about global temperature data”… Well, not *thoroughly* fabricated.

    • I believe so… I thought about using Muttley as the featured image… But figured there might be copyright issues.

    • Anthony — my opinion is that, from figure 2, the satellite data are in line with nature. The surface temperature rarely accounts for the “climate system” and “general circulation pattern” impacts. The satellite data account for these. Because of this, the satellite data series are below the raw surface data.

      Dr. S. Jeevananda Reddy

      • The satellite will always be “below” the surface data because the satellites measure an average weighted to the surface but which contains data to the tropopause at ~217K. The surface thermometers measure a nano layer at about the height of a human nose. What is important is when the trend is different, and it is.

      • gymnosperm — it is o.k. when we are dealing with single station data, but it is not o.k. when we are drawing a global average. The ground-based data do not cover the globe, covering all climate systems and general circulation patterns. Those are the ground realities. This is not so with the satellite data: they cover all ground realities around the globe. If there is some mixing of ground and upper-layer contamination in the satellite data, this can easily be solved by calibrating the satellite data against good ground-based met station data that is not contaminated by urban effects. Calibration plays the pivotal role.

        Dr. S. Jeevananda Reddy

      • cont — In previous posts, under the discussion section, some argued that the atmospheric temperature anomalies are necessarily different from surface anomalies. Usually, atmospheric anomalies are less than the surface maximum in hot periods and higher than the surface anomalies in cooler periods. It is like night and day conditions. We need to average them, and thus both surface and satellite measurements should present the same averages.

        Dr. S. Jeevananda Reddy

      • @gymnosperm:
        Shouldn’t the warming effects of CO2 be most apparent as anomalies in the mid-troposphere – exactly where the satellites (and balloons) measure?

  1. “I think can see why the so-called consensus has become so obsessed recently with destroying the credibility of the satellite data.”

    And now that the El Nino is starting to ease, that desperation will become MANIC !

    And great fun to watch….. as the dodgy bros salesmen go to work !

  2. “We must get rid of the Medieval Warm Period!” “We must get rid of the pause!” “We must get rid of the Satellite data!” “We must discredit any one who would questions us!” “We must exaggerate the threat so that people will listen!” “We must get the people to do what we tell them to do!”

    Does any of that sound scientific in the slightest? No, of course not. In order for the AGW myth to continue, science itself must be redefined or discredited, and that is exactly what is happening.

    • Indeed. From the Climategate emails, one Phil Jones, a formerly respected scientist:

      “I can’t see either of these papers being in the next IPCC report. Kevin [Trenberth] and I will keep them out somehow — even if we have to redefine what the peer-review literature is!”

    • The IPCC redefines “science” in AR4 ( https://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch1s1-2.html ) with an argument of the form of proof by assertion: the conclusion of the argument is true regardless of contradictions. The conclusion is that it is OK to replace falsifiability with peer review. Among other contradictions to this conclusion is the motto of the Royal Society: nullius in verba (take no one’s word). To sustain its bias toward curbs on greenhouse gas emissions, the IPCC is forced to try to trash falsifiability, as the projections of its climate models are not falsifiable.

  3. The satellite data was right…until it was wrong…because they are falling out of orbit….even though they have been adjusted for that since day one…………

  4. What would you get for a global mean if you treated the temperature data like concentrations of mineral X, using the tools that you use? ± 1 °C?

    I don’t know how difficult it is to homogenise the data in oil and gas exploration, but there the data is simply the mass of mineral divided by the sample mass, for many samples taken in places chosen for that specific purpose. You then use this to estimate the mass of product in a large volume of the Earth’s crust. This is a lot different from using means of max and min temperatures, which are greatly affected by very localised conditions other than the change in the amount of thermal energy in the atmosphere in the region, so that data from even nearby stations rarely look alike (if you expand the axis to the change in GMTA since 1880). On top of that, would you base your decision on a result that is merely a few % of the range of values that you get in the one spot?

    The changes to the global temperature anomaly are the very suspicious ones. Just what was needed to reduce the problem of why there was a large warming trend in the early 20th C that wasn’t exceeded when emissions became significant. And this is the difference between data homogenised in 2001 and now. The problem is not (just) homogenisation but the potential to adjust the data to what you want to see.

  5. As far as I can see they will have to do a major adjustment to the ocean temperature record as well, to get the observations in line with the predictions: Have you ever wondered if any warming is really missing?

    In terms of temperature:
    For 0–2000 m ocean depth, a temperature increase of 0.045 K is reported for the period from 2005–2015:

    The temperature increase deduced from the theory put forward by the IPCC is:
    0.064 K for the lowest limit for radiative forcing (1.2 W/m²)
    0.13 K for the central estimate for radiative forcing (2.3 W/m²)
    0.19 K for the highest limit for radiative forcing (3.3 W/m²)

    Hence, the observed amount of warming of the oceans from 0–2000 m is also far below the lowest limit deduced from the IPCC’s estimate for anthropogenic forcing.
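    A back-of-envelope heat budget roughly reproduces the deduced values above: spread a top-of-atmosphere forcing (W/m²) over the 0–2000 m ocean layer for ten years. The seawater constants and surface areas below are round values I am assuming, not figures from the comment; this checks only the arithmetic, not the underlying claim.

```python
SECONDS_10YR = 10 * 365.25 * 24 * 3600   # 2005-2015, in seconds
A_EARTH = 5.1e14                          # m^2, whole globe (forcing is per m^2 of Earth)
A_OCEAN = 3.6e14                          # m^2, ocean surface
RHO_C = 1025 * 3990                       # seawater density x specific heat, J/(m^3 K)
DEPTH_M = 2000.0                          # depth of the layer being warmed

def ocean_warming_k(forcing_w_m2):
    """Temperature rise of the 0-2000 m layer if the full forcing,
    collected over the whole globe, ends up in that layer."""
    joules_per_m2_ocean = forcing_w_m2 * SECONDS_10YR * A_EARTH / A_OCEAN
    return joules_per_m2_ocean / (RHO_C * DEPTH_M)

for f in (1.2, 2.3, 3.3):
    print(f, round(ocean_warming_k(f), 3))
```

    With these assumptions the three forcings give roughly 0.066, 0.126 and 0.180 K, close to the 0.064 / 0.13 / 0.19 K quoted above.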

  6. Can you show the balloon data along with the sat data? It always makes for a better conversation with the AGW-religious brother-in-law.

  7. I think can see why the so-called consensus has become so obsessed recently with destroying the credibility of the satellite data….

    Are you missing an ” I ” ??…..I think ” I ” can see why the so-called consensus has become so obsessed recently with destroying the credibility of the satellite data.

  8. The divergence of the raw data from the processed curves in the first graph seems to coincide with the Great Thermometer die-off. Raw data are thinned out to provide more scope for adjustments. Anyone who thinks that these wildly interpolated and thrice-processed data have any useful degree of accuracy should just hand back their science diploma.

  9. “NOAA climate scientists”?
    Excuse my skepticism, but anyone who works for NOAA and also calls himself, or herself, a climate scientist has two strikes against them as far as credibility is concerned. As for adjusting past temperature data, that should be a no-go. Too many questions can be raised. Just graph the data as it was measured. When collection instrumentation and/or methods change, draw a line, plot the new data, and note what changed and why.

    • As someone who started out trying to interpret sparse 1960’s vintage 6-fold 2d seismic data in East Texas, I actually have a fair bit of respect for what these folks do.

      My problem is that their “prospect” could be nothing but mistie corrections.

      • I found a lot of oil on old six fold data….and learned even more about data corrections. Corrections need to be done, but the potential for abuse is ever present. It all pales in comparison to the abuse one can conjure up with a modelling program. What answer do you want?

  10. I would suggest that what needs adjustment the most is their willing suspension of critical thinking, but then I remember they’re getting paid to put this stuff out. Sad that to make the necessary adjustments to science to get it back on track, we’ll first have to make a major adjustment to the political climate.

  11. You know it is political when Carl Mears of RSS starts to disown his own work. No matter that the satellites agree with the radiosondes (weather balloons).
    The consensus will have a harder time disowning them. I imagine that attack will soon come along the lines of “mostly land”. We already know NOAA chose to fiddle SST rather than land to ‘bust’ the pause via Karl 2015. Much easier to fiddle sketchy ocean data pre-Argo. And Argo ain’t that great.

  12. “There is just one huge problem…
    Without the adjustments and homogenization, the post-1960 US temperatures would be indistinguishable from the early 20th century.”

    It’s not a huge problem. The US is not the world. It did indeed have a very warm period in the 1930’s, whereas the ROW had a much smaller variation.

    And of course, comparing US and global RSS etc is also pointless. Regional averages are usually much more variable than global.

    In the US, TOBS adjustment has a particular trend effect, for well documented reasons. But globally, adjustment has very little effect.

    • Perhaps Nick can help out.

      Can you find and put pictures up for the GHCN station in Addis Ababa.

      Thanks.

    • The US is not the world. It did indeed have a very warm period in the 1930’s..
      ===========

      Nick forgot to mention that his region in Australia also experienced a very warm period at the same time.

      Argus, November 29, 1937 – “Bushfires menace homes at the basin”
      January 13, 1939 – Black Friday bushfires in Victoria:
      * * * * * * * * *
      “In terms of the total area burnt, the Black Friday fires are the second largest, burning 2 million hectares, with the Black Thursday fires of 1851 having burnt an estimated 5 million hectares.”
      – wikipedia
      * * * * * * * * *
      Argus, February 14, 1939 “Incendiary Peril” & “Sweltering Heat: 106.8 Deg. In City”
      Argus, February 14, 1939 “Hose ban soon”

      Current weather + 7 day forecast for Melbourne in the middle of a super-Nino “global warming” summer 80 years later:
      http://www.bom.gov.au/vic/forecasts/melbourne.shtml

      • Yes, January has been not too hot, though with some lapse. Pretty warm last quarter of 2015. If you are interested, the history of hot days in Melbourne is here. You can look up the summers (incl 1939) in unadjusted detail. 114.1F on Jan 13. But that was just one hot summer. They have been getting more frequent. 115.5F in 2009, Black Saturday.

      • There was an interesting letter in today’s Australian commenting on the suggestion that rising temperatures might jeopardise the future of the Australian Open.
        ‘…is not supported by data. Melbourne maximum temperature records for January, readily available from the Bureau of Meteorology website, show no long-term trend, and the warmest January in the series was 1908. Unfortunately, in an act of scientific vandalism, in January 2015 the BoM closed the Melbourne observing site that had operated for more than 120 years.
        Future trends will be contentious.’
        The problem for Australians is to know what the data is and the reasons for homogenization.
        So what was the temperature in Melbourne after Jan 2015?
        How was it calculated?
        Are the only ones to know this those who hacked the BoM?

      • @Nick Stokes

        You should be aware of how poor the former La Trobe site was,
        http://climateaudit.org/2007/10/23/melbournes-historic-weather-station/
        while the 19thC temperatures came from a site in the Botanic Gardens and how the automated stations read higher. Sometimes there is over a degree difference between a short spike in temperatures and the half hour readings. What are the chances such spikes would be picked up as someone popped out to take a reading?

        You then cherry-pick this one station and look at extreme temperatures to highlight that global warming is real, even though there are only a few degrees difference between the 20 hottest days recorded (pretty obvious that, if taken under the same conditions in the 19thC, the readings could be >2 °C more), and then claim that 7 of the 20 being in the last 20 years is meaningful.

        Then you ignore that both the highest monthly mean for Jan and Feb at the site were over 100 years ago, with the Feb readings taken in the gardens.

    • Ah yes. In the small part of the world where there is extensive if not pristine temperature data, there is a cooling. But this cooling is overwhelmed by warming in regions of the earth where temperature data is scarce to non existent. Nevertheless, through the prestidigitation of adjustments and homogenization, these wizards of climate science can determine the earth’s temperature anomaly to the hundredth of a degree. I stand slack jawed in amazement.

      • 100 years ago, data was recorded to the nearest degree.
        Yet through fancy statistical manipulations, they claim to be able to know the actual temperature at the time to 0.01C. (And that’s without getting into data quality and site management issues.)

  13. March 13th 2016. World Wide Discharge a CO2 Extinguisher Day.

    I’m up for it – might even open a few 2L bottles of Lemonade simultaneously – and make some bread – and . . . .

    Cannot wait to see how unprecedentedly hot April will be as a result of all that ‘harmful’ gas.

      • Beer, Sodium Bicarb, Limescale Remover, you name it . . . . It’s gonna be fizzing on the 13th March.

    • Ha, Ha, I envisioned a similar stunt at the next World Climate March, fake a Liquid CO2 truck crash (use some benign substance) where multiple ‘leaks’ have to be plugged. It would be entertaining to see the crowd go into full crisis mode.

      • I can see the headlines . . . . “185 Million CO2 Fire Extinguishers were discharged simultaneously around the world yesterday in a silent protest by the sceptic community and, surprisingly, the expert’s prediction of Armageddon hasn’t happened after all.”

    • Consider completely replacing water vapor with CO2: temperatures do what? Now consider diluting water vapor with CO2: temperatures do what? Consider that solar heat is constant, thus there is a fixed number of photons that can heat the atmosphere; therefore, the higher the concentration of CO2, the higher the number of photons that “heat” CO2 rather than heating water vapor. It’s a duh moment. Cheers!

      • solar luminosity is approximately constant (there is low-level variability, but let’s ignore that for now).

        solar *magnetic activity* is far from constant. And solar magnetic activity affects the greatest greenhouse gas, ‘water vapor’ via mediation of cosmic ray flux. See the work by Nir Shaviv et al.

    • I own a couple of 20# CO2 extinguishers that have the original seals and still weigh out. Sequestered gas from 1945. Maybe that’s a good enough reason to ice down some lovely beverages with them at the solstice.

      [Keep the bottles for the next hot day: Jan 22, Feb 22, March 22, April 22 …. or June 22. Might be warm again by Sept 22, since we believe in equal opportunity hemispheres. .mod]

  14. One key point they didn’t discuss was that the high-quality class 1/2 stations have a smaller warming trend than the low-quality stations. They say all these corrections are eliminating the biases, yet the biases clearly remain.

    And then they set their biased data on a pedestal to undermine all other datasets. In the case of the Pause-buster data, it sticks out like a sore thumb against many other curated datasets put out by establishment groups, yet they insist it’s the new standard.

    For the two highest-quality datasets, USCRN and ARGO, they have now adjusted BOTH of them to match low-quality data. They set the initial conditions for USCRN at an anomaly of almost +1 degree, based on historic USHCN data. And they adjusted the ARGO data to match the ship intake data, rather than doing the opposite.

    It’s as if they don’t WANT high quality data as a solid reference point.

    • Low-resolution information is their ally. It allows them to infer that large areas surrounding a favorable warm reading match that warm reading. I think I see that same method at play at NCEP with their ENSO region data as compared to the other data sets for the ENSO regions. They are then able to legitimately state that their “picture” of the regions is correct according to their rules. To see what I am referring to, click on the current NCEP ssta graph, and then compare that to what Weather Zone or Tropical Tidbits show.

    • No, the USCRN stations (110 in the USA) match the other thousands of “bad” stations. Which means that the science of “rating” sites isn’t settled.

  15. “I think I can see why the so-called consensus has become so obsessed recently with destroying the credibility of the satellite data.”

    You should have plotted the RAW RSS data.

    Nobody is obsessed with destroying the credibility of the modelled data produced from satellites.
    People are interested in what the actual adjustments are and the uncertainties.

    But Look Dave..

    You like RSS. RSS has to perform a TOBS correction.
    Do you trust the physics of that TOBS correction?

    If yes… Then you just trusted a GCM.. because RSS corrects its data with the aid of a GCM

  16. I fail to see the point in debating the various temperature effects (cart) of climate change before the CO2 cause (horse) has been solidly demonstrated. Anthro CO2 is trivial, CO2’s RF is trivial, GCM’s don’t work. Trust the force, Luke!

    Prior to MLO the atmospheric CO2 concentrations, both paleo ice cores and inconsistent contemporary grab samples, were massive wags. Instrumental data at some of NOAA’s tall towers passed through 400 ppm years before MLO reached that level. IPCC AR5 TS.6.2 cites uncertainty in CO2 concentrations over land. Preliminary data from OCO-2 suggests that CO2 is not as well mixed as assumed. Per IPCC AR5 WG1 chapter 6 mankind’s share of the atmosphere’s natural CO2 is basically unknown, could be anywhere from 4% to 96%. (IPCC AR5 Ch 6, Figure 6.1, Table 6.1)

    The major global C reservoirs (not CO2 per se, C is a precursor proxy for CO2), i.e. oceans, atmosphere, vegetation & soil, contain over 45,000 Pg (Gt) of C. Over 90% of this C reserve is in the oceans. Between these reservoirs ebb and flow hundreds of Pg C per year, the great fluxes. For instance, vegetation absorbs C for photosynthesis producing plants and O2. When the plants die and decay they release C. A divinely maintained balance of perfection for thousands of years, now unbalanced by mankind’s evil use of fossil fuels.

    So just how much net C does mankind’s evil fossil fuel consumption (67%) & land use changes (33%) add to this perfectly balanced 45,000 Gt cauldron of churning, boiling, fluxing C? 4 GtC. That’s correct, 4. Not 4,000, not 400, 4! How are we supposed to take this seriously? (Anyway 4 is totally assumed/fabricated to make the numbers work.)

  17. Is that “raw” data the data from all surface stations including those that are not properly sited and those that are in recognized Urban Heat Islands BEFORE any adjustment(s)?

  18. This data “adjustment” stuff reminds me of contract claims disputes in the construction industry. You have raw data, which typically everyone can agree on. Then you have methods and assumptions for calculating the claim amount, which are disputed almost 100% of the time. I’ve seen cases with highly credentialed experts on both sides of a contract claim coming up with widely differing cost analysis, depending on whether they represent the owner, or the contractor.

    Especially considering the tiny temperature anomaly scales, it strikes me as extremely likely that the final adjusted graphs being produced by these environmental activists posing as scientists are showing wildly exaggerated warming.

    What’s really disturbing to me is that the public only sees the “warmist” version of the (adjusted) data. And that data is presented to the laymen as concrete evidence, as if the graphs themselves represented undisputable raw data.

    • It’s more like geotechnical reports that don’t “tell” you what’s going on below ground between borings. You can (must) make assumptions, but once you break ground, you have a geoengineer on site gathering additional data.

      • Jmarshs: Yes, many times I have changed designs and construction procedures on projects I worked on because of that “TWEEN” stuff. Water, rock, contaminants, grave sites, to name a few. Murphy’s Law.

      • Key difference between climate history reconstruction and geotechnical reports: With geo-reports, you can, as you correctly stated, have a geoengineer go on-site to gather additional data and therefore improve your knowledge of what is below ground. But with climate history that is not possible because there are no time-machines around to go back and gather the missing data. Aside from using “proxy data”, they are forever stuck with the limited information that was gathered at the time. That is why I will instantly not trust anyone claiming they have figured out an accurate global temperature trend from thermometers over the past 150 years to a high degree of certainty.

      • The key to geotech investigations (my field) is the consistency of the data from a grid that is established according to the economic and social status of the building project. Say one were building a hospital compared to a chicken house. In the case of the hospital, should there be great variation over the standard grid, one must go back in and bore more holes, and keep drilling more holes until the entire subsurface is understood and measured for strength/stability etc., with a factor of safety (e.g. 3+) far exceeding the demands of the building. Should there be a soft spot that does not meet the minimum standard, one must map this with a degree of accuracy at the metre scale. Not too many hospitals built on land tested to western standards fail. Here is the proof.

        How does this compare with temperature measurement on a global scale? Climate scientists could learn a lot from engineers.

  19. I have a question for anyone with knowledge: These “TOBS” adjustments. Are they done on a case by case basis? Or as a global change? In other words did someone actually comb through each and every record looking for time of observation changes? or did they just sort of “wing it” with a single adjustment to all the data at once?

    • In the US, it was supposedly combed through. TOBS is trivial compared to US UHI and microsite issues. Surface stations project. Previous guest post on same.

    • “I have a question for anyone with knowledge: These “TOBS” adjustments. Are they done on a case by case basis? Or as a global change? In other words did someone actually comb through each and every record looking for time of observation changes? or did they just sort of “wing it” with a single adjustment to all the data at once?”

      Which TOBS adjustment are you talking about?

      A) The TOBS adjustment for the US
      B) The TOBS adjustment for satellites.

      They BOTH do TOBS adjustments.

      For the US.

      There are three separate approaches, all validating each other.

      1. The case-by-case approach. This has been validated EVEN BY CLIMATE AUDIT SKEPTICS
      and by John Daly. Every station is treated separately.
      2. NOAA’s statistical approach. Matches the case-by-case; every station is treated separately.
      3. Berkeley Earth’s statistical approach. Matches the case-by-case; every station is treated separately.

      For SATELLITES.

      The TOBS adjustment is performed by using a single GCM model
      Different GCM models give different answers.

      • I have to question the method here. How is it possible to perform a case by case analysis on every individual site when wind speed, wind direction, moisture conditions, local activity, time of day shading and wind blockage are all changing constantly in so many ways? Seems to me these stations should either be classified as standard compliant or unreliable/useless. This would leave us with fewer readings but at least they would be believable.

      • Simple John. Go read the validation papers. Years of hourly data. Out of sample testing. It works. Your questions are not important.

      • Steven:

        Links to one or more validation papers would be welcome. I’m skeptical, as validation is impossible in the absence of identification of the sampling units, but my current understanding is that no sampling units underlie the climate models.

        Terry Oldberg

    • Thanks ristvan, Nick and Steven for answering my question.

      Steven, I was referring to the TOBS adjustment for thermometers. BTW, just to point out: it’s pretty obvious what you are trying to do by associating the satellite data with “GCM models”. An adjustment based on a calculation is just that. The accuracy of an adjustment depends on how well they can verify the accuracy of the calculation against real data. Whether you call it a “model” or something else means nothing. Nice try though.

      • Read their paper. Different gcms have different diurnal cycles. They don’t validate against the real world.

  20. This is not science; it is political speech. Re-read Mr Scott K Johnson’s statements. They are attacks on Congressman L. Smith.
    Note the insults to him and the misrepresentation of the facts that prompted his interest. Note the term “stump speech”.
    He received whistleblowers’ statements that something was wrong. He is required by law to investigate.
    Since Mr Johnson chose to make it a campaign issue, the forums distributing his “political ads” must grant equal time for dissent.

    michael.

    • It’s like the days of the old “Fairness Doctrine”.
      It’s only political when it disagrees with the current govt’s position.
      Anyone agreeing with the govt (or the Democrats, for that matter) is, by definition, not being political.

  21. People who work in the applied sciences often have to make assumptions and/or interpolate data when no better data is available. However, they do so on the condition that they will receive feedback in the future which will allow them to make adjustments to account for incorrect assumptions. If we have to know everything there is to know before we start a project, then no buildings will get built, no patients will get cured and no oil will be found.

    But historical temperatures are just that – historical. And historical data, with known errors, should be scrapped, not tweaked. There is no way to go back in time to see if you are “right”. They cannot form the basis of geoengineering i.e. trying to control the “temperature” of the planet.

    This brings to light the two factors that make geoengineering the climate impossible. 1) Insufficient power to affect the system and 2) Lack of timely feedbacks to make adjustments.

    Regardless of whether CAGW is true or not, the most we can do is what humans have always done – adapt.

  22. Your figure with RSS and UAH data looks quite odd; are those their U.S. lower-48-only numbers?

    UAH doesn’t seem to have U.S.-only processed data available for their new beta version; for the current operational version (5.6) they do, and they are available here: http://vortex.nsstc.uah.edu/data/msu/t2lt/uahncdc_lt_5.6.txt

    If I plot UAH 5-year averages on my graph (which gives me 1983–present, since prior to 1983 there isn’t 5 years of UAH to average, as it begins in 1979) for all the series, aligning all series to the 1983 5-year average, I get this. UAH (version 5.6, at least) seems to agree much more with the adjusted data than with the raw data, and looks nothing like your graph.

    If you know where I might find U.S. data for UAH v6, I’d be happy to plot that as well.

    Also, you neglected to show the figure featuring global adjustments. Turns out you actually get less warming in the adjusted data, not more. Funny that.

    • Ahh, looks like you are actually comparing global land/ocean UAH and RSS data to land-only U.S. lower 48 temperatures. That would explain it.

      Oddly enough, the U.S. is not the entire globe (we are a measly 2% of it), so comparing a global land/ocean record to a U.S. land record is not very useful or revealing. Since UAH actually produces a U.S. 48 land record (which I linked above), I’d suggest using it. It seems to agree significantly better with the adjusted data than with the raw data.

      • Zeke,
        I stated that in the post…

        “Just for grins, I plotted the UAH and RSS satellite time series on top of the Hausfather graph… Apples & oranges? Sort of.”

        Although, it is funny that the satellite data for the globe track along the raw US instrumental data.

    • “UAH doesn’t seem to have U.S.-only processed data available for their new beta version; ”

      Yes they do.

      • You are right; I snooped around the FTP site a bit more and found it.

        Turns out that the new version of UAH (6.0) is closer to the raw data, while the older version (5.6) is quite similar to the adjusted data.

        So you could use the recently-adjusted UAH data to argue against the adjustments in U.S. temperature data. However, there is more than a little irony in that given the size of the adjustments that were made this year to UAH data. As Carl Mears has shown, you can get a wide range of trends for satellite data depending on the parameters you choose for orbital decay and diurnal cycle adjustments, much wider than the range of uncertainty in the surface record:

      • Also, being “an exact match for USCRN trends” doesn’t tell you much. USCRN trends are actually slightly higher than USHCN trends (though not significantly so in the mean) during the period of overlap:

      • Seriously, Zeke… two totally different measuring regimes, and you think that the almost exact match is just a coincidence? ROFLMAO.

        I thought you were a mathematician!

        I’ve been waiting for someone to start playing that game, it was so obvious from the start.

      • Carl Mears’ graph of “uncertainty” is astonishing. Does this teeny tiny little sliver of uncertainty in his graph include uncertainty from UHI, poor station siting, and in-filling of the data for the massive parts of the globe with no record? They must have God-like powers of divination to be that certain of the global average temperature to such accuracy.

      • Just some guy,

        Carl Mears’ graph of “uncertainty” is astonishing. Does this teeny tiny little sliver of uncertainty in his graph include uncertainty from UHI, poor station siting, and in-filling of the data for the massive parts of the globe with no record?

        It’s worth noting that Carl Mears’ main contribution to that plot was for RSS, not the surface data. The calculations for that plot are Kevin Cowtan’s, and no, the HADCRUT4 portion of that plot does NOT include all the estimated uncertainty. When the MET informed him of his error, he factored the additional uncertainties into the calcs:

        http://skepticalscience.com/surface_temperature_or_satellite_brightness.html#115558

        The original spread in the trends was about 0.007 °C/decade (1σ). Combining these gives a total spread of (0.007² + 0.002² + 0.002²)^(1/2), or about 0.0075 °C/decade. That’s about a 7% increase in the ensemble spread due to the inclusion of changing coverage and uncorrelated/partially correlated uncertainties. That’s insufficient to change the conclusions.
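
        For anyone checking the arithmetic, the quadrature sum above works out like this:

```python
import math

# Independent 1-sigma components from the comment above (C/decade).
ensemble_spread = 0.007   # original spread in the trends
coverage = 0.002          # changing-coverage term
partial_corr = 0.002      # uncorrelated/partially correlated term

# Independent uncertainties combine in quadrature (root-sum-of-squares).
total = math.sqrt(ensemble_spread**2 + coverage**2 + partial_corr**2)
increase = total / ensemble_spread - 1

print(f"total spread: {total:.4f} C/decade")   # 0.0075
print(f"increase: {increase:.1%}")             # just under 8%
```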

        They must have God-like powers of divination to be that certain of the global average temperature to such accuracy.

        Precision, you mean. This is what the MET have to say about uncertainty in global average surface temperature anomalies:

        http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/time_series/HadCRUT.4.4.0.0.monthly_ns_avg.txt
        http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/time_series/HadCRUT.4.4.0.0.annual_ns_avg.txt

        From 1979-2012, the mean monthly uncertainty is ±0.151 K and the mean annual uncertainty is ±0.087 K.

        A 2-sigma uncertainty in linear trend of ±0.015C/decade calculated over the same interval is a different animal entirely.
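
        To see why they are different animals: under the simplifying (and optimistic) assumption of independent annual errors, the standard error of a fitted linear trend over 34 points is far smaller than the uncertainty of any single annual value. A back-of-envelope sketch, treating the quoted ±0.087 K as a roughly 2-sigma range:

```python
import math

n_years = 34                      # 1979-2012 inclusive
annual_range = 0.087              # mean annual uncertainty (K) from the MET files
sigma = annual_range / 2          # treat the quoted range as roughly 2-sigma

# OLS slope standard error with unit-spaced years and independent errors:
#   se = sigma / sqrt(Sxx), where Sxx = n * (n^2 - 1) / 12 for x = 1..n
sxx = n_years * (n_years**2 - 1) / 12
se_per_year = sigma / math.sqrt(sxx)
two_sigma_per_decade = 2 * 10 * se_per_year

print(f"2-sigma trend uncertainty: +/-{two_sigma_per_decade:.3f} K/decade")
```

        That lands near the ±0.015 C/decade trend figure quoted above even though each annual value is uncertain to ±0.087 K. Correlated annual errors would widen the trend uncertainty somewhat, but the order-of-magnitude gap between point uncertainty and trend uncertainty is the point.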

      • Brandon Gates said “Precision, you mean….”

        Well no, Brandon, I meant accuracy. Perhaps I misinterpreted Zeke’s comment about “uncertainty range”. I assumed his little graphic by Mears was referring to the uncertainty range with respect to accuracy and not precision, because accuracy is what really matters here. If that graph posted is about “precision” and not “accuracy”, then it’s kind of irrelevant, in my opinion.

      • Just Some Guy,

        If that graph posted is about “precision” and not “accuracy”, then it’s kind of irrelevant, in my opinion.

        On review, I may have implicitly overstated my case. When dealing with temperature anomalies, we do care about accuracy when it is confirmed or suspected that the mean absolute error of an instrument has changed abruptly or is changing over time, both of which tend to introduce bias in trends. Otherwise, what we care about is precision — by how much we expect a given reading to deviate from its mean absolute error.

        That all said, the graph Zeke posted is part of an article wherein Kevin Cowtan is making an explicit argument about trend precision, which I think is entirely relevant when the topic is comparing the reliability of (A)MSU-derived trend estimates vs. thermometer-derived trend estimates.

      • “We do care about accuracy when it is confirmed or suspected that the mean absolute error of an instrument has changed abruptly or is changing over time, both of which tend to introduce bias in trends. ”

        AKA: Fiddling with the data. Trying to outsmart the data which you “suspect” is showing the wrong trend is a recipe for wrong results and user bias.

      • Just Some Guy,

        AKA: Fiddling with the data. Trying to outsmart the data which you “suspect” is showing the wrong trend is a recipe for wrong results and user bias.

        Quite possible, which is why I think it is good scientific practice to detail such adjustments in peer-reviewed literature, retain the unadjusted data so that it can be compared at its most granular level to the adjusted data, and to publish the computer codes which perform the adjustments.

        Conversely, naively assuming that the raw data contain little to no error might be a recipe for wrong results due to data bias. Again, I think it is good scientific practice to always suspect that such errors exist, and either attempt to rule them out, or upon finding them estimate the error they introduce and correct for that error.

        Now, both RSS and UAH have applied adjustments over the years because they suspected that things like orbital decay and diurnal drift were biasing the results of their trend estimates. Since that is a “recipe for wrong results and user bias” according to you, do you therefore reject their temperature anomaly products?

      • Brandon said, “Now, both RSS and UAH have applied adjustments over the years because they suspected that things like orbital decay and diurnal drift were biasing the results of their trend estimates. Since that is a “recipe for wrong results and user bias” according to you, do you therefore reject their temperature anomaly products?”

        No I do not and I will explain why.

        Orbital decay and diurnal drift are not “suspected” problems and do not require any human judgement in the corrections. They are issues which are known for a fact to exist and can be corrected with mathematical formulas. The accuracy of those formulas can be verified by calibrating the data with known measurements made by the weather balloons. This is far different from the case with the incomplete and flawed ground-based thermometers. (note I am not talking about TOBS adjustment here, I’m talking about the so-called “homogenization” and problems like UHI and station-siting which are being incorrectly assumed by yourself as a non-problem.) You yourself used the phrase “suspected that the…. error (of a particular instrument) is changing over time”. You have no way to verify the accuracy of such suspicions and so must rely on human judgement and computer models. Any time human judgement gets involved with a complex analysis of data, there will inevitably be human bias in the final product.

        And btw, yes I’ve read the studies which try to dismiss the significance of UHI in the ground-based temp record. These studies miss the concept of UHI entirely. They attempt to categorize stations as either “urban” or “rural”, as if UHI were a sort of “disease” which infects some thermometers but does not infect others. UHI effects do not work that way.

      • Just Some Guy,

        Orbital decay and diurnal drift are not “suspected” problems and do not require any human judgement in the corrections.

        They weren’t always known, were not initially corrected for, and would not have been identified if they had not first been suspected issues. Since humans are applying the corrections, I cannot for the life of me understand why you’d think no human judgement is involved.

        They are issues which are known for a fact to exist and can be corrected with mathematical formulas. The accuracy of those formulas can be verified by calibrating the data with known measurements made by the weather balloons. This is far different from the case with the incomplete and flawed ground-based thermometers.

        Rhetorical question: why not just use radiosonde data then instead of futzing around with orbital corrections?

        (note I am not talking about TOBS adjustment here, I’m talking about the so-called “homogenization” and problems like UHI and station-siting which are being incorrectly assumed by yourself as a non-problem.)

        No, I don’t assume that. Read my previous statement again: Conversely, naively assuming that the raw data contain little to no error might be a recipe for wrong results due to data bias. Again, I think it is good scientific practice to always suspect that such errors exist, and either attempt to rule them out, or upon finding them estimate the error they introduce and correct for that error.

        I do not consider surface-based observations exempt from those same principles.

        You yourself used the phrase “suspected that the…. error (of a particular instrument) is changing over time”. You have no way to verify the accuracy of such suspicions and so must rely on human judgement and computer models.

        Well heck, if we knew all there is to know, we wouldn’t need to do science at all. On that note, I don’t understand how it is you know that “I” have no way of verifying suspected issues with the surface temperature record?

        Any time human judgement gets involved with a complex analysis of data, there will inevitably be human bias in the final product.

        I agree with that. It’s THE reason for peer review, and even that doesn’t catch every error.

        And btw, yes I’ve read the studies which try to dismiss the significance of UHI in the ground-based temp record.

        Such as?

        These studies miss the concept of UHI entirely. They attempt to categorize stations as either “urban” or “rural”, as if UHI were a sort of “disease” which infects some thermometers but does not infect others. UHI effects do not work that way.

        I must confess, I do have a difficult time imagining that a weather station surrounded by corn fields in the dead center of Nebraska is going to be “infected” by UHI from Los Angeles.

      • [blockquote] I cannot for the life of me understand why you’d think no human judgement is involved. [/blockquote]

        I am surprised that you still seem to not get my point that not all adjustments are equal. All I can say at this point is please re-read my previous comments. Or if you still disagree, then we’ll just have to agree to disagree.

        [blockquote]I don’t understand how it is you know that “I” have no way of verifying suspected issues with the surface temperature record?[/blockquote]

        I think you are missing the distinction between [i]knowing[/i] when there is an issue, and merely [i]suspecting[/i] one.

        [blockquote]I agree with that. It’s THE reason for peer review, and even that doesn’t catch every error.[/blockquote]

        I’m glad we can agree on something. Unfortunately, where it involves climate science, the peer review process has been subverted into a gate-keeping function. We have fraudsters like Michael Mann to thank for that.

        [blockquote]I must confess, I do have a difficult time imagining that a weather station surrounded by corn fields in the dead center of Nebraska is going to be “infected” by UHI from Los Angeles [/blockquote]

        No. But a station that was next to a dirt road in 1972 but next to a small-town shopping center in 2015 might still be counted as “rural” and yet would still show some UHI effects. Likewise, a station that was installed in the middle of an already-urbanized downtown Denver in 1950 would be considered “urban” but might not show any UHI effects at all between 1950 and 2015. UHI is caused by the [i]growth[/i] of manmade structures over time. It’s not a virus that affects all urban stations and none of the rural ones. A study which just compares the trends of “urban” vs. “rural” stations is meaningless, even more so when one considers that most weather stations have rather short records. What’s more revealing is the [b]fact[/b] that heavy urban areas show significantly higher current temperatures than nearby rural areas. As far as I’ve seen, none of the warmists’ studies are able to reconcile their “results” against the proven reality of UHI effects.

        [Not sure what you’re trying to do, but you can ONLY use the html “angled brackets” signs, in this WordPress site. Use the “Test” section link at top the home page to edit this entry, and leave [ ] square brackets for the mods. .mod]

      • Attention Mod: Sorry about the formatting errors. Here is a (hopefully) fixed version. 

        I cannot for the life of me understand why you’d think no human judgement is involved.

        I am surprised that you still seem to not get my point that not all adjustments are equal. All I can say at this point is please re-read my previous comments. Or if you still disagree, then we’ll just have to agree to disagree.

        I don’t understand how it is you know that “I” have no way of verifying suspected issues with the surface temperature record?

        I think you are missing the distinction between knowing when there is an issue, and merely suspecting one.

        I agree with that. It’s THE reason for peer review, and even that doesn’t catch every error.

        I’m glad we can agree on something. Unfortunately, where it involves climate science, the peer review process has been subverted into a gate-keeping function. We have fraudsters like Michael Mann to thank for that.

        I must confess, I do have a difficult time imagining that a weather station surrounded by corn fields in the dead center of Nebraska is going to be “infected” by UHI from Los Angeles

        No. But a station that was next to a dirt road in 1972 but next to a small-town shopping center in 2015 might still be counted as “rural” and yet would still show some UHI effects. Likewise, a station that was installed in the middle of an already-urbanized downtown Denver in 1950 would be considered “urban” but might not show any UHI effects at all between 1950 and 2015. UHI is caused by the growth of manmade structures over time. It’s not a virus that affects all urban stations and none of the rural ones. A study which just compares the trends of “urban” vs. “rural” stations is meaningless, even more so when one considers that most weather stations have rather short records. What’s more revealing is the fact that heavy urban areas show significantly higher current temperatures than nearby rural areas. As far as I’ve seen, none of the warmists’ studies are able to reconcile their “results” against the proven reality of UHI effects.
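
        The growth argument can be illustrated with a synthetic example, in which all numbers are invented: a station that a static classification would file under “rural” for its whole record, but whose surroundings gradually urbanize, acquires an apparent warming trend that is entirely UHI.

```python
# Synthetic illustration (all numbers invented): a station labelled "rural"
# for its entire record, but with UHI contamination growing from 0 to 0.5 C.
years = list(range(1972, 2016))
n = len(years)

true_anomaly = [0.0] * n                        # assume no real climate trend
uhi = [0.5 * i / (n - 1) for i in range(n)]     # linearly growing UHI bias
measured = [t + u for t, u in zip(true_anomaly, uhi)]

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    m = len(xs)
    xbar, ybar = sum(xs) / m, sum(ys) / m
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

trend_per_decade = ols_slope(years, measured) * 10
# roughly 0.12 C/decade of apparent warming, all of it UHI growth, at a
# station a static urban/rural scheme would call "rural" throughout.
```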

    • Question: how can any satellite record be “relative to 1900-1920”? I thought satellite records started in ’79.

    • Zeke, at first glance your last graph has three significant features. First, the green line is well above the red line from 1880 to 1940. Second, the lines run together until the very last date. Third, the green line goes way above the red line at the right edge of the graph.

      What causes the third feature?

  23. Thanks to USCRN, all US temperatures have been brought under some semblance of adjustment control.

    Trouble is, that leaves the rest of the world for the alarmista to play with. And they do.

  24. The cited article (at Ars Technica) opens with, as its very first example of the scientific need for adjustments, the measurement of groundwater levels over time. This example actually contradicts the article’s argument. It states: “Automatic measurements are frequently collected using a pressure sensor suspended below the water level. Because the sensor feels changes in atmospheric pressure as well as water level, a second device near the top of the well just measures atmospheric pressure so daily weather changes can be subtracted out”.

    Thus the device collects two raw measurements, subtracting one from the other, producing a “raw” difference between the two sensors. This is the design of the apparatus. Neither sensor is adjusted.
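
    The subtraction described in that passage is simple hydrostatics: the submerged sensor reads the water column plus the atmosphere, so differencing it against the surface barometer and dividing by ρg gives the water depth above the sensor. A minimal sketch with illustrative numbers:

```python
RHO_WATER = 1000.0   # kg/m^3 (fresh water; site-specific in practice)
G = 9.81             # m/s^2

def water_level_above_sensor(p_submerged, p_atmospheric):
    """Depth of water above a submerged pressure sensor, from the
    hydrostatic relation p_submerged = p_atmospheric + rho * g * h."""
    return (p_submerged - p_atmospheric) / (RHO_WATER * G)

# Illustrative raw readings in pascals; the "adjustment" is nothing more
# than differencing the two sensors, exactly as the apparatus was designed.
level_m = water_level_above_sensor(120_000.0, 101_325.0)
# about 1.9 m of water above the sensor
```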

    David Middleton closes with “I think I can see why the so-called consensus has become so obsessed recently with destroying the credibility of the satellite data.” Indeed. The so-called consensus has claimed that satellite also “requires adjustments,” ignoring the fact that — unlike ground weather stations — the “adjustments” are part of the measurement design, not after-the-fact fiddling.

    • Oh really, orbital decay adjustments and diurnal cycle change adjustments are part of the measurement design? That would be news to UAH and RSS. Especially UAH who released a new (and quite different) adjusted version of their record just 6 months ago.

      • I am happy to be corrected — but I thought even school children knew about orbital decay and am surprised the scientists did not expect it. I would have thought brightness differences during the day would also have been expected. I’d appreciate any references about this you might supply. TIA.

      • They expected it; however, actual decay was not the same as expected decay, which is why the adjustment was necessary.

      • Really … the lack of proper adjustments 15-20 years ago was the subject of the recent video attacking Christy and Spencer.

      • Orbital decay and diurnal adjustments are made by both RSS and UAH. Without these adjustments you get nonsense data.

    • Zeke,
      I stated that in the post…

      “Just for grins, I plotted the UAH and RSS satellite time series on top of the Hausfather graph… Apples & oranges? Sort of.”

      Although, it is funny that the satellite data for the globe track along the raw US instrumental data.

      Maybe you should have read the middle of my post.

    • NeedleFactory,

      The so-called consensus has claimed that satellite also “requires adjustments,” ignoring the fact that — unlike ground weather stations — the “adjustments” are part of the measurement design, not after-the-fact fiddling.

      https://courses.seas.harvard.edu/climate/eli/Courses/global-change-debates/Sources/10-Mid-tropospheric-warming/more/Christy-etal-2007.pdf

      11. Caveats
      [52] We point out that data sets based on satellites undergo constant examination by the developers and users. These data are observed by complicated instruments which measure the intensity of the emissions of microwaves from atmospheric oxygen, requiring physical relationships to be applied to the raw satellite data to produce a temperature value. Further, the program under which these satellites were designed and operated was intended to improve weather forecasts, not to generate precise, long-term climate records.

      [53] Since 1992 the UAH LT data set has been revised seven times or about once every 2 to 3 years. There is no expectation that the current version (5.2, May 2005) will not continue to be revised similarly as better ways to account for known biases are developed and/or new biases are discovered and corrected. Thus the production of climate time series from satellites will continue to be a work-in progress.

      Emphasis added. Paragraph 53 there is an example of what I consider particularly good scientific thinking, and would be an entirely appropriate statement even if the (A)MSUs and related instrumentation had been purpose-designed for generating long-term climate records — which, like most of their surface-based weather station cousins, they weren’t.

  25. Thank you for a very informative article, and also for linking to the ars technica article, which as you say has a snarky tone but does a good job of broadly describing the whys and wherefores of homogenisation.

  26. You should read the comment forums on this article.
    I comment frequently in their forums when they are on climate topics, which they often are.

    They are a fully-committed-to-the-cause crowd, enforcing the AGW dogma.
    If one offers any dissenting ideas, they are immediately attacked, sometimes correctly so. But in other cases cogent arguments are made, and then a coordinated squad of enforcers attacks, eventually devolving to swearing at the poster with assorted ad hominems.

    You should look at the comments on that story and see the partisan tactics.

    • I was motivated to make comments by such behaviour just before the Climategate emails came out. I made a comment that was hardly sceptic but defensive of those who wanted to question “The Science” and point out flaws. Got a bollocking for an innocent and non-oil-funded scepticism.

  27. Since when was the argument that the temperature record was good as is? A quick look at nearby stations shows how you couldn’t be sure of anything unless the change was tenfold bigger and you restricted yourself to rural sites with few changes.

    The argument is that with such large adjustments that you can’t have much confidence in a result that shows such a small trend globally and far from uniform across the globe. Even without accusations of fudging, it still isn’t good enough for basing policies on.

    But the result of the adjustments is not the data being offset across the whole range, nor a small increase in the trend over the whole range, nor a noisy plot of the differences from previous estimates. The difference is a very smooth plot of exactly what the activists wanted to see.

  28. Back in the day, we would call NOAA “Shits and Giggles.”

    Ha ha

    Even today they can’t tell an arithmetic mean from a geometric mean.

    Don’t ask a NOAA employee to explain the difference between a geometric mean and an arithmetic mean! That, at a bar in Bethesda, would start a fight, and the police would be called to close the bar.

    Ha ha

  29. News Flash: Climate change caused by newly discovered super planet with an orbit of 20,000 years. The planets are aligned right now. Big changes expected. Lost Pluto but found Mickey Mouse.

  30. A problem with USHCN: the logic behind the TOBS adjustment appears sound, but the documentation that observers actually did change their times of observation as needed for the huge adjustments from 1970 to 1980 to 1990 to 2000 to 2015 appears weak. So the foundation is weak.

    When, at the same time, the number of missing data points in the most recent years has exploded, it becomes increasingly hard to explain the adjustments with the usual “well, in the old days, temperature data collection was so very bad…” story.

    On top of that, it seems Karl’s 1986 paper on UHI in the USA has been forgotten, and people are forced to believe this issue is tiny.

  31. I understand homogenisation to get a global temperature. I understand adjusting data to get a consistent set of data for comparisons. What I don’t understand is where the “better” data comes from that is used for homogenisation and adjustment.

    If we have lots of high-quality data showing this warming, let’s see it. If we don’t, then you can’t use lower-quality data to adjust higher-quality data and claim you end up with a better result.

    I don’t care what field you are talking about, you just cannot.

  32. Why does this post only discuss the effect adjustments and such have on temperatures for the United States, when one gets radically different results by looking at temperatures for the entire planet? Did the author just not look at what happened with global results, or is this just massive cherry-picking? Either way, it’s pretty ridiculous. Whatever value there may be in looking at US temperatures, there would be far more value in looking at global ones.

    Especially since the post compares the temperatures it examines to global satellite temperatures. Why use global temperatures in one case but US temperatures for the other? Given that basically nothing the post says holds true for global temperatures, it just looks like massive and intentional cherry-picking to me.

    • Did you read my post? How about the article?

      I focused on the US because the adjustments to the US temperature measurements equals the warming anomaly. The adjustments to the US data look artificial. I also find this passage to be unbelievable…

      In most of the world, the effect of all these non-climatic factors is neutral. Changes that raised temperatures have been balanced by changes that lowered them. The US, however, is different. Here, weather stations are run by volunteers who report to the National Weather Service. Compared to other countries, the US has more stations but less uniformity among those stations.

      This would seem to be inconsistent with the assertion that the US measurements had a systematic cooling bias, while the rest of the world’s errors were neutral.

      Regarding plotting the global satellite data on Zeke’s US graph, I clearly stated in English, “Just for grins, I plotted the UAH and RSS satellite time series on top of the Hausfather graph…”

      Regarding the moronic accusations of cherry-picking… The corrections to the US data were the only ones presented in the article that clearly appeared to be artificial. That’s why I also wrote, in English, “If you can set aside the smug, snide remarks of the author, this article does a fairly good job in explaining why the surface station temperature data have to be adjusted and homogenized.

      There is just one huge problem…”

      • David Middleton, you say:

        Did you read my post? How about the article?

        I focused on the US because the adjustments to the US temperature measurements equals the warming anomaly.

        But that does nothing to contradict the idea you cherry-picked US temperatures for your post. Your post never even acknowledges the fact the article you’re discussing talks about global temperatures. Nobody could possibly know that unless they went to the article and read it for themselves. A person who had only read your post would naturally assume it only discussed US temperatures since that’s all you ever mentioned in reference to it.

        That you find the US adjustments suspicious doesn’t justify completely ignoring the global temperatures. If you had wished to give a fair discussion of the US adjustments, you should have acknowledged your post says nothing about global temperatures, which you accept don’t suffer from any of what you describe and was a key topic of the article you responded to.

        Regarding plotting the global satellite data on Zeke’s US graph, I clearly stated in English, “Just for grins, I plotted the UAH and RSS satellite time series on top of the Hausfather graph…”

        This does nothing to justify your actions either. Saying you did something “for grins” doesn’t change whether or not it is misleading. If you wanted to publish the figure you made fairly, you should have cautioned readers you were comparing results for very different areas, and thus your figure really has no meaning. Instead, you provided the figure and said:

        I think I can see why the so-called consensus has become so obsessed recently with destroying the credibility of the satellite data.

        Which could only be based on the figure you published comparing the US land record to the global satellite record, a nonsensical comparison.

        This post rests entirely upon cherry-picking. Your claim now, that you just didn’t want to talk about the things you didn’t cherry-pick, does nothing to change the fact that you cherry-picked US results and presented them as though they were of central importance. If anything, your response just makes it clear that what I said was true.

      • Brandon S? (@Corpus_no_Logos):

        You say to David Middleton

        But that does nothing to contradict the idea you cherry-picked US temperatures for your post. Your post never even acknowledges the fact the article you’re discussing talks about global temperatures. Nobody could possibly know that unless they went to the article and read it for themselves. A person who had only read your post would naturally assume it only discussed US temperatures since that’s all you ever mentioned in reference to it.

        Any reasonable person would consider that the processing of US temperature data is a sample of the processing applied to all the temperature data.

        Are you claiming that US data is processed differently from elsewhere?
        If it is then how is that justified?
        And if it is not then what are you complaining about?

        Richard

      • @Brandon S? (@Corpus_no_Logos) on January 22, 2016 at 3:17 am

        My statement renders the moronic accusation of cherry-picking totally moot.

        Focusing on what appears to be a problematic aspect of an article is not cherry-picking. Your accusation is akin to blaming professorial cherry-picking for all of the red marks on your test.

        Regarding this comment…

        If you wanted to publish the figure you made fairly, you should have cautioned readers you were comparing results for very different areas, and thus your figure really has no meaning. Instead, you provided the figure and said

        The caption clearly states that these are two different areas and an apples & oranges comparison.

        Regarding my closing comment, I was being somewhat snarky. However, the fact is that the global satellite temperatures track at or below the raw US temperature measurements which don’t exceed the natural variability of the early 20th century.

        Clearly, my attempt at biting sarcasm has diverted attention away from the point of my post… All of the post-1960 anomalous warming in the US temperature record is due to adjustments to the recorded temperatures.

      • richardcourtney, the very article this post responds to discusses how US temperatures are different from most of the rest of the world. It isn’t a matter of how the data is processed either. It’s because the US record has certain traits not found in most of the rest of the world.

        Besides which, the author of this post decried the idea he was using this post to address anything other than US temperatures when responding to me. That means he rejects the idea he was making the sort of argument you claim. That means not only have you managed to ignore what the original article said, you’ve also managed to contradict the author of this post.

      • David Middleton, you can repeat claims like:

        My statement renders the moronic accusation of cherry-picking totally moot.

        But they won’t become true simply because you say them a lot and make derogatory remarks about the people who disagree. When you do nothing to address anything your critics say, you’re not actually contributing anything. I explained exactly why what you did was cherry-picking, and your response is nothing more than, “Nuh-uh, that’s stupid.” That isn’t how decent people behave, and it doesn’t do anything to actually show I am wrong. It just shows you’re obnoxious and don’t want to actually have discussions with people who disagree with you.

        The caption clearly states that these are two different areas and an apples & oranges comparison.

        Which does nothing to address what I said, which was that you should have cautioned readers you were making a nonsensical comparison and warned them that meant it had no meaning. Ignoring half of what I said to claim I am wrong does nothing but support the idea you cherry-pick things to misrepresent them.

        Regarding my closing comment, I was being somewhat snarky. However, the fact is that the global satellite temperatures track at or below the raw US temperature measurements which don’t exceed the natural variability of the early 20th century.

        This isn’t actually a fact, as it depends on a variety of factors and assumptions, but even if it were, it would be completely meaningless. Temperatures for a small fraction of the globe tell us very little about temperatures for the entire planet. Comparing satellite records to surface records for the US is no more appropriate than comparing them to surface records for Australia, Russia, South America or any other area. That you can cherry-pick one comparison and get a good rhetorical effect out of it doesn’t tell us anything.

      • Brandon S? (@Corpus_no_Logos):

        I asked you

        Any reasonable person would consider that the processing of US temperature data is a sample of the processing applied to all the temperature data.

        Are you claiming that US data is processed differently from elsewhere?
        If it is then how is that justified?
        And if it is not then what are you complaining about?

        and you have replied saying in total

        richardcourtney, the very article this post responds to discusses how US temperatures are different from most of the rest of the world. It isn’t a matter of how the data is processed either. It’s because the US record has certain traits not found in most of the rest of the world.

        Besides which, the author of this post decried the idea he was using this post to address anything other than US temperatures when responding to me. That means he rejects the idea he was making the sort of argument you claim. That means not only have you managed to ignore what the original article said, you’ve also managed to contradict the author of this post.

        Say what!?
        I was “making” a “claim” about “the sort of argument” provided by Middleton?
        I “managed to ignore what the original article said”?
        And I “managed to contradict the author of this post”?

        NO, NO and NO.
        I asked you for clarification of what YOU were saying.

        If you don’t have a valid answer to my requests for clarification then say you don’t. Waving ‘straw men’ about things I did not mention does not ‘cut it’.

        And it is meaningless arm-waving to say “the US record has certain traits not found in most of the rest of the world” when you don’t specify those “traits” or what you think are their causes.

        Richard

    • “Why is this post only discussing the effect adjustments and such have on temperatures for the United States when one gets radically different results if they look at temperatures for the entire planet? Did the author just not look at what happened with global results, or is this just massive cherry-picking? Either way, it’s pretty ridiculous. Whatever value there may be in looking at US temperatures, there would be far more value in looking at global ones.”

      Global surface temps are the epitome of cherry-picking. You assert it is more accurate to use a 1200 km radius from a single temperature measurement point? This will give an accurate representation of the temperature over that entire area? So temperatures in Portland, OR = Phoenix or Death Valley?
      It is a PRACTICAL impossibility to get an accurate global surface temperature measurement when you have areas the size of the US with 1 measurement point.

      “Well, we don’t have coverage, so we have to make do with what we have…” is not a justification for doing so. The correct response is: “We don’t have the coverage or sensor integrity to draw any useful conclusions from the measurements. Until we get something that gives us global coverage (cough *satellites*), the best we can do is look at the trends from the places where we do have good coverage and see if we can independently confirm those measurements BEFORE we conclude we see warming, cooling, or wiggling about a mean.”
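      [For context on the 1200 km figure: GISTEMP-style gridding weights each station linearly by distance, from 1 at the station down to 0 at a 1200 km cutoff, so a grid cell with a single distant station simply inherits that station’s anomaly. A minimal sketch in Python; the function names and sample values here are illustrative, not taken from any official code:]

```python
def gistemp_style_weight(d_km, radius_km=1200.0):
    """Linear distance weight of the kind GISTEMP uses:
    1 at the station, falling to 0 at the cutoff radius."""
    return max(0.0, 1.0 - d_km / radius_km)

def grid_cell_anomaly(stations):
    """stations: list of (distance_km, anomaly_C) pairs.
    Returns the distance-weighted mean anomaly for the cell,
    or None if no station lies within the cutoff radius."""
    weighted = [(gistemp_style_weight(d), a) for d, a in stations]
    total_w = sum(w for w, _ in weighted)
    if total_w == 0.0:
        return None  # cell left empty: genuinely no coverage
    return sum(w * a for w, a in weighted) / total_w

# A cell whose only station sits 1000 km away inherits that
# station's anomaly outright, however unrepresentative it is:
lone = grid_cell_anomaly([(1000.0, 1.5)])
```

      [This is the nub of the coverage complaint above: with one station per US-sized area, the weighted mean degenerates to a single reading smeared over the whole cell.]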

      Balloon data and satellite data agree very well with each other, so either they BOTH have identical systematic errors, or they are confirming each other to within their inherent measurement accuracy. Which is more likely?

      If the surface temps track the satellites/balloons you can trust the measurements because you now have 3 different measurements telling you the same thing. If they don’t which one would you question? The two that agree or the one that doesn’t?

  33. I think I can see why the so-called consensus has become so obsessed recently with destroying the credibility of the satellite data.

    It’s the only thing keeping them remotely honest. Without the sanity check from satellites, they have nothing holding them back.

  34. Again, we forget to show that FOUR radiosonde datasets agree with the TWO satellite sets. There is no argument, and NO warming.

  35. Re: Thoroughly fabricated … data, 1/21/16:

    [W]hy the surface station temperature data have to be adjusted and homogenized.

    Data evolve to fit the model, to earn tenure, to make the catastrophe really scary, to loosen the money, to pay the salaries, to buy the next gen computers, to regulate out the capitalists, to elect the socialists.

    It’s not wrong — it’s Post Modern Science.

  36. Here is where it all began:

    With the 1998 El Niño in, it was clear that 1934 in the US was still the record high year. Knowing that this was likely to be followed by a La Niña, it was likely going to be a long time that they would have to suffer with the embarrassing fact that 1934 remained the record. The emails from an FOI request show chronologically how Hansen’s assistant adjusted the figures until Hansen was happy!! I urge everyone who is wondering about this issue to read about this historic first egregious tampering with the official record.

    Remember that Hansen in 1988 had the air conditioning turned off and the windows all closed the night before in preparation for his alarmist speech to a sweltering Congress (an obliging congressman [Wirth??], I believe, arranged this). Here, ten years later, with the 1934 warmth dogging him, he showed another example of his lack of scruples. Today, we have largely forgotten this historic fact of tampering with official US temperatures. We agonize over and measure the comparatively small changes made by T. Karl and whether the pause is 18 years or whatever, when the pause might in actuality be 80 years. Everyone argues that the US is only 3% of the globe, so it doesn’t mean that globally it is similar. However, the Iceland, Greenland, Canadian and Northern Russian temperature records also had 1930s-1940 as the warmest. WUWT? The Canadian all-time high was in two places in Saskatchewan in 1937, when it was 47 °C!! And it was in the high 30s and 40s throughout much of the rest of Southern Canada as well.

    • Gary Pearse wrote:

      “With 1998 El Nino in, it was clear that 1934 in the US was still the record high year. Knowing that this was likely to be followed by a La Nina, it was likely going to be a long time they would have to suffer with the embarrassing fact that 1934 remained the record.”

      I think 1934 should be cited as the “Hottest Year Evah!” when we are talking about records, and not 1998. The Earth is currently *not* experiencing “unprecedented heat” as alarmists would have us believe. We would have to get hotter than 1934 to be experiencing unprecedented heat.

      ” Everyone argues that the US is only 3% of the Globe so it doesn’t mean that globally it is similar. However, the Iceland, Greenland, Canadian and Northern Russian temperature records also had 1930s-1940 as the warmest. WUWT? The Canadian all time high was in two places in Saskatchewan in 1937 when it was 47C!! and it was in the high 30s and 40s throughout much of the rest of Southern Canada as well.”

      Every unmodified temperature chart or data chart I see, from around the world, shows the period around the 1930s as being the hottest period since that time. I have a list of newspaper headlines of weather events during the decade of the 1930s that shows there were massive heat waves all over the planet during that time period.

      TA

  37. It seems curious to me that the adjustments made post-1980 are significantly larger than the adjustments made pre-1980. Did we just not know what we were doing from 1980 to the present, causing the temps to be consistently measured lower (and a lot lower at that)?

  38. I live in that area of Saskatchewan that had 47 °C in 1937, although I’m not that old. The local weather regularly shows record highs for a particular date as occurring in the ’30s. As well, we had significantly hot, dry weather in the 1980s, and I certainly remember some serious heat in the ’60s when I was a kid. Outside all the time and no A/C in most buildings might affect my memories, but I’m curious about the 16-17 year gap between major El Niños. Does this interval show up in records previous to 1998?

  39. Brandon S? (@Corpus_no_Logos) January 22, 2016 at 1:57 pm
    David Middleton, you can repeat claims like:

    It’s not a “claim.” It is a fact. Identifying the problematic portion of an article is not cherry-picking.

    Your point about my not going way overboard in clearly identifying where I was being snarky is valid. And clearly I should have used much larger, eye-catching fonts when I clearly stated that I was comparing the global satellite data to the US surface data.

    I have since appended the post with a comparison of the UAH satellite data for the US and Zeke’s graph from the article. The result still illuminates the so-called consensus’ obsession with destroying the credibility of the satellite data.
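    [The addendum mentions shifting the UAH series by about 0.6 °C to account for different reference periods. The standard way to make such anomaly series comparable is to re-baseline each one so it averages zero over a common reference period; a minimal sketch with made-up numbers — the function name and sample data are illustrative only:]

```python
def rebaseline(years, values, ref_start, ref_end):
    """Shift an anomaly series so it averages zero over the chosen
    reference period, making series built on different base
    periods directly comparable."""
    ref = [v for y, v in zip(years, values) if ref_start <= y <= ref_end]
    if not ref:
        raise ValueError("no data in reference period")
    offset = sum(ref) / len(ref)
    return [v - offset for v in values]

# Made-up anomalies expressed against two different base periods;
# the second series is the first plus a constant datum offset:
years        = [1998, 1999, 2000, 2001, 2002]
sat_like     = [0.48, 0.04, 0.03, 0.19, 0.31]
surface_like = [0.95, 0.51, 0.50, 0.66, 0.78]

# After re-baselining both to 1999-2001, the constant datum
# difference between them disappears:
a = rebaseline(years, sat_like, 1999, 2001)
b = rebaseline(years, surface_like, 1999, 2001)
```

    [Note this only removes the constant datum difference; it does nothing about the deeper apples-and-oranges issue of comparing different regions or different layers of the atmosphere.]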

  40. Curious George
    January 22, 2016 at 9:21 am

    “Could you please provide a link?” [Query re comment above on 1934 still hotter than 1998 according to Hansen:

    https://wattsupwiththat.com/2016/01/21/thorough-not-thoroughly-fabricated-the-truth-about-global-temperature-data-well-not-thoroughly-fabricated/#comment-2126590]

    Here is the original link to an exposé from an FOI request concerning the fiddling with the 1934 record hot temperatures to reduce them below those of 1998, and the anger and excitement it garnered. Sorry, I forgot to add it to my comment.

    https://wattsupwiththat.com/2010/01/14/foiad-emails-from-hansen-and-giss-staffers-show-disagreement-over-1998-1934-u-s-temperature-ranking/

    We have to keep hammering this stuff!!

  41. Gary Pearse wrote:

    “With 1998 El Nino in, it was clear that 1934 in the US was still the record high year. Knowing that this was likely to be followed by a La Nina, it was likely going to be a long time they would have to suffer with the embarrassing fact that 1934 remained the record.”

    Can someone please tell me why El Niño increases global mean temperature and La Niña decreases it? According to the model, they do not change mean temperature; they just redistribute it around the globe.

    Deal with this, guys. It is important. Why? Because should a logical explanation not be found, it proves that surface measurements are not accurate. Do the satellite measurements show the same correlation? If they don’t, then “bingo!”: you’ve found the strongest argument yet that surface measurements are biased by environment. How the heck could 18 months of El Niño increase mean marine temperature by any measurable degree??

    • Much is made of the ’98 spike. If the records are remotely accurate, then this deserves much more attention. A spike can only be the result of one of two factors, or a combination of both:

      A genuine pulse in mean global temperature due to increased energy input or decreased output

      A distinct environmental change that resulted in elevated temperature readings within the zones measured that did not reflect the global mean

      Either way, it appears almost impossible for mean global temperature to increase at the rate shown for ’98. Why? The sea. It is a huge energy soak composed of millions of zones and cells measurable on a meter scale. Any swimmer knows this. Measure the mean? Who do they think they are fooling?

Comments are closed.