Approximately 92% (or 99%) of USHCN surface temperature data consists of estimated values

An analysis of the U.S. Historical Climatological Network (USHCN) shows that only about 8% to 1% of the data (depending on the stage of processing) survives in the climate record as unaltered, non-estimated values.

Guest essay by John Goetz

A previous post showed that the adjustment models applied to the GHCN data produce estimated values for approximately 66% of the information supplied to consumers of the data, such as GISS. Because the US data is a relatively large contributor to the volume of GHCN data, this post looks at the effects of adjustment models on the USHCN data. The charts in this post use the data set downloaded at approximately 2:00 PM on 9/25/2015 from the USHCN FTP Site.

According to the USHCN V2.5 readme file: “USHCN version 2.5 is now produced using the same processing system used for GHCN-Monthly version 3. This reprocessing consists of a construction process that assembles the USHCN version 2.5 monthly data in a specific source priority order (one that favors monthly data calculated directly from the latest version of GHCN-Daily), quality controls the data, identifies inhomogeneities and performs adjustments where possible.”

There are three important differences with the GHCN process. First, the USHCN process produces unique output that shows the time-of-observation (TOBs) estimate for each station. Second, USHCN will attempt to estimate values for missing data, a process referred to as infilling. Infilled data, however, is not used by GHCN. The third difference is that the homogenized data for the US stations produced by USHCN differs from the adjusted data for the same US stations produced by GHCN. My conjecture is that this is because the homogenization models for GHCN bring in data across national boundaries whereas those for USHCN do not. This requires further investigation.

Contribution of USHCN to GHCN

In the comments section of the previously referenced post, Tim Ball pointed out that USHCN contributes a disproportionate amount of data to the GHCN data set. The first chart below shows this contribution over time. Note that the US land area (including Alaska and Hawaii) is 6.62% of the total land area on Earth.

Percentage of Reporting GHCN Stations that are USHCN

How Much of the Data is Modeled?

The following chart shows the amount of data that is available in the USHCN record for every month from January 1880 to the present. The y-axis is the number of stations reporting data, so any point on the blue curve represents the number of measurements reported in the given month. The red curve represents the number of months in which the monthly average was calculated from an incomplete daily temperature record. USHCN will calculate a monthly average with up to nine days missing from the daily record, and flags the month with a lower-case letter from “a” (one day missing) to “i” (nine days missing). As can be seen from the curve, approximately 25% of the monthly values were calculated with some daily values missing. The apparently seasonal behavior of the red curve warrants further investigation.
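To make the flagging scheme described above concrete, here is a minimal Python sketch of how a monthly mean that tolerates up to nine missing days could be computed and flagged. This is an illustration only, not NOAA's actual code; the function name and data layout (a list of daily values with `None` for missing days) are assumptions.

```python
import string

def monthly_mean_with_flag(daily_temps, days_in_month):
    """Compute a monthly mean as the post describes USHCN doing it:
    tolerate up to nine missing daily values, and flag the result with
    'a' (one day missing) through 'i' (nine days missing).
    Illustrative sketch only -- not NOAA's processing code."""
    valid = [t for t in daily_temps if t is not None]
    missing = days_in_month - len(valid)
    if missing > 9:
        return None, None  # too incomplete; no monthly value is produced
    mean = sum(valid) / len(valid)
    # 0 missing days -> no flag; 1..9 missing -> 'a'..'i'
    flag = string.ascii_lowercase[missing - 1] if missing > 0 else ""
    return mean, flag
```

For example, a 30-day month with two missing days would be flagged “b”, while a month missing ten or more days would produce no monthly value at all.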

Reporting USHCN Stations

The third chart shows the extent to which the adjustment models affect the USHCN data. The blue curve again shows the amount of data that is available in the USHCN record for every month. The purple curve shows the number of measurements each month that are estimated due to TOBs; approximately 91% of the USHCN record has a TOBs estimate. The green curve shows the number of measurements each month that are estimated due to homogenization; this amounts to approximately 99% of the record. As mentioned earlier, the GHCN and USHCN estimates for US data differ. In the case of GHCN, approximately 92% of the US record is estimated.

The red curve is the amount of data that is discarded by a combination of homogenization and GHCN processing. Occasionally homogenization discards the original data outright and replaces it with an invalid temperature (-9999). More often it discards the data and replaces it with a value computed from surrounding stations; when that happens, the homogenized value is flagged with an “E”. GHCN does not use values flagged in this manner, which is why they are included in the red curve as discarded.
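The classification described above (unaltered vs. estimated vs. discarded) can be sketched as a small Python function. The flag names follow the post's description, not the full USHCN file-format specification, and the function itself is hypothetical:

```python
def classify_record(raw, homogenized, flag):
    """Classify one station-month per the post's description:
    -9999 means the value was discarded outright; an 'E' flag means the
    raw value was replaced by one computed from surrounding stations
    (and GHCN drops it); any other value differing from the raw value
    counts as estimated. Illustrative sketch only."""
    if homogenized == -9999 or flag == "E":
        return "discarded"
    if homogenized != raw:
        return "estimated"
    return "unaltered"
```

Tallying these categories over every station-month is, in essence, how the percentages in this post's charts are built up.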

Reporting USHCN Stations and Extent of Estimates

The next chart shows the three sets of data (TOBs, homogenized, discarded) as a percentage of total data reported.

Extent of USHCN Estimates as a Percentage of Reporting Stations

The Effect of the Models

The fifth chart shows the average change to the raw value due to the TOBs adjustment model replacing it with an estimated value. The curve includes all estimates, including the 9% of cases where the TOBs value is equal to the raw data value.

Change to Raw USHCN Value after TOB Estimate

The sixth chart shows the average change to the raw value due to the homogenization model. The curve includes all estimates, including the 1% of cases where the homogenized value is equal to the raw data value.

Change to Raw USHCN Value after Homogenization Estimate

Incomplete Months

As described earlier, USHCN will calculate a monthly average with up to nine days' worth of data missing. The following chart shows the percentage of months in the record that are incomplete (red curve) and the percentage of months that are retained after the adjustment models are applied (black curve). It is apparent that incomplete months are not often discarded.

Number of USHCN Monthly Averages Calculated with Incomplete Daily Records

The next chart shows the average number of days that were missing when the month’s daily record was incomplete. After some volatility prior to 1900, the average incomplete month is missing approximately two days of data (6.5%).

Average Number of Days Missing from Incomplete USHCN Monthly Averages

A Word on Infilling

The USHCN models will produce estimates for some months that are missing, and occasionally replace a month entirely with an estimate if there are too many inhomogeneities. The last chart shows the frequency with which this occurred in the USHCN record. The blue curve shows the number of non-existent measurements that are estimated by the infilling process. The purple line shows the number of existing measurements that are discarded and replaced by the infilling process. Prior to 1920, the estimation of missing data was a frequent occurrence. Since then, the replacement of existing data has occurred more frequently than the estimation of missing data.
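The two infilling curves described above can be tallied with a short sketch. The data layout here (pairs of raw and final values, with `None` meaning "no measurement") is a hypothetical simplification for illustration; it does not distinguish infilling from other adjustments the way the real flag columns would.

```python
def tally_infilling(records):
    """Count the two infilling cases the post describes: months where a
    value was created where none existed, and months where an existing
    value was discarded and replaced. `records` is a list of
    (raw_value, final_value) pairs; None means no measurement.
    Hypothetical layout, for illustration only."""
    created = sum(1 for raw, final in records
                  if raw is None and final is not None)
    replaced = sum(1 for raw, final in records
                   if raw is not None and final is not None and final != raw)
    return created, replaced
```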

Infilled data is not present in the GHCN adjustment estimates.

Amount of USHCN Infilling of Missing Data


The US accounts for 6.62% of the land area on Earth but contributes 39% of the data in the GHCN network. Overall, from 1880 to the present, approximately 99% of the temperature data in the USHCN homogenized output has been estimated (i.e., differs from the original raw data). Approximately 92% of the temperature data in the USHCN TOB output has been estimated. The GHCN adjustment models estimate approximately 92% of the US temperatures, but those estimates match neither the USHCN TOB nor the homogenized estimates.

The homogenization estimate introduces a positive temperature trend of approximately 0.34 C per century relative to the USHCN raw data. The TOBs estimate introduces a positive temperature trend of approximately 0.16 C per century. These are not additive. The homogenization trend already accounts for the TOBs trend.
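Trend figures like the +0.34 C and +0.16 C per century quoted above can be reproduced by fitting an ordinary least-squares slope to the monthly difference series (adjusted minus raw) and scaling to a century. A minimal sketch follows; the input data here are assumed, not the actual USHCN series.

```python
def trend_per_century(monthly_values):
    """Ordinary least-squares slope of a monthly series, scaled to
    degrees per century (1200 months). Sketch of how the post's trend
    figures could be computed; the series itself is assumed."""
    n = len(monthly_values)
    x_mean = (n - 1) / 2          # mean of 0..n-1
    y_mean = sum(monthly_values) / n
    cov = sum((x - x_mean) * (y - y_mean)
              for x, y in enumerate(monthly_values))
    var = sum((x - x_mean) ** 2 for x in range(n))
    return (cov / var) * 1200     # per month -> per 1200 months (century)
```

Applied to the series of (homogenized minus raw) monthly differences, this would yield the homogenization-induced trend; applied to (TOB minus raw), the TOBs-induced trend. As the post notes, the two are not additive, since homogenization already incorporates the TOBs step.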

Note: A couple of minutes after publication, the subtitle was edited to be more accurate, reflecting a range of percentages in the data.

It should also be noted that the U.S. Climate Reference Network, designed from the start to be free of the need for ANY adjustment of data, does not show any trend, as I highlighted in June 2015 in this article: Despite attempts to erase it globally, “the pause” still exists in pristine US surface temperature data

Here is the data plotted from that network:

Of course, Tom Karl and Tom Peterson of NOAA/NCDC (now NCEI) never let this USCRN data see the light of day in a public press release or a State of the Climate report for media consumption; it is relegated to a back room of their website and never mentioned. When it comes to claims about the hottest year/month/day ever, it is instead the highly adjusted, highly uncertain USHCN/GHCN data that the public sees in these regular communications.

One wonders why NOAA NCDC/NCEI spent millions of dollars to create a state-of-the-art climate network for the United States and then never uses it to inform the public. Perhaps it might be because it doesn’t give the result they want? – Anthony Watts

192 thoughts on “Approximately 92% (or 99%) of USHCN surface temperature data consists of estimated values”

  1. Is any of this behavior considered criminal, or fraudulent, or deceptive, or misleading, or argumentative, or speculative, or even immoral ??

    One advantage of doing statistical mathematics (pure fiction) rather than real science is that there is a built-in pre-supposition that your results automatically incorporate uncertainty.

    Well actually, your results are never uncertain, if you know how to do 4-H club arithmetic correctly.

    The uncertainty rests entirely on what you claim your results mean; all of which is entirely in your head.


    • I think 90% the people on this site are delirious. I mean hell I did research in undergrad on how a certain insect population is being affected by climate change, AKA LOWER YEARLY AVERAGE RAINFALL and HIGHER YEARLY AVERAGE TEMPERATURES. It is factual information that can be checked and nobody would dispute it. Do you not trust the temperature gauges? Are they conspiring against us humans? LOL seriously though if you say the word “global warming” everybody loses it. STOP BELIEVING WHAT PEOPLE TELL YOU AND CHECK FACTS FOR YOURSELF. FACTS NOT OPINIONS. I mean have all of you who just soaked in this article actually checked the facts presented (as ceist8 said, maybe it is “false and is just made up”). Use your brains, stop being gullible, and RESPECT THE EARTH THAT WE LIVE ON!!!

      • The overall leading research into the amphibian fungus cause and possible cure has found only one result, and a cure: the frogs were subjected to higher water temps, and the fungus died; the frogs were cured. Colder water temps/environmental conditions allowed the fungus to thrive. This extinction of the amphibians has been pursued for years. They are the filter of the surrounding environment. Hard facts, my boy, including these frogs, dispute your theory. It’s only that.

      • You’re an absolute joke! You put out this vague “I did this average rainfall….higher yearly blah blah blah” and yet you say nothing at all. Your thoughts are those of a child wanting and wishing away in a brew of cognitive dissonance. Just because you put something in caps and talk about some stupid, probably drunken “science project” you did, you are somehow an expert in critical thinking? What, if any, research have you done on global temperature variations? What, we’re all just supposed to follow along with the dear leader and his red, I mean green movement because you and a group of government funded hippie statists say it is so? Time after time it has been proven by men with distinguished credentials and indisputable evidence that indeed there is a concerted effort to misinform, and misrepresent evidence as it pertains to true temperature change. If you and your goose stepping friends really want to save the environment then head on over to one of the dozens of rivers throughout Asia and eastern Europe where the pollution is so thick you can walk on it. Until then keep your middle school intellect and your embarrassing comments to yourself.

      • Josh
        If you’re sitting there in clothes you made from the hemp you grew yourself in your garden and you submitted your comments above scratched out on a piece of slate you found nearby which you then walked personally to the WUWT offices to submit, then I might be prepared to pay some sort of heed to what you’re saying – even though I don’t agree. But if, as I suspect, you’re sitting there utterly drenched in something like a hundred times your fair share of all the trappings, wealth and privilege that a fossil-fueled existence can bring you, then please, just shush…

        Your viewpoint is, in global terms, an affliction that only the very rich are ever affected by.

        (Maybe your undergrad research was garbage. There are undergrads these days who think all rivers flow north to south because north is at the top, and gravity pulls the water downwards…)

      • josh
        San Francisco area newspapers are full of projections of catastrophic sea level rise. I checked the tide gauge records back to 1854 and found what the newspaper reporters could easily find themselves: sea level increased less than 4 inches per century, and the rate of increase has declined. Since sea level is a better global indicator of the rate of climate change than your backyard insects, why don’t you go check these facts for yourself?
        Too bad the San Francisco newspapers can’t just go online like this and get the facts for themselves.


        The planet is showing more water vapor with warmer temperatures not less.

        Obviously you didn’t check your facts first.

      • “Do you not trust the temperature gauges?” What are you talking about? This is exactly what this post is about, reading the actual temperature gauges. Did you read it? The whole thing is just data and facts; the only opinion is the author’s little bit of incredulous musing in the last paragraph or two. But somehow vague memories of your silly “undergrad” bumbling in relation to some insect is pertinent? A mind is a terrible thing to waste…

      • ” Do you not trust the temperature gauges? ”

        If you read the article josh you would realize why many of us are laughing at the stupidity of this statement.
        Your undergrad work and your entire worldview are based entirely on the output of people who do not trust the temperature gauges.

        Your failure to read this article while having such a strong opinion about it is more evidence that confirmation bias exists in global warming research – yours included. You don’t even fully read articles that might have a bearing against your preconceived notions.

        Your statement is the kind of disconnected gem I hope to find in litigation, to destroy on cross-examination the credibility of a ‘self-described’ expert like you.

        Your lower level of critical thinking and articulation is astounding for a college grad, let alone for someone who calls everyone else ‘delirious’.

    • If something is not mentioned, it is because the data does not support the messaging. Well, I’ll wait; 20 more years is nothing.

      • Don’t agree – 20 more years of irrational political decisions against the valuable and water-saving plant food CO2 would mean a tremendous amount of harm for mankind, forgoing potentially enhanced food production and the greening of semi-deserts.

        Let’s hope for a clear and unadjustable global cooling trend soon (which would be a bad thing actually, I’m afraid) in order to stop at least the general acceptance of the CAGW madness as soon as possible…

  2. USCRN and the satellite data both show a complete lack of warming. Those are the best data sources we have, by far.

    The unadjusted surface temperature record shows no warming.

    The conclusion is clear: the reported global warming is exclusively an artifact of some highly questionable statistical adjustments.

    • UH, …. 1885 is simply the officially designated “start date” for the hijacking of all surface temperature increases that were/are a direct result of Interglacial Global Warming, …… which has been in progress for the past 22,000 years.

      • If the last 13-some glaciation cycles in (relatively) recent history can be seen as a sine curve, even the top climatologists cannot point to our current location on the curve. Are we still in the warming phase of the current interglacial period, or are we on the downward slope, as James Hansen falsely predicted in the mid-1970s? My guess is we are still on the upslope, which could mean several more millennia of general warming and, unfortunately, more unabashed politicians frightening a gullible public into yielding money and liberty in the name of false security. Curiously, not one of Hansen’s bold predictions has ever come true… yet he continues to get press coverage with every new prediction. P.T. Barnum was right.

  3. This post clearly shows that the so-called data sets are not data at all. The “data” set is false and is just made up. One wonders why there is not one single mainstream investigative reporter willing to write a story on this issue. Not one.

  4. This article gets a few things wrong.

    First, the effect of TOBs adjustments to U.S. temperature data is larger than that of pair-wise homogenization (~0.25 C per century vs. 0.2 C).

    Second, the USCRN data matches USHCN data almost exactly:

    If it’s helpful, here is a detailed analysis of the effect of and reasons for U.S. temperature adjustments:

    • Mr. Hausfather,
      Thank you for the input and links.

      Is it in no way bothersome to discover that some 92% of the data is adjusted? I’ve seen your and Mosher’s discussions in the past, but it still heightens my concern that so much of the gathered data is considered invalid in what is supposedly an ever-improving, state-of-the-art system of instrumentation.

      Being the consumer of said system, I’d have no choice but to ask for a refund or substantial discount.

      • Danny Thomas,

        Are you coming around to the realization that data is being manipulated? If so, congratulations on being a thinker instead of a believer. Good for you!

        What is important in the basic debate is not temperatures per se, it is the temperature trend that is being debated. Which way is it heading? Up? Down? Or is it in stasis?

        If raw temperatures are used without any adjustments, that data will show the real trend. The only thing ‘adjustments’ are used for is to alter the trend. That has been shown so many times anyone can find examples with a cursory search. (If anyone needs examples, ask and I’ll post.)

        Further, the ‘adjustments’ invariably end up showing more alarming warming, by lowering past cooler T and raising more recent T. That makes for a more rapid warming. But in the real world, if random adjustments were made to fix a software problem, then some adjustments would show more warming, and some would show less warming, or cooling. But we see almost all adjustments result in faster and greater warming.

        With more than a $billion in play every year, the pressure to show what the grant payers want is enormous. Resisting the temptation of easy money takes real character. But in ‘climate studies’ Gresham’s Law also applies: like bad money (fiat paper) forces good money (gold) out of circulation, ethically-challenged scientists push the honest ones out of the process.

        The result is that we’re left with paper money while gold is hoarded, and dishonest rent-seekers end up running the peer review system and the UN/IPCC, while honest scientists are ignored — or worse.

      • Hi Danny,

        Unfortunately USHCN is not by any measure a “state of the art” system; we didn’t have the foresight to set up a climate monitoring system back in 1895, so we are relying on mostly volunteer-manned weather stations that have changed their time of observation, instrument type, and location multiple times in the last 120 years. There are virtually no U.S. stations that have remained completely unchanged for the past century. The whole point of adjustments is to try to correct for these inhomogeneities.

        The reason why adjustments are on balance not trend-neutral is that the two big disruptions of the U.S. temperature network (time of observation changes and conversion from liquid-in-glass to electronic MMTS instruments) both introduce strong cooling biases into the raw data, as I discuss here (with links to various peer-reviewed papers on each bias):

        I also did a synthetic test of TOBs biases using hourly CRN data that might be of interest:

        The new USCRN is a state-of-the-art climate monitoring network. So far its (unadjusted) results agree quite well with the (adjusted) USHCN data, which is encouraging. Going forward it will provide a useful check (and potential validation) of the NOAA homogenization approach.

      • Mr. Hausfather,

        Thank you for the response, but I’m not sure it’s focused on my question. I do grasp the basics of what is done (but could not replicate myself).

        My question gets to the heart of what I can best term as a ‘skeptical’ (I prefer those less climate concerned if I must use a label) issue. 92%!

        We see multiple sources of evidence that our planet is warming. Almost no one that I’m aware of who invests any effort denies that.

        As DB puts it trends are important.

        Well, the trends being seen include:
        92% of USHCN data records are adjusted.
        Historic sea level rates are adjusted (tide gauges; Hay et al.).
        Sea buoy temperature sensing equipment is adjusted to match ship bucket methods (Karl et al.).

        I could go on, and am not in any way a believer in some sort of massive scientific conspiracy. My concern is how science is being done. Mosher has commented numerous times that if adjustments were not made “we wouldn’t like it”. Well, the climate discussion is not (should not be) about what one likes or not, but should indeed be about the data. It seems that in this age of information there should be the ability to provide adjusted and unadjusted data side by side.

        These trends are bothersome (at least to me) but I’m a nobody. So while I have the opportunity to speak directly with a professional, published, active in the field person I wish to not miss that opportunity. This is not intended to put you on the spot, but is in order to gain perspective.


        PS: I’ve asked Mosher, but he gets sidetracked. Is there a method to subscribe to the BEST blog? So far when I’ve looked I’ve not been able to find that, or comments. Apologies for going O.T.

      • Zeke, are there plans to expand the USCRN network globally? It would be nice to have one global network that didn’t need continual adjustments. A new temperature network would not be useful for telling us how much the climate has changed in the past, but it would inform us about current temperature trends. It would also be of value going forward as a check on other temperature networks.

      • “Is it in no way bothersome to discover that some 92% of the data is adjusted?”

        Not in the least.

        1. The method has been tested double blind.
        2. the PROOF is in the pudding

        Take CRN. It’s perfect, right?
        Compare it to adjusted USHCN…

        Guess what?

        They match


        because an algorithm that adjusts 92% of the data is a good algorithm

      • Steven,
        While an algorithm that adjusts 92% of the data may be verifiable as a good algorithm, the point is not about the algorithm. It’s about the data.
        What are we (science) doing that’s so wrong that we can’t get good data? We obviously cannot measure temperatures. We cannot measure sea levels (Hay). We cannot measure SSTs (Karl). Each and every example cited here is so suspect and so lacking in value that it MUST be adjusted to a new and predicted value (not an observed or measured one).

        You know I’m no scientist, but in my opinion it’s no longer about observation, as the observations are no good if they all require adjustment.

        So why do we bother?

        I’m not pointing a finger at you, Zeke, BEST or anyone in particular. Heck, for all I know you may be accurate. But it’s the trends (data ‘adjustment’) that are important and the trends are concerning.

      • Yes Mosh.. we KNOW that USHCN has been adjusted to match USCRN.

        For that time period.

        What else could they do !!

        The coincidence is way too close to be anything but massively manipulated and fitted for a very specific purpose….. which you are now attempting to use.

        Sorry.. but you ain’t selling to anyone !!!

        Same with ClimDiv.

      • Mosher,
        AndyG55 is making the relevant point. You gleefully announce agreement of the USHCN result with the USCRN result, and proudly declare that the USHCN must be correct back to 1880, since the two records agree for the last ten years. Your conclusion does not follow at all.

        The adjusters for the USHCN record are sitting there with the pristine 2005-2015 U.S. data. How do you expect them to put a positive trend on the USHCN for that period and get away with it? They can fiddle the global data as much as they like, since that data is horrible to begin with and thus justifies lots of complicated adjustment to “fix” it. They can fiddle the old U.S. data, since that data is also problematic. However, they CAN’T fiddle the USCRN data. Thus, agreement for the U.S. for the last ten years means nothing.

      • @dbstealey
        “With more than a $billion in play every year, the pressure to show what the grant payers want is enormous. Resisting the temptation of easy money takes real character. But in ‘climate studies’ Gresham’s Law also applies: like bad money (fiat paper) forces good money (gold) out of circulation, ethically-challenged scientists push the honest ones out of the process.

        The result is that we’re left with paper money while gold is hoarded, and dishonest rent-seekers end up running the peer review system and the UN/IPCC, while honest scientists are ignored — or worse.”

        Brilliance. I will gleefully purloin this quote, but promise to put it to good use.

    • Zeke’s math is incomplete as always.

      You add ALL of these adjustments together to get to the total adjustments.

      And John Goetz’s analysis is more believable because he is not smoothing the data, so the impact of the full adjustment is not lost.

      • The divergence from the satellite data sets is fairly conclusive proof that…
        …either the surface adjustments are wrong,
        …or CO2 has nothing to do with the surface warming.

        Also the adjustments are far bigger than Mosher indicates, having changed the 1980s global and NH graphics by 0.3 to 0.8 degrees C.

    • What is the standard deviation of that monthly data from the USCRN, USHCN plot? By eyeball, it looks to be at least 1 deg C.

      Note the Y-axis scale: it is 19 deg F (!) full scale. The data range is at least 12 deg F (7 deg C), so “almost exactly” really means they differ by a hard-to-see 0.2 deg C.

      What does that imply about the uncertainty of the mean for 12-, 24-, and 60-month windows? The 95% confidence interval has a width of at least 1.0, 0.8, and 0.5 deg C respectively.

      Add to that the uncertainty of the anomaly of each individual month. With only 25 stations in 2003 to 218 in 2013, this uncertainty of the mean monthly USCRN value must be large, at least 0.5 deg C.
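      The interval widths quoted above can be checked with a short sketch using the normal approximation for the standard error of a mean (the ~1 deg C monthly scatter is the commenter's eyeball estimate, and independence of months is an assumption; autocorrelation would widen the intervals further):

```python
import math

def ci95_width_of_mean(sigma, n):
    """Approximate width of a 95% confidence interval for the mean of
    n independent values with standard deviation sigma, using the
    normal approximation: width = 2 * 1.96 * sigma / sqrt(n).
    Independence is an assumption; autocorrelated monthly data would
    widen these intervals."""
    return 2 * 1.96 * sigma / math.sqrt(n)
```

      With sigma = 1.0, this gives roughly 1.1, 0.8, and 0.5 deg C for 12-, 24-, and 60-month windows, consistent with the figures in the comment.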

    • “Second, the USCRN data matches USHCN data almost exactly”

      LOL.. now think about how that might happen. ;-)

      And does that particular rendition of USHCN data match the USHCN data that is sent to GHCN?

      • The only reason USHCN data matches USCRN is so they can do exactly as you have done…

        … that being to make a claim of accuracy for USHCN.

        I predicted that they would do this a couple of years ago.

        But you see, they have overplayed their hand. The chances of two different systems giving such a close match would be infinitesimal, unless one was being manipulated to specifically match the other.

      • To put it another way, apparently we don’t need the thousands of stations tallied in USHCN. The hundred or so in USCRN are apparently enough! To within what, a few hundredths of a degree! Or is that a few thousandths of a degree, when plotted on the Mosher-linked site above? Consistently!

        Seriously? That’s some good shootin’, pardner. I hesitate to say unbelievably good shooting, but one does have to wonder a bit…


    • It should also be noted that all three of USCRN, ClimDiv, USHCN show cooling over the last 10 years.

      This cooling trend is very close to that shown by UAH USA48 (which shows a slightly smaller cooling trend).
      I don’t have the USA data for RSS, but assume it will be similar.

      This coincidence in trends over a very large sample VERIFIES the data collation of the UAH and RSS temperature series.

      GISS and its stable mates remain totally UNVERIFIED by any source.

      In UAH and RSS there has been no warming in 18+ years.

      In fact, since the 1998 El Nino and its associated events (nothing to do with CO2) led to a step warming of about 0.26ºC, that is actually the ONLY warming evident in the whole of the satellite data.

      The slight warming from 1979 -1997 has been almost completely cancelled by the cooling since early 2001.

      That means that …



        Funny that. I suppose someday those of us who have been saying that CO2 does not warm the surface will finally be listened to. Someday. At my age, it will be in the sweet by and by, no doubt. (But I do hope to be looking down to see it, nevertheless.)


        I have wondered about this – I can’t see any significant warming in the whole record of satellite data either. By the United Nations climate theory, energy is supposed to be trapped in the atmosphere by increased levels of CO2, but it fails to warm the atmosphere.

        The energy which fails to warm the atmosphere is then supposed to pass through the upper oceans, without warming them, before it hides in the deep oceans, where it cannot be found by measurements due to measurement and sampling uncertainty and a lack of historical data.

        This makes me wonder if United Nations is this bad on all the other things they (try to) do also.

      • “This makes me wonder if United Nations is this bad on all the other things they (try to) do also.”

        Answer: Yes. They are that bad, and worse.

      • These are points that I repeatedly make.

        It is important to bear in mind that there are 2 ‘pauses’, not 1, in the satellite data. The first ‘pause’ starts at launch (circa 1979) and runs for about 18 years, through to the run-up to the 1998 Super El Nino. During this period, the temperature trend is flat. It is also flat for the past 18 years. This is the second ‘pause’ seen in the data.

        There is no first order correlation between CO2 and temperature in the satellite data set. To date, I have seen no evidence presented that there is some second order correlation because, say, a trend in aerosols creates a negative forcing that exactly equals and opposes the positive forcing of CO2, the one cancelling out the other.

        The satellite data merely shows a one-off isolated natural warming event, i.e., the Super El Nino of 1998. Maybe there will be another step change in the data following the 2015/16 El Nino, but obviously we do not yet know whether that will be the case, or whether the 2015/16 El Nino will leave no long-term impact on atmospheric temperatures, being in effect negated by a La Nina in 2016 or 2017, etc.

        Our best temperature measuring devices show no warming trend consistent with CO2-driven warming, whether this be the satellite data, or USCRN, or ARGO.

        All 3 data sets are of rather short duration, but the fact that all 3 show no correlation with CO2 is a good indicator that the ‘Climate Sensitivity’ to CO2 (if any at all) is small. At any rate, it is so small that it cannot be observed/measured by our best measuring devices within the limitations and margins of errors of those devices.

        Don’t forget that the so-called ‘basic physics’ is clear: whenever there is an increase in CO2 there MUST ALWAYS be a corresponding increase in temperature (which increase could of course be cancelled out by some negative forcing, if such negative forcing could be ascertained and identified). The so-called ‘basic physics’ is not that an increase in CO2 may sometimes lead to warming of the atmosphere but not the oceans, and at other times to warming of the oceans but not the atmosphere. Whatever the physics of CO2 in our atmosphere may be (subject to saturation), that physics is the same in 2015 as it was in 1990, as it was in 1950, as it was in 1920, etc.

        Further, if there is locked-in warming due to the fact that CO2 has already increased from about 270ppm to about 400ppm, then that locked-in response should also show up. If there is such a locked-in response, it is more difficult to get a ‘pause’ when CO2 is running at the 380 to 400ppm level than when it is running at, say, the 320 to 340ppm level.

        Now I do not know what the true and proper error bounds of our best measuring equipment are. I doubt they are as small as ‘Climate Scientists’ would have one believe, but if they are that small, then it suggests that ‘Climate Sensitivity’ (if any) to CO2 must also be correspondingly small.

        As the ‘pause’ continues, it is inevitably the case that ‘Climate Scientists’ will have to start facing up to reality, namely that ‘Climate Sensitivity’ is much smaller than they have to date been suggesting.

      • It is not just ENSO and global cloud albedo that contributed to virtually all the warming since the 1980s; the AMO did too, especially. It would not be a surprise to say they are all linked together, and they show no CO2 signal at all.

        When global temperatures have the AMO removed from them, RSS remarkably shows no trend. Yet the alarmists have claimed it is all AGW since the 1980s, and they wonder why a lot of the public don’t trust a word that comes out of their mouths.

      • Coincidence? No way!
        This is the best correlation ever seen within climate science. Unfortunately for the proponents of the climate theory put forward by the United Nations, this curve demonstrates that it is the adjustments to temperature which increase in proportion to CO2, not temperature itself. I repeat: the adjustments correlate with CO2. Even the most innumerate proponents of the United Nations climate theory should be stunned by this figure. The figure is a clear indication of …… (put in your own words here; mine would probably be stopped in moderation). This figure is in itself sufficient reason to suspend further action. If that correlation is a coincidence, then everything else in life must be a coincidence.

    • Zeke: Thank you for commenting here. Could you please plot the difference between USCRN and USHCN so a difference in trend can be seen? Globally, warming trends of roughly 0.1 degC/decade currently exist, so differences as small as 0.01 degC over a decade are relevant. Your graph is much too coarse to show this.

  5. Like many others I’ve offered various alarmists this factoid:
    “Despite attempts to erase it globally, “the pause” still exists in pristine US surface temperature data”

    Their reply is always that the US is not the globe.
    I believe the US temp record (and their reply) is hefty evidence that contemporary warming has been “regional” – and heavier evidence for that notion than for the claims that the MWP was.

    All things considered is this not the case? Is there reliable evidence that 20th century warming was the very kind of regional warming alarmists have fallaciously tried to use to dismiss the MWP?

    • UAH USA48 trend matches the USCRN trend quite closely.

      ….. UAH temperature data extraction procedures are therefore VERIFIED (and with them, RSS).

    • Steve,
      There is another point here. The alarmists dismiss the U.S. record when it suits them, but there have been what has to be thousands of alarmist articles in the past 10 years about “climate change” in the U.S. itself. Earlier springs in Massachusetts. Drought in California. Changed migration patterns for birds and insects. Tropical storm Sandy. Hurricane Katrina etc, etc, etc.

      Surely the flat or declining record for the U.S. ought to shut them up for U.S. climate stories, right?

    • “Their reply is always that the US is not the globe.”

      But: “The US accounts for 6.62% of the land area on Earth, but accounts for 39% of the data in the GHCN network.”

    • Steve: “Their reply is always that the US is not the globe.”
      However, the US is part of the globe. AGW asserts that CO2 emissions have a GLOBAL effect on temperatures. The hypothesis is that CO2 is well mixed, that the effect is global, and furthermore that it is the predominant determinant of temperature.
      For there to be no temperature increase over the continental US, you would need the magic molecule to lose its effect over the US, or there to be a hole in the CO2 layer over the US, or some other undocumented effect. Actually, this applies to any location.

  6. Steve

    They said the Medieval Warm Period wasn’t global. It is the standard dodge of the record falsifiers. The other dodge is to say it was only seasonal, as they did with the Holocene Optimum; originally it was called the Climatic Optimum and prior to that the Hypsithermal.

    I repeat what I wrote on the last post. There are no weather stations for approximately 85 percent of the land and water surface. They ‘adjusted’ what little they have. Fascinating, the idea of adjusting an already inadequate record.

    It is also important to note that the data submitted by WMO member nations is already adjusted before it is submitted to the central agency. The original, original raw data never gets used.

    • Would you agree then that the regional or seasonal pitch alarmists make to wipe away the MWP actually fits our contemporary warming better?

    • I made a similar point on one of the other recent articles.

      There is a huge spatial coverage issue. Most people do not appreciate that large swathes of the globe are not measured at all. Quite frankly, there is no global data from which to ascertain global temperatures, or from which to assess an anomaly.

      Climate is regional, not global (other than being in an ice age, an interglacial or a nearly ice-free epoch). We should be collating data on a regional basis so that we can make an assessment of the consequences of any change to the region in question. Changes are not uniform across the globe (some areas are seeing little change, some may be seeing some cooling, and some are seeing some warming). Further, the impact will differ from region to region. Some regions will greatly benefit from warming. Sea level change may have a dramatic impact on some, but little material impact on others (eg., some countries are landlocked; some, such as Norway, have a steep, cliffy coastline; whereas some have major settlements situated only just above current sea level).

      But of course, this is political. If one were to consider climate on a regional basis (as it should be, say in accordance with the Koppen classification), each country would act individually in accordance with its own interests. There could be no global response and no global governance/control. For political reasons, it is necessary to shout the mantra that we are all in this together and that a world solution is required; this of course creates world control, and hence the reason why it is framed as Global Warming.

    • Dr Tim, do you mean like the BOM and the adjusted mess they send in?

      The Australian Bureau of Meteorology have been struck by the most incredible bad luck. The fickle thermometers of Australia have been ruining climate records for 150 years, and the BOM have done a masterful job of recreating our “correct” climate trends, despite the data. Bob Fernley-Jones decided to help show the world how clever the BOM are. (Call them the Bureau of Magic).

      Firstly there were the Horoscope-thermometers — which need adjustments that are different for each calendar month of the year – up in December, down in January, up in February… These thermometers flip on Jan 1 each year from reading nearly 1°C too warm all of December, to being more than 1°C too cold for all of January. Then come February 1, they flip again.

    • Similarly, 9 years of shovelling and blowing snow in eastern Massachusetts convinced me to live in Tucson, Arizona.

  7. John,

    The GHCN adjusted (QCA) data is an identical match for the USHCN final data (52j) for months where GHCN uses the USHCN data. This is true whenever you compare the files for the exact same date. I think you probably compared data from two different dates, which is why you found differences. Both datasets are constantly adjusting data from day to day, but they change in sync.

    That being said, here is how large the discrepancy in data usage is between the datasets for the past few years after removing all months marked as defective.
    Temperature data: Station annual months >= 1
    Year    Mean    Data    Stations
    2010    11.75   13653   1183
    2011    11.87   13746   1183
    2012    13.01   13727   1189
    2013    11.33   13759   1190
    2014    11.14   12524   1124
    2015*   13.49    9273   1119
    2010-2014 mean: 11.82

    Temperature data: Station annual months >= 1
    Year    Mean    Data    Stations
    2010    11.65   10923    982
    2011    11.73   10752    970
    2012    12.81   10404    941
    2013    11.10   10000    903
    2014    10.82    8671    804
    2015*   13.18    5786    708
    2010-2014 mean: 11.62

    • That is a possibility, Bob. I did download the files two days apart and plan to go back and look more closely. I only noted the difference because a diff of the two data sets showed nearly every temperature record was different.

      • The “data” changes every day. I spent 2014 downloading much of the daily “data”, then did some comparisons from one download to the next (and various other interesting combinations). Rarely do 2 days match. You will find some “historical data” that has variations of more than 2 degrees C over the course of a month. If you look at the variations within the “data”, you’ll see that gradually newer temps will trend upward, and older ones downward. My suspicion is this is built into the “State-of-the-Art” PHA, as you do not see this same sort of constant change in the TOBS data. My conclusion: the “data” is worthless.

        Note: I am not talking about changes in temperature over time. I chose “data” from 1936, and would compare one day that I downloaded in May 2014 versus the same “data” from June 2014 (as an example). Amazingly: the temperatures in 1936 were still changing as of 2014!
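The churn described above can be quantified by diffing successive downloads station-month by station-month. A minimal sketch follows; note it is a hypothetical illustration, since the parser below assumes a simplified whitespace-separated layout rather than the real fixed-width USHCN format, and the toy records are invented, not actual station data:

```python
def parse_ushcn(lines):
    """Parse simplified USHCN-style monthly records: station id, year,
    then 12 monthly values (hundredths of degC).
    NOTE: a simplified stand-in, not the real fixed-width format."""
    records = {}
    for line in lines:
        parts = line.split()
        key = (parts[0], int(parts[1]))       # (station, year)
        records[key] = [int(v) for v in parts[2:14]]
    return records

def diff_downloads(old, new):
    """Count station-months whose value differs between two downloads."""
    changed = 0
    for key in old.keys() & new.keys():
        changed += sum(1 for a, b in zip(old[key], new[key]) if a != b)
    return changed

# Toy example: one July 1936 value changed between two downloads
day1 = ["USH00011084 1936 512 600 1450 1800 2300 2600 2800 2750 2400 1900 1200 700"]
day2 = ["USH00011084 1936 512 600 1450 1800 2300 2600 2805 2750 2400 1900 1200 700"]
print(diff_downloads(parse_ushcn(day1), parse_ushcn(day2)))  # 1
```

Run over two full downloads a day apart, a count like this would show directly how much of the historical record moved overnight.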

      • Or maybe you should look “more closely” more often, and start being more of a real skeptic and less of a sensational conspiracist?

  8. Some days, I am so disheartened. I am seriously worried for the future my kids (and hopefully grandkids) will have. Energy poverty, destroyed economies, all to satisfy what? It’s madness. I’m doing my best to enlighten others, but my command of all the facts is limited. I read here daily, educating myself and hopefully others. Thank you Mister Watts.

    • I’m an American. I’m over 60. Similar concerns crossed my mind even before I was married, though the causes of those concerns were different back then.
      Love God and love your kids. Teach them to do the same. If the world goes to hell in a handbasket, that doesn’t mean your family has to be part of it.
      There is always a current “crisis”. Given time, it no longer becomes “current” or a “crisis”. It might be replaced by something else in the media or by those who would use such as a lever to power or money, but you do what I suggested. Take care of those in front of you.
      And keep learning to make informed decisions.
      (And, yes, “Thank you Mister Watts”.)

      • @ Gunga and Katherine, Same age group and both of you keep on teaching your kids and grandkids, prepare them beyond the cr.p they get fed at “schools” you are doing the right thing!
        ( and thanks Mr. Watts)

      • If you haven’t read “1984”, you should. The description of the Ministry of Truth is a pretty good description of what global warmists are actually doing, as are the federal economists reporting no change in the national debt for months, etc.

  9. So a hoo-ha of a degree spread across an entire planet can’t be reverse engineered? Basically global temperature is irrelevant! Wow! How long did it take you to come up with that? Lmao

  10. John Goetz wrote: “An analysis of the U.S. Historical Climatological Network (USHCN) shows that only about 8%-1% (depending on the stage of processing) of the data survives in the climate record as unaltered/estimated data.”

    So what? Historical climate data was not collected under conditions that would enable scientists to later accurately quantify global warming rates of about 0.1 degC per decade. Except for the USCRN, we still aren’t collecting surface data in such a way that it can be accurately used without being reprocessed. The raw data is a mess. Anyone capable of using a spreadsheet can download hourly temperature data and prove that a change in time of observation introduces a large bias into the daily average temperature reported by a min/max thermometer. Station moves and growing UHI are other problems. Some adjustments are needed.

    The real problem is that homogenization algorithms are correcting breakpoints at many stations as often as once a decade – probably too often to be attributed to any known cause. We don’t know whether correcting undocumented breakpoints removes or introduces biases.

    Since each correction introduces some uncertainty into the overall trend, frequently adjusted stations should have – but don’t have – great uncertainty associated with their overall trend. The adjusters pretend that each correction is 100% accurate.

    Simplistic moaning about the percentage of stations requiring adjustments is worthless. Almost all station data needs some adjusting. The crucial issue is how much uncertainty is added by the adjustment process and whether the BEST kriging methodology avoids the pitfalls of adjusting.

    • “The adjusters pretend that each correction is 100% accurate.”

      Oh hell no.

      We fixed a bug once that affected 600 stations… the final answer didn’t change.

      In testing adjustments in blind tests, we aim for decreasing the error.

      There will always be error. The adjustments will always be wrong… The key is that they are less wrong than not correcting the data.

      • Hey Mosh, since you are an expert at fixing past temps, maybe you can fix future temps. I bet you could make some real money then. Computer models can do everything, right?

      • It’s not in the climate change rent-seekers’ financial and professional interests to report that the CO2 linkage was all just a mistake. So the scam will continue until it catastrophically collapses, like Bernie Madoff’s decades-long financial scam. There were people on the outside who knew, who warned of the fraud, who were ignored.

      • So that’s why the adjusted, homogenized data always show a warming trend. Because you knew it was warming, you mangled the data to show it is warming.

      • So what was this “bug” affecting 600 stations that resulted in no change in output after the fix? That sounds like real shonky coding to me, and I’ve seen enough in my time to crave a return to blackboard and chalk for real science.

      • Steve: Thanks for taking the trouble to reply. Consider two stations. Station 1 has one breakpoint detected and corrected with 1.00 degC of corrected warming between 1900 and 2014. Station 2 has a dozen break points detected and corrected with the same 1.00 degC of warming. I think the confidence interval for the trend at Station 1 should be much narrower than for Station 2. If the uncertainty associated with each adjustment were 0.1 degC, Station 1 would be 1.00 +/- 0.1 deg and Station 2 would be 1.00 +/- 0.35 (assuming I did my error propagation correctly).

        The problem is even worse because N adjusted stations are not N statistically independent measurements. If you look at records with many break points, the station trend and the regional trend always agree closely. BEST shows breakpoint corrections for Tokyo that reduce its genuinely high trend caused by growing UHI. I don’t think a station with a large number of breakpoint corrections adds any useful information, because its corrected trend always matches the regional trend. Imagine a station in a microclimate where there was 0 or 2 degC of warming instead of a regional average of 1 degC. If many break points are corrected, the info about the microclimate will be lost.

        Breakpoint correction introduces a bias whenever station maintenance suddenly restores earlier observing conditions that had gradually deteriorated. The screen gets dirty and is washed or painted. Ventilation gradually slows due to debris that is removed. Vegetation grows below or around the station and is then removed. In all these scenarios, the resulting breakpoint should NOT be corrected.

        Unless you know the CAUSE of a breakpoint, you can’t be sure whether correction improves the record or not. Correct TOB. Correct most station moves (unless they were intended to negate urbanization or correct a similar growing bias). Then recognize that the remaining corrections may or may not improve the record. They represent UNCERTAINTY in our measurement record, not certainty in the best answer. If global warming were 0.8 degC with documented break points corrected and 1.0 degC with all break points corrected, IMO the best answer is 0.8-1.0 degC of warming.
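The ±0.35 figure above checks out if the twelve corrections are treated as statistically independent and their uncertainties are summed in quadrature; that independence is an assumption of the commenter's argument, not something the adjusters publish. A quick sketch:

```python
import math

def trend_uncertainty(per_adjustment_sigma, n_adjustments):
    """Combine independent adjustment uncertainties in quadrature:
    sigma_total = sigma * sqrt(N)."""
    return per_adjustment_sigma * math.sqrt(n_adjustments)

# Station 1: one breakpoint correction of +/-0.1 degC
# Station 2: twelve corrections, each +/-0.1 degC
print(round(trend_uncertainty(0.1, 1), 2))   # 0.1
print(round(trend_uncertainty(0.1, 12), 2))  # 0.35
```

If the corrections at one station are instead correlated (e.g. all driven by matching the same regional trend), the combined uncertainty would be larger still, up to sigma * N.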

    • “The adjusters pretend that each correction is 100% accurate.” Mosher has taken exception to that.

      On the one hand, he is right: he and his colleagues can admit the limitations of their algorithms among themselves and even in venues such as this one (thanks, Mr. Watts!) when called out.

      On the other hand, you are right: as presented to politicians, policy makers, the press, and the public, yeah, the adjusters effectively present their procedures as 100% accurate, and to the hundredth of a degree to boot.

  11. The data does need to be controlled for Time of Observation.
    That can mean an adjustment that is justifiable.

    It’s the fact that all the adjustments are always so convenient for those who want the data to be exciting… that’s the issue.
    And the fact that the adjustments aren’t highlighted to allow for open discussion. That’s not ideal.

    • They lost me when the adjustments created trends for Time of Observation and homogenization that were big and positive, but the UHI trend, which ought to be substantial and negative, turns out in the adjusters’ hands to be effectively zero. What a surprise.

    • Does it really have to be?

      Say you have 30 years of data collected at 10am and 10pm each and every day, where the min/max of the preceding 24 hours is recorded. Then the station changes its procedure, and you have 30 years of data where the min/max of the preceding 24 hours is collected at 4pm and 4am, each and every day. You thus have 60 years of data.

      What real difference does this make when there is a lengthy data stream, and when one is not seeking to ascertain the temperature at any given hour of the day, but rather what the minimum and maximum temperatures observed during each 24-hour cycle were, so that an average of these two extremes can be ascertained?

      Of course, when you are collecting the maximum temperature say on 4th January at 10am (ie., the first 30-year period), you are observing the maximum temperature which was reached some time on 3rd January. When you are collecting the temperature at 4pm on 4th January (ie., the second 30-year period), you are observing the maximum temperature which was reached some time on 4th January. But is that actually significant when you are not really interested in what the temperature was on any given day of the year, but rather what the average temperature (the mean of max and min) has been over a period of 60 years?

      Of course, there may be some unusual weather event, a day when there was a temperature inversion, that may ‘screw’ matters up, but these are rare. As long as one is only seeking to ascertain the difference between max and min temperature, and provided that the min/max is re-set each day, that the time of observation, whatever it is, remains constant, and that one is dealing with lengthy data streams, TOB would appear to be of little significance.

      In my first example (30 years of observations taken at 10am and 10pm), ignoring leap years, one would have 10,950 observations of maximums and minimums, and in my second example, where observations were taken at 4pm and 4am, one would have 10,950 observations of min/max temps. One could not make a comparison of the temperature of any given day in one set with the same given day of the other set. But that is not important. It does not matter that 4th January in one set may not be telling you what the max temperature on 4th January really was.

      As I see it, TOB only becomes important in short data sets, or when TOB at a station is a movable feast (ie., the station has no consistent policy of when it makes its observations, in which case I would question why such a station is being used, since it suggests that quality control may be a serious issue), or when one is trying to compare temperature at a specific time of the day. But that is not what is being undertaken. I am sceptical as to whether TOB truly reduces error, or whether it leads to a false trend. When you have say 3,000 stations, one would expect TOB adjustments one way to cancel themselves out the other way, but of course that is not what is happening, and it has been used to cool the past and warm the present.

      • This puzzles me too. I would like to see the functional relationship between the time of observation and the average temperature for the previous 24 hours, based on the Max and Min readings for the previous 24 hours.

        From Zeke’s explanation:
        “At first glance, it would seem that the time of observation wouldn’t matter at all. After all, the instrument is recording the minimum and maximum temperatures for a 24-hour period no matter what time of day you reset it. The reason that it matters, however, is that depending on the time of observation you will end up occasionally double counting either high or low days more than you should.”

        I tend to think that Tony Heller has a point in saying “The total NOAA adjustment is nearly two degrees F. It is unsupportable nonsense”.
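Zeke's double-counting mechanism is easy to demonstrate with synthetic data: generate hourly temperatures with a diurnal cycle plus day-to-day weather, then average (min+max)/2 from a thermometer "reset" at different hours. A sketch (the numbers here, a 6 C diurnal amplitude and 4 C day-to-day swings, are illustrative assumptions, not station data):

```python
import math
import random

random.seed(42)
HOURS = 24 * 365 * 5  # five years of synthetic hourly readings

# Hourly series: diurnal cycle peaking mid-afternoon, plus a random
# warm/cold anomaly that persists through each calendar day.
temps = []
for h in range(HOURS):
    if h % 24 == 0:
        day_anom = random.gauss(0, 4)  # today is randomly warm or cold
    diurnal = 6 * math.sin(2 * math.pi * ((h % 24) - 9) / 24)  # peak ~15:00
    temps.append(10 + diurnal + day_anom)

def mean_minmax(reset_hour):
    """Long-term average of (min+max)/2 from a once-daily reset at reset_hour."""
    daily = []
    start = reset_hour
    while start + 24 <= HOURS:
        window = temps[start:start + 24]  # the 24 h between consecutive resets
        daily.append((min(window) + max(window)) / 2)
        start += 24
    return sum(daily) / len(daily)

# An afternoon reset lets a hot day's warm evening set the NEXT window's max
# (warm bias); a morning reset does the same with cold nights (cool bias).
print(round(mean_minmax(17) - mean_minmax(0), 2))  # positive: warm bias
print(round(mean_minmax(7) - mean_minmax(0), 2))   # negative: cool bias
```

The same hourly values, merely partitioned at a different hour, yield systematically different long-term means, which is the TOB effect in miniature; it also shows why a station-wide switch from afternoon to morning observation times imprints a spurious cooling step.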

  12. I don’t understand the need to adjust and homogenize temperature data to measure a trend in global warming. If the warming is truly global, it will affect the entire globe. So a few accurate thermometers placed around the world should be enough to show the general trend if there is one. To say that the earth is warming in the places where there are no thermometers is to say that the warming is regional and not global. If it is global, it will eventually spread around the planet. For measuring future trends, I’d rather have a few highly accurate temperature stations spread around the world that don’t need adjustments than have millions that do.

    • Louis,
      Thank you for stating in words how I think about this topic. Much as NOAA does when showing SLR graphically with hundreds of data points, why can this not also be done with temperatures? If (just to toss out a number for discussion) 1000 sites are used and a preponderance show an increase (or decrease), a trend can be sussed out.

    • Louis Hunt – exactly. I’ve said this a few times in comments over the years. It’s not global warming we are worked up about, it’s whether it is significant or not. If we are going to be facing 2 to 5C of warming, then a few dozen thermometers are all we need. Indeed, since warming is amplified in the polar regions by about 3x (according to the experts, and there is evidence of polar amplification, at least in the Arctic), the best “early warning system” would be to have these thermometers distributed in the Arctic, say on the tundra. A couple of dozen pairs (for redundancy) distributed along the 60th parallel and built with steel and concrete housing to be durable would not be too costly for this important purpose.

      This amplification is a super vernier-type measure that allows us to monitor what amounts to global temperatures with good accuracy. One would simply divide any temperature change by ~3 to get an idea of what the globe was up to. I believe the Greenland cores do, in fact, give a good global picture of the record of temperature changes, although I’ve never heard of any researchers making use of this ‘vernier’ idea to compute global anomaly changes. Indeed, in one breath they say there is amplification, and in the next they say that it is only a regional record.

      The same with sea level. Let us figure out what amount of sea level rise in a century is a concern. If it’s one metre, then a scary figure is >1cm a year. We are suffering only 1/5th of that over many decades so we should be cool with this. There is no necessity to adjust the figures by 0.03mm a decade to account for “glacial rebound” – a stupid adjustment that actually disappears the concept of sea level (the adjustment is to the volume of the ocean basin that is not reflected in real sea level). Nor do we have to rush down to the sea with a micrometer to see what is happening.

      The trouble with all this futile effort toward minute precision in climate measures is that it is really done to be able to sound the alarm almost daily. There is no way a threatening change in sea level or global temperatures could sneak up on us with the deployment of a few simple indicative measures. First, why on earth, in this day and age, should there be any reason to make a TOBS adjustment – take the G.D. reading at the specified time. Automate the whole works and have them in pairs. Have I wowed my learned readers with this revolutionary idea?

      Also, I’ve had a puzzlement about Ozone holes at the poles. The Nasa imagery shows the holes are surrounded by thicker ozone between the hole and the equator. If ozone is the protector of our skin from UV related cancer, it would seem that the danger to us should only be if we take our shirts off in the polar regions.

      • Except the North Pole did not get covered with ice and glaciers during any Ice Ages. Much of Alaska and Siberia were clear of all ice which is how so many animals and humans easily walked over into North and South America.

        Hudson Bay was the epicenter of all Ice Age events with the most ice and the surrounding lands which was nearly all of Canada aside from the West Coast and much of the northern tier states of the US were under a mile of ice as well as Europe especially England.

        To track coming Ice Ages, we don’t look at Barrow, Alaska, we look at Hudson Bay.

      • Environment Canada records for a few arctic stations I have downloaded don’t show the massive warming claimed for the arctic. Does anyone have satellite anomaly data for north of 60, 70, 80? What about DMI? Do they have anomaly data? Maybe I need to download more sites but Gary Pearse’s comments made me think about it. Frankly, living in a place that goes from -40 to plus 35 C every year and varies by 20 C regularly (daily or weekly) makes a degree variation in the AVERAGE (which might be caused by LESS cold) pretty irrelevant. It was 20 degrees today. Last week it hit 28 C. Next Monday the forecast low is -6 and snow. Of course that is just weather.

        What am I missing?


  13. “It should also be noted, that the U.S. Climate Reference Network, designed from the start to be free of the need for ANY adjustment of data, does not show any trend,”

    Actually it does show a trend – a downward trend – a cooling trend.

    Just saying.

    • “a downward trend – a cooling trend.”

      As do ClimDiv and USHCN (almost certainly because they have been adjusted to match USCRN closely)

      UAH USA48 matches this trend quite closely, thus verifying the data extraction procedures of UAH.

      • Trafamadore below is about to say “So you think that there are little thermometers dangling from the satellite?”
        No sir, but there are hundreds of little thermometers dangling from the weather balloons, which verify the satellites thousands of times. Mosher makes the same biased statements, never pointing this out.
        The satellites clearly show that 1998 was the warmest year, and we have cooled since then.

        They also prove that whatever is causing the surface warming (1. fraud, 2. confirmation bias, 3. noble cause corruption, or, unlikely, 4. they are correct), it has nothing to do with CO2, as all the IPCC physics cannot explain a cooling troposphere and a warming surface.

        I wish WUWT would do a post on this.

      • trafamadore is correct: satellites measure microwave brightness, not temperature. A model is used to convert the microwave data into “temperature.”

      • Trafamadore, are you unaware that all methods of measuring temperature rely on one or another property or properties of matter?
        Some use thermal expansion of materials, others make use of infrared or microwave EMR emissions, and still others may use thermal variations in electrical resistance.
        Regardless of the method used, all rely on converting a measured physical property of the state of a material into a number.

      • While we’re on the subject, nothing measures temperature directly anyway. So let’s just agree to the fact that satellites represent the best-quality record we have for “temperature” globally (as much as possible, anyway; better than the ~15% coverage of the ground-based systems deployed), which shows NO WARMING of any significance.

      • It’s what I’ve been telling you FOR YEARS.

        The satellite sensor sits in space.

        Its sensor records photons. That gets turned into voltage.

        The voltage represents the BRIGHTNESS AT THE SENSOR IN SPACE.

        Imagine you shined a flashlight from 6 miles away and I recorded the brightness AT 6 MILES..

        Next I have to do an inversion to figure out the BRIGHTNESS at the SOURCE

        To do that I have to know how the atmosphere affects EM as it propagates. I need a MODEL of how light passes through the atmosphere.

        So the satellite sits in space. It records brightness AT THE SENSOR, then you have to apply a physics model.

        That physics model is a radiative transfer model for microwaves.

        What is radiative transfer physics? It’s the physics you use to design FLIRs, and radars, and cell phones, and IR telescopes, and IR missiles, and ANY device that senses or transmits EM through our atmosphere.

        It’s the PHYSICS that underpins global warming theory.

        So you have brightness at the sensor, then you have to work an inverse problem to get “brightness” at the source (troposphere), and you also need to apply some “idealized” parameterizations or atmospheric “profiles”.

        From this data and model you estimate the temperature at the troposphere. Well, actually, you have to do
        some interpolation and then you have to do corrections.

        Corrections? Yes, you have to do what amounts to a TOBS correction for satellite data… didn’t know that, did you?

        Some folks also use GCMs to adjust the satellite data. Basically you have 35 years and 9 different instruments… quite a challenge.

        Raw data? The final product is 100 adjustments away from the raw data.
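
        The inversion step described above can be sketched with a toy model. Everything here is hypothetical: a single transmittance factor and emission offset stand in for a real radiative transfer model, purely to show how errors in the assumed atmosphere propagate into the retrieval.

```python
# Toy forward model (hypothetical parameters): brightness at the sensor
# is brightness at the source attenuated by one atmospheric
# transmittance factor, plus an emission term from the atmosphere.
def forward(b_source, transmittance=0.7, atm_emission=30.0):
    return transmittance * b_source + atm_emission

# The inverse problem: recover source brightness from the sensor
# reading, assuming the SAME model parameters. Any error in the
# assumed atmosphere maps directly into the retrieved value.
def invert(b_sensor, transmittance=0.7, atm_emission=30.0):
    return (b_sensor - atm_emission) / transmittance

b_true = 250.0                # "brightness" at the source
b_seen = forward(b_true)      # what the sensor in space records
b_retrieved = invert(b_seen)  # retrieval with a perfect model

# A small error in the assumed transmittance biases the retrieval:
b_biased = invert(b_seen, transmittance=0.72)
```

        With a perfect model the source brightness comes back exactly; with a slightly wrong transmittance it does not, which is the sense in which the final product depends on the model, not just the sensor.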

      • Steven: your argument is silly.
        What you are saying is in effect that more data processing is a bad thing and results in less accurate results.
        I beg to differ.
        Suppose we have an X-ray, a CAT scan, and an MRI (NMR) scan. There is essentially no data processing involved in developing the X-ray plate. However, the data processing required to produce a CAT scan or an MRI (NMR) scan is considerable, and the algorithms for producing them are very sophisticated. Yet I don’t think you would find a radiologist who would claim that an X-ray gives you better information than an MRI or a CAT scan simply because it requires less manipulation.

      • He means that the satellite data is interpreted as temperature through a mathematical model. The satellite uses a reference and compares the radiation from the atmosphere against that reference. That in turn is converted to a temperature. In fact, that is all that a thermometer does. The reference is a volume of e.g. mercury at a specific temperature, and the change in volume as temperature changes is interpreted as temperature. So, in effect, the satellite instrument really is a thermometer, just a more complex one with many more parts and an unstable orbital position that also has to be accounted for, since a change in orbital height will affect relative brightness and thus apparent temperature.

      • “Its the PHYSICS that underpins global warming theory.”

        You seem to have microwave sounding down pretty well. Why is it you have so much trouble with saturation?

      • The point is that the satellite data is calibrated against Radiosonde data.

        It may be measuring temperature as Steven Mosher suggests (a voltage response to brightness), and it may then go through some modelling as Trafamadore suggests, but at the end this is tested against balloon temperature measurements, so we can be confident that the satellite data is properly calibrated and correlated to temperature and is telling us what we need to know.

      • Duster:
        GPS location also comes from satellites in orbit. However, if you fail to account for the effects of Special Relativity (well known) and General Relativity (not well known) the position you calculate will be off substantially. To say that making these corrections amounts to “data manipulation” and therefore you cannot trust the results is not true.
        The reason why they are trying to discredit ARGO buoys and satellite data is that they contradict the global warming dogma.

    • Easy example.
      What is the average temperature of your car? How would you calculate it and what would it mean? If the average temperature changed, what would it mean? Is there a single factor that controls the average temperature of your car?

    • I would settle for a global average temperature that displays with the max and min values on the y-axis. Charting daily average temps between 35C and -80C would show a stubbornly stable temperature for the earth.

      Hard to sell popcorn at a paint-drying exhibition, though.

      • Global temperatures are irrelevant to all life on Earth, whether they change 1.0 C or not. Not one life form on the planet notices any difference; it is what happens in local regions that matters. Many local regions have had changes in local temperatures similar to the changes in global temperatures. The main point is that nobody is going to notice it being 22.5 C instead of 22 C during the day, and nobody is going to notice it being 11.5 C at night compared to 11 C.

        The general point is that it doesn’t matter at all, and whatever little changes occur, humans just have to adapt to them. For virtually all of us, adapting is no different than before, which really means we don’t even need to adapt.

      • Oh, but didn’t you just read: “Its the PHYSICS that underpins global warming theory.”
        Physics is neutral. It is the fudging and trickery that produce global warming.

  14. Rather than adjusting the majority of data, you increase the confidence intervals around said data, or admit it is too inaccurate to meaningfully use and draw conclusions from.

    Endless adjustment is simply torturing data to obtain the results you expect to see. After all, how else do you determine what to adjust going back decades or even centuries? Inherent bias plays a huge role in all of this, as does this pervasive idea that we can obtain a greater level of precision than data allows by using ‘innovative’ techniques to tease out the signal we believe to be there. Statistics in general, but especially climate modelling, is chock full of false precision that demonstrates no real predictive or analytical power.

    Basically, we have nothing reliable from which to draw ANY conclusions about the climate and how it has changed over time. Making significant policy changes from such data is incredibly negligent, at best.

  15. I have never understood why all the individual station data has to be used to create a single long data set.

    A single station is claimed to need adjustment because of equipment changes, time-of-observation changes, etc.

    Ignore that.

    Just take each section of the data set during which there are no changes and find its trend. Then after the change go on to the next section and find its trend — until you have run through that single station data completely.

    All that you are looking at are the trends for data collected in a similar fashion. No adjustments are necessary.

    Do that for all the stations. Thus you have collected all this data on trends.

    Since all the data has been reduced to trends you should be able to use it to determine what the overall trend of all the stations really is.

    No adjustments are necessary — but you do need the real raw data.

    Am I missing something here? This method seems to let the data speak for itself.

    Eugene WR Gallun
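
    A minimal sketch of the per-segment trend idea outlined above, using made-up station data with an artificial 1.0 C step change at an equipment switch. The point is that pooling per-segment slopes sidesteps the step change that contaminates a spliced record.

```python
# Fit an ordinary least-squares slope to each unbroken stretch of a
# station record, then pool the slopes instead of splicing the data.
def ols_slope(years, temps):
    n = len(years)
    my, mt = sum(years) / n, sum(temps) / n
    num = sum((y - my) * (t - mt) for y, t in zip(years, temps))
    den = sum((y - my) ** 2 for y in years)
    return num / den

# One hypothetical station, with an equipment change after 1980 that
# shifted the record down 1.0 C but left the 0.02 C/yr trend intact.
seg1 = [(y, 10.0 + 0.02 * (y - 1950)) for y in range(1950, 1981)]
seg2 = [(y, 9.0 + 0.02 * (y - 1950)) for y in range(1981, 2011)]

# Splicing the segments lets the step change contaminate the slope...
spliced = seg1 + seg2
slope_spliced = ols_slope([y for y, _ in spliced],
                          [t for _, t in spliced])

# ...while per-segment slopes recover the trend with no adjustment.
slopes = [ols_slope([y for y, _ in s], [t for _, t in s])
          for s in (seg1, seg2)]
mean_segment_slope = sum(slopes) / len(slopes)
```

    In this toy case the pooled segment slope is exactly the underlying 0.02 C/yr, while the spliced record's slope is dragged down by the step.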

    • Am I missing something here? This method seems to let the data speak for itself.

      I believe you make an excellent point! Granted, taking the 6 PM reading when the maximum could be at 3 PM would not capture the max for the day. However 6 PM readings for 170 years would certainly show the true trend, which is what we are really after, is it not?

      • The justifications for such “adjustments” have always sounded very simplistic and one sided, and mostly resemble sophistry more than science, IMO.
        Evidence of the lack of proper justification include comparisons of locations that have been corrected, such as this one:

        Here are some others:

      • But Zeke, what convinces me above all else is that, whatever the justifications, the overall result of the various “adjustments” (you must count me on the side which does not countenance referring to adjusted numbers as data, per se) is to introduce a huge and obvious bias. Specifically, the sum of all the alterations forms a pattern which is very nearly identical to the CO2 graph.

      • I made a similar point above. I cannot see that TOB makes any (significant) difference when one is dealing with large data sets in which one is only seeking to ascertain the difference between the max and min of a 24-hour cycle. We are not seeking to compare 3pm and 6pm temperatures; if we were, then TOB would be important and an adjustment would be necessary.

        Climate is not global but regional. That said, we do not need that many really well-sited and properly maintained/managed weather stations to see what is really going on.

        There were only about 6,000 stations in the global network, and now only about 2,500. It would not take long to properly audit those. Heck, every university could have a field trip, and part of the course of study could be to properly audit local weather stations for quality of siting (and siting changes), maintenance, procedures, equipment (and equipment changes), length and quality of data streams, etc. From this one could quickly identify the best-sited quality stations with the longest uninterrupted record. Given what the IPCC says (i.e., manmade warming only post-1950s), it would not matter whether the data stream only goes back to the 1950s, although it would be good to identify some stations covering the period 1900 to 1950 for comparison purposes.

        We should identify stations that need no adjustment to the raw data, and use only such stations and find the trends of each of these. Then these trends can be combined on regional basis to ascertain trends on a region by region basis. Since climate is regional, not global, that would be more informative and it reduces the spatial coverage issue.

        There is a lot to be said for the point raised by Mr Eugene Gallun above

    • Eugene, I [have] suggested something very close to what you outline here.
      As for what you are missing here, I must be missing it too, as are most on the skeptical side.
      There is no scientifically justifiable rationale for the methods used.
      The reason they do what they do is clear, and stated many times and in many ways on [this] and other comment threads…the data as it was did not support the meme, so it has been altered to do so.
      In addition to what you suggest here, I would compile a list, for each time interval desired, of all of the locations that have exhibited a warming trend, those that have no trend, and those that have exhibited a cooling trend.
      Locations could be further separated according to such parameters as degree of urbanization and other factors.
      These intervals and location sets could then be presented or displayed in any number of ways to discern relevant information.
      I believe such a method would make it very clear, and do so very easily and recognizably, just what is going on in each area, region, and in the aggregate…the world.

    • Exactly, Eugene. I have said this before and I’ll probably say it again. The process of averaging monthly averages (which are themselves averages of daily averages) to produce a national or global annual average is going to hide a vast amount of variability and in the end, produce numbers that don’t have a real-world meaning. GHCN and USHCN are (as far as I can see on a quick browse through) totally silent on how they calculate averages over large spatially diverse data sets. I’m not even sure they say what a daily average is (mean of hourly readings? halfway between the min and max? I couldn’t see it anywhere).

      How do they calculate the average over a large area? If it’s a straight arithmetic mean, it will be biased in favour of densely populated areas, and if they use any 2-dimensional gridding procedure, there are a lot of different ways of doing it (linear interpolation, minimum-curvature, polynomial fitting, subsets of the above, and probably more that I’m not even aware of). All of these are basically designed to produce a picture of the spatially variable parameter being measured at regular intervals, so that it can be visually portrayed by colouring, contouring, etc., and so that you can use mathematical techniques to draw conclusions from the way it varies through space. We do a lot of this in my business of mineral exploration, especially with geophysical data, and I just can’t imagine how (or even why) you would want to derive an average… Well, it’s obvious why we need those averages: so we can show a global warming trend (or not, depending on our preference) in simple terms that politicians can understand. But if you must calculate an average, it seems to me that the choice of method is going to have an influence on the outcome, perhaps greater than the variation over time that you are looking for, because that variation is small fractions of a degree at most.

      My message is, average time trends across space if you want to (after you have looked at them individually and perhaps seen enough similarity that averaging will be useful), but do not trend the spatial averages over time. You are going to hide far more than you reveal.

      This from a simple geologist who feels threatened by the Merchants of Doom. Feel free to point out my errors or omissions, anyone who knows more than I do.
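
      The spatial-averaging point can be illustrated with made-up numbers: a straight arithmetic mean over stations is pulled toward densely sampled areas, while even a crude grid-cell average weights each area equally.

```python
# Hypothetical stations: cell "A" is densely sampled and warm,
# cell "B" has a single cool station.
stations = [
    ("A", 15.0), ("A", 15.2), ("A", 14.8), ("A", 15.1),
    ("B", 5.0),
]

# Straight arithmetic mean over stations: dominated by cell "A".
arithmetic_mean = sum(t for _, t in stations) / len(stations)

# Grid first: average within each cell, then average the cell means,
# so each area counts once regardless of station density.
cells = {}
for cell, t in stations:
    cells.setdefault(cell, []).append(t)
cell_means = [sum(v) / len(v) for v in cells.values()]
gridded_mean = sum(cell_means) / len(cell_means)
```

      Here the station mean is about 13.0 while the gridded mean is about 10.0: same data, three degrees apart, purely from the choice of averaging method.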

  16. “Approximately 92% (or 99%) of USHCN surface temperature data consists of estimated values”
    That’s why ‘Climatology’ has become ‘Climastrology’ or as some would say ‘Crimastrology’.

    • Except that the adjustments have a time dependence, mainly from assumptions about TOBS of past measurements. A fit for the short period of time to USCRN says nothing about the validity of the adjustments of past data.

    • Sorry Steve, but I’m just not buying the “corrections” anymore.
      In 1998 temperatures were claimed to be measured properly. Earlier data needed correction.
      In 2003, temperatures were claimed to be measured properly. Earlier data needed even more correction, and the 1998 data also needed correction.
      In 2008, temperatures were claimed to be measured properly. Earlier data, including 1998 and 2003 data needed adjusting.
      In 2013, temperatures were claimed to be measured properly. Earlier data, including 1998, 2003 and 2008 data needed some more adjusting.
      Now in 2015, you claim temperatures are FINALLY being measured properly. Earlier data including 2013 and 2014 need adjusting.
      So how much do you want to bet that if the unadjusted data in 2017 shows no additional warming or some cooling, you won’t find mistakes in the 2015 data and start applying adjustments to that?

      • Exactly!

        Which brings us right to the point that I make again and again: if they cannot get early 21st-century temps correct using early 21st-century technology, why should we believe they get early 20th-century temps, or mid 20th-century temps, correct?

        True, they acknowledge greater uncertainty for older temperatures. (See the differences in height of the three error bars.) Still, why should we believe the temperature trend over the past 135 years is what they say it is?

      • They may have acknowledged wider error bars, but then subsequent alterations went outside these error bars!
        I am fairly certain that is a no-no.
        What on Earth do the bars signify if alterations are allowed to place the data points outside the bar!
        * insert head ‘splodin’ video*

    • Why on earth would anyone set the most pristine, most accurate Climate Reference Network to begin at a temperature anomaly of +1.7?

      If it’s the “reference” it should start at ZERO. Even USCRN is a victim of the adjusters.

  17. John,

    A couple other things you might be interested in.

    They also use stations not listed in the database to adjust the data. I found this out back in January when I did a file comparison between the GHCN files for the 15th and 16th. The raw files had identical usable data, yet the adjusted files were different. Only N. American data was affected. Some of the changes were more than +/-0.8C. I inquired, and was told they have a N. American database containing stations not in their QCU/QCA databases, which they also use when making adjustments to data. I never got an answer when I asked where I could find this database, and I didn’t persist in following up. Without that database it is impossible to exactly replicate their adjustments. If those extra stations are suitable for making adjustments, they should be suitable for inclusion in their public database and handled the same way.

    Then you have stations like the two shown below, which have identical records in the raw GHCN file for 1961-1970. Such obviously bogus data has to skew their adjustments for all stations using them for reference. There are hundreds of station pairs in the raw file scattered across the world having 7 or more months of identical data when a comparison of stations within the same country is done. It is literally trillions to one against two stations having identical data for 12 months in one year, let alone an entire decade. See the table below. The USHCN portion of their raw database has no duplicates of more than 7 months.

    I told GHCN about this problem several months before they moved on to their new database, yet they have done little to correct it. It appears that if they like the result they don’t want to rock the boat by correcting or removing bad data. It is disheartening to think how much money is being wasted using such a poorly maintained dataset. Little confidence can be had in the output.

    The example US station pair listed below is not considered part of the USHCN network, but both stations are in the GHCN database and have identical raw data for 1961-1970. They are about 35 km apart and differ by 107 meters in elevation.
    42572550000 41.3 -95.9 299 EPPLEY FIELD
    42572553000 41.37 -96.02 406 OMAHA, NE.

    Here is a table of stations pairs by number of annual matching months. If either or both stations have a missing value that month is ignored.

    Matching months, station pairs
    0 148386809
    1 2461997
    2 162093
    3 16871
    4 2169
    5 322
    6 90
    7 35
    8 45
    9 46
    10 60
    11 68
    12 166
    total station pairs examined 151030771
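
    The duplicate check described above amounts to something like this sketch (the station values are hypothetical): count months where two stations report identical values, ignoring months where either is missing.

```python
# Sentinel for a missing monthly value, as in the raw GHCN files.
MISSING = -9999

def matching_months(a, b):
    """Count months where both stations report the IDENTICAL value,
    skipping months where either value is missing."""
    return sum(
        1 for x, y in zip(a, b)
        if x != MISSING and y != MISSING and x == y
    )

# Hypothetical one-year records for two nearby stations (tenths of C).
# Eleven usable months, all identical -- astronomically unlikely for
# genuinely independent stations.
st1 = [12, 15, 48, 102, 156, 201, 233, 228, 180, 120, 60, MISSING]
st2 = [12, 15, 48, 102, 156, 201, 233, 228, 180, 120, 60, 20]

n = matching_months(st1, st2)
```

    Run over every same-country station pair and tallied per year, counts like this produce exactly the kind of distribution shown in the table above.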

    • I downloaded their source code months ago and I am guessing links to the “hidden” data can be found there. I have not examined that code closely, however. While it did give me a bit of a headache to go through it I have to give someone there credit for trying to add useful comments.

  18. With my ERP background, I still cannot for the life of me understand why a transactional approach to temperature readings, stored on proper databases, would not be a worthy research objective. That is: for each measurement, each day, each site, the adjustments are added as separate transactions to the originating ob. This then allows standard database query techniques to be used to see just exactly how the final value was arrived at. A made-up example:

    DateTime SiteID Type Value Process and Comment
    20150615 08:15:00 704367 RAW 12.6 Obs ex site
    20150615 08:15:00 704367 TOBS 0.6 TOBs adjustment V3.09
    20150615 08:15:00 704367 HOM1 -0.3 Homogenization V8.45.a Correct for site UHI
    20150616 12:00:00 704367 RAW 99999 Missing obs ex site
    20150616 12:00:00 704367 INF1 11.9 Infill ex V5.7.12 code average nearest 5 sites

    And so on. Database engines are made for this sort of storage and query capability.

    Use ’em!
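
    A minimal sketch of the proposed transactional layout, using SQLite. The column names, site ID, and version strings follow the made-up example above; they are illustrative, not real USHCN identifiers.

```python
import sqlite3

# One row per transaction: the raw ob and each adjustment are stored
# separately, so the audit trail is a plain database query.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE obs (
        datetime TEXT, site_id TEXT, type TEXT,
        value REAL, comment TEXT
    )
""")
rows = [
    ("20150615 08:15:00", "704367", "RAW",  12.6, "Obs ex site"),
    ("20150615 08:15:00", "704367", "TOBS",  0.6, "TOBs adjustment V3.09"),
    ("20150615 08:15:00", "704367", "HOM1", -0.3, "Homogenization V8.45.a"),
]
con.executemany("INSERT INTO obs VALUES (?, ?, ?, ?, ?)", rows)

# The final value is the raw ob plus its adjustment transactions,
# and every intermediate step remains individually queryable.
(final,) = con.execute(
    "SELECT SUM(value) FROM obs WHERE datetime = ? AND site_id = ?",
    ("20150615 08:15:00", "704367"),
).fetchone()
```

    Asking "how was this final value arrived at?" then becomes a SELECT over the transaction rows rather than reverse-engineering a published adjusted series.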

  19. So Anthropological Global Warming does in fact exist. However, it is caused by people manipulating the data, not by CO2 emissions from burning fossil fuels as was previously postulated. Meanwhile actual temperatures, particularly of the oceans, which comprise two-thirds of the Earth’s surface, have changed very little.

    • UHI is anthropogenic warming. The effect of agriculture on mesoscale “climate” is anthropogenic. The question is not whether it exists but how important it is, and whether CO2 has any influence to speak of. The probable answers are “not very” and “very little” based on geological data.

      • I think a lot of people agree with you Duster. We have a huge impact on local and regional climate. The construction of a dam affects the micro-climate for many kilometres around the reservoir. Logging, farming, ranching, cities, transportation, electricity generation and transmission, solar farms, wind farms – lots of small effects. But the world is 70% water, and just a small amount of land is actually inhabited. So the question is how much are humans affecting climate? I suspect not much, but I have been wrong before.

      • Duster: I agree. Also, the fundamental question of how the CO2 emissions from burning fossil fuels affects the total CO2 in the atmosphere is not well understood. What is known is that regardless of the effect of CO2 on temperature, more CO2 is beneficial.

  20. Approximately 92% of the temperature data in the USHCN TOB output has been estimated.

    How much of the TOB data itself has been “estimated”? And has anyone done any analysis on the data? For example if a station reports a reading time of exactly 5.00pm every day for a month then I would think it likely this is not the actual reading time.

    Maybe it was generally say 4.50pm through 5.10pm but then again maybe it was more or less randomly spread between morning and evening depending on what was happening on that day for the person responsible for reading it.

    I am highly sceptical of the TOBs adjustment, not based on the maths or even the nature of the adjustment but rather because people were doing the readings and people have other priorities than being available at the right time for a daily min-max reading. People, however, don’t necessarily like to document their own inadequacies such as reading at a time that is not according to “policy”.

  21. If almost all temperature data is being adjusted, the U.S. Historical Climatological Network (USHCN) must be relying on inappropriately sited Stevenson Screens used by NOAA’s National Weather Service which breach the Climate Reference Network Site Information Handbook developed by NOAA’s National Climatic Data Centre.

    It means that, for starters, the temperature data is inaccurate (just an estimate), and then it is adjusted by further estimates. Yet we know that an estimate applied to an estimate and then subjected to another estimate, simply cannot provide an accurate answer.

  22. Whenever the data is presented, the unadjusted raw data set should be displayed (plotted) alongside the adjusted/homogenised data set; this would then provide some insight into the error bounds of the adjusted/homogenised data set.

    • I’ve been working with what is supposedly raw daily data that has been converted from F to C, but with flags indicating the data is suspect. Missing data (never logged) is indicated with -9999.

      The flags indicating suspect data usually make sense. When you see a summertime temperature of -44°F in Arizona you know something ain’t right. But when you cull both missing and suspect data, you have a lot of missing days.

      Flagged data has one of these flags:
      A = failed accumulation total check
      D = failed duplicate check
      G = failed gap check
      I = failed internal consistency check
      K = failed streak/frequent-value check
      M = failed megaconsistency check
      N = failed naught check
      O = failed climatological outlier check
      R = failed lagged range check
      S = failed spatial consistency check
      T = failed temporal consistency check
      W = temperature too warm for snow
      X = failed bounds check
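
      The culling step described above can be sketched as follows (the daily values are hypothetical; the flag letters are the failure codes listed above): drop days that are missing or carry any QC flag, then see how much survives.

```python
# Missing-data sentinel and the set of QC failure codes listed above.
MISSING = -9999
FLAGS = set("ADGIKMNORSTWX")

# Hypothetical daily records: (value in tenths of F, flag or "").
days = [
    (612, ""),        # 61.2 F, clean
    (598, ""),        # clean
    (-440, "O"),      # -44.0 F, flagged as climatological outlier
    (MISSING, ""),    # never logged
    (605, ""),        # clean
    (590, "S"),       # failed spatial consistency check
]

# Keep only days that are present and unflagged.
usable = [v for v, f in days if v != MISSING and f not in FLAGS]
n_dropped = len(days) - len(usable)
```

      Even in this tiny example half the days are culled, which is the "lot of missing days" problem: the cleaner you insist the daily record be, the sparser it gets.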

      • Isn’t the whole point of the MMTS to avoid “missing” days and “suspect data”? If you get a reading, as per the example you gave, of -44 degrees F, then something is wrong with the transmission system or the actual measuring device itself. Why, then, trust the other days? What if on the other days the temperature is only slightly off, and falls outside of any correction algorithm? What then? Humans would actually be better than the instruments on those occasions.

  23. GWPF (and others) have suggested that an enquiry is being undertaken into the various temperature data sets and how these are put together and the adjustments made. The recent articles on WUWT should be presented to them for their consideration since these contain much insight.

    • richard verney says:
      September 28, 2015 at 1:45 am
      GWPF (and others) have suggested that an enquiry is being undertaken into the various temperature data sets and how these are put together and the adjustments made. The recent articles on WUWT should be presented to them for their consideration since these contain much insight.
      There doesn’t seem to be any point sending them stuff – it seems they may have destroyed their own credibility:

      I don’t think any credible scientist would object to scrutiny of data sets and adjustments made, providing it was done by independent scientists who understand the physics.

  24. Physics students in schools during the 1980s were taught that mercury-in-glass thermometers would be precalibrated against a constant-volume gas thermometer prior to use in a critical situation, for example, a weather station. Such thermometers came at a higher price, but not too much compared to the total price of the station. Any change in the thermometer’s characteristics over time due to the supercooled-liquid nature of glass could be accounted for by careful study. From first principles I would say that the ageing would cause the thermometers to under-indicate the temperature due to the bulb expanding over time. I was always taught that the constant-volume gas thermometer is used to calibrate all the others. The difficulty for me in understanding the adjustments to historical data is the direction of the adjustment and the lack of trust in the scientists of the day who made the readings, mostly with vernier scales to obtain an extra decimal point.

  25. John Hinderaker at Powerline just referenced this post. A thread currently exists at “memeorandum” for both Hinderaker’s post and this post. Hinderaker used a line in his post similar to the one below that you used in this post:

    >> Note that the US land area (including Alaska and Hawaii) is 6.62% of the total land area on Earth.<<

    Is this true? The world is 71% water and 29% land. I assumed the U.S. is 6.62% of the Earth's total surface area (including oceans), not just its land area. Since this post is getting much attention I hope that figure is accurate.

    • Since we are dealing with the land thermometer record, I took the 6.62% figure to refer to the land surface area of the globe, not the total surface area of the globe.

      A very quick internet search suggests that in sq kms, the Earth’s surface area is about 510,072,000 sq. km, the land surface area to be about 148,940,000, and the surface area of the US (which I assume includes the Great Lakes) to be about 9,147,400 sq. km.

      So a figure of 6.2% appears to be a reference to the land surface area of the US in relation to the land surface area of the globe.

      • Yes, thanks. I wish I had Googled before posting my comment. After I posted the comment and did a couple of other things it was bothering me so I decided to dig into it. What I found is similar to what you found. I came back to clarify. Thanks again.

  26. The collection and averaging of local temperature “data” by central governments, or at a global level, appears to be a complete waste of taxpayers’ money.

    Average temperature is a meaningless statistic.

    If there was evidence of significant harm to humans, animals or plants from climate change, and there is none so far, it MIGHT be useful to compile average temperature statistics. I’m not sure why.

    And while I’m in a good mood, any Pope who encourages MORE poverty by opposing capitalism, opposing the use of cheap high density sources of energy, which poor people desperately need, and thinks CO2 is a satanic gas, in spite of the fact that it greens the Earth … must hate poor people.

  27. I estimate that is was really, really, really cold, back in the good old days, after all, I and everyone else used to walk to school 10 miles in a blizzard. Meanwhile, we know it is burning, scorching, crazy hot now. After all, Dakota James said so. As we can see, whereas, in the good old days, people wore nice neat 3 piece suits, long coats and hats, now, all we see are puke rock playing, underwear as outerclothes wearing waifs, who are mere moments away from becoming climate refugees. It will happen, and did happen, in 1997. Lake Michigan is now dried up, and there are now horse races in Antarctica. Oh jees, I need to adjust my meds! / sarc.

  28. The US accounts for 6.62% of the land area on Earth, but accounts for 39% of the data in the GHCN network.

    Of the 96,191 stations in the GHCND-Stations list as of 1/2015, 53% are in CONUS and Alaska.

    61% are in CONUS, Canada, and Mexico.

    I’m wondering how it can be that 61% of the data comes from 47% of the stations. Is this due primarily to the long duration of European stations?

  29. Even if… and that’s a mighty big IF… that so-called “global warming” turns out to be true, and all those Ph.D.-level scientists turn out to be right instead of deliberately putting on a hoax for the fun of it… even if they’re right and global climate change really is happening… the way nice white people live has absolutely nothing to do with it. Absolutely nothing! So let’s all keep driving alone to the apocalypse. Our grandchildren will curse us for being idiots, but what the hey! At least we tried our darndest to protect the lil baby fetus!

  30. Looking around and reading some here and there, here’s what I see:

    When they changed the TOBs from the 1940s onward, the raw data doesn’t show much of a glitch. If this is supposed to introduce a “cooling” effect, then why doesn’t the raw data reflect it? Makes no sense. But maybe there really wasn’t a “cooling” effect. Maybe that’s the real answer.

    The raw data doesn’t show much of a glitch throughout the entire 20th century. But here are two very curious observations:
    (1) when stations were switched to MMTS from LiG, the number of months where data wasn’t collected increased. How can that be? This is supposed to be automatic, not human. Why the missing data?
    (2) the amount of “warming” that has occurred over the last century is of the same magnitude as the “cooling” correction of MMTS over LiG, around 0.5 degrees. IOW, when you start trying to include such a “correction” via a computer, you’re playing with a figure that is equal to the variable you’re trying to gauge. One has to be extremely careful.

    Finally, related to point #1: how, exactly, is the MMTS temperature “recorded”? Is there some sort of electric wire that connects all of this up to some computer? Well, what about this wire? Is it being taken into consideration?

    First, is the lack of connectivity the reason for all the missing days? If so, then one wonders what you’re really dealing with.

    Second: temperature affects the conductivity of metals. Colder days should produce ‘less’ of a signal, and warmer days ‘more’ of a signal: IOW, colder temperatures on cold days, and warmer temperatures on warm days. If you have more stations in warmer areas of the country, then this will skew your temperatures.

    From the data I’ve seen, once the MMTS went into effect, all hell broke loose. This needs scrutiny. Otherwise: “junk in; junk out.”

    • There is a limit to how long the cord is allowed to be, which has resulted in MMTS stations being closer to buildings than the Stevenson Screens they replaced.
      For one thing.

  31. When Tony Heller exposed this same information, Anthony Watts trashed him in the media. Just when it first hit the light of day and Drudge picked it up, Anthony became the poster child for the climate lunatics to discredit the premise. At that point the momentum collapsed.

    Anthony you planning to come out and fess up that you were wrong about that, that Tony was right? It would be the honest and manly thing to do.

  32. Professing themselves to be Wise, they became Fools. (Romans, KJV, Apostle Paul.) Well said, Pastor Paul, 😊😁😃😉👆👀!!!

  33. Climate has turned into a religion… because of the money. Hard scientific facts like those in this article are facts, not beliefs.
    How many scientists support a theory is of no scientific interest. A theory is leading as long as it can best explain the relevant observations, and one experiment is enough to disprove it. Basic skepticism about current theories is thus a major driving force in the development of science.

  34. Reblogged this on Climate Collections and commented:
    Further analysis of the extent USHCN data adjustment.
    Executive Summary:
    The US accounts for 6.62% of the land area on Earth, but accounts for 39% of the data in the GHCN network. Overall, from 1880 to the present, approximately 99% of the temperature data in the USHCN homogenized output has been estimated (differs from the original raw data). Approximately 92% of the temperature data in the USHCN TOB output has been estimated. The GHCN adjustment models estimate approximately 92% of the US temperatures, but those estimates do not match either the USHCN TOB or homogenized estimates.

    The homogenization estimate introduces a positive temperature trend of approximately 0.34 C per century relative to the USHCN raw data. The TOBs estimate introduces a positive temperature trend of approximately 0.16 C per century. These are not additive. The homogenization trend already accounts for the TOBs trend.

Comments are closed.