Even more on the David Rose bombshell article: How NOAA Software Spins the AGW Game

Guest essay by Rud Istvan

The disclosures by Dr. Bates concerning Karl’s ‘Pausebuster’ NOAA NCEI paper have created quite the climate kerfuffle, with Rep. Smith even renewing his NOAA email subpoena demands. Yet the Karl paper is actually fairly innocuous compared with other NOAA shenanigans. It barely removed the pause, and still shows the CMIP5 models running hot by comparison. Its importance was mainly as a pause-busting political talking point in the run-up to Paris.

Here is an example of something more egregious but less noticed. It is excerpted from the much longer essay When Data Isn’t in the ebook Blowing Smoke. It is not global, concerning only the continental United States (CONUS). But it is eye-opening and irrefutable.

NOAA’s USHCN stations are used to create the US portion of GHCN. They are also used to create state-by-state temperature histories accessible on the NOAA website. A 2011 paper[1] announced that NOAA would be transitioning to updated and improved CONUS software around the end of 2013. The program used until the upgrade was called Drd964x. The upgrade was launched from late 2013 into 2014 in two tranches. Late in 2013 came the new graphical interfaces, which are an improvement. Then about February 2014 came the new data output, which includes revised station selection, homogenization, and gridding. The new version is called nClimDiv.

Here are three states. First is Maine, with the before/after data both shown in the new graphical format.


Second is Michigan, showing the graphical difference from old to new software.


And finally, California.


In each state, zero or very slight warming was converted to pronounced warming.

One natural question is whether upgraded homogenization (which, among other things, ‘removes’ urban heat island (UHI) effects) is responsible. From first principles, no: the NOAA/NASA UHI policy is to warm the past so that current temperatures correspond to current thermometers (illustrated using NASA GISS Tokyo in the much longer book essay). This might be appropriate in California, whose population more than doubled from 1960 to 2010 (138%), with a current density of ~91 people/km2. Maine is a similar ocean/mountain state, but much more rural. Maine’s population grew by only a third (34%) from 1960 to 2010, and its current population density is just 16.5 people/km2. Maine should not have the same need for, or degree of, homogenization adjustment. Without the newest version of the US portion of GHCN, Maine would have no warming; its ‘AGW’ was manufactured by nClimDiv.
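To make the adjustment policy concrete: pinning the present and warming the past can only flatten a spuriously warming urban record; it cannot create a trend. Here is a stylized sketch (the five-point series and the 1.0-degree UHI ramp are invented for illustration; this is not NOAA's actual homogenization code):

```python
# Stylized UHI adjustment pinned to the present: the newest reading is left
# untouched and older readings are raised (the past is warmed), which removes
# the spurious urban trend.
urban = [10.0, 10.25, 10.5, 10.75, 11.0]   # invented record with 1.0 deg of UHI drift
n = len(urban)
uhi_ramp = 1.0                             # assumed total UHI contamination, deg
adjusted = [t + uhi_ramp * (n - 1 - i) / (n - 1) for i, t in enumerate(urban)]
print(adjusted)   # [11.0, 11.0, 11.0, 11.0, 11.0]: present pinned, trend gone
```

An adjustment of this shape can only reduce an urban warming trend; it cannot manufacture warming in an already-rural record like Maine's, which is the point of the comparison above.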

It is possible, albeit tiresome, to analyze all 48 CONUS states for the transition from Drd964x to nClimDiv. NOAA gave 40 of the 48 states ‘new’ AGW. The Drd964x decadal CONUS warming rate from 1895 to 2012 was 0.088F/decade. The new nClimDiv rate from 1895 to 2014 is 0.135F/decade, more than 50% higher. Definitely anthropogenic, but perhaps not actual warming.
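The two decadal rates can be compared directly; the inputs below are the figures quoted in the post, and the ratio works out to about 1.5x:

```python
# CONUS least-squares warming rates quoted in the post, deg F per decade.
drd964x_rate = 0.088     # old Drd964x software, 1895-2012
nclimdiv_rate = 0.135    # new nClimDiv software, 1895-2014

ratio = nclimdiv_rate / drd964x_rate
extra_per_decade = nclimdiv_rate - drd964x_rate

print(f"ratio: {ratio:.2f}x")                      # 1.53x
print(f"extra: {extra_per_decade:.3f} F/decade")   # 0.047 F/decade
# Compounded over the ~12 decades of record, the software change alone
# adds roughly half a degree F to the century-scale trend.
print(f"added over 12 decades: {extra_per_decade * 12:.2f} F")   # 0.56 F
```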

[1] Fennimore et al., Transitioning…, NOAA/NESDIS/NCDC (2011), available at ftp.ncdc.noaa.gov/pub/data/cmb/GrDD-Transition.pdf

322 thoughts on “Even more on the David Rose bombshell article: How NOAA Software Spins the AGW Game”

      • Rud and Willis,
        First let me say I appreciate your work and comments on Anthony’s site.
        I am a land surveyor by trade. We deal with past measurements in my profession with some dating further back than these temperature sets.
        I was taught early on that data is sacred…it is why we write notes in pen and cross out errors and not erase them.
        GPS allows us to measure more accurately and with more precision, but it does not change what was measured in the past. Since a man’s land is very precious to him, the court system has stepped in to set precedent on how land is measured and how physical monuments (walls and fence lines, roadways, etc.) are to be treated. We surveyors then learn proper techniques for measuring and noting data, and although it is not perfect, we do not throw out or adjust the old because we can measure better now.
        I know other fields have similar histories, and my question to you two and the rest of the commenters is: at what point does data cease to be data because of constant manipulations, and why haven’t the courts stepped in as they did in land surveying? After all, we are now talking about huge sums of money and policy decisions, not unlike dealing with someone’s land.

      • TC in OC,
        I totally agree with you on that. I know my license would be jerked if I changed the distance, elevation, or position every time I went to the field. They change the data every time they change their underwear.

      • I second and third what TC and AK say as a professional surveyor in Kansas.
        (I didn’t realize so many surveyors were interested in AGW)

      • On land surveying, that’s an excellent point, I’ve been trying to think of a comparable discipline where historical data is important.
        As an aside I remember (over half a century ago) many school holidays spent in the ‘back blocks’ in the hot Australian sun holding a staff for my father reading the theodolite or level.

      • Surveyors are out in the elements running a traverse or level loops across the landscape and actually know about the climate, unlike the climate modelers behind a computer who don’t even look out their basement window to see the weather.
        We have been out there in the real world and I personally don’t see this supposed warming.

      • TC — Basically you are asking where are the audit trails? Answer: There aren’t any.
        In fairness, Berkeley Earth — the newest data set — does keep the original data and shows the original and “corrected” value as well as error estimates. AFAIK NOAA does neither.
        Real scientists apparently do not need audit trails (or error estimates).

      • Back in the day, when we used steel tape measures, we would attach thermometers to them so we could adjust for temperature change. The steel tapes were calibrated at 70 deg F. When measuring over asphalt or concrete roads, the temperature could be up to 15 degrees warmer than elsewhere. The same when measuring over a freshly plowed wheatfield, too. We know first-hand about UHIs.
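The tape correction described here is the standard one from surveying texts: a steel tape standardized at 70 °F expands by roughly 0.00000645 of its length per degree F, so a hot-pavement reading needs a small correction. A sketch (the 100 ft / 85 °F numbers are just an example):

```python
# Standard steel-tape temperature correction from land surveying.
ALPHA_STEEL = 0.00000645   # thermal expansion of steel, per degree F

def corrected_distance(measured_ft, temp_f, std_temp_f=70.0):
    """True distance for a tape reading taken at temp_f with a tape
    standardized at std_temp_f (the tape is longer when hot, so the
    true distance is slightly more than the reading)."""
    return measured_ft * (1.0 + ALPHA_STEEL * (temp_f - std_temp_f))

# A 100 ft reading over asphalt running 15 degrees above standard:
print(corrected_distance(100.0, 85.0))   # about 100.0097 ft
```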

      • DCA February 7, 2017 at 3:01 pm
        “Back in the day,… The same when measuring over a freshly plowed wheatfields too. We know first hand about UHIs.”
        Yes, of course, but wheatfields are not urban, so they are not usually considered part of UHI. As you have observed, as have those who go gliding, farmland can be an excellent source of heat, and the reason for that heat has absolutely nothing to do with CO2.

      • Let me add to what the surveyors have written. It gets even better (or worse) when the cooked temperature is used to generate an also-cooked future sea level to establish a coastal boundary. ACSM BULLETIN December 2008 covers how it should be done, averaging over the previous 19-year tidal epoch.
        Note that sea level has a periodic component from the ~19-year lunar/solar Metonic cycle that the local coastal agency attributed to wind-driven upwelling. As a result of this and other shaky reasoning, there are three different sea level rises for 2100 at the same location, 11″, 42″, and 66″.
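The 19-year tidal-epoch averaging described above works because a mean taken over exactly one Metonic cycle cancels the periodic term and leaves the underlying trend. A minimal sketch with invented numbers (a 2 mm/yr trend plus a 19-year oscillation; not real tide-gauge data):

```python
import math

# Invented annual mean sea levels, mm: a 2 mm/yr trend plus a 19-year
# (Metonic) periodic component of 30 mm amplitude.
years = list(range(1990, 2021))
msl = [2.0 * (y - 1990) + 30.0 * math.sin(2 * math.pi * (y - 1990) / 19)
       for y in years]

def epoch_mean(series, end_index, epoch=19):
    """Trailing mean over one 19-year tidal epoch ending at end_index."""
    window = series[end_index - epoch + 1 : end_index + 1]
    return sum(window) / epoch

# Averaged over exactly one full cycle, the sine term cancels, leaving the
# midpoint of the trend across the window (years 2002-2020 here).
print(round(epoch_mean(msl, len(msl) - 1), 1))   # 42.0
```

Averaging over anything other than a whole number of cycles leaves part of the oscillation in the datum, which is how a periodic component can get misread as "wind-driven upwelling" or as acceleration.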

      • Don K – You said:
        “Don K February 7, 2017 at 2:59 pm
        In fairness, Berkeley Earth — the newest data set — does keep the original data and shows the original and “corrected” value as well as error estimates. ”
        Do they keep actual original data or do they keep the data they were given originally?

      • JohnWho

        Do they keep actual original data or do they keep the data they were given originally?

        I think you are asking if their “original” values have been tinkered with. Answer: I don’t know, but I think they attempted to avoid that. Doesn’t mean that there aren’t an unknown number of transcription errors, many decades or centuries ago, in copying data, reading handwritten journal/log entries, etc. And of course, a lot of the real, actual, thermometer readings were probably none too accurate.
        But systematically tinkering with the damn data without records of exactly what was changed, by how much, and why is — as far as I know — an innovation developed by college students in the early 20th century (or maybe earlier) when it was known as “dry-lab”ing and was frowned upon. It has been refined into the main stream by British and American “climate scientists” in the past couple of decades.
        Perhaps someone who knows more about the Berkeley Earth project than I do can provide a better answer.

      • Don K,
        “shows the original and “corrected” value as well as error estimates. AFAIK NOAA does neither.”
        NOAA shows both. The directory is here. The adjusted, which is all people here seem to care about, is called qca. The unadjusted is qcu.
        How do we know it is really unadjusted? It hasn’t changed since V1, early ’90s, which was issued on DVD. You can’t adjust data once it has been issued on DVD. But if you really want to check US data, NOAA shows facsimiles of the original, often hand-written, B19 forms here.
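For anyone wanting to compare qcu (unadjusted) against qca (adjusted), the GHCN-M v3 .dat files are fixed-width text: an 11-character station ID, a 4-digit year, a 4-character element code (e.g. TAVG), then twelve monthly values in hundredths of a degree C, each followed by three flag characters, with -9999 marking missing months. A parsing sketch (layout per the v3 README; the sample record is made up, this is not NOAA code):

```python
# Parse one fixed-width line of a GHCN-M v3 .dat file (qcu or qca).
# Layout per the v3 README: 11-char station ID, 4-digit year, 4-char element,
# then 12 x (5-char value in hundredths of deg C + 3 flag chars); -9999 = missing.
def parse_ghcn_v3_line(line):
    station = line[0:11]
    year = int(line[11:15])
    element = line[15:19]
    values = []
    for m in range(12):
        start = 19 + m * 8
        v = int(line[start:start + 5])
        values.append(None if v == -9999 else v / 100.0)
    return station, year, element, values

# Invented record: January 8.10 C, February missing, the rest 10.00 C.
sample = "10160355000" + "1995" + "TAVG" + "  810   " + "-9999   " + " 1000   " * 10
station, year, element, vals = parse_ghcn_v3_line(sample)
print(station, year, element, vals[0], vals[1])   # 10160355000 1995 TAVG 8.1 None
```

Walking both files with a parser like this, month by month, is the straightforward way to see exactly what the adjustments changed.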

      • NOAA shows both. The directory is ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/

        Thanks Nick. But from the README file, that’s the Global HCN, not the US HCN, and while there’s lots of probably useful metadata, the actual data in each data set seems to be only three values per month — monthly minimum, maximum, and mean temps (although they obviously have the daily values). I’ll download a couple of files and look at them, but not today, as it’s 0400 local and I need to do some outside work tomorrow during a predicted brief interval of above-freezing, dry weather.

    • It would be really interesting to run their new AlGorethms against the Death Valley station and see if they also create the same pronounced warming. See if the 1913 Furnace Creek reading is also dropped by 3 degrees.

    • I thought there were about 50 states in the US, why are you talking about only three of them? Fake news!

      • Obama said there are 57 states. Not even a month gone and Trump has already lost SEVEN states!

      • There are 48 in Conus. Defined specifically in the post. As a troll, you are not doing so well. Keep trying, and I will delight in continuing to shoot you down. Shooting ducks in a barrel.

      • It is silly, but I have to reply to Jorgekafkazar: Obama did not say there were 57 states; he implied there were 59. He said he had been to 57 states with two to go, but they wouldn’t let him go to those two states.

      • With a CO2 output of approx 1.2% of the USA national number, why would Washington State give a crap about it? If they all died tomorrow they could save 1.2%?!?!?!?! Waste of time and money.

    • From above article:

      “because the NOAA/NASA UHI policy is to warm the past so that current temperatures correspond to current thermometers”

      UHI policy is to warm the past?
      Is that specific to UHI only?

      • The ‘official’ NOAA/NASA solution to UHI. Read the essay in the book. Just is. All I did was irrefutably document the official policy. Screen captures.

    • One small beef, Rud, which does not detract from your excellent article. The WUWT glossary says that CONUS stands for conterminous United States, not continental. That is a weird term, which I think means it covers only 48 of the USA’s 50 states. In other words, Alaska and Hawaii are not included.

      • I’m pretty sure the term was invented by the US military sometime in the mid 20th century. While today’s military would be quite comfortable with the word “Contiguous”, the military of the 1940s and 1950s would have been most unlikely to use it. “Continental United States” OTOH would have made perfect sense to them.

      • Until Alaska was admitted as a state, Con US as both Contiguous and Continental would have been correct. Only after states 49 & 50 were added would there have been a difference between the contiguous states and all the states.

      • The Contagious United States. Watch out! Even if nowhere else has come down with it since Hawaii, you never know when it might start spreading again. •¿●

    • Could you please publish the exact data sources for your graphs, or the URL of the plotter which produced the graphs?
      Thanks in advance.

      • The before versions are no longer available. They were screen captures made while writing the book essay. I substituted Joe D’Aleo’s Maine because he accidentally got lucky updating a talk and got the new display with the old data. The nClimDiv versions are at data.noaa.gov. Google nClimDiv and it takes you directly to the home page, with everything organized under it.

  1. The fact of the matter is that Karl et al 2015 showed that it was possible to “torture the data” to say anything your fiscal masters wanted.

  2. Wow! Living in Maine, I guess I should be thankful that it is getting warmer!
    Of course, the Sea-Level isn’t cooperating. There’s a slight rise over the past 107 years, but at year’s end last year, the Portland Tide Gauge read NO CHANGE from 1910 (to the millimeter). Looks like we’ll have to ask for even more warming! Excuse me, now, as I have to go outside and shovel about a half foot of Albedo out of my driveway!

    • Got a link to that Tom?
      From an article in the Australian Newspaper.
      “ONE of Australia’s foremost experts on the relationship between climate change and sea levels has written a peer-reviewed paper concluding that rises in sea levels are “decelerating”.
      The analysis, by NSW principal coastal specialist Phil Watson, calls into question one of the key criteria for large-scale inundation around the Australian coast by 2100 — the assumption of an accelerating rise in sea levels because of climate change.
      Based on century-long tide gauge records at Fremantle, Western Australia (from 1897 to present), Auckland Harbour in New Zealand (1903 to present), Fort Denison in Sydney Harbour (1914 to present) and Pilot Station at Newcastle (1925 to present), the analysis finds there was a “consistent trend of weak deceleration” from 1940 to 2000.
      Mr Watson’s findings, published in the Journal of Coastal Research this year and now attracting broader attention, supports a similar analysis of long-term tide gauges in the US earlier this year. Both raise questions about the CSIRO’s sea-level predictions.
      Climate change researcher Howard Brady, at Macquarie University, said yesterday the recent research meant sea levels rises accepted by the CSIRO were “already dead in the water as having no sound basis in probability”.
      “In all cases, it is clear that sea-level rise, although occurring, has been decelerating for at least the last half of the 20th century, and so the present trend would only produce sea level rise of around 15cm for the 21st century.””
      May be paywalled.

      • Sea level “rise” at Fremantle since 1897 has been approximately 200mm
        Anthropogenic subsidence around the Fremantle tide gauge [due to removal of groundwater] since 1897 has been estimated at 1.2mm/year until 1975 and around 1.75mm/year since 1975 [http://onlinelibrary.wiley.com/doi/10.1002/2015JC011295/full], for a total subsidence in the period 1897–2015 of approximately 164mm.
        That suggests a sea level rise of 36mm at Fremantle over 118 years or a rate of 0.30mm/year.
        Head for the hills Noah……..
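The arithmetic in the comment above is easy to reproduce; the inputs are the figures quoted in the comment:

```python
# Fremantle figures quoted in the comment (approximate).
gauge_rise_mm = 200.0             # apparent sea level rise, 1897-2015
subsidence_mm = 1.2 * (1975 - 1897) + 1.75 * (2015 - 1975)   # groundwater-driven

true_rise_mm = gauge_rise_mm - subsidence_mm
rate_mm_per_year = true_rise_mm / (2015 - 1897)

print(round(subsidence_mm, 1))        # 163.6 mm of subsidence
print(round(true_rise_mm, 1))         # 36.4 mm of real rise over 118 years
print(round(rate_mm_per_year, 2))     # 0.31 mm/yr
```

The small difference from the 0.30 mm/yr quoted in the comment is just rounding.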

      • The warmists bleat about fearing for their jobs, while at the same time, actually destroying others’.

  3. These people are experts at not cooperating with Congress. Send in the FBI and raid the place and the ex-employees’ homes too.

    • The F.B.I is crooked, too. Comey refused to face all the crimes committed by Clinton and it’s been discovered that Clinton loaned Comey’s wife a hundred thousand dollars. The F.B.I. isn’t going to be indicting or putting ANYBODY from Washington DC in jail.

  4. Not being an American, I did not understand anything that you pretend to have explained. Could you please state it in English?

      • Thank you, dear friend. USHCN, CHCN, Conus, nClimDiv don’t make sense to me. Shenanigan does not either; it’s not Yiddish, and it definitely is not Mexican. Is it Trumpish?

      • Also:
        EN shenanigans French translation
        shenanigans {pl} FR combines
        shenanigan {noun} FR mystification fumisterie

      • François,
        USHCN – US Historical Climatology Network: “The United States Historical Climatology Network (USHCN) is a high quality data set of daily and monthly records of basic meteorological variables from 1218 observing stations across the 48 contiguous United States.” Here: http://cdiac.ornl.gov/epubs/ndp/ushcn/ushcn.html
        GHCN – Global Historical Climatology Network: “The Global Historical Climatology Network (GHCN) is an integrated database of climate summaries from land surface stations across the globe… The data are obtained from more than 20 sources.” Here: https://www.ncdc.noaa.gov/data-access/land-based-station-data/land-based-datasets/global-historical-climatology-network-ghcn
        CONUS: Continental United States
        nClimDiv: This is merely the name of a software program, with no inherent meaning relevant to this discussion.
        I hope that was useful.

      • Francois — USHCN, CHCN, Conus, nClimDiv are acronyms like RADAR(Radio Detection And Ranging).
        . United States Historical Climatology Network
        . Canadian Historical Climate Network
        . Continental United States
        . NOAA Climate Division
        “Shenanigan” is indeed an Americanism, although it’s been around since the mid 19th century and I think has spread to other English-speaking countries. Origin is unknown — possibly a corruption of Celtic (Irish) sionnachuighm or German Schenigelei, both of which refer to playing tricks.

      • Francois,
        USHCN – United States Historical Climate Network
        CHCN – ? Perhaps mis-typed GHCN – Global Historical Climate Network
        CONUS – Continental United States (United States minus Alaska and Hawaii)
        nClimDiv – name of a program used to process and display USHCN data
        I had never thought of shenanigans as being English slang, but perhaps it is. It means usually underhanded tricks implying the person responsible is up to no good.
        Hope this helps, I often miss the meaning of French and German idioms when reading in those languages so understand the confusion.

      • I had always thought shenanigans to be of Irish origin. It was commonly used by elders on both sides of my family tree with roots deep in the Auld Sod. Usually in an admonishing tone of voice.

      • vukcevic February 7, 2017 at 2:21 pm
        The Gold Rush was still going strong in 1855. My family, which has been Californian back to the Gold Rush, always attributed shenanigans to the Irish, but then half the family is.

      • Thanks, I think I made my point: the word has no known etymology; you do not know what you are talking about.

      • François February 7, 2017 at 1:46 pm
        Thanks, I think I made my point : the word has no known etimology, you do not not know what you are talking about.
        Your above statement is an example of the meaning of the term. Another example is the term “pulling a fast one.” Take your pick. As to making a point: nope. Personal ignorance or lack of comprehension by one party cannot be construed as a deficiency of other parties.
        You asked for information and clarification as to the meaning of a word or phrase, thus you had no point to make. To make a point you would have had to have a clear, concise understanding of the term and any synonyms of it.
        “Shenanigan”: you played a good one 🙂

      • Shenanigan is a well-known word, well understood on this side of the pond. It was often used by Irish immigrants, though it could have had an origin other than Ireland.
        The American-English language is filled with words whose meanings people in other parts of the world don’t understand. Take, for example, Native American words that are commonly used on this side of the pond.
        American-English words number about 260,000 in a 26-volume dictionary.

    • François February 7, 2017 at 1:13 pm “Merci cher ami. USHCN, CHCN, Conus, nClimDiv, don’t make sense to me. ”
      WR: François has a point. For us foreigners (and there are a lot of us on this international website), abbreviations are difficult to read. We must not only learn other (English) words and other grammar, but we also have to learn all kinds of abbreviations that represent thousands of possible word combinations. For foreigners it is very difficult to guess which combination of words is meant, even more so than for native speakers, who better know the different words a given letter can stand for.
      Therefore, typing a few more words makes every text much easier for us to read! For every text without abbreviations: thanks in advance! Or, as a compromise: the first time, write out all the words with the abbreviation in parentheses after them.

      • Wim, of course you are right. I did exactly that in the guest post above for CONUS, but not otherwise, assuming commenters and lurkers here would be familiar with climate alphabet soup. I stand corrected. I am fluent in idiomatic German but would drown in German government-agency alphabet soup also. You just raised the quality standard on an influential international blog. Highest regards.

      • Even for us whose first language is English, the abbreviations, “alphabet soup”, is tough to digest! 😎
        ( I’d never seen “nClimDiv” myself. From hanging around here, I’d picked up on the concept of such a thing but never knew it had an abbreviation.)

      • Not really on-topic, but I agree about the need to explain acronyms or (non-obvious) abbreviations the first time they are used in a document. It’s all too common that they are assumed to be understood, or left until late in the document to define. It’s maddening!

      • It doesn’t help that the same TLA (Three Letter Acronym) can be used for several very different things. It can cause major confusion.
        An example I heard one gaming night: If you’re going off base for a night of playing Dungeons & Dragons, and while stopped at the gate they ask if you have anything in the trunk, don’t answer “Just some RPG’S”. ~¿~

      • When in 2007 I got overwhelmed by acronyms, I browsed to Steve McIntyre’s website https://climateaudit.org, AKA Climate Audit. There was (still is?) an alphabetic, comprehensive list of acronyms. In English, of course. Maybe the No Tricks Zone might cover a comparable list in German/French or lead you to one. Hope that helps.

    • The first words are acronyms and contractions, which do not have a direct translation or denotation. Shenanigan can be traced to the Spanish word “chanada” meaning trick or deceit. The word “yiiddisch” is misspelled. It should be Yiddish, which means Jewish in German. Mexicans share a common language, Spanish, with Spain. Trumpish has the same etymology as trollish.

      • I read here that trump is a slang term for fart in Britain . . and pence . . So it’s the fart penny administration, headed by Don Juan ; )

    • On moderation.
      [yes, that’s right Nick, and you’ll stay there. Your ugly comments on the original Rose/MoS thread accusing David Rose of being a liar, with nothing more than your own angst to support it, have earned you moderation again. By all means be sure to alert “Sou” aka Miriam O’Brien to your terrible, terrible treatment here. We tolerate your endless diatribes, but we don’t have to give you unfettered access, which is more than RC, SkS, and other friends of yours do on their websites for skeptics.
      BTW, still waiting for that apology over those accusations you made on “death threats” to Aussie climate scientists, which turned out to be nothing but hype.
      And, I’m still of the belief that you are a paid commenter. So please, feel free to be as upset as you wish.
      -Anthony Watts]

      • “endless diatribes” … hmmmm. Oh, you must mean informed substance and logic …. I can see why that threatens you.

      • Just my two cents: I was somewhat ‘won over’ to some skeptic positions based on the treatment RP Jr received on RC many years ago, which demonstrated inadvertently that he must have had some ‘real points’ that couldn’t be refuted through normal dialog. Anymore, this has become quite the echo chamber (in the opposite direction) that RC used to be. IMO, if Nick and/or Mosher don’t comment, we never get the opposing side to any of these arguments. You may disagree with them, but they truly add great value to this site for those of us who still actually want a dialog instead of an echo chamber 100% of the time. Thanks!
        [moderation simply means that Nick’s comments get inspected before being published. Mosher, given his tendency to be emotional and shotgun-style in commentary, has been on moderation for quite some time, to filter out/give him a chance to reword some of his nasty comments from time to time. I’ll point out that for most of its history, WUWT had been FULLY MODERATED, meaning all comments had to be approved. A couple of years ago I went to unmoderated, with a few exceptions. Nick is now an exception, but can still comment, and still does. John@EF’s caterwauling is, as usual, uninformed – Anthony Watts]

      • Re: John@EF February 7, 2017 at 5:07 pm “Oh, you must mean informed substance and logic”
        Nick comes across as intelligent and informed. But he is clearly selling something. As for logic, the evidence shows that CO2 follows temperature, and ergo cannot be a driver of temperature. Ipso facto.
        Re: S. Geiger February 7, 2017 at 5:40 pm
        Mr. Watts gives wide leeway to a wide range of opinions and voices here. This is no echo-chamber. But nastiness is not tolerated from anyone. This is not a mud-flinging zone. If you want that, there are plenty of other places that are one-sided echo chambers of mud-flinging (and worse).

      • Nick, do you get paid anything for being relegated to the “cooler”? Oh, that’s right–you can’t respond. Bang twice on the pipes for “Yes”; once for “No”. Or remain silent and we’ll all be happy.
        The obligatory /sarc.

      • @ myNym February 7, 2017 at 6:14 pm
        Historically, CO2 has primarily been a lagging feedback. During Snowball Earth events CO2 was a forcing. With the advent of mankind, CO2 is now a forcing. Your comment that CO2 cannot be a driver of temperature isn’t accurate. This is pretty basic stuff, Nym.
        Stokes is one of the least nasty posters on this board, and he adds a lot of clear thinking to the discussion. If you’re talking about nastiness as a criterion for moderation and banning, there are very many here who top the list above NS.

      • John@EF, Nick is allowed to post here despite his antics here and elsewhere.
        Why are warmistas so “threatened” by skeptics? “Informed substance and logic?” Why the drastic efforts of censorship?

      • @Michael Jankowski February 7, 2017 at 7:48 pm
        Why are warmistas so “threatened” by skeptics? “Informed substance and logic?” Why the drastic efforts of censorship?”
        ??? … this makes no sense. Try again.

      • What a pity. I always look up Nick’s comments in any thread, to get an intelligent and informed view of the “other side.” I don’t think he is always balanced. Quite the contrary. But you need a counterweight to get a balanced view.
        Anything less than a free pass is a loss, methinks.

      • I sincerely take no sides, though would just like to point out that Nick’s responses in comment threads have been generally civil, as well as cogent. Perhaps those with more knowledge of the situation than I, and more time on the site have reason to disagree with me. Regardless, I think it would be ideal for skeptics to approach rebuttals with less attitude and character attacks.
        Seems that calling his responses “endless diatribes” and accusing him of being a paid commenter is uncalled for.
        [you only have the current perspective, I have years of dealing with Nick Stokes. Steve McIntyre gave up on him years ago, see: https://climateaudit.org/2014/10/01/sliming-by-stokes/ and https://wattsupwiththat.com/2014/03/03/monday-mirthiness-the-stokes-defense/
        Due to Nick’s inability to concede any point, ever, and his factual/logical twisting, McIntyre gave Nick a label Nick “Racehorse” Stokes, after the sleazy, but sometimes effective lawyer, Richard Haynes:
        Nick has thousands of comments on several blogs, here alone he has 1710 comments. Inability to concede a single point over thousands of comments seems pretty much like “endless diatribes” to me. YMMV. – Anthony Watts]

      • John@EF February 7, 2017 at 7:45 pm
        500 million years ago atmospheric CO2 was at 8000 ppm. If it was possible for CO2 to cause a runaway Glow Bull warming, it would have happened then.
        CO2 may have a minor positive feedback, but it cannot be a major driver.

      • Come on, everyone. Apart from any abuse, Nick and Mosher and any of the other names they write under are great fodder.
        I used to meet some of them at the Telegraph in the environment comments section; with time, as you pulled them apart, they would go fruit-bat crazy. It was eye-wateringly funny.

      • I always look for Stokes and Mosher posts on here. Not because I think they are telling the whole truth, but because I want to know how the other side is twisting the story to fit their narrative.

      • Kyle, you should read what Nick says about WUWT and posters here after he runs off to other sites.

      • “Stokes is one of the least nasty posters on this board, and he add a lot of clear thinking to the discussion. If you’re talking about nasty as a criteria for moderation and banning, there are very many here that top the list above NS.”
        I would have to agree with that.

      • Faux politeness is one of the most common passive aggressive smokescreens. Always sounding “nicer” than your detractors, so you can always fall back on complaining about how others are being so rude to you (or count on the innocent unwittingly doing it for you). Like a grandmother with a knife in her false teeth. The nastiest can sometimes be the most impeccably mannered.

      • And, I’m still of the belief that you are a paid commenter. So please, feel free to be as upset as you wish.
        -Anthony Watts]

        Isn’t the suggestion that skeptics are paid meant to be one of the stupidest alarmist conspiracies?

  5. Man-made global warming, only seen in man-made charts.
    Now, speaking of shenanigans, is anyone else curious about the apparent stepwise drop in sea ice at BOTH poles that coincidentally happened after one of the satellites suffered a malfunction? This also happened precisely when the sea ice page here went haywire.

    • No, I’m not curious; it’s pretty clear what’s happening … according to the Russian and Norwegian ice breakers that are currently ice-bound, there is not a bit of ice to be seen anywhere in the Arctic, so the satellites are spot-on.

      • Mark from the Midwest February 7, 2017 at 12:52 pm
        according to the Russian and Norwegian ice breakers that are currently ice-bound
        Norwegians Too??

      • (Shhhh, exnay on the eaicesay)
        Yep, that Arctic is definitely going to be ice free THIS summer. In fact, this would be a great time to plan a ship of fools Arctic Sailing Expedition. Take some measurements for science, turn in a few media articles from onsite to Huffpo, maybe even get a picture taken with one of the few remaining Polar Bears, before they’re all gone.
        Oh, and don’t worry about bringing an Icebreaker; it definitely won’t be needed. In fact, you know what would be the best ship for this 3-hour tour? Dicaprio’s yacht! Think of all the Climate Communicators that could sail into history in style and comfort on that baby. Somebody needs to get Leo in on this, pronto.
        There could even be a website with continuous updates to show the world how much progress has been made. What could go wrong? ○¿○

    • The ice should start coming back now, because the Sun is rising again (well at 71N or so) and the Modis visible satellite cameras will prove where the sea ice actually is.

      • The thing is, the change was most apparent in the southern hemisphere. Antarctic sea ice was chugging along as normal or even slightly above average for years with a slight increasing trend, then a satellite goes down, and ever since the southern sea ice has been consistently more than 2 standard deviations below average.

      • RW, maybe the satellites have been in error for a long time, or maybe the one currently being used is in error. Either way it means we all would be wise to take all satellite-based, global climate products with a large dose of salt. I think technology is fantastic, but have no faith that anyone can calculate global sea level to an accuracy greater than 50cm using satellites. To my simple mind, the effects of tides & winds make the task impossible, let alone the calculations taking isostatic changes into account.

      • The Ross Sea Antarctic resupply vessels (incl. USCGC Polar Star) had several times the usual heavy sea ice to get through this summer. The meridional winds blowing onshore around both poles this last twelvemonth have piled ice up that would otherwise have spread and seeded more extensive growth. It has not been hot though, and thickness is increasing overall.

  6. Remember, the charter of the IPCC is to identify MAN MADE climate change. Only 5% of the annual CO2 is man made. 95% is natural. Without CO2 as a warming agent, the IPCC has no charter. Now, one must also remember that it is JUST that 5% that is responsible for any global warming. Let’s see, what is 5% of 0.04% (CO2) again??? Oh yeah, that’s 20 ppm. Anyone else see this disconnect? 5% of a trace gas is responsible for hurricanes, droughts, floods, snowstorms, and everything else that is undesirable in the climate. No WONDER they have to fudge things to try to stay relevant.

    • John,
      Wrong reasoning… The 95% natural + 5% human is going in, and 97.5% (natural + partly human) is going out: the natural sinks thus take out 2.5% more than the natural sources put in… The natural cycle is temperature driven, mainly seasonal, and is about as large in as out. The extra uptake is pressure driven: more CO2 in the atmosphere drives more uptake by oceans and vegetation. As humans emit about twice what is extra absorbed over a full seasonal cycle, the remaining 2.5% accumulates in the atmosphere, and humans are near fully responsible for the 30% increase…
      The temperature increase since the LIA is good for more uptake by plants and some release by the ocean surface: maximum 16 ppmv/K, according to Henry’s law…
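The mass-balance arithmetic above can be sketched in a few lines. All numbers are round illustrative values chosen only to mirror the 95/5 split and an atmospheric rise of about half the human input; they are not inventory data:

```python
# Mass-balance sketch of the argument above. All numbers are round
# illustrative values (GtC/yr-like scale), NOT official inventory data.
human_in   = 10.0    # the "5%" human input per year
natural_in = 190.0   # the "95%" natural input per year
total_in   = natural_in + human_in            # 200.0 goes in
atm_rise   = 5.0     # observed accumulation, about half the human input
total_out  = total_in - atm_rise              # all sinks absorb 195.0
natural_net = natural_in - total_out          # nature's net contribution

# Negative: natural sinks absorb MORE than natural sources emit,
# so the accumulation is attributable to the human input.
print(natural_net)  # -5.0
```

Whatever one thinks of the conclusion, the point of the sketch is that the "95% natural in" figure says nothing by itself; it is the in-minus-out balance that determines who is responsible for the accumulation.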

  7. It must be really galling for the climate alarmists that their lucrative bilking and misuse of the world’s wealth is threatened by the children refusing to listen to their betters and carry on being scared, despite the unrelenting efforts of their chums in the media.

  8. Tony Heller’s blog shows the distortion in all US data. Most of the warming is due to adjustments.

    • In the United States ALL the warming is due to “adjustments.” Even James Hansen agreed that the US was hotter in the 1930s:
      “It is clear that [in the USA] 1998 did not match the record warmth of 1934.”
      -James Hansen, 1999
      Arguably the 1930s was also hotter globally than today, as the appearance (in graphs) of it being warmer now is also due to adjustments, vanishing rural stations, and the urban heat effect.

    • Exactly again… it’s the algorithm they invented.
      A few of us have been harping on this for years… it went nowhere.
      Every time they enter a new set of measurements, it adjusts the past, mostly down.
      If the past is adjusted every time they enter new data, then you will never know what it really was,
      and any of their claims based on past data (warming??) will be bogus.

  9. “No from first principles, because the NOAA/NASA UHI policy is to warm the past so that current temperatures correspond to current thermometers”
    I’m not understanding how this statement correlates to the charts. If this were so, wouldn’t the past data be higher in the current model (nClimDiv) than in the past model (Drd964x)?
    Unfortunately, I can’t get the scales readable on my computer display. Are they the same vertical scale?

    • You are correct. It wasn’t some accidentally bungled homogenization. Vertical scales for old and new versions are identical, and old/new anomalies are from same baseline, as Maine best shows.

    • As I understand it, they attempted to account for the heat island effect for measurements at the SAME physical location year after year by estimating what the old data would have been over the years IF the heat island had been gradually changing the readings. I.e., if the estimated heat island effect says the temp is now 2 deg higher than it would have been without the heat island, the OLD data would be arbitrarily raised 2 deg to make the change caused by anything else more apparent.
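The adjustment described here can be made concrete with a toy sketch, every number hypothetical: assume the urban heat island grew linearly from zero in 1950 to 2 deg today, and raise each past reading by the portion of UHI that had not yet accrued, leaving the present-day reading untouched:

```python
# Toy sketch of the UHI adjustment described above (all numbers
# hypothetical). The estimated UHI grows linearly from 0 deg in 1950
# to UHI_NOW deg in 2020; past readings are raised by the not-yet-
# accrued portion so the modern reading needs no change.
UHI_NOW = 2.0

def adjusted(year, reading, start=1950, end=2020):
    """Raise an old reading by the UHI amount that had not yet accrued."""
    frac = (year - start) / (end - start)   # fraction of UHI accrued by `year`
    return reading + UHI_NOW * (1.0 - frac)

print(adjusted(1950, 10.0))  # 12.0 -- the past is raised by the full 2 deg
print(adjusted(2020, 12.0))  # 12.0 -- the present reading is untouched
```

Under the toy assumption of a 2 deg raw rise caused entirely by UHI, the adjusted series is flat, which is exactly the "warm the past" behavior the comment describes.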

      • NW, yes, that is what they say they do. The book essay provides detailed ‘official’ specifics. But in fact they do the opposite. The book essay provides several tens of referenced, specific, irrefutable examples. This post was just one of those, using a single software change announcement and the ‘really’ sophisticated technique of screen capture.

  10. Thanks Rud, good stuff. It seems most all adjustments over the years push the left side of the graph downward and the right side upward.
    One wonders how much of what this UN official (Christiana Figueres) said is actually true. It sure seems like what our previous POTUS was up to in most of his 2nd term also. Just sayin, is there an agenda? Ya think?

    • Yup. And it’s not just CONUS. And it’s not just NOAA. The book essay shows examples from around the world. My favorite is Rutherglen, Australia, fiddled from cooling into warming by their BOM software adjustments. This example happens to be very stark because it relates purely to a single ‘documented’ (not really) software change.

  11. Yeah, NOAA homogenization tricks are probably just one part of the dirty-tricks pseudoscience toolkit.
    The real NOAA shenanigans happen in places around the world where there are no pesky temperature records to hold them back. Infilling… that’s probably the real climate-hacking tool for the fraudsters.

    • Bingo, they infill most of the Arctic and Antarctic … and magically that’s where the most “warming” is occurring …

    • 🙂
      If it was settled, there would be one climate model, not 100+.
      And, they could all go get other jobs, as there would be no further need for investigation.
      🙂 🙂

  12. What happened to the new set of met stations in the US that was designed to be sited in areas where there was no likelihood of urban infiltration, no localised heat sources like airports and jet exhausts, no concrete roads or brick buildings nearby? Have the records been discontinued or are they hidden away so that people cannot get at them now?
    I remember the first results after 10 years being published here showing, IIRC, a negligible warming or cooling. It is now about 15 years later, is there a new set of results available? Or are they only being published every 10 years, so the next set will be published in 2021?

    • DH, covered in the book essay. They are showing less warming than adjusted US GHCN, which means homogenization is not scrubbing UHI and not catching all the microsite problems documented by the surfacestations.org project.

    • That would be the Climate Reference Network (USCRN). You can find the data on the NCDC website. The graph they provide indicates that there is essentially no difference (no statistical difference) between the USHCN and USCRN. But you still have to trust the Department of Commerce, under whose aegis NOAA and NASA operate.

  13. Can Rud or anyone else help me with this problem? In 2010 Phil Jones had an interview Q&A with the BBC and listed the warming trends from 1850 to 2009. This during their Climategate fiasco.
    First trend was 1860 to 1880: 0.163 C/decade
    Second trend was 1910 to 1940: 0.150 C/decade
    Third trend was 1975 to 1998: 0.166 C/decade
    Fourth trend was 1975 to 2009: 0.161 C/decade.
    But now, using the York uni tool, the trends are:
    1860 to 1880: 0.113 C/decade
    1910 to 1940: 0.129 C/decade
    1975 to 1998: 0.172 C/decade
    1975 to 2009: 0.188 C/decade
    Why have the two earlier trends dropped? In particular, the first trend, 1860 to 1880, has dropped from 0.163 C to 0.113 C.
    I’m using HAD 4 L&O, but there is a global HAD 4 Krig and that shows a higher trend for 1860 to 1880 of 0.167 C/decade.
    Just for interest I checked the trend from 1910 to 1945 and found it to be 0.140 C/decade, higher than Jones’s second trend is now. BTW, HAD 4 global Krig was 0.151 C/decade for 1910 to 1945. What is going on?
    I just wish Willis or somebody would write a summary of the temp since 1850 or 1880 and of course since Dec 1978 as well? But just for now will someone give me an answer to Jones’s HAD temp warming trends since 1850? Willis , anyone?
    Here’s Jones’s 2010 BBC Q&A link.
    And here is the York Uni data-base tool. Note that Cowton etc allowed RSS V4 TTT but not UAH V6, but only UAH V 5.6. Of course RSS V 3.3 TLT included.
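For anyone who wants to check such numbers directly, the per-decade figures quoted above are just ordinary least-squares slopes over the chosen window. A minimal sketch on toy data (not HadCRUT):

```python
import numpy as np

def trend_per_decade(years, temps):
    """Ordinary least-squares slope of temps vs years, per decade."""
    slope = np.polyfit(years, temps, 1)[0]   # degree-1 fit; slope first
    return slope * 10.0

# Toy series with a built-in 0.015 deg/yr trend and no noise,
# so the recovered slope is exactly 0.15 deg/decade.
years = np.arange(1910, 1941)
temps = 0.015 * (years - 1910)
print(round(trend_per_decade(years, temps), 3))  # 0.15
```

Run against the actual downloadable HadCRUT series for each vintage, a function like this would show directly which dataset revision moved which trend.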

    • Best is to visit some of Heller’s historical comparisons at his blog RealClimateScience, for example simply comparing the ‘official’ GAST from 2000 to 2016. He has a lot of that historical comparison stuff, unaffected by mistakes in some of his other analyses. My book essay has three examples, from NOAA, NASA, and the UK switch from HadCrut3 to HadCrut4 and then 4.1 to 4.2. The comparisons ALL show that the past has been increasingly cooled, and the ‘present’ increasingly warmed. And this has been going on for well over a decade. Your own four-period comparison from 2010 to ‘now’ shows exactly that pattern as well. Probably mostly the HC3 => HC4.1 => 4.2.

      • here’s your answer Rud…
        “”NOAA/NASA UHI policy is to warm the past so that current temperatures correspond to current thermometers””
        …it’s easier to hide the adjustments this way
        Nail them on their algorithm..
        If it’s constantly changing the past…then no one will ever know what it was
        and any claims they make are bogus

      • Of course (using the York temp tool) since 1850 HAD 4 shows about 0.5 C/century warming, and GISS since 1880 about 0.7 C/century. Interesting that the Concordia Uni study shows about 0.7 C since 1800, or about 0.32 C/century warming.
        Amusingly, Australia was responsible for 0.006 C warming since 1800. See down the page at the link for countries’ warming responsibility since the Industrial Rev.

      • The game is really perfidious. Once the SST is “warmed”, combining it with the land data yields only a “small” warming overall. In an interim step the land data are “warmed”, and in combination with the SSTs again only a small trend change appears. And this game continues, never at the same time but staggered in time, so that the individual steps are not so noticeable in the global sea + land data.

      • It is very strange that, AFAIK, all revisions/adjustments have been one way, to make the earth warmer today to support CAGW. (have there been any going ‘the other way’?)

  14. “One natural question might be whether upgraded homogenization (among other things ‘removing’ urban heat island (UHI) effects) is responsible? No from first principles,”
    So what is responsible? Do you have any idea?

    • No. The 2011 paper laid out what they intended to do in the coming upgrade over the 3 years from 2011 to 2014. It obviously did not say what they then did. And such documentation was not publicly available last time I looked. There is a link to the intentions paper in the post. Circles back to one of Bates’ main complaints about documentation, validation, and version control. This is, AFAIK, just another stark example of his whistleblowing point.

  15. Pointed it out many times. They make their one time adjustment for UHI…but then because most of the station moves that need to be homogenized are getting rid of the UHI, a break point is created between the old/new location…and puts all the UHI adjustment right back in! And if they move the station again…they put the UHI back in for a second time.
    One has to wonder, if the routine can manage to piece together a single world temperature from erratically spaced stations for say…January, WHY ON EARTH would anyone bother to try to string moving stations into a single record in the first place? Just calculate the temperatures with whatever stations we have, and when they move, just use the same routines to calculate from that. The very act of trying to string different stations into ONE pretend station is just creating more certainty where there most certainly is not any.

    • Berkeley Earth does more of that than NOAA or NASA, I believe. And footnote 25 to the book essay referenced in the post discusses one provable flaw arising from the BEST regional expectations approach using their station 166900. Mosher was quite unhappy when I commented on that footnote over at CE a while ago.

      • Francois, let me help. First, there is my ebook. Cheap on purpose, and Amazon further lowered the price to sell more. Buy and read it. Second, the Berkeley Earth surface temp data set is called BEST. Third, BEST labels each station they ‘ingest’ and independently analyze. Their data ingestion algorithm is proven badly off by the important station Rutherglen, Australia. This you can check by comparing the actual Australian record to the BEST ingestion. Jennifer Marohasy and Jo Nova have lots of details. That is your to-do, since I already did it and can come back factually harder than what follows.
        The BEST regional expectations QC model is proven defective by station 166900 in essay footnote 25. You could have googled that also, sort of, starting with BEST station 166900 and then examining their results.
        So let me help you out by paraphrasing footnote 25, since you didn’t/cannot. That is the Amundsen-Scott research station at the South Pole. The most expensive weather station by far on Earth. And arguably the best maintained. BEST used its regional expectations model to reject 26 months of record cold from the station’s inception in 1957 up to the time my book published (could be more now). The nearest continuously manned station on Antarctica from which to derive a regional expectation is US McMurdo, 1300 km away on the coast and 2700 meters lower. Really!
        You appear to be suffering cognitive dissonance. Did not understand the graphical comparisons, which never depended on alphabet soup. Now this. Suggestion: buy the ebook, read it, and check out all the footnotes.

  16. Rud Istvan:
    All global, hemispheric and regional compilations of temperatures have no scientific validity: they are junk.
    This is because there is no agreed definition of global temperature. Each team that provides time series of global, hemispheric and regional temperature uses a different definition than every other team, and each of the teams alters the definition it uses almost every month.
    The result is e.g. this http://jonova.s3.amazonaws.com/graphs/giss/hansen-giss-1940-1980.gif
    In real science a parameter indicates something about reality. It does not display whatever its compilers want to suggest at any given time.
    The three graphs I have linked differ because they represent the different suggestions their compilers wanted the time series of global temperature to suggest at the times they were produced.
    All global temperature data will continue to be bunkum until
    (a) there is an agreed definition of global temperature that does not frequently alter
    (b) there is some possibility of an independent calibration standard for global temperature.
    Until then all global, hemispheric and regional temperature compilations will remain less scientifically valid than phrenology. Paymasters say what they want the data to suggest and the compilers of global, hemispheric and regional temperature data can and do provide whatever suggestions are wanted.

    • RC, I published that exact example in the longer book essay, with footnote attribution to Jo and a hotlink to that post. Also gifted her an authors copy of the whole book. Hyperlinking is one neat thing about ebooks.

    • TY for that; for many years this layman has been saying we CAN’T even measure a single temperature for the globe, much less have the precision being claimed to within hundredths of a degree.

    • All global temperature data will continue to be bunkum until
      (a) there is an agreed definition of global temperature that does not frequently alter
      (b) there is some possibility of an independent calibration standard for global temperature.

      The past is the past. Some things about the past, we can know. “Global” temperature is not one of those things.
      A BIG hint is that they have to “adjust” the number somebody reading a thermometer wrote down a hundred years ago. Even if the number WAS wrong, there is no way to know what the correct number should have been. It’s adjusted to find the “right” number that fits what they or a software program says it should have been.
      I’d add a (c) The actual science must be done before and independent of political/ideological influence becomes involved.
      But I guess that brings us back to what I’ve said before, the problem with the most perfect system Mankind can devise is that there are people in it.
      PS I was thrilled to see your name again. God Bless

    • With all the hype about anthropogenic climate extremes (cold AND hot are human caused) you would think AGW enthusiasts would be clamoring for NO global temperature statistics because such a thing would obscure the extremes.

  17. Temperature seems like it should be easy:
    Start with the earliest day recorded on paper.
    Make any TOBS adjustments (I am still not sure these are needed but I’ll give it to them).
    Perform spatial homogenization to get a gridded product
    Analyze sample density to provide accurate error bars for each grid.
    Repeat for each day one at a time. This gives a non-UHI corrected base temperature series.
    Now for each grid, perform a metadata analysis of population change and building development to determine the UHI effect and error in UHI calculation in time for that grid.
    Apply the corrections for UHI based on actual development rates in each grid (whether by warming the past or cooling the present I don’t care).
    Now plot.
    I have a feeling the results from this sort of analysis would lead to warming rates since the Little Ice Age of 0.5 to 1.0 K, with error bars on the order of 1 to 2 K once all errors are accounted for and propagated.
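The gridding step in the recipe above hides one choice worth spelling out: grid cells must be area-weighted (by the cosine of latitude) before averaging, or high-latitude cells are over-counted. A minimal sketch with a made-up 3×2 anomaly grid:

```python
import numpy as np

# Hypothetical 3x2 anomaly grid: rows are latitude bands, columns
# longitude bands. The numbers are invented for illustration only.
lats  = np.array([-60.0, 0.0, 60.0])
anoms = np.array([[0.2, 0.3],
                  [0.1, 0.1],
                  [0.5, 0.7]])   # made-up "polar" anomalies are largest

# Weight each latitude row by cos(latitude): a 60-degree cell spans
# half the surface area of an equatorial cell of the same angular size.
w = np.cos(np.radians(lats))
area_weighted = float(np.average(anoms, axis=0, weights=w).mean())
unweighted    = float(anoms.mean())

print(area_weighted)  # about 0.26
print(unweighted)     # about 0.32 -- over-counts the polar cells
```

With polar-amplified anomalies, the unweighted mean always reads high, which is one concrete way gridding choices move the headline number.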

  18. From the AP

    Bates said in an interview Monday with The Associated Press that he was most concerned about the way data was handled, documented and stored, raising issues of transparency and availability. He said Karl didn’t follow the more than 20 crucial data storage and handling steps that Bates created for NOAA. He said it looked like the June 2015 study was pushed out to influence the December 2015 climate treaty negotiations in Paris.
    However Bates, who acknowledges that Earth is warming from man-made carbon dioxide emissions, said in the interview that there was “no data tampering, no data changing, nothing malicious.”
    “It’s really a story of not disclosing what you did,” Bates said in the interview. “It’s not trumped up data in any way shape or form.”

    • He probably would like to avoid a libel case. He doesn’t have to take the lead on the allegations he’s made now; the HSC is back to digging around. The story has blown up and people are at attention. Be clear: just because the whistle-blower wants to fade back into the mist doesn’t mean his allegations are false. As you have stated already, he gave no evidence or specifics. Bates just basically said “hey, look over there, there are some bad people”.

    • You may have a point there Nick. As I understand it — and I haven’t paid all that much attention — the issue with Karl’s paper isn’t the data so much as what was done with it. But isn’t that pretty much what Rud says in the opening paragraph?
      But since you are here, what’s your take on the major point of the article — apparent systematic manipulation of US temperature data by NOAA to cool the past and/or warm the present?

      • “But since you are here, what’s your take on the major point of the article “
        Don, it will be slow, as all my comments now go through moderation. But I think this latest Bates bears on that. He
        “said in the interview that there was “no data tampering, no data changing, nothing malicious.”
        “It’s really a story of not disclosing what you did,” Bates said in the interview. “It’s not trumped up data in any way shape or form.””

        The data here is of course ConUS rather than global. And as often, it is not well specified what is being compared. Drd964x is what used to be called TOBS corrected; it is not homogenised, while the nClimDiv data it is being compared with is. So you are seeing the effect of homogenisation, which has differing effects on states. I did a complete computation here for the old USHCN. Some states do get a big trend increase, some not.

      • Thanks Nick. I looked briefly at your charts and my initial reaction is that — assuming that your work is correct — the results of homogenization are so bizarre that no sensible person would use homogenized data for any purpose. Probably my initial reaction is wrong. I shall go off and meditate on this.

    • Calling that link “From the AP” is technically accurate but totally misleading, since it actually links to a Seth Borenstein editorial piece on phys.org. That’s the problem that rankles people: a kernel of truth is always buried in a mound of opinion in the debate over CO2 (yes, this is a debate over CO2, not temperature trends).

  19. One natural question might be whether upgraded homogenization (among other things ‘removing’ urban heat island (UHI) effects) is responsible?
    Obviously not – without any adjustments, the Urban Heat Island effect is expected to cause a warming trend. A warming trend cannot be removed by adding a warming trend.

    • Science or Fiction February 7, 2017 at 2:19 pm: “(among other things ‘removing’ urban heat island (UHI) effects)”
      WR: Correct. The problem is the anthropogenic UHI effect itself. That effect has to be removed. When you take out this ‘anthropogenic local warming’, (better) comparable temperatures will remain. At least, those remaining temperatures will be less a reflection of the ‘local’ (urban) circumstances, and they will reflect the regional circumstances more correctly.
      The same with the buoys/ships’ measurements: the anthropogenic ship-measurement anomalies have to be corrected if you want the remaining temperatures to represent the reality of the Earth. Never correct the well-calibrated buoy measurements, because they are already showing reality.

      • If I correct a measurement in my profession, I first have to prove that the original measurement is wrong; I then have to put forward a scientific argument for the correction, and then provide both the uncorrected data and the corrected data together with the difference between them.
        I would end up behind bars if I treated data and corrections the way NOAA has.

      • Ship-borne calibration experiments show the Argo buoy sea-surface temperatures have systematic errors of 0.5 C to 2 C.

      • At Pat Frank
        Inlet sea water pipes of varying length transiting engine rooms of varying temperatures, some pipes well insulated, some poorly, temperature sensors calibrated or not as the readings are non-critical 99% of the time. The buoys might be hooey, but the ships are selected as a substitute for the sole purpose of reading higher. More pretend science in a field rife with corruption and politicization…

      • Science or Fiction February 7, 2017 at 3:39 pm
        “If I correct a measurement In my profession, I will first have to prove that the original measurement is wrong, I will then have to put forward a scientific argument for the correction and then provide both the uncorrected data and the corrected data together with the difference between them.”
        WR: And I suppose that every time you present your results (e.g. a graphic) based on altered data, you would have to say that you changed the data, and you would have to include a source where people can find the explanation of why and how you changed it. Every time you present the results. Without that, your work would be seen as not scientific, and if you gave even the suggestion that it was ‘scientific’ (for example by a corresponding press release pointing at the scientists involved), your work would be seen as ‘misleading’. Because we have to represent ‘reality’. That is how I learned it.

    • The whole “global temperature” business is a figment of someone’s imagination, which is being used for a political purpose. It is impossible to take local temperature readings (useful only for locals), merge them with other local temperature readings, making many adjustments to the original readings along the way, and finally produce an ACCURATE global average temperature – no ifs, no buts, it is impossible.

      • The temperature, elevation, humidity, and wind must all be accounted for, the objective being to determine heat flux all over the globe. It’s an impossible task.

      • Missing the boat here: temperature is only a broad, highly inaccurate estimate of the energy in a system, particularly something as varied and variable as the earth. Averaging temperatures does not give a thermodynamic energy average. Furthermore, the climate is a heat engine and responds to energy differences, down to millimeter scales, according to very well-established principles. A major reason why climate models have failed, besides the built-in biases, is that it is impossible to actually model the processes at a sufficiently small scale, assuming it is possible to do that at all. Nobody talks about a millimeter grid scale for climate models.
        Global average temperature is a useless measure for understanding the climate.
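The point that averaging temperatures is not averaging energy can be shown with a toy two-cell example: because emitted power scales as the fourth power of temperature (Stefan-Boltzmann), the arithmetic-mean temperature is not the temperature of a body radiating the mean flux:

```python
# Two cells at very different temperatures (kelvin). The arithmetic
# mean temperature is NOT the temperature of a body radiating their
# mean flux, because emitted power scales as T**4 (Stefan-Boltzmann).
t1, t2 = 230.0, 310.0            # e.g. polar night vs tropical afternoon
mean_t      = (t1 + t2) / 2.0                   # 270.0
radiative_t = ((t1**4 + t2**4) / 2.0) ** 0.25   # about 278.5

print(mean_t, round(radiative_t, 1))
```

The gap (here over 8 K) grows with the spread of the input temperatures, which is why an average of thermometer readings is not a proxy for the planet's energy balance.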

    • SciFi, the problem is worse since even rural stations are likely to reflect human interference in the microregional scales. Rural areas are not “natural.” They are frequently subjected to clearing for agricultural purposes, leading to effects tied to increased ground surface insolation, evaporation, etc. Also, as rural areas historically declined economically over the last century, you might expect that vegetation would return, shifting temperature data in a different direction. I know this has happened in the northeast. It takes just one hike with open eyes in upstate New York to realize that all the forest is “new.” The ruined stone walls mark formerly cleared pastures and fields. The USCRN is in reality the first US effort at a proper, scientific attempt to measure baseline “natural” phenomena in a “natural” environment.

      • I hope they make no mistakes with the United States Climate Reference Network. If they make no changes, that network may be used for a well-defined measurand; the simpler the better: fewer questions, less doubt. I have my doubts about “pairwise homogenization”; I wonder if that concept has been properly proven.

  20. To “Ripshin”: thank you very much for the useful information you provided to me. You know the French, always fussy, and demanding more: I am still trying to find out where that “shenanigan” thing came from (by the way, my postgraduate studies were on “Latin and Greek epigraphy”)…
    Whatever Mr. Anthony says, I know for certain that in the olden days, olive trees did not thrive in Paris; they bloom there now.

    • Try this :
      Shenanigan = German + East Anglian: schinageln + nannicking = working wool = pulling wool (over someone’s eyes’) = to deceive

    • And this is a bad thing? François doth protest too much. The Eiffel Tower wasn’t there in “olden days”; shall we blame its presence on global warming too? On second thought, the Eiffel Tower was constructed with carbon based energy (coal fired steel smelters). Probably best to tear it down, since like the Olive Trees, it’s a terrible sign of that terrible, terrible use of carbon based energy by mankind.
      Call it ironic iron then:

    • “I know for certain that in the olden days, olive trees did not thrive in Paris, they bloom there now.”
      Kind of the opposite of the farms in Greenland.

    • François February 7, 2017 at 2:21 pm: “I know for certain that in the olden days, olive trees did not thrive in Paris, they bloom there now.”
      Paris surely will show an Urban Heat Island effect. Olive trees in Paris can be seen as a nice proof of the UHI. I suppose we don’t find olive trees in the region outside of Paris.
      Olive trees are sensitive to frost:
      “Frost Prevention
      What are the variables regarding frost damage? The olive fruit can be damaged at temperatures below 29ºF (-1.7ºC). Young olive trees and branches can be killed at temperatures below 22ºF (-5.5ºC) and mature trees can be killed at temperatures below 15ºF (-9.5ºC). These are not precise numbers because the damage varies according to the specific temperature at ground level around the tree, the duration of the cold spell, the olive variety, the age of the tree, and whether the trees have had a chance to harden off.”
      Source: https://www.oliveoilsource.com/page/frost-prevention
      I suppose olive trees are like oranges: they can thrive well for years, but one real frost can damage them all. The difference here might be that oranges are very sensitive to frost while, as I read above, olive trees can withstand some frost. But not a very severe winter. Although it must be said that even in a very severe winter the Urban Heat Island effect can make a difference of many degrees, as the Oslo UHI experiment of 25 January 2007 demonstrates. Which might be just enough for the olive trees to survive: see http://www.climate4you.com/OsloUHI%2020070125.htm
      So the olive trees do well in a protective environment such as the big city of Paris, filled with heaters working all winter. Conditions without too much wind (buildings as wind breakers) will help the survival of the olives.

      • Russian olive can tolerate severe frost – no problem. Maybe you are referring to this cultivar. GK

    • François,
      Did anyone think of bringing olive trees to Paris back in the olden days?
      Many species of plant are transported and grown around the world. Many cities are much hotter than the surrounding countryside, and trees, plants, flowers and bees are thriving under these conditions.

    • BBC NEWS | UK | Education | Wild parrots settle in suburbs
      6 Jul 2004 – But there were also parrots reported in inner-London, including … Escaped parakeets have been spotted nesting in this country since the 19th Century

  21. Nice work, Rud. I hope others add more evidence of bad science. This sort of stuff needs publicity and investigation by the authorities. How can global warming be taken seriously when the systems of measurement and data processing are so rotten? Why are we seeing people try to defend such unacceptable procedures? This is not science. At best it is incompetence, at worst it is designed to deceive.

  22. The new nClimDiv rate from 1895 to 2014 is 0.135F/decade, almost double. Definitely anthropogenic, but perhaps not actual warming.

    Perfect! We have found the A in GW and it is the same as the F in front of RAUD.

  23. Does anyone still have the actual raw data for temperature measurements, or did the agencies alter or destroy it? Is it publicly available in one place anywhere?

  24. This article reminds me of another WUWT article by Professor Robert Brown of Duke, who wrote:
    “there is absolutely no question that GISS and HadCRUT, at least, are at this point hopelessly corrupted.”
    In this article, Professor Brown has some interesting observations about GISS and HadCRUT, as well as the urban heat island adjustments, which he states have been made in the wrong direction.

  25. World temperature has risen about 0.8 C since 1880 and we all believe that disaster is upon us? Like watching a worm wiggle and projecting it is going to jump over a house!! In every city daily temperature will vary much more by time of day and position in the city. We have been conned!!

  26. If we consider unadjusted data from long term continental U.S. stations, there is no warming over the last century. link

    • Would agree then that UHI shouldn’t be taken into account when considering US temperature station data?

      • Surprisingly, there is little difference between rural and non-rural long-running stations. In particular, see Figure 3 in the linked article. Based on that, I take no position on UHI.

  27. Its importance was mainly political talking point pause-busting in the run up to Paris.

    The impact of that is greater than the chronic corruption.
    Each little skewing of datasets is useful. It raises the significance of research findings. All the findings raise funding.
    But so what? Everyone tries to emphasise the important parts of their work. All research highlights the most exciting possibility – regardless of probability.
    But most academic funding is restricted to its own level. A bit here, a bit there… but no real change in the total pie to be nibbled. Little differences help but don’t change the game.
    Trying to influence national policies… That’s a different league.

    • MC, leaving for dinner. But agree wholeheartedly. This post was my little effort to push back more publicly than the book. You might like it if you have not yet read it.

  28. I’m glad that the effects of NOAAs “homogenization” of station records are finally getting the public attention they truly deserve. Several years ago, even before the advent of the egregious adjustments made in GHCN3, I snarked on Climate Audit that they should be called “pasteurization,” because they clearly cooked the books. Nothing done by an administration that complained loudly about a “Republican war on science” ever changed my evaluation.

  29. If a particular temperature data set contains a known bias, whether warming or cooling, do folks here agree in principle that it should be corrected for?

    • Pray tell, by what means are the numerical values of various biases in a station record “known” with enough accuracy to provide a reliable correction?

      • For instance, if a temperature station is located somewhere that has recently become built up and starts to show warming that disagrees with nearby stations that remain rural, then it seems reasonable to assume that the warming is most likely attributable to the development around the site in question.
        I think that should be adjusted for. Do you not?
        The method I would use (as very much a non-expert), would be to reduce the temperature of the affected site by the average of the difference of the nearby rural sites. Otherwise my regional dataset would retain a warm bias.
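        The adjustment this commenter describes could be sketched in code as follows. This is a naive illustration of the commenter's idea only, not any agency's actual homogenization method; the function name and sample numbers are invented:

```python
from statistics import mean

def adjust_to_rural(site, rural_sites):
    """Naive UHI correction as described above (a sketch only):
    shift the affected site's series down by its average offset
    from the mean of the nearby rural series."""
    # Year-by-year mean across the rural stations.
    rural_mean = [mean(vals) for vals in zip(*rural_sites)]
    # Average (site - rural) difference over the whole record.
    offset = mean(s - r for s, r in zip(site, rural_mean))
    return [s - offset for s in site]
```

With a site running one degree warm against two identical rural neighbours, the whole series is simply shifted down by one degree; the scheme obviously stands or falls on how trustworthy the "rural" neighbours are.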

      • While it may be certainly “reasonable to assume that the warming is most likely attributable to the development around the site in question,” it’s the lack of accuracy in the determination of that bias that is critical. In practice, you’ll seldom find nearby, certifiably “rural” sites that agree with one another sufficiently closely over the entire record span to provide a reliable correction. Such practical exigencies militate for disqualifying all UHI-corrupted records, rather than attempting to “correct” them.

      • DWR54 February 7, 2017 at 3:51 pm
        “I think that should be adjusted for. Do you not?”
        I do not. Once a station begins to show an effect that is not purely natural its data is suspect, period. It should be removed from the data set until it can be relocated to an unaffected spot. Nobody has any idea what the correct temperature reading would/should have been… Averaging temperatures of surrounding stations cannot be accepted either, as wind direction and wind speed, humidity, etc., all play a part in what the recorded temperature would have been. Baloney.

      • The best approach would be to remove the impact (tear down the buildings, plant trees, etc.) then make the measurement and after that critical and important task those unnatural surroundings can be installed again. Until another measurement is needed. /s just in case.

      • DWR54:
        Look up the definition of “disqualify!” Disqualified records need no “correction,” since they have no impact upon results.

    • If a data set has a known bias, the equipment or enclosures should be replaced to eliminate the bias. If the data set is “adjusted”, it is no longer a data set, but rather an estimate set. The estimate might be better than the biased data, but it is still an estimate.
      Climate science is a science which requires very careful calibration, installation and maintenance of measuring equipment, since it is not possible to rerun the experiment. It is possible, but highly questionable, to “fudge” the data, let alone to make it up out of “whole cloth”.

      • Are you saying that it is necessary to relocate a temperature station every time a tree grows or sheds leaves, every time a road is built, every time trees are cut down or grow up, every time nearby buildings are erected or demolished, etc?
        That sounds expensive. Surely it is within the wit of man to allow for these things and adjust for them.

      • No, he’s not saying that DWR54. Goodness gracious. Please tell me your absurd hyperbole was just pathetic rhetoric and that you really aren’t that stupid.

      • Look at his response to 1sky1 above.
        Inability to respond to what others are actually saying is one of his most endearing features.

    • I completely disagree with adjusting for a “known” bias.
      In the ERSST V3b introduction, Karl justified getting rid of the satellite sea surface temperature measurements because “the addition of satellite data led to residual biases”, without providing a single shred of evidence that this was actually the case. $Billions wasted.
      In the ERSST V4 introduction, Karl justified adjusting the buoys to the ship engine intakes due to “Buoy SSTs have been adjusted toward ship SSTs in ERSST.v4 to correct for a systematic difference of 0.12°C between ship and buoy observations.” BUT there was simply NOT a bias in the buoys versus ships over the whole time frame in question, and then Hausfather 2017 came along and said the buoys were actually measuring exactly the same as the ships. Throw out the $billions wasted on the buoys, which never did show any cooling bias.
      I don’t know if anyone remembers “the cooling bias” introduced by the new MMTS sensors or the “cooling bias” in the new XBT floats or Phil Jones and the non-existent “0.05C UHI bias” which was not worth adjusting for …
      But the word BIAS, just allows the NCDC’s and the Karl’s to adjust the temperature record up again even when they have ZERO proof of the need for one.
      The ONLY adjustment in history that was done carefully and with many different test measurements around the world was the “bucket adjustment” for pre-1940 canvas and wooden buckets which cooled off the sea water by 1.0C before it could be measured by a thermometer.
      I think we just go back to the RAW record only and note there are various changes in instruments over time and there was this TOBs thing which probably doesn’t impact the trend and just be done with it. Leave all the records alone after this.

      • “I completely disagree with adjusting for a “known” bias.”
        Then how could we distinguish a real deviation from ‘normal’ from a spurious one?

      • Bill Illis February 7, 2017 at 4:17 pm “I think we just go back to the RAW record only and note there are various changes in instruments over time and there was this TOBs thing which probably doesn’t impact the trend and just be done with it. Leave all the records alone after this.”
        WR: All that ‘homogenisation and adjusting’ of temperatures means that I don’t trust any of the surface records. I am looking at UAH temperatures and, so far, RSS’s. And yes, I also want our ‘raw data’ back. All of it.

      • Adjusting buoys up to ship data is just dumb. The buoys are designed specifically to take temperatures; the ship data is taken by eyeballing a temp gauge INSIDE an engine intake. Engines inherently heat things up, and then you have the human error of eyeballing a temperature gauge. If anything the ships’ data should be thrown out entirely or adjusted down to the buoys, not the other way around. Not to mention the fact that they didn’t really start taking ocean temps until directly after the Little Ice Age. Of course ocean temps would heat up from such a cold period in the Earth’s history.

      • DWR54, yes, ask yourself that question: “how could we distinguish a real deviation from ‘normal’ from a spurious one?”
        How to tell a sudden warm wind from a car or aircraft exhaust or air conditioner without ACTUALLY OBSERVING THE CONDITIONS AT THE TIME OF THE DEVIATION?
        NASA/NCDC/GISS/BEST all make guesses about what happened in the past and that also includes TOBS.

    • DWR, @ 3:51 pm Feb 7, I agree, as long as those 3-4 stations are within an acceptable distance, +/- 5 km, in a “circular pattern”. Frankly I think that would be nearly impossible (I am an observer, not an expert either). In our case the nearest stations to ours are much further away and in hugely different “micro” climates. To me your solution would have to involve leaving the compromised station in place, installing 4 more sites around it some distance away in non-compromised (“rural”) locations and, over a period of time, taking readings and then correcting the results for the “tainted” central site (hey, more jobs!! and again I am not an expert either).
      This to me should not take long; the bias should show up relatively quickly. Say over a month we should have a pretty good idea what the anomalies are.

    • DWR54
      The WMO sum it up best-
      WMO – “The nature of urban environments makes it impossible to conform to the standard guidance for site selection and exposure of instrumentation required
      for establishing a homogeneous record that can be used to describe the larger-scale climate”

    • DWR54
      “If a particular temperature data set contains a known bias, whether warming or cooling, do folks here agree in principle that it should be corrected for?”
      It’s very easy to salami-slice the temps up each year to get whatever temp you require with this method.

    • If many stations have such a “known” bias, it would be better to revert to satellite measurements and to develop those instruments further. Satellites determine everything over a given area; nothing needs to be homogenized, interpolated or extrapolated. Perhaps the measurement has to be broken down by atmospheric level, but the problem with UHI and other affected sites is eliminated. I would therefore advocate making this type of temperature measurement better by awarding research funds, so that it can finally replace the surface measurements.

    • 1) How do you demonstrate that a dataset has a bias?
      2) How do you calculate the exact level of this bias?
      3) How do you adjust for this bias?
      4) How do you adjust the error bars for this “bias” adjusting?
      What is known to warmists is rarely actual.

    • Wow, the bias is strong. Despite clear evidence of a lake, they conclude Mars couldn’t have been warm enough for liquid water because there wasn’t enough carbon dioxide to drive their model’s temperature up.
      Did every climate scientist invest his or her pension in carbon credit futures?

  30. One of the wonders of twitter is that you can reach out and touch someone … @KellyAnnePolls
    Kellyanne Conway
    She pays attention to this stuff bc that’s her job … she v data intensive in her message making for the WH
    Redundancy matters
    Reach out and tell her
    Also @parscale
    Brad Parscale … sentiment data number cruncher
    Public Service Announcement over

  31. We need an open source temperature reconstruction. Way too much power is concentrated in the hands of activists masquerading as scientists. They start with a conclusion and work backward. These are the last people we should have in charge of anything that can impact public policy. They simply lack any credibility at all.
    Climate “Science” on Trial; The Consensus is more Con and NonSense than Science
    Climate Bullies Gone Wild; Caught on Tape and Print

    • “We need an open source temperature reconstruction.”
      That’s exactly what BEST (now Berkeley Earth – BE) was set up to do and was endorsed to do by this very site. It went ahead and provided this reconstruction – all of its methods and results are open to public scrutiny. The BE reconstruction turned out to be in agreement with every other group that has ever looked into this issue.
      You say we need ‘an open source temperature reconstruction’. Then we get one and you don’t like the results. So you again say “we need an open source temperature reconstruction”.
      Let’s face it, you won’t be satisfied with any temperature reconstruction that disagrees with your beliefs.

      • Berkeley is the last place I would want this project to be run out of. Also, the BEST project uses existing data. I’m talking about reconstructing from the floor up. What good is Best if they use corrupted data sources?

      • co2islife
        “Berkeley is the last place I would want this project to be run out of.”
        How times change.
        “What good is Best if they use corrupted data sources?”
        They use raw data sources and publish these freely.

      • My belief is that we have had some warming coming out of the Little Ice Age, but that we have cooled since the Medieval Warming Period, the Roman Warming Period, the Minoan Warming Period, the Holocene Optimum, and most of the Eemian.
        Small adjustments in the modern record don’t sway me. It is obviously provable that nothing that has happened since 1978, or even since the 1930s, is remotely close to unprecedented.

      • DWR54, I suggest that you take a look at Best’s “Output Final Temperature”.
        Just try comparing your local Weather Station Temperature dataset from your country’s Met Office with Best’s data.
        Usually the raw data that BEST uses is the same as the data output of your local Met Office (of course your local Met Office has already done their own “Quality Control” on the data); then compare the adjustments and “Final Output” with your original.
        I have done this for many sites along with many other posters, what they are doing is not science and the Final Output is not REAL.
        I suggest you also look at Valentia, a long term top class dataset and then tell me that the Irish Met office knows less about their own temps than Best.
        Or perhaps the Iceland Temp sites, again the same thing.
        Or perhaps you would like it from the horse’s mouth.
        Steven Mosher | July 2, 2014 at 11:59 am |
        “However, after adjustments done by BEST Amundsen shows a rising trend of 0.1C/decade.
        Amundsen is a smoking gun as far as I’m concerned. Follow the satellite data and eschew the non-satellite instrument record before 1979.”
        BEST does no ADJUSTMENT to the data.
        All the data is used to create an ESTIMATE, a PREDICTION
        “At the end of the analysis process,
        % the “adjusted” data is created as an estimate of what the weather at
        % this location might have looked like after removing apparent biases.
        % This “adjusted” data will generally to be free from quality control
        % issues and be regionally homogeneous. Some users may find this
        % “adjusted” data that attempts to remove apparent biases more
        % suitable for their needs, while other users may prefer to work
        % with raw values.”
        With Amundsen if your interest is looking at the exact conditions recorded, USE THE RAW DATA.
        If your interest is creating the best PREDICTION for that site given ALL the data and the given model of climate, then use “adjusted” data.
        See the scare quotes?
        The approach is fundamentally different from adjusting series and then calculating an average of adjusted series.
        Instead we use all raw data. And then we build a model to predict
        the temperature.
        At the local level this PREDICTION will deviate from the local raw values.
        it has to.

      • It really is pathetic how you keep beating that dead horse.
        It doesn’t matter how many problems we find with BEST; because the effort was supported beforehand, we are supposed to accept the results?
        The fact that the lead author was caught in a flat out lie when he claimed to be a skeptic before starting this project should be all the evidence needed that something fishy was being done.

      • DWR54:
        You assert

        Let’s face it, you won’t be satisfied with any temperature reconstruction that disagrees with your beliefs.

        That is a misunderstanding. I object to political propaganda that pretends to be science.
        All global and hemispheric temperature time series are complete bunkum: they are junk. This is explained in my post above and this link jumps to it.
        For a more detailed explanation of the scandal which is global temperature time series then read this especially its Appendix B. Please note that this link is to an item in Hansard (i.e. the official record of UK Parliament) and is a submission I made to a Parliamentary Select Committee. If it were untrue then it would be a perjury that would have put me in jail.

      • ““We need an open source temperature reconstruction.”
        That’s exactly what BEST (now Berkeley Earth – BE) was set up to do and was endorsed to do by this very site. It went ahead and provided this reconstruction – all of its methods and results are open to public scrutiny. The BE reconstruction turned out to be in agreement with every other group that has ever looked into this issue.”
        Yeah, Zeke Hausfather, and Berkeley Earth say they confirm Karl’s “Pausebuster” paper. So if the pausebuster paper is wrong, what does that make BEST?

  32. And also a possible explanation for why the surface now runs hotter than the troposphere, which is opposite to theoretical predictions. O, what a tangled web we weave when first we practise to deceive!

    • According to the latest TTT data (v4.0) from RSS, there has been no statistically significant difference between the rates of warming observed at the surface and in the atmosphere.

      • It is really hard to imagine why the TTT data has such an increased trend compared to the TLT, TMT, and TTS measurements from RSS. TTT should essentially be the average of these other three, but for some reason it is way higher.

      • Bill Illis
        “It is really hard to imagine why the TTT data, has such an increased trend compared to the TLT, TMT, and TTS measurements from RSS. TTT should essentially be the average of these other three but for some reason it is way higher.”
        Bill, you’re overlooking the fact that the current RSS TTT data is based on the revised and peer-reviewed version 4. The current TLT data is based on version 3, which RSS chief scientist Carl Mears says contains a known cooling bias (from the stratosphere).

      • There is no significant warming since the 1930s.
        CO2 has risen since the 1930s, but temperatures are basically flat.
        CO2 is ergo not the driver of temperatures.

      • Bill Illis February 7, 2017 at 4:23 pm
        It is really hard to imagine why the TTT data, has such an increased trend compared to the TLT, TMT, and TTS measurements from RSS. TTT should essentially be the average of these other three but for some reason it is way higher.

        TTT = 1.1*TMT – 0.1*TLS
        About 10% of the TMT signal comes from the lower stratosphere, which is removed by subtracting the suitably weighted lower-stratosphere TLS. Since TLS is decreasing at ~0.26 K/decade, TTT is higher than TMT.
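        Worked through with illustrative numbers (the 0.10 K/decade TMT trend is assumed purely for arithmetic; only the TTT definition and the ~ -0.26 K/decade TLS figure come from the comment above):

```python
def ttt_trend(tmt_trend, tls_trend):
    # RSS definition TTT = 1.1*TMT - 0.1*TLS, applied to decadal trends.
    return 1.1 * tmt_trend - 0.1 * tls_trend

# Assumed warming TMT trend of 0.10 K/decade; cooling TLS of -0.26 K/decade.
print(round(ttt_trend(0.10, -0.26), 3))  # prints 0.136
```

So a cooling stratosphere adds 0.1 × 0.26 = 0.026 K/decade on top of the 1.1-weighted TMT trend, which is why TTT sits above TMT itself.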

    • Not talking about sat data. The topic is about land surface measurement data. People confuse easily don’t they…

  33. At 1:21, RWTurner asks about the ice coverage in the Antarctic. It was at 2 standard deviations above the mean; then the data had a problem and the plots were erratic. Then it shut down. After the restart, the plot went to 2 deviations below. The swing seems beyond what the El Nino could force, which has been bugging me.
    Any comments?

  34. I have used two graphs from NOAA for several years now in my talks on climate. Unfortunately, I don’t know how to post them in the comments section. The first one displays the Michigan annual temps that I saved in 2012. The second I saved in 2014. The warming went from almost zero to 0.2 degrees per decade, or 2 degrees per century. Talk about data tampering!

  35. Another exposé of corrupt temperature documentation has been published.
    That adds to a growing list made by others:
    Tony Heller
    Pierre Gosselin
    Paul Homewood
    Anyone else to add to this list?
    Good job, guys!

  36. What about the twice daily balloons, what do they show up in the atmosphere?
    Have those balloons found any warming up there?

    • There is no global data for balloons. They only cover land for the most part and only over a small area of the planet. However, they can be used to verify satellite readings and that work has already been done. What those comparisons show is the satellites are quite accurate.

      • Richard M, thank you for the information. I remember back in the 1960s some of the meteorologists out of Portland, OR, would include the free-air freezing point from Salem, OR.

    • Larry on February 7, 2017 at 4:46 pm
      There have been over 1,500 balloons working. They belong to the Integrated Global Radiosonde network.
      How many of them are still active I don’t know.
      A small but very representative IGRA subset, called RATPAC (existing in versions A and B), consists of 85 of them.
      Here is a comparison of GISS, RATPAC B and UAH 6.0:
      For the RATPAC plot I chose the atmospheric pressure level of 700 hPa: it corresponds to an altitude of about 3 km, near the altitude (3.7 km) where in theory (!) the UAH satellites should take their readings, according to the averaged 264 K they measured in 2015.

    • Larry on February 7, 2017 at 4:46 pm [2]
      But now, if you repeat the same exercise with another IGRA subset of 31 “US controlled” {sic} balloons, selected by Christy and Norris, you obtain this graph:
      As you can see, the balloon trend moved from near GISS down to below UAH6.0 (but the pressure still is at 700 hPa).
      28 of the 31 balloons operate in CONUS+AK. If you restrict the plot to these 28, the balloon trend moves above UAH’s.
      So there seem to be these and those balloon radiosondes.
      The subset I found in the paper:
      Satellite and VIZ–Radiosonde Intercomparisons for Diagnosis of Nonclimatic Influences
      No wonder that Richard M writes below:
      What those comparisons show is the satellites are quite accurate.

  37. Dr. Bates doesn’t need to make specific charges that will land him in a lawsuit dragging out for years like Mark Steyn and Michael Mann. His original post and follow up on Dr. Curry’s blog are very clear. Karl et al did not handle their data per NOAA practices and it is not falsifiable or reproducible because of archiving and software issues. Karl Popper is spinning in his grave.
    I would bet that most people do not know that AGW is mainly based on computer modeling which changes inputs frequently. They would compare it to political polling data.

  38. And yet NOAA’s “national temperature index” shows very little warming since 1895. The US Climate Reference Network shows none since it began in 2005. Something is fishy.

  39. If you can’t or won’t provide complete unfiltered data (NOAA) that support your conclusions, then you aren’t credible scientists and your stuff is crap. If you follow the scientific method, really, what have you got to hide? OK, loss of funding, understood.
    The question is: who is leading the lying? A. Scientists, B. Corporate self-interest,
    C. Political operatives and politicians?
    Is there any doubt it is C, followed by B and A? There is an assumption that taxpayers are cotton heads and should pay for the scam. Wrong: the jig is up, and the climate mascot Polar Bears will be just fine if not shot. That’s the real threat. Scientists should be honored professionals when they follow the scientific method and are not bullied into supporting a con game.

    • Many scientists are ardent disciples of “environmentalist” ideology and believe in a greater social cause. “The end justifies the means” follows… and noble cause bias breathes life into corrupted souls. GK

  40. The Trump Administration transition teams are discovering they are in the middle of a Vietnam-Style Guerrilla War with Obama Regime holdovers who want and demand scorched Earth.
    The Fraud of the NOAA paper belies the years and decades of Fraud in the President’s Office of Science and Technology, CDC, DoA, DoC, DoI, NIH, NASA and NSF.
    Every employee under the Obama Regime is an IED, Improvised Explosive Device, who will gladly die for the cause as long as at least 10-others (Trump Administration employees) die with the IED.
    I would recommend to the President the mass firing of ALL Federal Employees hired prior to January 20, 2017, and the denial of their benefits and pensions.
    This IS Civil War and we will do well just to survive it.

  41. Rud Istvan,
    A very interesting and useful analysis!
    Thank You (!) for all of your thoughtful contributions….

  42. The general public (well, half of it that is) is so ignorant that they thought Trump was out to destroy the data on the left.

    • Sure. Buy the original ebook. Then you can enlarge and link at large. Your choice: iBooks, Amazon Kindle, knobooks, …whatever, in any format, wherever. You want me to breach my publisher agreements to make you happy? Get real.

  43. Why are we debating a paper that doesn’t exist?
    By the authors’ own admission it cannot be replicated. It should be withdrawn by the author(s) if they have any integrity, or by the editor. Reproducibility is at the core of the scientific method. When we abandon it, we are somewhere between Roddenberry and L. Ron Hubbard.

  44. Notice how alarmist refuseniks look to dismiss ALL David Rose’s evidence out of hand. ‘David Rose – he’s a denier’. ‘The Daily Mail is a filthy rag’. ‘The NOAA guy didn’t work on the project’, and so on. What they NEVER do is acknowledge that yes, there’s a big data problem and yes, there’s some explaining to do. Such complete bias exposes their positions as being alarmist-driven activists rather than seekers of truth.

  45. I think we should stop using the term ‘homogenization’ and use the term ‘pasteurization’ instead.
    It more accurately describes the process of adding heat to destroy any data that would cause spoilage to the story line.

  46. There is no such thing as an absolute Global Temperature and never has been. The best we can do is determine relative changes in temperature at specific locations over short periods using identical tools and processes. Over longer periods it is problematic, since too much can change, if not everything. This lack of consistency in tools and process then requires data to be “adjusted”.
    Let’s look at the most simplistic case: missing field measurements. Someone puts thermometers in two separate fields 1,000 miles apart and measures for a while. Eventually one thermometer is hit by lightning, so NOAA decides to estimate/model the temperature for the missing thermometer. Instead of comparing two actual series of measurements, we end up comparing one actual and one fabricated. How does this even pass the smell test?
    The milk is curdled, it went bad, throw it away. Or at best just consider it proxy data not measurements. Satellite measurements provide the best chance at consistency and even that has challenges in maintaining consistent process over time.

  47. I once (almost) won over a non-skeptic, at least to looking at things a bit differently, when I asked him what the temperature of Canada was yesterday, compared to today.
    He’s a smart guy, actually a statistician with modelling experience. We hashed out things such as how you would get an average for an entire nation, and importantly, how it could actually be relevant as it really wouldn’t tell you much.
    Great, I said, now extrapolate those issues in getting a planet average, then try to extrapolate that even further to find a trend over two days, then extrapolate further over decades, then three (to get “climate”), THEN once you have all that done…start in on precipitation, etc. THEN you MIGHT have some sense of climate change.
    We both gave up trying to convince each other, which might have had more to do with the Guinness than anything.

    • I would have asked him to compare his Guinness to data in Africa: handed him an empty glass, told him to take a sip of the imaginary Guinness, then asked him to estimate how much he had drunk. When he said “you’re crazy”, I’d have said “exactly”.

  48. Experiment you can do at home:
    1) Open excel.
    2) Use a random number generator to create random “high” temperatures for an imaginary weather station over a period of 100 years. Something like 90 to 100 degrees for the high.
    3) Graph the data and put a linear trend line on it.
    My very first run of the trend line shows that the temperature is increasing at that imaginary weather station, using random data, by 2 degrees over a century! Clear evidence of global warming!
    Go ahead – run this at home. It is pretty simple with a few excel skills. What you will find is that plenty of your fake weather stations will produce trends in the data showing warming or cooling. And YOU KNOW the data is random, and the trends are meaningless.
    Random data produces random trends.
    As a scientist I learned that when a random number generator can replicate your results, there is typically something very wrong with your study.
    The problem is the trend, if it actually exists, is very, very small. 1/10th of a degree per decade. Such tiny variations are susceptible to random measurement error, program changes, instrument error.
    I wonder how much of what we are seeing is actually deliberate, or based on expectations. “My temperature data isn’t showing a increasing trend!” “You must be doing something wrong – tweak the formula, because WE KNOW that temperatures are increasing.” “Ah, there, that’s got it. I just found a justification to discard all the data flattening the trend.” I see this all the time in science – people adjusting data to fit the model, or what we “know” to be true.
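    The same experiment runs without Excel. Here is a minimal Python version of steps 1-3 above (pure uniform noise in the 90-100 degree range over 100 years; the 1,000-station tally and the function names are my own additions for illustration):

```python
import random

random.seed(42)  # reproducible noise

def fake_station_trend(n_years=100, lo=90.0, hi=100.0):
    """OLS slope of a purely random 'high temperature' series,
    returned in degrees per decade."""
    temps = [random.uniform(lo, hi) for _ in range(n_years)]
    xbar = (n_years - 1) / 2
    tbar = sum(temps) / n_years
    sxx = sum((x - xbar) ** 2 for x in range(n_years))
    sxt = sum((x - xbar) * (temps[x] - tbar) for x in range(n_years))
    return (sxt / sxx) * 10  # per-year slope -> per decade

# Tally how many of 1,000 fake stations show a "trend" larger than
# the ~0.1 deg/decade signal being argued over.
trends = [fake_station_trend() for _ in range(1000)]
share = sum(abs(t) > 0.1 for t in trends) / len(trends)
```

With uniform noise of standard deviation ~2.9 degrees, the slope's standard error over 100 years works out near 0.1 deg/decade, so roughly a third of purely random stations show a spurious "trend" bigger than the real one in dispute; the trends average out to zero, but any single station can look dramatic.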

    • The met data is recorded [rounded off] to the first decimal place, and we are talking about changes in the second and third decimal places. Do such results have any meaning?
      Dr. S. Jeevananda Reddy

  49. Here’s the link to the above graph:
    This shows that there is no long-term heating up in the Arctic. You see melting/decline only in short-term snapshots, not in the long-term annual-average trend.
    And, as already known, a steady rising trend in the Arctic now for 38 years:

  50. February 9, 2017 at 12:12 pm
    Sometimes life is one step forward, two or more back.
    Waiting for this story to grow another arm and a leg.
    As said, what we want to happen in life often takes a back seat to reality. We get our hopes built up only for a last-minute fail.
    There are two sorts of global records, adjustable ones and unadjustable ones.
    Land and sea fall into the first one and satellite and balloons into the second.
    By adjustable I mean records that are adjusted continually and never stay the same as when originally listed.
    Zeke explained years ago that the land records are continuously adjusted downwards in the past.
    I am not aware that this happens with the other data sets.
    It means that comparisons like Karl’s are only ever valid for the date, and the list of past temps, on the day the data was run.
    If you run the same programme a month or a year later, the input data is different.
    Hence the only way this study could ever be replicated is for a complete data set to be archived for that study.
    Bates is right: this was not done.
    People talking about the raw data still being available do not understand that it cannot be run through the wringer and give that data again.
    Nor can the current data, as the past data has been modified away from what Karl used.

    Turning back to the video. I beg your pardon for my bad English and lack of American knowledge, since I live in Italy.
    When I watched the video and a Senator (?) put down the big pile of files saying “Here are …..”, there came to my mind the exact scene from many, many years ago: Galileo standing before the Inquisitors, one of them showing the entire bibliography supporting the claim that “the Earth is flat.”

  52. http://www.sciencemag.org/news/2017/02/how-culture-clash-noaa-led-flap-over-high-profile-warming-pause-study
    “Tuesday, in an interview with E&E News, Bates himself downplayed any suggestion of misconduct. “The issue here is not an issue of tampering with data, but rather really of timing of a release of a paper that had not properly disclosed everything it was,” he told reporter Scott Waldman. And Bates told ScienceInsider that he is wary of his critique becoming a talking point for those skeptical of human-caused climate change. But it was important for this conversation about data integrity to happen, he says. “That’s where I came down after a lot of soul searching. I knew people would misuse this. But you can’t control other people,” he says.”

  53. LOL
    When is somebody on this site going to cover the fact that John Bates has clarified his statement and confirms that there has been no manipulation of figures in Karl.
    “Bates accused former colleagues of rushing their research to publication, in defiance of agency protocol. He specified that he did not believe that they manipulated the data upon which the research relied in any way.”

    • “He specified that he did not believe that they manipulated the data upon which the research relied in any way.”
      He said/she said is irrelevant. The data is manipulated. Anyone who looks at what’s being presented can see that.

    • It is manipulated in many ways. The WMO flags up that pretty much the whole of Africa is estimated temps, the equivalent of China, the US, India, Mexico, Peru, France, Spain, Papua New Guinea, Sweden, Japan, Germany, Norway, Italy, New Zealand, the United Kingdom, Nepal, Bangladesh and Greece put together. That is some estimated temp data.
