Through The Looking Glass with NASA GISS

Guest essay by John Mauer

One aspect of the science of global warming is the measurement of temperature on a local basis. This is followed by placing the data on a worldwide grid and extrapolating to areas that don’t have data. The local measurement site in the northwest corner of Connecticut is in Falls Village, near the old hydroelectric powerhouse:


Falls Village weather station, Stevenson Screen at right


Aerial view of Falls Village GHCN weather station. Note the power plant to the north. Image: Google Earth

The data from that site appears to start in 1916 according to NASA records; it shows intermittent warming of about 0.5 degrees C over the last 100 years. However, NASA recently decided to modify that data in a display of political convenience, making the current period appear to show increased warming. What follows is a look at the data record as given on the site of the NASA Goddard Institute for Space Studies (GISS), a look at alternate facts.

The temperature data for a given station (site) appears as monthly averages; the individual measurements (daily?) are not present. The calculation of an annual temperature from those monthly values is specified by GISS, and a reasonable summary involves several steps.

Although the data set is not labeled, these are presumably min/max measurements. Each monthly number is the average of at least 20 temperature readings for that month. The monthly averages are then combined into seasonal means (quarters beginning with December), and the four seasonal means are averaged to get an annual mean.

The entire data set is also subjected to a composite error correction: the difference of each data point from its monthly mean (the monthly anomaly) is calculated, the seasonal correction is taken as the average of the appropriate monthly corrections, and the yearly correction as the average of the seasonal corrections. The net of these corrections is added to the average. The result is an annual “temperature” which is a measure of the energy in the atmosphere coupled with any direct radiation impingement.
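
To make the averaging procedure concrete, here is a minimal sketch of a December-anchored seasonal/annual mean of the kind described above. It is an illustration only, not the actual GISTEMP code; the variable names, the use of None for missing months, and the two-of-three-months rule for a valid season are assumptions made for the example.

```python
# Minimal sketch of a December-anchored seasonal/annual mean, in the spirit
# of the station procedure described above. Not the GISTEMP code; the
# missing-value convention (None) and the two-of-three-months rule are
# assumptions for illustration.

SEASONS = {
    "DJF": [-1, 0, 1],   # December of the *previous* year, January, February
    "MAM": [2, 3, 4],
    "JJA": [5, 6, 7],
    "SON": [8, 9, 10],
}

def seasonal_means(monthly, prev_december=None):
    """monthly: 12 monthly means (Jan..Dec) for one year, None = missing.
    prev_december: December mean of the preceding year (needed for DJF)."""
    out = {}
    for name, indices in SEASONS.items():
        values = []
        for i in indices:
            v = prev_december if i == -1 else monthly[i]
            if v is not None:
                values.append(v)
        # require at least 2 of the 3 months for a usable season
        out[name] = sum(values) / len(values) if len(values) >= 2 else None
    return out

def annual_mean(monthly, prev_december=None):
    """Average of the four seasonal means; None if any season is unusable."""
    seasons = seasonal_means(monthly, prev_december)
    if any(v is None for v in seasons.values()):
        return None
    return sum(seasons.values()) / 4.0

# A made-up year of monthly means in degrees C
months = [-4.1, -2.8, 2.0, 8.3, 14.6, 19.2, 22.0, 21.1, 16.8, 10.2, 4.5, -1.9]
print(annual_mean(months, prev_december=-3.5))  # about 9.0 C for this example
```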

Falls Village annual temperatures, raw data

The plot above shows 101 years of annual temperatures in Falls Village. Although there is a great deal of year-to-year variation, several trends can be separated from the plot. A small but noticeable increase occurs from 1916 through about 1950. The temperature then decreases until 1972 with much less volatility, increases again until roughly 1998, and holds steady from then until the present, though with very high volatility.

The El Niño events (changes in ocean currents with corresponding atmospheric changes) of 1998 and 2016 are very visible, but not unique, in Falls Village. (Please note that Wikipedia editing on the subject of climate change is tightly controlled by a small group of political advocates.)

Sometime in the last few years, just before the Paris Conference on Climate Change, GISS decided to modify the temperature data to account for perceived faults in its representation of atmospheric energy. While the reasons given for the change are many, the main one appeared to be urban sprawl into the measurement sites. They said, “GISS does make an adjustment to deal with potential artifacts associated with urban heat islands, whereby the long-term regional trend derived from rural stations is used instead of the trends from urban centers in the analysis.” Falls Village, however, is a small rural town of approximately 1,100 people, mostly farmland.

GISS adjustments to the raw Falls Village temperature data

The plot above shows all the changes made to the raw temperature data from Falls Village. First, several sections of data are dropped entirely, presumably because of some flaw in the collection of data or an equipment malfunction. Second, almost all of the temperatures before 1998 are reduced by 0.6 to 1.2 degrees C, which makes the current temperatures look artificially higher. Curiously, this tends to fit the narrative of no pause in global warming that was touted by GISS.


Comparison of raw vs. final data. Source: NASA GISTEMP

Link: https://data.giss.nasa.gov/cgi-bin/gistemp/stdata_show.cgi?id=425000626580&ds=5&dt=1

Further, if the reasoning of GISS is taken at face value, the apparent temperatures of New York City would be affected. Falls Village (actually the Town of Canaan, Connecticut) is about two hours outside of the City and is part of an expanded weekend community.

 


266 thoughts on “Through The Looking Glass with NASA GISS”

  1. It would appear that GISS is ripe for a full third-party audit, followed by a validation of their methods against actual data for sites where the values were recorded on paper and have been kept. That would seem to be an ideal exercise for some metrologists and statisticians.

      • To take that one further again: if the original records are not available, the custodians are not to be considered scientists at all.

        I recall seeing books in the past here in Australia published by BOM with recorded temperatures for major towns and cities in various states, page after page of nothing but tabulated temperature records. I’d love to find one of these to be able to cross reference it against the ‘temperature records’ of today.

      • I would say that both the olive colored and the blue black (on my screen) curves are fake.

        There clearly are discrete points on the graph. I’ll assume that those are accurate measured numbers; as accurate as a reasonable thermometer can be expected to be. (What is the name of the person who measured those accurate temperatures?)
        Every one of those points lies on the curve; exactly. Well that is great. The Temperature should be a curve of some shape or other.
        And if it is a real curve and not a fake curve, then of course it MUST go through (exactly) every one of those accurately measured points. (and it does).

        So maybe the curve is real after all. Now what if they made measurements more often; let’s say just one more measurement in that total time frame. Well that extra point of course MUST also fall on that curve, because we have now decided it must be a real curve.
        Well that would also be true if we took one extra measurement halfway between the points shown on the graph. Well every one of those extra points must also lie on the exact same curve because we just said it is a real curve.
        So no matter how many extra Temperature readings we take, the curve will not change. It already records every possible Temperature that occurred during the total time depicted.

        But what can we predict from this real accurate curve of Temperatures over the total time interval?

        One thing is apparent; no matter how many measurements we make, we can never predict whether the very next measurement we make will be higher, or lower, or maybe exactly the same, as the measurement we just made. There is NO predictability of ANY future trend; let alone any future value.

        Well there is only one problem with this conjecture.

        It is quite apparent that the given curve, which we believe to be true, and not a fake, has points of infinite curvature on it. The slope of the curve can change by any amount at each and every point of measurement, without limit.

        So we must conclude that the given curve is NOT a band limited function.
        It includes frequency components all the way from zero frequency to infinite frequency.

        But when we look at the data set of measured accurate Temperatures used to construct this real non fake curve, we find that there is not an infinite number of data points in the set; just a few dozen in fact.

        So clearly the data set is not a proper sampled set of the plotted curve function. It is in fact grossly under-sampled, so not only can we not reconstruct what the real function is, we are under-sampled to such a degree, that we cannot even determine any statistical property of the numbers in the data set, such as even the average. It is totally corrupted by aliasing noise, and quite unintelligible.

        So I was correct right at the outset.

        Both of those curves are fake; and do not represent anything real that anybody ever observed or measured.

        What did you say was the name of the chap(ess) who read that thermometer ??

        G

      • It is quite apparent that the given curve, which we believe to be true, and not a fake, has points of infinite curvature on it. The slope of the curve can change by any amount at each and every point of measurement, without limit.

        So we must conclude that the given curve is NOT a band limited function.
        It includes frequency components all the way from zero frequency to infinite frequency.

        This is a complete fallacy! The frequency response of all glass thermometers band-limits the measured temperature variations to frequencies lower than a few cycles per minute. Furthermore, the decimation of daily mean temperatures into monthly averages shifts the effective cut-off frequency into the cycles per year range. The monthly averages are then further decimated into yearly data, narrowing the baseband range to half a cycle per year. Band-limited signals contain only finite power; thus there ARE limits to how much the slope can change month to month or year to year. And aliasing is by no means the overwhelming factor erroneously claimed here.

        The real problem lies in the gross falsification of actual measurements through various ad hoc adjustments and the fragmentation of time series perpetrated in Version 3 of GHCN. Even the unadjusted (olive curve) does not show the average yearly temperature as indicated by actual measurements.

      • george e. smith February 22, 2017 at 12:40 pm
        I would say that both the olive colored and the blue black (on my screen) curves are fake.

        There clearly are discrete points on the graph. I’ll assume that those are accurate measured numbers; as accurate as a reasonable thermometer can be expected to be. (What is the name of the person who measured those accurate temperatures ?)

        Hi George, in 1916 it was someone called ‘J.K. Maclein’ (well, Mac-something; his handwriting is a bit difficult to read).

      • Well I can’t even imagine what 1sky1 is even talking about.

        It matters not a jot what sort of thermometers are used to make Temperature measurements so long as they give accurate readings. I took it for granted that whoever made the measurements used to construct those graphs, took accurate readings.

        So I have NO PROBLEM with the measured Temperatures, nor with whoever made those measurements.

        The problem is with whoever was the total idiot who drew those two colored graphs.

        A scatter plot of the original measured Temperature values would have made a wonderful graph.

        For the benefit of 1sky1 and others similarly disadvantaged; a scatter plot is a plot of INDIVIDUAL POINTS on an X-Y graph or other suitable graphical axis system.
        For the Temperatures measured by whomever it was, the Temperature is the Y value, and the presumed time of measurement is the X value.

        Scatter plots can easily be plotted by anyone with a copy of Micro$oft Excel on their computer.

        If the total idiot who drew those graphs had used Excel, (s)he could have made a wonderful scatter plot graph of those measurements.

        Moreover, that person could even have had Excel construct a CONTINUOUS graphical plotted curve that passes through EVERY ONE of those measured data points EXACTLY; perhaps using some sort of cubic spline or other approximation.

        Such a CONTINUOUS curve would not necessarily be an exact replication of the real Temperature that was the subject of the original measurements; but it would be acceptably close to what that exact curve would have been, and points on such a fitted CONTINUOUS curve intermediate between the measured values would have been close to the real values that could have been measured at those times.

        But the total idiot who drew these two graphs chose to just connect the measured and scatter-plotted points with straight line segments, which results in a DISCONTINUOUS NON BAND LIMITED INFINITE FREQUENCY RESPONSE FAKE GRAPH.

        Apparently, 1sky1 doesn’t even know the difference between a CONTINUOUS function and a DISCONTINUOUS function.
        The first one is band limited, and can be properly represented by properly sampled point data values.
        The second one has no frequency limit whatsoever, so it cannot be properly represented by even an infinite number of sampled values.

        I suggest 1sky1 that you take a course in elementary sampled data theory, before throwing around words like “fallacy” or “erroneously”, whose meanings you clearly don’t understand.

        G

      • Well I can’t even imagine what 1sky1 is even talking about.

        At least you’ve got that much right, George! Since temperature is a continuous physical variable, any periodic sampling will generate a discrete time series. The crucial question is how well that sampled time-series represents the continuous signal. That’s what I address in pointing out the various band-limitations imposed by instruments and data averaging. (With well-sampled time-series, band-limited interpolation according to Shannon’s Theorem can always be employed to reconstruct the underlying continuous signal.)

        In stark contrast, you’re apparently concerned with the purely graphic device of connecting the discrete yearly average data points by straight lines. That’s looking at the paper wrapping instead of the substantive information content. The notion of using some spline algorithm on an Excel scattergram of the time series (instead of Shannon’s Theorem) to obtain a continuous function speaks volumes. It shows that, despite throwing around the terminology of signal analysis, its well-established fundamentals continue to escape you.

      • Well I see it is a total waste of time trying to educate the trolls.

        1sky1 simply doesn’t understand the fundamental concept that NO PHYSICAL SYSTEM is accurately described by a DISCONTINUOUS FUNCTION.

        Ergo; by definition, a discontinuous graph purporting to be an accurate plot of some measured data from a real physical system, is FAKE. No real system can behave that way.

        Gathering the data; of necessity sampled, is one issue. I have no problem with how this particular data was obtained.

        How it is presented is a totally different issue that 1sky1 does not seem able to grasp.
        Shannon’s theorem on information transmission is not even relevant to the issue. It’s a matter of interpolating real measured data with totally phony fake meta-data.

        G

      • Well I see it is a total waste of time trying to educate the trolls.

        Being “educated” by amateurs who are patently clueless about the rigorous theory of reconstruction of continuous bandlimited signals from discrete samples (http://marksmannet.com/RobertMarks/REPRINTS/1999_IntroductionToShannonSamplingAndInterpolationTheory.pdf) provides no intelligent benefit. On the contrary, it prompts senseless fulminations about “fake” graphs of well-sampled time series, based upon nothing more than the superficial impression of how the discrete data points are visually connected.

        Well sky, see if you can google yourself up a book that’s a bit more on the ball than the one you linked to the contents pages of.

        Nowhere in that entire text book, does it teach reconstruction of a band limited continuous function from properly sampled instantaneous samples of the function, by simply connecting the properly sampled data points, with discontinuous straight line segments. Such a process does not recover the original continuous function, which a correct procedure will do.

        And your reference text is a bit of a Johnny-come-lately tome anyway.

        Well, so was Claude Shannon, whose writings on sampled data systems came about 20 years after the original papers by Hartley and Nyquist of Bell Labs, circa 1928.

        I’m sure there are precedent papers from the pure mathematicians preceding Hartley’s Law and Nyquist’s Sampling Theorem, but then Cauchy and co. weren’t exactly at the forefront of communications technology as was Bell Telephone Laboratories.

        G

    • This sort of behaviour by GISS is, in my opinion, at about the level of paedophile priests. The data handling equivalent of ‘kiddy fiddling’.

    • Dementia setting in? I recall that during the first week of March in Dallas sometime in the early ’60s, it hit 106F. Looking at an official record of Dallas Love Field temps the other day, the warmest day I could find for the period was 97F. It’s possible what I remember from the ’60s was actually a record temp at a non-official station, perhaps Meacham Field in Fort Worth. But what I wonder is whether that 106 figure has been reduced in the record books by “adjustments”?

  2. It always surprises me that past measurements can be read so much better now, after 80 years, than they could at the time.

    In 2097 GISS will probably record that we are hunting woolly mammoths today.

    • “In 2097 GISS will probably record that we are hunting woolly mammoths today.”

      LMFAO – glad I didn’t have food in my mouth when I read that!

    • This is the one thing I have never understood … and in my opinion should be the main sticking point.
      Forget accurate temperature measurements….and everything else.

      They run an algorithm every time they enter new temp data.
      They say any adjustments to the new data…..is more accurate than the old data.
      ..so the algorithm retroactively changes the old data….every time they enter new data.

      And yet everything published about global warming is based on the temp history as it stands today.
      …which will be entirely different tomorrow

      We will never know what the temp is today…..because in the future it’s constantly changing

      oddest thing about it all….the models reflect all of that past adjusting
      and inflate global warming at exactly the same rate as all the adjustments

      • oddest thing about it all….the models reflect all of that past adjusting
        and inflate global warming at exactly the same rate as all the adjustments

        Not odd at all, they base their adjustments on the same theory used in the models.
        Mosher has stated more than once, they create a “temp” field mostly based on Lat, Alt, and distance to large bodies of water, and the rest is just noise.

      • Remember when Nancy Pelosi said “We have to pass the bill before we can know what’s in the bill!”?? Well, climate science is like that…”We have to adjust the temperatures before we can know what the temperature is/was”.

        Makes perfect sense. :) *snark*

    • But we are hunting them today, there’s one over the way it’s huge & grey & hairy it eats children & it’s……………..oh sorry it was a big red bus instead dropping off some school kids!!! By mistake! sarc off!

    • Actually, it seems that only thermometer data recorded around the year 1970 are accurate, according to GISS. They have reduced the readings prior to that date and increased them after. See climate4you for the details. I am now searching for a circa-1970 thermometer in my box of old stuff in the garage. It seems it is the only device that works ok, according to the world class scientists at GISS.

  3. Could someone develop an easy-to-follow description of this concept of “atmospheric energy?”
    I understand that a temp reading is a measure of ambient temperature, and also of “atmospheric energy.”

    To the degree that a thermometer, properly used, is not so great at measuring atmospheric energy as ambient temperature, it seems that atmospheric energy is measured by the use of at least two indicators: the local thermometer and some other indicator.

    Apparently the other indicator or indicators are somehow not superior, since the thermometer is also needed. So, this other indicator and the thermometer are both flawed (as all measures are).

    And, it seems that some kind of systematic bias in the thermometer can be determined based on this second indicator, although the second is acknowledged as flawed.

    How does the logic go? What is the other indicator?

    If the thermometer is wrong, isn’t it systematically wrong, and so all values ought to be changed the same amount? If an old thermometer was wrong, wouldn’t it be wrong every day for years? And if the new one is wrong, wouldn’t it be the same amount of wrong every day for years?

    Is a thermometer more reliable at some temp ranges than others (barring extremes)?

    –All of this does not add up.

    • Temperature is actually the incorrect metric for atmospheric heat energy as the amount of water vapor in the volume of air alters its ‘enthalpy’. The correct metric is kilojoules per kilogram and can be calculated from the temperature and relative humidity.
      A volume of air in a misty bayou in Louisiana with an air temperature of 75F and a humidity of 100% holds roughly twice the energy of a similar volume of air in Arizona with an air temperature of 100F and humidity close to 0%.
      It is therefore incorrect to use atmospheric temperature to measure heat content, and nonsense to average it. Averaging the averages of intensive variables like atmospheric temperature is meaningless. It is like an average telephone number or the average color of cars on the interstate: mathematically simple but completely meaningless.

      • Ian W, in my view you are right. The Enthalpy of a volume of air is easily calculated from the wet bulb temperature, the dry bulb temperature and the barometric pressure and I would bet that all three have been recorded at weather stations for a long time. What’s more you can safely average Enthalpy. I can’t help but wonder why this is not done.

      • @IanW: I was about to write a post on my blog on this. Another part of the energy content that is not captured by a simple temperature measurement is the variance in the sensible heat of the disparate materials involved in global surface measurements, and the high phase-change heat content of water at constant-temperature points as it transitions from solid to liquid to gas and back.

      • TY. This layman has been saying for many years we CAN’T compute a single temperature for the earth…..we don’t have accurate measurements to average for starters, and as you posted, an average is meaningless.

      • When the goal is to influence politicians, and the general populace, it is a bad idea to make them cranky by confusing them. You present your big picture in terms they are familiar with, and thus believe they understand.

      • Here is my reply using only the average letter in the English language:
        Mmmmmmmmmmm mmm mm mmmmmmm m mmmmm mmm mmmmmmm m mmmmm mm mmm mm mmmmmmm mmmm m mmmm!

      • Bill, we could compute such a temperature, the problem is that the error bars would have to be around +/- 20C or so.

      • @John Mauer
        Thank you for opening this can of worms.

        Do you have a reference for “GISS decided to modify the temperature data to account for perceived faults in its representation of atmospheric energy”?

        To amplify on thoughts by Forest Gardener, Ian W, RobR, and others, the measure of heat energy is enthalpy. In joules per kilogram, the Bernoulli principle expression is

        h = (Cp * T -.026) + q * (L(T) + 1.84 * T) + g * Z + V^2/2

        Cp is heat capacity, T is temperature in Celsius, q is specific humidity in kg H2O/kg air, g is gravity, L(T) is the latent heat of water ~2501, Z is altitude, V is wind speed

        An interesting study is https://pielkeclimatesci.files.wordpress.com/2009/10/r-290.pdf
        Pielke shows that the difference between effective temperature h/Cp and thermometer temperature can be tens of degrees.

        Classical weather reporting does not include the data necessary to calculate enthalpy to an accuracy better than several percent. This inaccuracy is greater than effects attributable to CO2. Hurricane velocity winds add single degrees of effective temperature, but modest winds can add tenths of degrees.

        I speculate that part of the reason for the never ending adjustment of temperatures is an attempt to compensate for the inaccuracies inherent in temperature only estimates of energy.

        For a more formal discussion of wet enthalpy,
        http://www.inscc.utah.edu/~tgarrett/5130/Topics_files/Thermodynamics%20Notes.pdf
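
        If anyone wants to try the Louisiana-versus-Arizona comparison from upthread, here is a small sketch using the standard psychrometric form of the moist (sensible plus latent) enthalpy, h ≈ 1.006·T + q·(2501 + 1.84·T) in kJ/kg, which drops the altitude and wind terms of the fuller expression above. The Tetens saturation-pressure fit, the assumed station pressure of 101.325 kPa, and the 5% humidity used for the Arizona case are illustrative assumptions on my part.

```python
import math

def saturation_vapor_pressure_kpa(t_c):
    """Tetens approximation for saturation vapor pressure over water, in kPa."""
    return 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))

def specific_humidity(t_c, rh_percent, p_kpa=101.325):
    """Specific humidity q in kg of water vapor per kg of moist air."""
    e = (rh_percent / 100.0) * saturation_vapor_pressure_kpa(t_c)
    w = 0.622 * e / (p_kpa - e)        # mixing ratio, kg per kg of dry air
    return w / (1.0 + w)

def moist_enthalpy_kj_per_kg(t_c, rh_percent, p_kpa=101.325):
    """Sensible + latent enthalpy of moist air (altitude/wind terms omitted)."""
    q = specific_humidity(t_c, rh_percent, p_kpa)
    return 1.006 * t_c + q * (2501.0 + 1.84 * t_c)

louisiana = moist_enthalpy_kj_per_kg((75 - 32) / 1.8, 100.0)  # ~24 C, saturated
arizona = moist_enthalpy_kj_per_kg((100 - 32) / 1.8, 5.0)     # ~38 C, very dry
print(round(louisiana, 1), round(arizona, 1))  # roughly 71 vs 43 kJ/kg
```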

    • On the subject of systematic thermometer errors, the argument runs something like this. The surroundings where the temperature was measured changed over time, or the time of observations was changed, or there was an unexplained discontinuity in measurements or trends, or any one or more factors changed over the years. This means that 20C measured 80 years ago is not necessarily the same as 20C measured today.

      Some people say that means the 20C should be adjusted. Others say that the measurements should be treated as estimates with error bars. I say that the data is not fit for calculating climatic trends.

      • Absolutely right in my opinion (scientific opinion might I add). At the least the data should be taken as is, with error bars. Every measurement tool has a measurement error. If it’s decided past measurements aren’t right then the bars need expanding by some amount to cover the uncertainty. The errors then need computing forward. If the errors are too large you’ll end up with some total nonsense at the end that shows errors larger than the signal. That means your data isn’t good enough to draw any conclusions.

        There must be a reason that many climate model outputs seldom feature error bars.
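
        As a back-of-the-envelope sketch of “computing the errors forward”: for a mean of independent readings, the random part of the uncertainty shrinks as 1/sqrt(N), but any shared systematic part (a site or instrument change, say) does not shrink at all and has to be carried through unchanged. The numbers below are made up purely for illustration.

```python
import math

def uncertainty_of_mean(random_sigma, systematic_sigma, n):
    """Uncertainty of a mean of n readings: the random component averages
    down as 1/sqrt(n); a shared systematic component does not average down."""
    random_part = random_sigma / math.sqrt(n)
    return math.sqrt(random_part**2 + systematic_sigma**2)

# Hypothetical numbers: 0.5 C random reading error, 0.3 C suspected
# systematic shift (e.g. a site change), 365 daily means in a year.
print(round(uncertainty_of_mean(0.5, 0.3, 365), 3))  # ~0.3 C, dominated by the systematic term
```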

      • “I say that the data is not fit for calculating climatic trends.”

        But it is fit for calculating climactic trends. ;/

      • “….This means that 20C measured 80 years ago is not necessarily the same as 20C measured today…..”

        This is probably true, but since the heat island effect was less pronounced 80 years ago, it implies that temperature measurements back then were MORE accurate (or more representative of the “true” temperature) than today’s (concrete, asphalt, brick, glass influenced) measurements.
        So if any corrections (i.e., data fudging) are made, they should be made to TODAY’S temperature readings by LOWERING (artificially high) them !!!

        But if this were done, well, there goes the millions of $$$$$ into the AGW scam and it would be game over.

      • if the past location was essentially pristine then it should be unadjusted and all adjustments should be applied to the forward records based on the changes driving those adjustments …

      • I doubt that there is sufficient knowledge or skill to calculate anything which depends on the real world locality. The idea of comparing 20C then and 20C now to discern some climatic signal is just ludicrous.

      • I think the point of the article was that it read 20C eighty years ago and is still reading 20C. Exactly what changed so that only the reading from 80 years ago gets adjusted?

      • Jim, your question is a good one. There are some records of changes which can give qualitative guidance but nothing which can justify pretending that 20C then should really be considered to have been 19.3C.

      • Beyond that, in the past, that 20C was calculated by averaging the daily max and daily min. With modern sensors, it’s the average of 24 hourly readings. (Some are taken more often. A few a little less often.)
        Even without any other source of contamination, it still wouldn’t be possible to compare the past number with the current number because they weren’t arrived at in the same fashion.
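
        A quick way to see that the two definitions of a daily mean are not interchangeable: for a skewed diurnal cycle, (Tmax+Tmin)/2 and the mean of 24 hourly readings differ by a few tenths of a degree. The synthetic temperature curve below is an assumption made purely for illustration, not real station data.

```python
import math

def synthetic_hourly_temps():
    """A made-up, slightly skewed diurnal cycle (deg C) for one day."""
    temps = []
    for h in range(24):
        base = 15.0 + 6.0 * math.sin((h - 9) * math.pi / 12.0)  # peak mid-afternoon
        skew = 1.5 * math.exp(-((h - 15) ** 2) / 8.0)           # sharpen the afternoon peak
        temps.append(base + skew)
    return temps

temps = synthetic_hourly_temps()
minmax_mean = (max(temps) + min(temps)) / 2.0
hourly_mean = sum(temps) / len(temps)
print(round(minmax_mean, 2), round(hourly_mean, 2), round(minmax_mean - hourly_mean, 2))
# For this made-up day the two "daily means" differ by roughly 0.4 C.
```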

      • Quite so Mark, and so yet another fudge factor is introduced into the manufacturing process. That goes even beyond the meaningless nature of calculating the average temperature.

    • NOAA has built a “climate reference network” of 50 stations out in the “boonies” where they don’t expect urbanization for 50-100 years. They refuse to publish those temps because they don’t fit their theory. The models, by the way, don’t adjust their error bars when they move the data from one model run to the next one, SO, at the end of 100 years of model runs, the ERROR BARS are plus or minus 14-28 degrees. That makes it very hard to find a gain of a few degrees in the 50+ degree error bands. PURE NONSENSE, to be polite.

      • Data for the NOAA Climate Reference Network can be found at https://www.ncdc.noaa.gov/crn/ The network is currently (I think) 114 stations in the lower 48, 18 in Alaska and 2 in Hawaii. On the site click on “Graph the Data” within “National Temperature Comparisons.” Set the time interval to “previous 12 months”, set the start and end date you want and click “plot.” You will see that for the duration of the CRN program, roughly from 2005 on, US temps have not changed. The chart can be set back as far as 1889 but as we know, the prior temps have been reduced so are not meaningful. I expect the info from the CRN will make or break the warmists in another 10 or 20 years because the data cannot be adjusted.

  4. ..If each and every WUWT follower would do this for their own area, I bet we could put together a great historical log of these false/unjustified “adjustments” ! Considering how many people follow this great blog, it should be a “YUGE” list !!!

      • Training and quality control? Once one knows how to download data and create a spread sheet…..We’re not building models, just recording data and its adjustments.

      • “Sheri
        February 22, 2017 at 9:54 am

        Training and quality control? Once one knows how to download data and create a spread sheet…..We’re not building models, just recording data and its adjustments.”

        My experience says very few people know how to make good user documentation. Writers generally assume the user will know various things they themselves know, thus don’t cover, and they frequently, very frequently, use multiple different terms (labels, names, etc.) for the same thing, without ever telling the user that these different words are supposed to be the same.

        The point is that no step of obtaining, calculating, or presenting the data would be obvious to the novice. For there to be any chance of getting people to participate it would be necessary to present extremely clear step-by-step instructions. Otherwise, the first result will be that most people quit in frustration when they can’t make the leap from step n to step n+1. The second result will be that many different, and not correct, procedures will be practiced by those who are persistent enough to be able to get SOMETHING done by trial and error.

  5. “The result is an annual “temperature” which is a measure of the energy ”

    It’s not a measure of anything. It’s an anti-physical value which, if used as a temperature in a physical calculation, is guaranteed to give the wrong result. Temperature is an intensive value (meaning: you don’t add such values) which cannot be defined for a system that’s not even in a dynamic equilibrium, let alone a thermodynamic one.

    • Amen, brother! Especially as the “energy” balance we’re looking for is a radiative balance (T^4), linear temperature averages have no meaning.

  6. Quote: The calculation of an annual temperature is specified by GISS.

    Annual temperature? What a concept. No daily or seasonal information remains. All weather information is thrown away but for a single figure for an entire year. It conveys about as much information as the average telephone number.

    And then there are the adjustments discussed in the article and plain old making up temperatures where none are measured. Through the looking glass indeed. And to think that some people refer to all of this as science!

    • I like this analogy that I read on one of these sites: “What is the average color of your television annually?”
      How useful is that information?

      • To show how silly some averages are, I like to ask “If all my darts hit the wall around the board, can I say my average is a bulls eye?”

        SR

      • Reminds me of the Texas sharpshooter fallacy.
        Basically fire a bunch of shots at a barn. Find where most of the bullets hit. Paint your bulls eye there.

  7. In the future, none of the current “record” temperatures will be records. They will have been adjusted downwards and the temperatures of the day will be “new” records. Adjusting the past is the worst of all possible methods to deal with correcting data.

  8. GISS is nothing but artefact. As Schmidt said himself, “what we choose to do with the data determines the temperatures”

  9. It’s NOAA\NCEI that need to be stopped from making up data for places they have none. That is where the real problem is!

    NOAA’s data warms as it is spread across latitudes: when they lose stations at higher latitudes, warmer southern data warms up the missing station data further north.

    BE does this too, failing to cool the southern data that is used to make up data further north.

    • The “real problem(s)” are that it is clear, and was centuries ago when the concept was invented, that “climate” is a summary of weather. It is not a real phenomenon but a reified “idea.” Even paleoclimatologists mistakenly discuss Pleistocene “climate” as if it were real, though they have considerably greater justification. The other “real” problem is that we don’t know in detail what drives weather. The basics are in place, but there are critical shortcomings such as the effect of clouds, and more importantly the manner that storms cool the planet. If you couple the altitudes at which clouds form with the geometry of the average optical path for an LWIR photon at that altitude, most of the energy released during condensation and cloud formation will be radiated away from the planet. If you have ever watched squalls pass with virga dropping from the clouds but not reaching the ground, you are watching weather cooling that part of the planet locally. Energy is released at altitude, the virga falls groundward and returns to a vapor state and is carried back into the cloud layer. Basically, a refrigeration-like cycle. Climate is more attended to because it is already “summarized” and appears effectively simpler, but every single bit of data that addresses “climate” is really weather data at the base.

      • Duster wrote, “If you have ever watched squalls pass with virga dropping from the clouds but not reaching the ground, you are watching weather cooling that part of the planet locally. Energy is released at altitude, the virga falls groundward and returns to a vapor state and is carried back into the cloud layer. Basically, a refrigeration-like cycle.”

        Right! But that’s not a refrigeration “-like” cycle, it is a classic phase-change refrigeration cycle, exactly like your refrigerator, except that the refrigerant is H2O instead of a CFC or HCFC.

        Duster also wrote, “If you couple the altitudes at which clouds form with the geometry of the average optical path for an LWIR photon at that altitude, most of the energy released during condensation and cloud formation will be radiated away from the planet. “

        I’ve wondered about that. It seems like half would radiate downward, so at most half could initially be headed toward space. But, of course, when the radiation is re-absorbed by other CO2 molecules, it just warms the atmosphere, usually at a similar altitude, so eventually it should get other chances to be radiated toward space. So, can you quantify “most of”?

  10. Talk of an average temperature for the globe is meaningless. The world could be one with 15ºC from pole to pole or one with -10ºC at the poles and +40ºC in the tropics and still have the same average.
    We have just one ice sheet over the south pole at present, with a bit of sea ice in winter over the north pole. This is because we are living in a warm interglacial period.
    These balmy times usually only last for around 10,000 yrs and we are nearing the end of that period.
    Within the next few hundred years the northern ice sheet will return to stay with us for the next 100,000 yrs until the next interglacial, and so on and so on for a few million years until the continents shift enough to allow for better penetration of ocean warmth into the polar night.
    Water, the great moderator of extremes.

    • More correctly … interstadial should replace interglacial as the term for warm periods within the colder stadials.

  11. NASA Climate are history and so are all their bogus adjustments.

    What matters now, is:

    1. Getting people we trust to compile a metric that really does tell us what is happening.
    2. Adjusting for urban heating by REDUCING modern temperatures near settlements.
    3. Filling in all the gaps around the world.
    4. Creating a global quality-assured system with ownership of the temperature stations so their compliance with required standards can be enforced.

  12. How are the adjustments made? I.e., how is it decided that the adjustment should be x degrees? The graph just looks like someone decided it should be 0.4 deg here and 0.6 deg there. There must be some routine that works it out for each point (though if there were, I’d expect each adjustment to be different perhaps). I’d love to know why, for instance, an adjustment of 0.9 deg was used in the ’60s and then it jumps to 0.6 degrees in the ’70s…. what changed to make that a realistic adjustment?

    Above all though, it is not data now, is it? It’s some computed numbers which someone’s opinion has had a part in forging. If they want people to take it seriously there needs to be significant background work showing why the adjustments are valid.

  13. Your contentions are totally wrong. GISS doesn’t calculate the monthly average. GISS doesn’t adjust the data as you claim. They get the monthly adjusted (and unadjusted) data from GHCN, a NOAA branch.

    The GHCN adjustment is the difference between the yellow and the blue line in this chart:
    https://data.giss.nasa.gov/cgi-bin/gistemp/stdata_show.cgi?id=425000626580&ds=5&dt=1
    The adjustment that GISS may do is the UHI adjustment, but that is zero at Falls Village because the site is classed as rural.
    You can check this by comparing the black and the blue line; they are identical. The black line is what GISS finally uses.

    It’s different in Central Park, NY City:
    https://data.giss.nasa.gov/cgi-bin/gistemp/stdata_show.cgi?id=425003058010&ds=5&dt=1

    There, GISS actually reduces the trend of the adjusted GHCN data (to that of nearby rural stations).
    Is that a bad thing?

    • Mark – Helsinki
      February 22, 2017 at 6:51 am

      GISS is nothing but artefact. As Schmidt said himself, “what we choose to do with the data determines the temperatures”

      • Averaging temperatures from different stations is the bad thing. Physically meaningless.

        It doesn’t have to be; you average the SB flux, and then convert it back to a temp.

        That added about 1.2F to temps, but otherwise doesn’t change much. I’m in the process of switching my code to do all temp processing like this.
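
        A minimal sketch of the approach micro6500 describes, as I read it: convert each temperature to a Stefan–Boltzmann flux (proportional to T^4 with T in kelvin), average the fluxes, then take the fourth root back to a temperature. The two example values are arbitrary; the size of the difference from a plain linear average depends on the spread of the temperatures being combined.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def flux_average_temp_c(temps_c):
    """Average temperatures via their blackbody fluxes rather than linearly."""
    fluxes = [SIGMA * (t + 273.15) ** 4 for t in temps_c]
    mean_flux = sum(fluxes) / len(fluxes)
    return (mean_flux / SIGMA) ** 0.25 - 273.15

temps = [-10.0, 30.0]                        # two arbitrary station values, deg C
print(round(sum(temps) / len(temps), 2))     # linear average: 10.0
print(round(flux_average_temp_c(temps), 2))  # flux average: about 12.1 for this spread
```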

      • micro6500 February 22, 2017 at 8:08 am

        Do remember to account for enthalpy or you are wasting your time. The hydrologic cycle is what drives climate. That includes the thermohaline currents and clouds as well as humidity and the resulting wet and dry lapse rates.

      • Averaging anything is physically meaningless. Nothing physical pays any attention to it; well nothing physical can even sense an average if and when one happens.

        G

  14. If this year resembles 1983 and Powell and Mead fill up he’ll look even more foolish.

    Looks to me like Wettest may soon be reality in all of the West with a likely cyclical return to the same kind of wet years of the 70s to the 90s.
    And it’s not just the west coast. Inland upstream water basins are way up and feeding Lakes Powell and Mead in ways that may fill both as was the case in that earlier era.
    As sure as the Texas drought ended so is the rest of the west coast drought.
    https://www.wired.com/2015/05/texas-floods-big-ended-states-drought/

    http://www.sacbee.com/news/state/california/water-and-drought/article126087609.html
    “When the snowpack is way above normal and the Sierra Nevada precipitation index is above ‘82-’83, it’s time,” he said. Northern California has received so much rain this year that the region is on pace to surpass the record rainfalls of 1982-83.

    http://lakepowell.water-data.com/

    Rivers feeding Lake Powell are running at 149.53% of the Feb 22nd avg. Click for Details

    http://lakemead.water-data.com/

    https://en.wikipedia.org/wiki/Lake_Mead
    Multiple wet years from the 1970s to the 1990s filled both lakes to capacity,[10][11] reaching a record high of 1225 feet in the summer of 1983.[11] In these decades prior to 2000, Glen Canyon Dam frequently released more than the required 8.23 million acre feet (10.15 km3) to Lake Mead each year, allowing Lake Mead to maintain a high water level despite releasing significantly more water than for which it is contracted.

  15. It would be worthwhile doing some absolute basic research before writing these kinds of articles. The adjustments shown have nothing to do with GISS. GISS appear to have made no changes to the GHCN input they use.

    • Paul, which part of the article says that GISS did the adjustments? My understanding of the article was that what GISS presents is at odds with the actual temperature measurements.

      The problem is the inexplicable fiddling which happens between the recording of actual temperatures and the final manufactured product. The article highlights that quite well.

  16. Another excellent report by WUWT. Unfortunately this kind of report and discussion would make the average person’s eyes glaze over after about eleven seconds, which is good for the attention span of their postmodern education. Fortunately there are people who can actually outline the detail and further the argument.

    Simply put:

    Everybody has heard about the non stop stream of Fake News coming from the leftist establishment, so you won’t be surprised to hear that NASA is peddling Fake Data on Global Warming.

  17. The same methodical manipulation of temperature trends by reducing older temperatures and increasing recent temperatures to produce an artificial warming trend, has been reported for many years in many blogs covering all regions of the world. It appears to be quite consistent and systematic.

    Since both NOAA and NASA GISS use GHCN temperature data, it is not clear to me whether this systematic manipulation is being done by NOAA, by NASA, or perhaps more likely by both NOAA and NASA GISS in a co-ordinated fashion. Hopefully a congressional inquiry will soon get to the bottom of this.

  18. The Google Earth pic of the site in 1991 does not seem to show two trees next to the weather station. The pic is not too clear but it seems that the trees, if they were there, were nowhere near the size they are in the current pic. This would surely affect readings in the last 25 years?

  19. All is revealed!! NASA has a working time machine!!!! How else can they go back in time and determine accurate temperatures in the past???? I am sure I will find confirmation somewhere on the net of the great cover-up of all time!!!!!!
    Oh, they don’t have a time machine? Never mind. . .

  20. I live 20 minutes north of the generating station. One change to the plant was the large change in the transformer locations, right to the south of that Stevenson Screen. Now the main step-up transformer is in that location. I’ll get a photo.

  21. It’s not obvious to me why the ‘homogenized’ plot would have gaps when the raw does not. Could it be that upward spikes in the raw are clipped off to prevent the homogenized from rising too high too soon?

  22. Just look up the Berkeley Earth results… thousands and thousands of surface temp sites checked, UHI influence eliminated.

    • As always, Griff defines as good, any “study” that reaches the correct result.
      If they claim that they have a 100% perfect method for removing UHI, then they do. After all, they got the correct result, so the methods must be good.

    • “Eliminated”…so it was quantified for every location and subtracted out? Would love to see the annual values on that for a number of cities.

      More like it was allegedly “eliminated” by algorithmic (no pun intended) background processing and hand-waving.

    • Griffy,
      I have serious reservations about the methodology BEST used to ‘prove’ UHI influence is negligible.

    • Laughably, BEST calculated a UHI cooling from 1950 to 2010. Instead of reaching the conclusion that something was f*d, they said it justified that UHI was minimal.

  23. Thank you. This covers many of the riddles I’ve been puzzling over.

    One big point is that the actual raw data are unavailable. The kind of instrument, readings and times of day, missing hours or days or weeks or months are all lost in the black-hole, the bit-bucket, the court-house fire or flood. The means used to interpolate and aggregate to arrive at monthly figures…well, scraps and hints and possibilities tantalize, but are not readily accessible.

    But, the watermelons say we should give up our liberty and earnings and property because CATASTROPHE!

  24. Another thought, about siting… though this is in a nice little grass area, it has problems… 1) body of water close by, 2) blacktop roadway and parking lot close by, nearly surrounding it, 3) shade trees too close, 4) building too close, 5) transformers in grid connections too close.

  25. The article states that it is considered to be rural so no UHI adjustment would have been considered necessary.

  26. Surely someone has done a study on the effect of buildings/trees/tarmacadam/water? It would be easy to do: just set up say 10 stations at various locations on one test site with buildings/etc, measure distances and compare results. This could even be done today for a short period, say a week, for an initial indication of variation. Obviously, years would be needed for comprehensive results, but even a week of measurements would show something, especially if the measuring instruments were state of the art. It could all be networked back to a central station.

      • Mosh,

        Why can’t you make an equitable comparison? How much is the cooling effect of water on a calm day within 10-15m of its border? How much is the “elevated temperature from pavement/asphalt” within a 30m radius? What about other sources of UHI, i.e. car, AC and jet engine exhaust? Are these assumptions or have they been tested?
        At first glance it appears that the cooling effect is greater than the heating effect by a factor of 2 to 3. Your inability to compare apples to apples greatly diminishes your credibility.

      • Steven, some of your statements really diminish your credibility. This is one. If this statement is in any way reflected in Berkeley Earth’s calculations then it completely undermines their credibility as well.

        So start by telling us your source. You may of course wish to recant, but you know how you feel about people who recant.

      • Doncha just hate it when that happens. On the other hand, the effects of a lake can carry for miles.
        At least according to Steven.

      • On the other hand, the effects of a lake can carry for miles.
        At least according to Steven.

        It does, I frequently have to shovel it off my driveway in the winter. And I’m about 30 miles away.

      • The elevated temperature from pavement/asphalt diminishes within 10-15m of its border (calm day)

        Heck, you can go further than that. It diminishes the further away it gets. According to Leroy (2010), the impact of a heat sink from 1-10m distance from the sensor will work out to ~8 times that of the same area of sink from 10-30m.

        But there is a sizable number of HCN sensors that are rated as Class 3 (NWS non-compliant) rather than Class 2 (compliant) that have no heat sink exposure within 10m — but do have over 10% exposure within 30m (which works out to ~8 times the same exposure within 10m).

        So you are correct to say that heat sink exposure does diminish after 10m distance, as you say. But that also is not to say that heat sink exposure at under 30m is not — very — important, at least when we are dealing in terms of trend differences of tenths/hundredths of a degree C per decade.

        Exposure at distances from 30-100m can only make the difference between Class 1 and Class 2. Both of those ratings are NWS-compliant and both of the offsets are zero degrees C. So that is a distinction without much of a practical difference for our (limited) purposes.

        But anything within 30m can potentially bump a station into non-compliance.

        Well, we can adjust for that! (And we will, too, if the VeeV won’t see the light).

        One thing I am very interested in is how our stationset will run using your methods. But for valid results, any homogenization you do cannot, Cannot, CANNOT be cross-class. So no compliant stations being pairwised with non-compliant stations if you please! What that means in plain English is that you can only use Class 1s and 2s for pairwise. Any 3\4\5 stations would have to be adjusted for microsite BEFORE using them to pairwise with Class 1s or 2s.

      • OK, so the measuring site is in the middle of several square miles of said pavement/asphalt along with other monitoring sites. Yessiree, UHI in my book.

        Well, sure. And that will increase the baseline temperature. But that is merely an offset. I have found that UHI does not appear to have a heck of a lot of influence on trend (sic).

        When I do not grid the data, urban stations show less warming than non-urban. When I grid the data, it shows a bit more warming than non-urban.

        Urban stations are less than 10% of the USHCN total, in any case. So any effect on trend is marginalized.

        But bad MICROSITE, i.e., the immediate environment of the station, has a huge effect on trend — even if the bad microsite is constant and unchanging over the study period.

        UHI is edge-nibbling. Microsite is your prime mover. Microsite is the New UHI.

      • Chad Jessup February 22, 2017 at 7:22 pm
        OK, so the measuring site is in the middle of several square miles of said pavement/asphalt along with other monitoring sites. Yessiree, UHI in my book.

        Well your book is in error, the site is in the middle of several square miles of woods, and 23 m from the river (likely a cooling influence).

    • I propose that we dub these important physical discoveries “Mosher’s laws”. Now I’m fully convinced that Berkeley Earth contains nothing but the unvarnished, untarnished truth.

      • He is correct. But I think he is not following that particular trail closely enough. OTOH, he has said that microsite is a valid subject for study and has the potential to expose a systematic error in the data, i.e., not “nibbling around the edges”. (He doubts it will make much difference, of course, as well he may. But he said he thinks it is a good issue for study.)

      • He is correct that heat sink effect diminishes at 10-15m.

        (Although I think he is drawing the wrong conclusions from that: A diminished effect can still be quite significant.)

      • The heat sink effect surely diminishes in some sort of continuous manner. That means you need at least 2 numbers to describe the decrease as a function of distance.

        Yes.

        Not only that, but, say, you have a house and the front end is 15m from the sensor and the back end is 25m away. Well, the front end will have more effect than the back end. That makes it next to impossible, and certainly impractical, to calculate the effects precisely.

        So Leroy crudes it out so it can actually be measured in this lifetime. It’s not exact, but, as the engineers say, it will do for practical purposes. OTOH, we have always realized that Leroy is a bit of a meataxe (although a very good meataxe), and one of the things we want to look at in followup is re-doing Leroy’s method to achieve greater and more uniform accuracy.

      • Engineers use rules of thumb for estimating the amounts of explosive for blowing up bridges also. After the calculation, they take everything times ten just to be safe. This makes sense, as long as all you care about that the bridge gets blown up. It is not a sufficient basis for constructing bridges.

  27. I think that Trump should offer an amnesty to fraudulent ‘climate scientists’ – Come clean and walk away, if you keep cheating go to jail.

  28. I guess I just don’t understand the whole thing, as the warmists are always telling me…

    Hasn’t warming been going on for about 10,000 years now give or take, and sea level rise the same? Don’t any direct measurements we have that could be considered even partially global, go back only several hundred years at most?

    What has been Man’s contribution to this warming and sea level rise over that period? Show me, and prove it. Show me, and prove, how the computer climate models accurately portray the climate as best as we can reconstruct it, over that period and over the recent directly measured past. Is that too much to ask?

    Why is “climate science” not held to similar high standards as are other scientific disciplines? And God help us if engineers had to meet only the low standards of “climate science”…

    • Well, it’s a Class 4 using Leroy (1999). With the upgunned Leroy (2010), it is a Class 3. (There is heat sink within 10m, but covering under 10% of the area within 10m.)

  29. Data revisionism only takes place in climastrology, and it usually happens completely opposite of what logic would dictate, i.e. lowering past temperatures and raising current ones because of UHI effect.

    • Data adjustment is very necessary. Raw data, writ large, won’t do. But that just means it is all the more important to do the adjustments right. (And clear. And explainable. And replicable.)

  30. “The data from that site appears to start in 1916 according to NASA records; it shows intermittent warming of about 0.5 degrees C over the last 100 years. However, NASA recently decided to modify that data in a display of political convenience, making the current period appear to show increased warming. What follows is a look at the data record as given on the site of the NASA Goddard Institute for Space Studies (GISS), a look at alternate facts.

    Are you accusing Matt Menne and Claude Williams of scientific misconduct?
    For the record?
    Is the Publisher of this site aware that you are making a charge of scientific misconduct?

    The temperature data for a given station (site) appears as monthly averages; the individual measurements (daily?) are not present.

    Check GHCN Daily. duh

    • Steve, with the facts presented in this piece, are you of the opinion that there would be need for adjusting the temperatures at this particular site? If so, why?
      This is an honest question. I’m not a lay person, I am a chemist who has done some study into this protocol. I’m still not sure why adjustment would be needed. Broadening uncertainty I could understand. Shifting data points would contradict much of my training in physical science.

      • I can think of adjustments that need to be applied.

        1.) There is a TOBS flip from 18:00 to 7:00 in 2010. There needs to be a pairwise comparison to adjust for the jump. (NOAA supposedly does this.)

        2.) CRS units have a severe problem with Tmax trend because the bulbs are attached to the box itself — the station is carrying its own personal Tmax heat sink around on its shoulders. CRS Tmax trends are more than double any other equipment (either as warming trend OR cooling trend).

        But instead of adjusting CRS to conform with MMTS (and ASOS and PRT, and, and, and), NOAA adjusts MMTS trends to match CRS units. In other words, they adjust in exactly the wrong direction. (In our study we account for the jumps, but we include MMTS units in our pairwise.)

        So all CRS units need an adjustment to reduce Tmax trend. Instead, MMTS trends are increased by NOAA.

        3.) It is a non-compliant Class 3 station. A microsite adjustment needs to be applied, one that will reduce the trend by somewhere between a third and a half.

        It’s not that the data does not need adjustment. Unfortunately, it does. But NOAA is doing it wrong, Wrong, WRONG. (As far as I can tell, this is NOT fraud — just error.)
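
        For readers who want to see the mechanics of point 1, here is a minimal Python sketch of a single-neighbor pairwise step estimate. It is purely illustrative: the station series, noise levels and known break position are all invented, and NOAA's actual pairwise homogenization compares many neighbors and detects break locations statistically rather than taking them as given.

          import numpy as np

          def pairwise_step_adjustment(target, neighbor, break_index):
              """Estimate and remove a step change in `target` at `break_index`,
              using a well-correlated neighbor as the reference (toy version)."""
              diff = target - neighbor                       # difference series
              step = diff[break_index:].mean() - diff[:break_index].mean()
              adjusted = target.copy()
              adjusted[break_index:] -= step                 # realign the post-break segment
              return adjusted, step

          # Invented example: a 0.3 C spurious jump introduced at index 50.
          rng = np.random.default_rng(0)
          neighbor = rng.normal(0.0, 0.5, 100)
          target = neighbor + rng.normal(0.0, 0.1, 100)
          target[50:] += 0.3
          adjusted, step = pairwise_step_adjustment(target, neighbor, 50)
          print(f"estimated step: {step:.2f} C")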

      • Evan Jones wrote, “There is a TOBS flip from 18:00 to 7:00 in 2010.”

        Say what? Surely by 2010 they were automated and recording measurements every few minutes, right? So how can Time of OBServation be an issue?

      • Heh!

        Sure, the readout is on all the time for both MMTS and ASOS, so you could at any time do a reading. And you can get hourly data on ASOS, as it is.

        But all NOAA ticks down is max and min. That’s all that goes into USHCN2.5 data, anyway. Yeah, you can get the hourly scoop on the Airport stations from NOAA, but only max and min go into the HCN station data. However, airports typically observe at 24:00, which is a good time to do it.

        With MMTS, the hourly data could be recorded, but simply isn’t, so far as I know. Therefore, TOBS adjustment is necessary. We simply drop stations with TOBS flips and don’t adjust. (Bearing in mind that dropping TOBS-flipped stations is, in essence, the logical equivalent of an adjustment.)
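
        To make the TOBS point concrete, here is a small Python toy model with an invented sinusoidal diurnal cycle plus random day-to-day offsets, not data from any real station. An afternoon reset tends to double-count warm days and a morning reset tends to double-count cold nights, so the long-term mean of (Tmax+Tmin)/2 depends on the observation hour, and a flip from one hour to the other puts a step into the record.

          import numpy as np

          # Invented diurnal cycle (peak mid-afternoon) plus random day-to-day offsets.
          rng = np.random.default_rng(1)
          n_days = 365
          hours = np.arange(n_days * 24)
          diurnal = 5.0 * np.sin(2 * np.pi * (hours % 24 - 9) / 24)
          synoptic = np.repeat(rng.normal(10.0, 4.0, n_days), 24)
          temps = diurnal + synoptic

          def mean_of_extremes(series, reset_hour):
              """Mean of (Tmax+Tmin)/2 over 24 h windows ending at `reset_hour`,
              the way a max/min register reset at observation time behaves."""
              start = reset_hour % 24
              windows = series[start:start + (n_days - 1) * 24].reshape(-1, 24)
              return ((windows.max(axis=1) + windows.min(axis=1)) / 2).mean()

          for h in (7, 18, 24):
              print(f"observation at {h:02d}:00 -> long-term mean {mean_of_extremes(temps, h):.2f} C")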

      • That is amazing and appalling. The amounts of data are very small, by today’s standards. Data storage and transmission are practically free. The stations are expensive. Why on earth would they ever discard any data? SMH.

      • Accusations of misconduct seem to be an automatic go-to defense by alarmists. The possibility that these scientists (sic) are just incompetent is never considered by these self-proclaimed supremacists.

      • Misconduct or incompetence… how to decide which it is? Generally speaking, look at the patterns of the errors. Simple incompetence generally makes errors which do not have a long term alignment with some ulterior motive. Misconduct produces patterns of errors which correspond with some desire or purpose.

        Remember also that incompetence can only be judged in relation to the documented credentials of the people involved. A medical doctor who even once prescribes cyanide for a headache instead of aspirin cannot claim it was simple error, incompetence on his part. What level of incompetence is believable for a PhD employed by NASA as a climate expert?

        For climate scientists, this is not complicated. Are the changes that have been made to the data equally scattered, some cooling, a roughly equal number warming, with warming and cooling adjustments having no unusual patterns related to past or present? Alternatively, do the changes produce new trends which did not exist in the raw data, new trends which support plausible desires or purposes of the people making the adjustments?

        I know what it looks like to me.

    • Steven, you are right to object, but your line of attack is completely misguided, as it was with the Bates controversy. In that case you sought to have readers believe that recanting proved his original allegations to be false. Recanting does nothing of the kind.

      In this case the author is wrong to assert motive as a statement of fact. He should have suggested motive as a possible conclusion.

      Your rhetoric about accusations of scientific misconduct is completely misplaced. Nobody is asserting that NASA is doing science with the temperature record. There can therefore be no allegation of scientific misconduct.

      • Galileo recanted his heretical heliocentrism, but still was right that the earth moves, contrary to Church orthodox doctrine.

    • I didn’t bring this up before, Mosh, but I guess you realize there is a big-ass problem with CRS. Tmax trend is crazy-outlier-large (either in a warming OR cooling trend — doesn’t matter which) when compared with any other equipment.

      The MMTS, ASOS Hygro, or CRN PRTs all show a radically different story. Either all of them are wrong or CRS is wrong.

      And you can imagine what effect THAT will have on MMTS adjustment . . .

  31. Adjusted or not, anything that close to the Housatonic River for the last century isn’t useful for climate research.

    The Housatonic River runs 140 miles from Pittsfield, MA, through Lee, Great Barrington and many other small communities on its way through Connecticut to the Long Island Sound.

    Over that period, major manufacturers, from Plexiglas to paper, used the river for dumping industrial pollution. Indeed, when the paper companies upstream were producing colored paper, you could tell the color even into the 1970s.

    • Rob, please explain. I grew up downwind from the Naugatuck River, so on the right day the smell was nearly unbearable. Let me know how that may or may not have impacted temperature data. Are you thinking of aerosols? If so, that would have a cooling effect, and therefore temperatures should be adjusted ….. up?

      • I hit send prematurely. To explain: the Naugy and the Housy rivers run pretty much parallel to each other. The Naugatuck Rubber Company dumped into the Naugy, and the smell was, well ….. we didn’t like north winds much ……

  32. From the Climate Explorer. The average raw temperature anomaly in the 20 closest stations to Falls Village (one of these stations goes back to 1793).

    There is absolutely no reason to adjust Falls Village based on the pairwise homogeneity algorithm unless it is rigged or faulty or so unstable that it just does not work.

    • Rigged?

      Mr Illis, are you charging Matt Menne and Claude Williams with scientific misconduct?
      For the record.
      And is the publisher of this site standing behind this charge?

    • 1793? Really? The 20 closest stations?

      there are 20 stations within 25km of this site.

      the oldest one goes back to 1884.

      IF you are going to accuse people of misconduct, show your work.

      I’d hate to see you and others who publish this stuff getting sued.

      • Steven, you protest far too much. It makes it look like you know where the bodies are buried, as the saying goes. So, are you now going to say I’ve accused you of murder?

        I just love it when defendants in court cases carry on the way you are now. Their credibility is reduced to near zero. Take a break from the keyboard for your own sake.

      • Steven Mosher, currently running the BEST break-point “adjust temperatures higher” algorithm.

        Hopefully, Steven is not one of those who can be prosecuted for running fake temperature adjustment algorithms. I’ve posted on boards with Steven for about 10 years now, for most of which he would have been described as a skeptic. I hope in changing sides he has not sacrificed his integrity, at least not on the prosecutable side.

      • Let’s look at Falls Village in Berkeley Earth, which Steven Mosher now helps manage. This is a good representation of what this adjustment algorithm actually does.

        It takes the raw temperature below.

        And then it finds 13 different “break-points” in this raw data, separates the original record into 13 different sections, and restitches those sections back together into a “new regional record” that goes up by almost +2.0 C versus the “no change” in the raw record (a rough sketch of this re-stitching step follows after this comment).

        I mean, there are even 3 different “time of observation” breakpoints in the year 1983 alone. As if they changed the time of observation at this station 3 different times in 1983, all of which made the historical records go down, and at a period when the whole time-of-observation problem was supposed to have been sorted out 50 years earlier.

        Obviously, this is a biased algorithm. How it got so biased I don’t know, but I doubt it was an accident: people would have fixed it after finding just one example like Falls Village, and there are 13,000 more just like it in the Berkeley Earth system. They would have already noticed how biased it is.
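
        As referenced above, here is a deliberately crude Python sketch of what cutting a record at declared break-points and re-aligning each fragment to a regional reference can do: a raw series with no trend at all comes out of the re-stitching carrying the reference's trend. This is an invented toy, not Berkeley Earth's actual code, which fits all fragments jointly in a least-squares framework rather than shifting them one by one.

          import numpy as np

          def scalpel_restitch(series, reference, breaks):
              """Cut the series at the given break indices, then shift each fragment
              so its mean offset from a regional reference series is removed."""
              edges = [0] + sorted(breaks) + [len(series)]
              out = np.asarray(series, dtype=float).copy()
              for a, b in zip(edges[:-1], edges[1:]):
                  out[a:b] -= (out[a:b] - reference[a:b]).mean()
              return out

          # Invented usage: a perfectly flat raw record, a warming regional reference,
          # and two declared "empirical breaks".
          years = np.arange(100)
          raw = np.zeros(100)                          # raw record: no trend at all
          regional = 0.01 * years                      # reference warming ~0.1 C/decade
          stitched = scalpel_restitch(raw, regional, breaks=[40, 70])
          print(np.polyfit(years, stitched, 1)[0] * 10)   # positive trend despite a flat raw record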

      • Sheri: veiled lawsuits don’t advance science, but neither do hinted allegations of misconduct. Let’s get it out there: is Illis accusing them of misconduct or not? If not, let’s say it plainly that there is NO accusation of misconduct. Then we all know where we stand.

        So, Mr Illis: “Steven is not one of those who can be prosecuted for running fake temperature adjustments.” Please tell us who is running fake temperature adjustments, and please clarify that by “fake” you mean they are deliberately and knowingly publishing false data to mislead the public.

      • Obviously, this is a “biased algorithm”. How it got so biased I don’t know but I doubt it was an accident because people would fix it after they found just one example like Falls Village…

        Spot on! The technical source of the bias of the “break algorithm” lies in its fundamentally erroneous ex ante model of a monotonically declining “red noise” spectral structure for the data. That assumption, which misidentifies many sharp moves due to various quasi-cyclical components as “empirical breaks,” winds up fragmenting intact records into mere snippets, to be shifted upwards or downwards to conform to the model (a toy illustration of such false-positive break detection follows after this comment). Because of nearly ubiquitous, but largely unrecognized, UHI in the database, the shifting of snippets toward the regional mean anomaly surreptitiously transfers the UHI effects into stitched-together non-urban records.

        There can be little doubt that the uncritical embrace of this biasing algorithm, devised by statisticians with no expertise in geophysical signal behavior, is no accident on the part of agenda-driven “climate science.”
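
        The toy illustration mentioned above, in Python: a smooth quasi-cyclical signal plus white noise contains no break at all, yet a naive difference-of-means break search still returns a sizeable "step". Both the series and the detector are invented for illustration; real homogenization algorithms use more sophisticated statistics, but the risk of mistaking low-frequency variability for a break is the point at issue.

          import numpy as np

          # A smooth quasi-cyclical signal plus white noise: no break anywhere.
          rng = np.random.default_rng(3)
          t = np.arange(120)                                        # 120 months
          series = 0.4 * np.sin(2 * np.pi * t / 120) + rng.normal(0, 0.15, t.size)

          def best_break(x, min_seg=12):
              """Return the split point that maximizes the before/after mean difference."""
              scores = [(abs(x[:i].mean() - x[i:].mean()), i)
                        for i in range(min_seg, len(x) - min_seg)]
              return max(scores)

          step, idx = best_break(series)
          print(f"'break' found at month {idx} with apparent step {step:.2f} C, "
                f"yet the series contains no break at all")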

  33. You realize that NASA GISS does not adjust this GHCN data as you claim?

    That NOAA creates the adjusted data for NASA?

    Is the publisher of this site aware of the fake news in your piece?

    Blaming NASA for NOAA changes in data looks irresponsible. Are you trying to do a hit job on NASA?

    • Great points, Mosher. An official agency publishing and promoting trillion-dollar policies based on a dataset has no responsibility for the accuracy of its content, whereas commenters on a website must be held to the highest standards.

    • Steve,

      do you realize that GISS, back in 1961, was not supposed to be involved in surface temperature data in the first place? Its original mission was to support NASA on space exploration:

      “Thus an initial emphasis on astrophysics and lunar and planetary science”

      It was Dr. Hansen who changed the mission when he took over the directorship in 1981.

    • First, the government is responsible. Everyone who touches the data has a duty to ensure that it is correct, regardless of which office creates it. That’s why audit trails are important.

      Second, I think the Trump fire has raised temperatures a bit!

      • Hmm. Don’t overanalyze. Especially when one is in opposition. We haven’t the data to arrive at serious value judgments. And we all have our loyalties. I know I do, and I don’t mince words, not on that subject. Life is an armed truce. But we knew that, anyway.

        Besides, I may not agree with all of Mosh’s methodology or any of his conclusions, but BEST is a remarkable piece of work and has a different approach than we do.

        In simple terms, he does “jumps”, while we do “gradual”. But I think a gradual, systematic, spurious bias will slip right past BEST, because BEST does jumps, not gradual. And our two biggies — Microsite and CRS bias — are gradual and systematic and not only won’t be picked up by BEST, but will serve just fine to make homogenization crash and burn.

        Yet I think that by looking at both the BEST method and at our own method, when we publish — for their strengths — we (or others) might well create a better, more sophisticated method than either alone.

  34. Although the data set is not labeled, these measurements are presumably min/max measurements.
    The GHCN site clearly lists the data as being max/min taken at 6 pm (ideally) each day.

    The temperature data for a given station (site) appears as monthly averages of temperature. The actual data from each measurement (daily?) is not present.
    Here’s some data from 1916, for example:
    http://www1.ncdc.noaa.gov/pub/orders/IPS/IPS-FEF45D69-ABC4-4E68-B19D-0E2C58F574D6.pdf

  35. “The result is an annual “temperature” which is a measure of the energy in the atmosphere coupled with any direct radiation impingement.”

    Unless the enthalpy for each data point is calculated, they are not getting a “measure of energy in the atmosphere”.

  36. As OA points out above, this is all barking up the wrong tree. GISS does very little in handling this data. They use GHCN adjusted data; that is where to look for the adjustment activity. GHCN’s sheet listing the records and its adjustments is here. The article says that GISS does not have the daily record; again, looking in the wrong place. It is in GHCN Daily here. If you really want the raw stuff, the handwritten forms are accessible here. Metadata is accessible here.

      • Nick,

        GISS was originally created by Dr. Jastrow as a supporting group, to NASA’s SPACE EXPLORATION projects.

        They have no justification to do work that other agencies already do; it is a waste of taxpayers’ money.

        “They have no justification to do work that other agencies already do; it is a waste of taxpayers’ money.”
        GISS was doing it first. They have a long record, and their product is well used. It would cost very little to produce (I do a similar calc on my home computer). There is no reason to stop.

      • “They do very little, according to your own comment.”
        My comment was that they don’t do the data handling, the adjustments for inhomogeneities, which is a minor but unavoidable part of calculating a proper global average. One of the ironies of these posts, which seem to come every month or so, is that the reason GISS attracts this misplaced attention is that they seem to run a more user-friendly website for accessing the data, although NOAA has more overall. Do you really want to lose that?

      • Nick: “GISS was doing it first. They have a long record, and their product is well used. It would cost very little to produce (I do a similar calc on my home computer). There is no reason to stop.”

        Before we launched all those expensive satellites, NASA was bubbling over with various excellent reasons to stop using surface stations. Those reasons haven’t changed, and the network has only gotten patchier since. It took a team of unpaid volunteers to even bother to document the current site conditions.

        If GISS was showing a satellite-era trend significantly cooler than satellites, it would be quietly buried in a field at midnight instead of being kept on life support and implausibly promoted as more accurate than those really expensive satellites NASA wanted from taxpayers.

      • “they seem to run a more user-friendly website”

        Seriously, Nick. How much money does NASA consume to produce “a more user-friendly website”?

        Why not have NOAA produce “a more user-friendly website”?

        Andrew

      • “How much money does NASA consume to produce “a more user-friendly website”?”
        No use asking me. Why don’t you find out? I expect it is very little, as is the cost of producing Gistemp.

      • “then there’s no need for taxpayers to fund it.”
        Lots of things that don’t cost much are still worth doing.

        Running the internet doesn’t cost that much. So do you think they should stop?

      • “Why don’t you find out?”

        We can’t get these folks to cooperate with FOIA requests or Congressional inquiries, but surely they’ll fall all over themselves in eagerness to help some commenter on a website critical of their work.

      • Nick — if you’re referring to the maintenance of Internet namespace databases (and not the trillions of dollars in private infrastructure), that is administered by a nonprofit.

        Would you like some help with a Kickstarter campaign?

      • I say this has introduced me to a new and surprising way of deciding which public projects to finance. Instead of looking at the value of the output and the costs of the input, then funding if the benefits seem to be greater than the costs, this new way is much simpler. You simply look at the costs, and if they are quite low you scrap the program! Such simplicity must be applauded. After all, if the costs are low, what does it matter if the value is huge?

        This new policy will ensure that cheap, excellent-value-for-money projects will be scrapped, and only expensive projects will be funded.

      • “You simply look at the costs, and if they are quite low you scrap the program!”

        Seaice, you are being deliberately obtuse. The reason the costs are quite low is because they aren’t really doing anything. Key point for you to try and comprehend.

        Andrew

      • “Nick, if GISTEMP costs almost nothing to produce, then there’s no need for taxpayers to fund it.”

        That is the point I was responding to. If it does not cost much there is no need to fund it. Absurd.

    • The article says that GISS does not have the daily record … and you confirm that by saying that GHCN has them … thanks

      • Immediately below the graph on the GISS site it explicitly says where the data is kept and provides a link to it:
        “Key
        Based on GHCN data from NOAA-NCEI and data from SCAR.

        GHCN-Unadjusted is the raw data as reported by the weather station.
        GHCN-adj is the data after the NCEI adjustment for station moves and breaks.
        GHCN-adj-cleaned is the adjusted data after removal of obvious outliers and less trusted duplicate records.
        GHCN-adj-homogenized is the adjusted, cleaned data with the GISTEMP removal of an urban-only trend.”

      • And step 2 is where it all goes badly awry. The reason is that those breaks are fixed by pairwise comparison, and upwards of 80% of the stations used are invalid on the grounds of poor microsite alone. And 100% of Stevenson Screen records have a Tmax trend that is spuriously more than doubled.

        But neither microsite nor CRS bias creates a break, so it just slips through. And since bad microsite alone creates a systematic error that affects four stations out of five (even if non-CRS), NCEI “corrects” the situation not by adjusting the ~80% bad stations down to conform with the ~20% good stations, but by adjusting the 20% good stations to match the 80% bad ones.

        And that, folks, is how homogenization bombs. It works as intended if the majority of the data is good. In that case, the bad is conformed to the good. But if most of the data is bad, then it does the exact opposite. An average of good and bad data is not so great. Obviously. But misapplied homogenization takes a bad situation and, rather than making it better, makes it even worse.
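
        A toy numerical version of that argument, in Python, with entirely invented numbers: four of five neighboring stations share a gradual spurious warm drift, and "homogenizing" each station by conforming its long-term drift to the neighbor median inflates the trend of the one unbiased station instead of correcting the biased ones. This is a caricature of the majority-rules concern, not a reproduction of NCEI's pairwise algorithm.

          import numpy as np

          rng = np.random.default_rng(2)
          years = np.arange(60)
          climate = 0.005 * years + rng.normal(0, 0.2, years.size)      # ~0.05 C/decade

          # Five neighboring stations: four carry a gradual spurious warm drift.
          stations = np.array([
              climate + (0.015 * years if k < 4 else 0.0) + rng.normal(0, 0.1, years.size)
              for k in range(5)
          ])
          reference = np.median(stations, axis=0)                       # neighbor median

          def conform_drift(station, reference):
              """Remove the linear trend of (station - reference): a crude stand-in
              for conforming a station's long-term drift to its neighbors."""
              slope, intercept = np.polyfit(years, station - reference, 1)
              return station - (slope * years + intercept)

          def trend(y):                                                 # C per decade
              return np.polyfit(years, y, 1)[0] * 10

          good = stations[4]                                            # the one unbiased station
          print(f"good station, raw trend:        {trend(good):+.2f} C/decade")
          print(f"good station, after conforming: {trend(conform_drift(good, reference)):+.2f} C/decade")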

    • Thanks, Nick. I am exposing my ignorance, but this was the easiest way to ask the question. I live about 20 miles south and was really curious why the old temperature data was modified.

  37. I am at the very beginning of studying an extended and near-complete record from one station in New Zealand. It is an exercise to see how the modern record has been created. One must start with how the early raw data were read and how averages for months and years were established.

    My understanding is that nowadays average (mean) daily temperature is established through finding the mean between max and min daily temp. But: when was it that instruments could automatically record max and min? This is what I am researching at the moment. As yet I can find no record of thermometer specs throughout the record.

    The most likely scenario before automation (to establish max and min) is that the reader recorded the temperature at a specific time (or times) in the day. It is most unlikely that a reader would attempt to find the max and min on a daily basis. Reading the device at 9 am in winter has very different implications from reading it at the same time in summer. The first (winter) may well record the min temp, but the second (summer) most probably won’t. Many of our stations were situated at hydro power stations, forests and research institutes. I cannot imagine those staff getting out of bed at 5 am in summer to ensure that they record a min temp. Neither would they hang around a station during the afternoon to find the max.

    There are people in our system who know what was done to establish the “mean” from early data which I am assuming were recorded at specific times during the day. I am going to keep digging until I get an answer.

    This is the most basic of questions.

    • To Michael: The first thermometer to read the maximum and minimum temperatures was invented in 1780 by James Six of Canterbury (UK). It recorded the current temperature and also the max and min since last read. It had no time clock, so the time of occurrence of the max or min was not known, but it was probably read every 24 hours. It needed to be reset before recording the next 24 hours’ temperatures.

    • The link I gave earlier showed that at Falls Village the max/min temp was read at 6 pm, although the records from 1916 show the actual time was more variable, but still in the afternoon.

  38. Goddard Institute for SPACE STUDIES

    Founded by Dr. Jastrow in 1961, who directed it for the first 20 years. Note that INITIALLY the emphasis was: “Thus an initial emphasis on astrophysics and lunar and planetary science”

    “The GISS formula is designed to yield high research productivity and a flexibility that allows research directions to shift as NASA objectives develop. Thus an initial emphasis on astrophysics and lunar and planetary science has evolved into a focus on understanding the causes and consequences of global climate change on Earth, of direct relevance to the first objective in NASA’s mission “…to understand and protect our home planet.”

    Guess who became Director in 1981 and changed the mission to climate change?

    https://www.giss.nasa.gov/research/news/20080303/

  39. A surplus of solar energy on a dry surface produces only sensible heat, which makes for a very hot surface and equally hot air above.
    Evaporation from an equally irradiated moist surface replaces some of that sensible heat with latent heat, which cools the surface and warms the air somewhere else far away as the vapour condenses.
    Hence tropical rainforests probably do more to cool the surface and move heat elsewhere than anything else does.
    Without tropical rainforests, rainfall would cease to be as regular, the land would be subject to extremes, ecosystems would be destroyed, and mankind would have a tougher time surviving.
    Speaking as an eco-warrior, justifiably concerned about mankind’s real negative impacts around the world, I find that the whole CO2 misconception/distraction (to put it kindly) has given environmentalism a bad name and science a huge embarrassment from which it will take a while to recover.

    • Speaking as an eco-warrior, justifiably concerned about mankind’s real negative impacts around the world, I find that the whole CO2 misconception/distraction (to put it kindly) has given environmentalism a bad name and science a huge embarrassment from which it will take a while to recover.

      I’m not, but I have complained about this for 10+ years. Think what that money wasted on climate change could have done!

  40. Not sure if this has been mentioned before and apologies if so; I haven’t had time to read the entire thread.

    Berkeley Earth (BE) also analysed Falls Village and came up with results similar to NASA. Here are the data plotted using BE’s breakpoint algorithm: http://berkeleyearth.lbl.gov/auto/Stations/TAVG/Figures/37510-TAVG-Comparison.pdf

    This suggests that recent temperatures at Falls Village fall below those recorded at nearby stations, indicating a local discrepancy of some sort, which the algorithm flags as an ‘Empirical Break’.

    (Can’t help noticing that the tree beside the screen in the photo is casting a shadow over the screen. Was that always the case, I wonder?)

    • We don’t live in a pristine environment … there will be local differences driven by a lot of factors, and averaged out over the globe that is fine … there is no need to adjust every location to a pristine baseline … nobody lives in a pristine baseline …

      • Perhaps the fact that we don’t live in pristine environments is one good reason why we ‘should’ adjust for non-climatic influences. Shouldn’t we adjust for influences like UHI, for example?

      • Many statistical tests for significance assume a random distribution of errors. If someone “adjusts” the sample, it makes the tests rather invalid.

      • Hey DWR54! “Shouldn’t we adjust for influences like UHI, for example?”

        Yes, if the influences can be justified and quantified. But justification and especially quantification require information. Do we really have some new source of information that allows us to correctly adjust data from the 1880s? No? Then why are the old numbers changing?

      • Jason Calley on February 23, 2017 at 12:22 pm

        No? Then why are the old numbers changing?

        This is always the same question, like an eternal refrain.

        All anomalies computed in a series relative to the average of a given “baseline” period (e.g. 1951-1980 or 1981-2010) will change every time any absolute value within the baseline is changed (for example, to correct an error).

        But all the other absolute values in the time series are left unchanged.
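
        A minimal worked example of that point in Python, with invented numbers: because anomalies are departures from a baseline-period mean, correcting a single absolute value inside the baseline shifts every anomaly in the series, even though all the other absolute values stay exactly as they were.

          # Annual means in deg C; all values invented.
          temps = {1951: 9.8, 1952: 10.1, 1953: 9.9, 2016: 11.0}
          baseline = (1951, 1952, 1953)

          def anomalies(series):
              base = sum(series[y] for y in baseline) / len(baseline)
              return {y: round(t - base, 3) for y, t in series.items()}

          print(anomalies(temps))      # {1951: -0.133, 1952: 0.167, 1953: -0.033, 2016: 1.067}
          temps[1952] = 10.4           # correct a single value inside the baseline
          print(anomalies(temps))      # every anomaly shifts; the 2016 absolute value did not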

    • If a tree has grown and is now shading it, why would you adjust data from 100 years ago rather than more current data? Part of the problem is that there is no audit trail for any of this. This is beside the point of just what global temperature really means and what it actually indicates. If you use fudged-up data to calculate some fudged-up figure, all you have is something that is meaningless.

    • Most rivers and streams run in a valley of some sort. Even a relatively small depression over a significant length appears to be colder than the surrounding area, often by a significant amount. My in-laws live on the bank of the local river half a mile south of me and around 300 feet lower in elevation. I often see temperatures up to 4 C cooler in the morning on the car temperature display than when I left my house 5 minutes previously.

    • Yes, the Stevenson screens were adopted when it was discovered that not every weather station had a handy shade tree, so the standard “temperature in the shade” might not always be the reality as the sun hit the instrument. There is shade on the shade, and both the shrubbery and the shelter minimally interfere with breezes.

      J.K. Mackie’s (variant spellings in the family include M’Kie, Mackey, MacKay…) record from March 1916 shows that they recorded a daily max, min and range, and then took an arithmetical average of the maxima and an average of the minima for each month. I wouldn’t be surprised if they just averaged those two to arrive at a monthly aggregate “average”… or at least a central tendency. As long as it is consistent, it should be OK and not bias a trend into the mix.
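
      The arithmetic behind that last remark is easy to check: because the mean is linear, averaging the monthly mean of the daily maxima with the monthly mean of the daily minima gives exactly the same number as averaging the daily (max+min)/2 values. A quick Python sketch with invented values:

        import numpy as np

        tmax = np.array([30.1, 28.4, 31.0, 29.2])     # daily maxima (invented)
        tmin = np.array([12.3, 11.8, 13.0, 12.1])     # daily minima (invented)

        monthly_a = (tmax.mean() + tmin.mean()) / 2   # average of the two monthly means
        monthly_b = ((tmax + tmin) / 2).mean()        # mean of the daily (max+min)/2 values
        assert np.isclose(monthly_a, monthly_b)       # identical, by linearity of the mean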

  41. This dataset was supposed to be obsoleted decades ago with the advent of satellites and the shutting down or degrading of so many surface stations. Instead, it became a wonderful opportunity to promote a political agenda with adjustments that seem plausible on the surface (haha) but are deeply problematic when delving into the devilish details.

    • Really hope Trump’s team just takes an axe to the whole GISS temperature dataset.

      But Nick Stokes should feel free to keep publishing it from his PC, at no cost to taxpayers.

      • Nick, your protestations are pretty lame. Anybody can run the same code on the same data and get the same results. Or are you saying you have developed new software and even new data?

        You also say that adjustments make very little difference. So why have the already mangled figures been further Karlised? And why was there such fanfare when the pause was busted?

        See the problem with your line of argument?

      • Forrest,
        “See the problem with your line of argument?”
        I wrote a code using quite different methods to GISS, described here. It is similar to what BEST later used. I use unadjusted GHCN data, which is a difference, but has little effect on the result. But you don’t know that until you have done it. Where there are clear inhomogeneities, you have to adjust for them, even if it all balances out in the end. I can just see people here taking the other tack if they didn’t (Negligent!).

      • I use unadjusted GHCN data, which is a difference, but has little effect on the result.

        That’s because your processing generates the trends itself. You guys screw with the data so much, it doesn’t matter much.
        It’s interesting: I use the measurements as is. When I scrape off the day-to-day change of min temp, average it out over a year, take the last 30 years and invert it, it’s a good match to satellite temps, which makes me think the satellites are detecting the heat passing through the troposphere.

    • talldave2 on February 22, 2017 at 10:50 am

      Why should a dataset become obsolete with the advent of satellites when both look so similar?

      Here is a chart comparing, exclusively for the GHCN V3 FALLS VILLAGE station, unadjusted and adjusted data together with the UAH6.0 2.5° grid cell just above the station:

      You see that all three plots differ by so little that any claim about so-called adjustments really sounds a bit paranoid.

      But not only the plots show such convergence. The numbers do as well, e.g. the highest and lowest temperature anomalies wrt 1981-2010 (in °C) from December 1978 till December 2016.

      Highest is December 2015 for all three datasets
      – UAH: +5.35
      – GHCN unadj: +6.79
      – GHCN adj: +7.04

      Lowest is February 2015 for both GHCN datasets as well (December 1989 for UAH)
      – UAH: -5.32
      – GHCN unadj: -8.28
      – GHCN adj: -8.33

      The similarity between surface and troposphere temperatures at peaks and troughs in the chart is sometimes amazing, especially when you look at it in a pdf file.

      This, of course, you see only when looking at anomaly-based charts: there is about a 24 °C difference in absolute values between UAH and GHCN, making comparisons of absolute values impossible.

  42. If the data doesn’t fit, adjust it a bit!

    Climatologist’s maxim: One fudged data table is worth a thousand weasel words!

  43. Back in 2011 I tried to find out exactly what accounted for the NOAA/GISS U.S. temperature adjustments. I wrote about that attempt in a comment on WUWT in 2012, here:
    https://wattsupwiththat.com/2012/08/13/when-will-it-start-cooling/#comment-1057783

    Approximately all of the reported warming in the U.S. 48-State surface temperature record from the 1930s to the 1990s was due to adjustments. So I “asked the Climate Science Rapid Response Team” (a/k/a, the Defenders Of The Faith, the Congregation for the Doctrine of Anthropogenic Global Warming) to help me locate the old data, and to explain the alterations which had added so much apparent warming to the U.S. surface temperature record.

    They were unable to do so, though they did direct me to some interesting material — some of which made me queasy.

    In the WUWT conversation, Amos Batto claimed that the “data and software algorithms are publically available.” But when I told him that I couldn’t find it, and I asked him to find it, he went away.

    I never did find an explanation for the majority of those temperature adjustments, nor did I ever track down the original data graphed in Hansen’s 1999 paper. Eventually I reconstructed the data, pretty closely, by digitizing Hansen’s graph, using WebPlotDigitizer.

  44. That the author and most commenters seem ignorant of the fact that raw GHCN dailies are available from NOAA is a real head scratcher. Nick Stokes even supplied a link to the raw daily file for Falls Village, which includes TMAX, TMIN, TOBS, precip, snow, and snow depth, plus measurement, quality, and source flags. Data begins Feb 1916. Snow and snow depth measurements stop in 2010, precipitation measurements stop in 2014. (Why?)

    When someone says they are using raw data, however, I wonder how they deal with missing days and data flagged for various “quality issues.” (A rough parsing sketch follows at the end of this comment.)

    Glancing at Falls Village, the biggest temperature increase appears to be balmier summer nights. No trend in precipitation. Record high temperature of 104F in 2002 beat the previous 103 in 1933. Has anyone asked long-term residents if they feel climatically threatened?
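
    For anyone who wants to work with the raw dailies directly, here is a rough Python sketch of pulling TMAX/TMIN (with their quality flags) out of a GHCN-Daily .dly file, skipping missing days. The fixed column offsets follow my reading of NOAA's GHCN-Daily readme and should be verified against the current documentation; the file name in the usage comment is a placeholder, not the station's actual identifier.

      # Sketch only; verify the column offsets against NOAA's GHCN-Daily readme.
      def parse_dly(path, elements=("TMAX", "TMIN")):
          records = []                      # (year, month, day, element, value in C, quality flag)
          with open(path) as f:
              for line in f:
                  element = line[17:21]
                  if element not in elements:
                      continue
                  year, month = int(line[11:15]), int(line[15:17])
                  for day in range(31):
                      chunk = line[21 + day * 8: 29 + day * 8]   # VALUE(5) MFLAG QFLAG SFLAG
                      value = int(chunk[0:5])
                      if value == -9999:                         # missing day
                          continue
                      qflag = chunk[6]                           # blank means it passed QC checks
                      records.append((year, month, day + 1, element, value / 10.0, qflag))
          return records

      # records = parse_dly("falls_village.dly")   # placeholder file name, not the real station ID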

    • “Snow and snow depth measurements stop in 2010, precipitation measurements stop in 2014. (Why?)”
      It’s a coop (volunteer) station, and seems to be fading out. You can read this in the metadata here. Why it is happening in stages is a mystery.

      • Seems a shame to shut down a station in continuous operation since 1916.

        The Norfolk 2SW coop station, 7.8 miles away, has continuous records since 1943, but with many missing days. It shows more warming than Falls Village over the same period. The NWS shut down the hydrological reporting for that station in 2010 and has since been installing automated equipment:
        http://www.greatmountainforest.org/workingforest/weather/history.php

        What a difference 7.8 miles makes in extreme highs. Falls Village reached its record high of 104F in 2002, 5F above its 2001 high. Norfolk reached its record high of 98F in 2001, 7F above its 2002 high. Extremely odd.

      • More on the lack of correspondence between record TMAX days at Falls Village and Norfolk, CT.

        Falls Village:
        2001-08-03 — 91
        2001-08-04 — 85
        2001-08-05 — 88
        2001-08-06 — 92

        2002-07-29 — 93
        2002-07-30 — 104
        2002-07-31 — 91

        Norfolk:
        2001-08-03 — 87
        2001-08-04 — M
        2001-08-05 — 98
        2001-08-06 — 81

        2002-07-29 — 75
        2002-07-30 — 87
        2002-07-31 — 85

        An argument against interpolation?

  45. Anyone with even a basic grasp of statistics knows that taking averages of averages, etc., and then trying to derive even more data from that result is a fool’s errand. Remember the definition of statistics: “an attempt to derive meaning where there is none.” In general, the more manipulation done to historical data, the more useless it becomes. In other words, the error bar gets so large that ALL the data is useless.
    As far as the GISS manipulations go, I’d be very surprised if anyone with a statistical background ever reviewed and approved this approach.

    • I’ve just sampled a few stations in Australia a few times, but the mean of the half-hour readings usually comes out more than a degree (C) different, one way or the other, from the mean of the official min and max (the official max is usually half a degree or more greater than the highest half-hour reading shown, because of a short spike). It shows how far off the mean of a thermometer’s min and max readings for a day is as a guess at how the thermal energy of a hundred cubic kilometres of air changes (especially since it’s not the same air the next day). Homogenizing the data using a method developed for genuine intensive properties is not real science.
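
      A quick Python check of that point, using a made-up half-hourly diurnal curve rather than any real Australian station: when the warm part of the day is short, or the official max is set by a brief spike, the mean of all 48 readings and the (max+min)/2 "daily mean" can easily differ by a degree or more.

        import numpy as np

        hours = np.arange(0, 24, 0.5)                          # 48 half-hourly readings (invented)
        temps = 18 + 8 * np.exp(-((hours - 15) / 3.0) ** 2)    # cool night, short warm afternoon
        temps[hours == 15] += 1.5                              # brief spike sets the official max

        print(f"mean of half-hour readings: {temps.mean():.2f} C")
        print(f"(max + min) / 2:            {(temps.max() + temps.min()) / 2:.2f} C")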

      • It is not real science, but it provides the right answer for the customer funding the research. I find it amazing that, despite the immense amount of money being spent based on these over-processed observations, nobody has carried out a formal validation of them. I doubt any of them would even meet basic quality standards. A rather slapdash approach to processing large numbers of data points is forgivable in an undergraduate project with insufficient research, but for multiple government agencies with funding in the multiple millions not to bother to validate, and to provide documentation of that testing, is very close to malfeasance. Each site should have a ‘quality record’ that provides supportable reasoning for each and every ‘homogenization’ change to the data, with each change signed off by an accountable person. The original data for each site must be kept, as well as the homogenized (invented?) data, to allow replication of the supposedly justified changes.

        If an accountant did to a company’s sales figures what NASA/NOAA/the Hadley Centre do to meteorological observations, it would be criminal.

  46. In business you don’t duplicate. Profit margin is too important. Well, so are my tax dollars. Trump the businessman is doing what should have been done the very first time another agency got into the temperature business. He is minding my money. About friggin’ time. I don’t care who does what as much as I care that several tax-funded agencies are doing essentially the same $&@#% damn thing!
