HadCRUT4 joins the terrestrial temperature tamperers

By Christopher Monckton of Brenchley

In the carefully planned build-up to the Paris “climate” conference, whose true purpose is to establish an unelected and all-powerful global “governing body” (they’re no longer brazenly calling it a “government” as they did in the failed Copenhagen draft of 2009, but one can imagine what they’re thinking), the three longest-standing terrestrial temperature records – HadCRUT4, GISS, and NCDC – have all decided to throw caution to the winds.

Even though the satellites of RSS and UAH are watching, all three of the terrestrial record-keepers have tampered with their datasets to nudge the apparent warming rate upward yet again. There have now been so many adjustments, with so little justification – nearly all of them calculated to steepen the apparent rate of warming – that between a fifth and a third of the entire warming of the 20th century arises solely from the adjustments. The adjustments ought to have been in the opposite direction because, as McKitrick & Michaels showed in a still-unchallenged 2007 paper, the overland warming in the datasets over recent decades is twice what actually occurred.

The three terrestrial datasets are no longer credible. The satellites now provide the only halfway reliable global temperature record. And it shows no global warming for 18 years 5 months (UAH) and 18 years 6 months (RSS), even though approximately one-third of all anthropogenic forcings since 1750 have occurred since 1997.

For the record, though, here is the six-monthly roundup of what the three terrestrial and two satellite datasets show. Make what you can of them: but I, for one, will place no further reliance on any of the three terrestrial datasets, which have been altered beyond all usefulness, in the wrong direction, and in a manner that is not easy to justify.

For instance, a month or two back Tom Karl of NCDC notoriously and arbitrarily increased the ARGO bathythermograph temperatures, even though ARGO happens to be the least bad ocean measuring system we have. The satellites show no warming of the lower troposphere over the past 11 years; the ARGO bathythermographs show no warming of the surface layers of the ocean over the same period; yet Mr Karl has capriciously decreed that the surface must have warmed after all.

Once science was done by measurement: now it is undone by fiat. Let us hope that history, looking back in bafflement on the era when the likes of Mr Karl were allowed to ru(i)n major once-scientific institutions, will judge him every bit as unkindly as he deserves.

For this and other reasons, I no longer propose to average the terrestrial with the satellite records. The terrestrial temperature records are now mere fiction.

Here is a table showing how far the terrestrial records now differ from the satellite records. Three periods are shown, all running to June 2015. The first period, from January 1979, runs from the first month common to all five datasets. The second period, from January 1990, runs from the year of IPCC’s First Assessment Report, since when the positive and negative phases of the PDO have approximately canceled one another out, giving quite a fair indication of the true long-run warming trend. The third period, from January 1997, runs from the month in which the Great Pause of 18 years 6 months began.

Warming rates (K century⁻¹ equivalent), all trends ending June 2015

To Jun 2015   HadCRUT4 (old, May | new, May | new, Jun)   GISS    NCDC    RSS     UAH
From 1979     +1.59      +1.60      +1.60                +1.60   +1.50   +1.21   +1.13
From 1990     +1.47      +1.49      +1.50                +1.67   +1.56   +1.07   +0.96
From 1997     +0.77      +0.79      +0.82                +1.24   +1.13   −0.03   +0.04

For HadCRUT4, the first two columns give the old and new values to May 2015, and the third gives the new value to June 2015. RSS and UAH are the satellite datasets.

From 1979 to the present, the difference between the means of the terrestrial and satellite datasets is 0.40 K century⁻¹; from 1990 to the present, the difference is 0.56 K century⁻¹; from 1997 to the present, the difference has risen to a hefty 1.06 K century⁻¹. It is pardonable to deduce from this that the chief purpose of the terrestrial tampering has been to wipe out the embarrassing Pause.
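Those differences follow directly from the table by simple averaging. A quick arithmetic check (Python sketch; trend values are transcribed from the table above, with HadCRUT4’s new value to June used for the terrestrial mean):

```python
# Trend values (K per century) transcribed from the table, all ending June 2015.
terrestrial = {  # HadCRUT4 (new, to June), GISS, NCDC
    "1979": [1.60, 1.60, 1.50],
    "1990": [1.50, 1.67, 1.56],
    "1997": [0.82, 1.24, 1.13],
}
satellite = {  # RSS, UAH
    "1979": [1.21, 1.13],
    "1990": [1.07, 0.96],
    "1997": [-0.03, 0.04],
}

def mean(xs):
    return sum(xs) / len(xs)

for start in ("1979", "1990", "1997"):
    diff = mean(terrestrial[start]) - mean(satellite[start])
    print(f"From {start}: terrestrial minus satellite = {diff:+.2f} K/century")
# From 1979: +0.40; from 1990: +0.56; from 1997: +1.06
```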

The graphs are below. Frankly, even after the tampering, the warming rate is nothing like what it should have been if any of the IPCC’s predictions had been true. My guess is that once they’ve got their world government in Paris they’ll stop tampering and leave the temperature records alone.

In fact, I expect that we’ll hear a great deal less about climate change once the world government is safely installed. As the divergence between prediction and reality continues to widen, the new dictators will not want anyone to be reminded of the great lie by which they took supreme and – for the first time – global power.

January 1979 to June 2015

[Five graphs omitted]

January 1990 to June 2015

[Five graphs omitted]

January 1997 to June 2015

[Five graphs omitted]

HadCRUT4: comparison of the old and new versions

[Six graphs omitted]


315 thoughts on “HadCRUT4 joins the terrestrial temperature tamperers”

      • The 100+ stations added in the NH have on their own caused a little warming compared with historic data, because there was no data to compare with before in this data set. If the 100+ stations had been in place during the 1930s, then the recent change might even have shown some cooling by comparison. All it’s doing is making historical comparisons increasingly less accurate. Adding more stations when the planet is generally warm just adds more positive anomalies to the global NH average. It is not a fair comparison of same-station data; it just increases the warm bias by adding more readings.

        When you compare the same station data over a long period, the warming decreases and the difference between the 1930’s and 2000’s decreases. This is particularly shown using station data from around the Arctic circle.

        The main question is why reduce station data greatly over past decades and then just add hundreds recently?

        The obvious answer seems to be because it gives more warming in the NH because of the reasons mentioned above.

      • Adding more stations when the planet is generally warm just adds more positive anomalies to the global NH average. It is not a fair comparison of same-station data; it just increases the warm bias by adding more readings.

        Watch what happens in 1990. The continuity, and therefore the integrity, of this temperature data is a complete joke. It is like combining the members of the Dow Jones Industrial Index from 1890 with the Dow today. These are apples-and-oranges data sets. No real science would accept this crap.

    • The reality is that only 5% of the measurement stations are considered ideally sited, and the majority of them are, and continue to be, grossly substandard. If you restrict your analysis to the better sites, the warming is greatly reduced (or even turns to cooling).

    • There doesn’t seem to be any honesty anywhere. Calculating a “global temperature” is a statistical flight of fancy, and bears no resemblance to physical reality. This post, and so many others, are really meaningless.

      • Average global temperature? Averaged on what basis? Weighted by area of influence of each measured site perhaps (thereby giving undue weight to sparsely distributed thermometers in remote areas)? Arithmetic average of all thermometers (thereby giving undue weight to densely populated and densely sampled areas)? The only way that I can visualize generating a meaningful temperature record is to take records from well documented individual sites and see if there is a detectable trend for each station, one at a time. Someone may well have done it this way, but it probably wasn’t HADCRU. Then of course, how do you take account of diurnal and seasonal variations? Average them out? Nah, the whole concept of a global temperature is a fantasy, and an invitation to fabricate data to support preconceived conclusions. In my humble opinion.

        I’m tired of all this “world government” stuff. Next it will be black helicopters, FEMA camps etc…….. All this right-wing rhetoric gives scepticism a bad name. In my humble opinion.

      • Smart Rock, it wouldn’t matter if you had perfectly equal thermometer distribution. Temperature is an intensive property, and is only representative of the temperature for that particular spot. Averaging with another thermometer 1000km or even 1km away doesn’t give you anything physically meaningful, it just gives you a number.

    • Or as I like to say, less than the difference in walking out of a bathroom after a shower into another room in your home. At least at that point it is noticeable.

    • If co2 is a harmless gas why did I spend so much time as a critical care nurse checking its concentration in blood. The procedure is called ABG (arterial blood gas). The partial pressure of co2 must be held within a fairly narrow range. It (co2) is extremely acidic and excess leads to acidosis. The pH of blood must be around 7.2 or slightly acidic. Excess co2 leads to death. Some will say well only in solution. That is dissolved in liquid. You breathe it in and it dissolves in your blood. What is meant by those who only offer half-truths to the uninformed is co2 is harmless IN NORMAL CONCENTRATION. Half truths are more damnable than straight up lies.

      • Good grief. Everything is dangerous at some level. The dose makes the poison. We’re not worried about semantics that 4th graders may find misleading.

      • NO you do not ‘breathe it in’. It’s produced by your metabolism and you exhale ~4% CO2. For this reason adding 0.04% atmospheric CO2 will have NO noticeable effect. Of course the EPA knows nothing of physiology or science – are you their consultant?

      • If co2 is a harmless gas why did I spend so much time as a critical care nurse checking its concentration in blood.

        Where do I start? 1) CO2 is plant food, we all die without it. CO2 below 150 ppm will result in plant death.
        2) CO2 in the atmosphere is 400 parts per million, CO2 that you exhale is 4 parts per 100. CO2 on a submarine is 10,000 parts per million.
        3) No one says certain levels of CO2 aren’t poisonous, everyone knows that. We are talking about concentrations that result in zero warming, but much higher crop yields.
        4) Pure N2 can kill you, as can pure O2.
        http://www.sciencefocus.com/qa/why-does-breathing-pure-oxygen-kill-you

      • We obviously worked in different ICUs over the years. A pH of 7.2 is alkaline (a smidge over 7.0) but if I had a patient with an arterial or venous pH of 7.2, I would have called that quite acidotic as the normal pH is 7.35-45 (approximately, depending on which reference you use). I’d be quite concerned about an arterial pH of 7.20 – I’ve never seen that considered ‘normal’ in 30+ years in ICU. Don’t take my word for it though. From http://www.nlm.nih.gov/medlineplus/ency/article/003855.htm
        ‘Partial pressure of oxygen (PaO2): 75 – 100 mmHg
        Partial pressure of carbon dioxide (PaCO2): 38 – 42 mmHg
        Arterial blood pH: 7.38 – 7.42’

      • Submariners work safely for months in an atmosphere where the CO2 concentration is many times our 400 ppm.

      • Robert, the CO2 in the exhaled breath of a normal healthy person at rest is 100 times the concentration found in the atmosphere today.
        In a person who is exerting themselves vigorously, it is far higher than that.
        Learning climate science from hack nurses may be the worst plan yet.
        Luckily, most in the medical profession actually understand numbers.

      • Robert,

        Would you please divulge the hospital or medical center in which you work (or worked) as a critical care nurse? I’d like to make sure I avoid it at all costs. The lack of understanding of human physiology and science in general you demonstrated by your statement here should be reported to the accrediting agency that approved the nursing program from which you graduated. Simply unbelievable. Or, maybe you were not ever really a critical care nurse?

      • We are talking here about minute concentrations in the atmosphere which even increased a hundredfold would be harmless. Your analogy is at best facile and at worst incredibly stupid.
        The atmosphere has proved over a far longer period than science let alone climate science has even existed that the variability it can handle is immense unlike the body. As you say half truths are far more damnable than straight up lies.

      • Water is safe, right? Drink 15 pints in an hour and see what happens.
        Submariners exist in CO2 concentrations well above 2000 ppm with no ill effects, and up to 5000 ppm is considered to cause no harm.
        As a critical care nurse you should be aware of the effects within the body as opposed to external factors. Good for you, but your comments regarding CO2 are uncritical. The fact that CO2 up to around 3000 ppm can only be beneficial to the planet as a whole, and that CO2 has nothing to do with the weather or climate change but is the weapon of choice to create fear, alarmism and sacrifice for the great god of the planet and/or the furtherance of green political aspirations, should be considered carefully.
        Critical thinking is required for critical care. Or do you just believe what the doctor says? Have you never corrected a doctor, explained why his treatment might be detrimental?
        I rest my case.

      • Robert Yasko: You say ” The pH of blood must be around 7.2 or slightly acidic.”
        You’re not really a trained nurse, are you?

      • In your nursing practice have you ever been required to check ambient CO2 concentrations in the atmosphere?

      • “why did I spend”

        There is probably a good reason that is past tense. And thank goodness too, I don’t want a nurse who thinks CO2 is a strong acid, 7.2 is acidic, and especially not one that thinks CO2 in our blood comes from the air we breathe. I wouldn’t even want you working on my dog.

      • CO2 concentrations many times the atmospheric 400 ppm are common in submarine atmospheres. Breathing that for months doesn’t seem to harm the squids.

      • I think Robert got his ideas from the “Shoot your blood chemistry to hell!” scene in the old Andromeda Strain movie. He should now read Michael Crichton’s book, “State of Fear.”

    • Or, its significance compared to seasonal temperature variation is like someone tapping their foot whilst an earthquake is going on outside.

      As is the annual sea level rise compared to a spring tide.

  1. As I have said, the only data I consider valid is satellite data; the rest of the data is manipulated garbage.

    They can do all the adjustments they want but in the end AGW theory will be obsolete, and by the end I mean before this decade is out.

    • Even the satellite data is “manipulated.” The important element is the honesty and openness of the “manipulators.” Why the data is adjusted (the need for the adjustment), the reasoning for implementing the adjustment (the methodology), and the “how” of the adjustment (the actual methods including algorithms and code) need to be available for proper review. In looking at various sceptical analyses of adjusted data vs. raw(ish) it becomes clear that the methodology and methods elements are where the responsible agencies become remarkably obscure. There is very little dispute between the AGW and sceptic communities about the sources of inaccuracy that need to be accounted for.

    • …….They can do all the adjustments they want but in the end AGW theory will be obsolete, and by the end I mean before this decade is out…….
      I suspect the AGW / climate change movement is now too powerful to be stopped. Ever. The evidence that the data is manipulated cannot even get significantly into the public arena while trumped-up record highs get headline publicity.
      There are now three dangerous cults in the world: Islamism, Christian fundamentalism and Gaia worship. All of these cults are firm in their belief to the extent that they will not allow any non-conformity or questioning of belief within the family, let alone outside. Only the Gaia worship cult has the power to tax the industrial era out of existence and return us to the essentially primitive, controlled existence of our predecessors.

  2. This is what happens when you put the fox in charge of the hen house.
    Those in control of the data have a strong professional, and sometimes personal, interest in making sure it comes out with the ‘right’ results.

    And the irony is that the reason for the ‘adjustments’ which have proved so useful is that this is an area that is far from ‘settled’, so they have an excuse to ‘adjust’ in the first place.
    “He who controls the past controls the future. He who controls the present controls the past.”
    For some, Orwell’s book 1984 is a warning; for others, an instruction manual. You can guess which side climate ‘science’ comes under.

    • For some, Orwell’s book 1984 is a warning; for others, an instruction manual. You can guess which side climate ‘science’ comes under.

      Bingo!!!! The nice thing about this lie, however, is that time isn’t on their side. If we are in fact entering a cooling phase, there is zero chance CO2 will continue to do anything but go higher. The AGW crowd have no mechanism by which CO2 can result in cooling, and there is no mechanism by which CO2 can cause climate change other than through warming. They made the mistake of clearly defining the theory. They obviously recognized their mistake when they changed the theory from AGW to ACC (climate change). Problem is, the nitwits didn’t realize that the defined mechanism for CO2 is trapping heat through the greenhouse-gas effect when they changed things to climate change. How can CO2 cause climate change if we are cooling? How do we cool by trapping heat?

      • Sadly the change from global warming to climate change means they can claim that cooling is also ‘proof’.
        Frankly, it’s an indication of how rubbish the whole thing is that ‘anything’ is considered ‘proof’, and they cannot give any answer to the question ‘what would disprove the theory?’

  3. I’m going to re-post a comment I saw on a thread at Bishop Hill earlier today because it seems to sum up where we’re at:
    “…. it’s my guess that if you strip out all the tweakings, reinstate some of the cooler weather stations that were closed down in the 1990s cull, took an honest approach to UHI and stopped blowing hot air on sensors at airports then most if not all of this mythical “warming” would disappear.”

      • Most temperatures, especially those in the distant past, would indeed warm. Which would lower the recorded rate of warming. Most of the adjustments consist of cooling the past to make the present seem alarmingly warm by comparison. It is hard to warm the present – all those pesky thermometers.

        Clever obfuscation there. Nice subtle use of the ambiguous word “record”.

      • A line on a chart might change, but the physical temperature at any point in time will still have been what it was.

      • I rarely post the letters LOL, preferring LMAO, or the more emphatic ROFLMFAO.
        But this time I really did laugh out loud, but only for a second, until I realized that Steven may actually be serious.

        In which case, the appropriate response is:

        *rolls the eyes*

  4. What would it have been if they were still using the semi-adjusted HADCRUT3 version instead of the fully-adjusted HADCRUT4?

    12-month average anomaly (°C)

                         HADCRUT3   HADCRUT4
    Dec 1998               0.55       0.52
    Dec 2011               0.34       0.40
    Increase/Decrease     −0.21      −0.12

    The new version increases warming (or rather decreases cooling) since 1998 by 0.09C, a significant amount for a 13 year time span. Whilst the changes should not affect the trend in future years, they will affect the debate as to whether temperatures have increased in the last decade or so.

    https://notalotofpeopleknowthat.wordpress.com/2012/10/10/hadcrut4-v-hadcrut3/
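The 0.09 °C figure follows from the table by simple subtraction (a Python sketch; anomaly values are transcribed from the table above):

```python
# 12-month mean anomalies (degrees C) transcribed from the table above.
had3 = {"Dec 1998": 0.55, "Dec 2011": 0.34}
had4 = {"Dec 1998": 0.52, "Dec 2011": 0.40}

change3 = had3["Dec 2011"] - had3["Dec 1998"]  # -0.21: HADCRUT3 cools 1998 -> 2011
change4 = had4["Dec 2011"] - had4["Dec 1998"]  # -0.12: HADCRUT4 cools less

print(round(change4 - change3, 2))  # 0.09 degC less cooling under the new version
```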

  5. Cheer up. The next US president will be a Republican and a skeptic to boot. He (or she?) will point at the Constitution, do a Tony Abbott and throw the whole sordid lot out of the window. And if the “world government” objects to its treatment, the real president will point at his nuclear arsenal and ask them: “where are your divisions?”.

    • Ah, Ed, “where are your divisions?” That’s a spin-off of “how many divisions does the Pope have?” It was made by Stalin. It didn’t work for him either in the long run.
      michael

    • Warning:
      Brian G Valentine trolling below.
      Can anyone able to find their front door – three times out of five – be so thick?

      Auto – not impressed.

    • The lifetime bureaucrats have been in charge for some time now. EPA is staffed top to bottom with environmental zealots. Their ideology conveniently is aligned with their desire for ever increasing power. How would a Republican president tame that crowd?

      I know that our media has been fast asleep for 6 1/2 years, but just watch what happens when a Republican president is in the White House. They will go absolutely berserk if there are serious attempts to bring the EPA to heel. Is there a Republican out there willing to stand that heat? Jeb Bush maybe? Cough, cough.

  6. Finally, thank you Mr Watts!! Let us hope that there will be no further “analysis of data” using ANY of the surface data (unless raw, from way-back machines, untouched). I hope the Lord and Bob realize this. LOL

  7. My engineering background is rusty, but I am still having trouble understanding the use of hundredths of a degree for worldwide temperature changes over a hundred years, when surely some of the old temperatures were recorded only in whole degrees, not even tenths of a degree. Others have commented on this issue, but there must be a good reason, and it may have been handled here more than once before; apologies if I missed it. My recollection, recently refreshed, of dealing with significant digits in math and science, and in recording and averaging measurements, is that it’s meaningless garbage to refer to measurements, including averages and other calculations, that purport to claim accuracy beyond the least accurate measurement (or, in some cases, beyond one significant digit). To claim changes of a hundredth of a degree… one may as well claim thousandths of a degree. Is there a ready and easy source explaining why that is good science for climate science in general, without getting into the footnotes of formal papers? And isn’t it necessary to report the margin of error if going beyond the significant digits, to give the data context?

    • I agree with B. Question – What is the accuracy of the thermometers normally used for atmospheric temperature measurement near the ground?

      • 100 years ago, they were using glass thermometers, read by eye, and rounded to the nearest degree.
        The accuracy depends a lot on how well trained the observers were to avoid parallax.

    • I have been thinking the same thing for years. Way, way back in Physics lab class, the difference between an “A” and a “C” was your error analysis for just this reason.

    • Is there a ready and easy source explaining why that is good science for climate science in general

      Software that provides the illusion of competence.
      The conflation of precision and accuracy.
      Impressing the easy-to-impress.

      • Everybody:
        Precision increases proportional to SQRT(N). If you average enough individual measurements, you are allowed more significant figures. And there are a lot of thermometer readings in these time series.

        This is why we always take multiple repeats of all our measurements in the lab. I am surprised any of you had any issue with this at all. Although I will grant you, 0.01 degree does seem to be pushing it a bit, at least at first look.
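The SQRT(N) point can be illustrated with a quick simulation, with the caveat the later replies raise: it applies to repeated readings of the same fixed quantity. A sketch (Python; the true value of 20.0, the noise level, and the trial count are all invented for illustration):

```python
import random
import statistics

random.seed(1)

def spread_of_mean(n, sigma=0.5, trials=2000):
    """Empirical standard deviation of the mean of n noisy readings
    of the same fixed quantity (true value 20.0, noise sd = sigma)."""
    means = [
        statistics.fmean(random.gauss(20.0, sigma) for _ in range(n))
        for _ in range(trials)
    ]
    return statistics.pstdev(means)

for n in (1, 4, 16, 64):
    print(n, round(spread_of_mean(n), 3))
# the spread falls roughly as 1/sqrt(n): quadrupling n halves it
```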

      • TonyL, I hope you do not refer to the multiple runs used in modeled temperature output. That has nothing to do with collecting multiple sensor data under the in-situ initiating conditions of nature.

      • @Pamela,
        No, I would not do that. As far as I can see, model runs produce imaginary data, and so, logically, one would use imaginary numbers for the stats. Logically.

      • TonyL,
        I am no statistics expert but I understand that SQRT(N) thing applies to measurements where the errors have a normal distribution. While the actual “eyeballing” of the thermometer readings may have such a distribution, the attempt to derive a global temperature from the myriad of local samplings is not likely to be amenable to such simplistic error analysis. But then we know that climatologists and statisticians rarely communicate.

    • The claim of precision is via the misapplication of the law of large numbers, whereby the error decreases as the square root of N. This only works if you are measuring an object that is temporally and spatially static. I’m talking to you, Nick and Zeke.

      • What you say looks to be strictly correct. However, if we allow thermometers to the nearest degree, the uncertainty is +/- 0.5 deg. If we do not have some improvement from somewhere, all the time series would be random noise with a peak-to-peak of 1 deg. Even the super El Nino of 1998 would be lost in the noise, but that is clearly not the case. So the extra resolution must be coming from somewhere.
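One candidate answer to where the extra resolution comes from is dithering: when real variability exceeds the rounding step, averages of many whole-degree readings of the same quantity can recover sub-degree information. A sketch (Python; the 15.3° true value and the noise level are invented, and whether station records actually satisfy these assumptions is exactly what this thread disputes):

```python
import random
import statistics

random.seed(0)
TRUE_TEMP = 15.3  # hypothetical true temperature; thermometer reads whole degrees

def mean_of_rounded(n, weather_sd):
    """Average n whole-degree readings of TRUE_TEMP plus Gaussian variability."""
    return statistics.fmean(
        round(random.gauss(TRUE_TEMP, weather_sd)) for _ in range(n)
    )

print(mean_of_rounded(10_000, 0.0))  # 15.0: with no noise, rounding never averages out
print(round(mean_of_rounded(10_000, 2.0), 2))  # close to 15.3: variability acts as dither
```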

      • You are correct. The law of large numbers does not apply to temperature measurements made around the world. However, if you take the same measurement at the same place with the same equipment over a long period of time there is some benefit. I’m not sure how to calculate the error, but it is not sqrt(n). It is also not the average of all the errors.

    • Ahh, precision, precision in the climate data…from the UK Met Office

      The new computer will perform more than 16,000 trillion calculations per second and weigh the equivalent of 11 double-decker buses.

      Who needs two decimal places when the unit is ‘double-decker bus’, with or without passengers?

      • Digital thermometers were not used in medicine until the 1990s, which is probably about the same time digital units replaced outdoor thermometers. Considering that we have only the roughest idea of what ocean temperatures were before that, the claimed precision is completely ridiculous! To anyone willing to accept what science should stand for, this whole issue is a pile of garbage!

    • “My engineering background is rusty, but I am still having trouble understanding the use of hundredths of a degree for worldwide temperature changes over a hundred years, when surely some of the old temperatures were recorded in only degrees, not even tenths of a degree. Others have commented on this issue, but there must be a good reason, and it may have been handled here more than once before, apologies if I missed it.”

      The answer is simple.

      The “average” represents one thing and one thing only: the best estimate of the temperature at unsampled locations. When we say best estimate we mean the estimate that minimizes the error.

      a simple example.

      You have a scale. It reports your weight to (+-1 lb)

      You weigh yourself 3 times

      201, 200, 200.

      Provide the best estimate of your true weight given a perfect scale.

      To do this you average the three and get 200.33333

      This doesn’t mean you KNOW your weight to this level of precision.

      It means your best estimate is 200.3333: that minimizes the error of prediction.

      If you say 200 or 201, your error will be larger.

      AGAIN, the “average” temperature is NOT the average of all measurements. It is a prediction of what the temperature is at UNSAMPLED places measured with a perfect system.
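The “minimizes the error” claim in the weighing example can be checked directly, taking squared error as the error measure (the standard choice for which the mean is optimal). A minimal sketch:

```python
readings = [201, 200, 200]
best = sum(readings) / len(readings)  # 200.333...

def sum_sq_error(guess):
    """Total squared error of a single guess against all three readings."""
    return sum((r - guess) ** 2 for r in readings)

print(sum_sq_error(best), sum_sq_error(200), sum_sq_error(201))
# ~0.667 < 1 < 2: the un-rounded mean beats either whole-pound guess
```

Note this only shows the mean is the best single predictor of the readings; the thread’s dispute about whether the extra digits mean anything physically is a separate question.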

      • “a simple example.
        You have a scale. It reports your weight to (+-1 lb)
        You weigh yourself 3 times
        201, 200, 200.
        Provide the best estimate of your true weight given a perfect scale.
        To do this you average the three and get 200.33333
        This doesn’t mean you KNOW your weight to this level of precision.
        It means your best estimate is 200.3333: that minimizes the error of prediction.”

        I believe your example is what I’m getting at in terms of meaninglessness. The math answer is not necessarily the science answer, which might differ from the statistics answer. Aside from the law of large numbers and statistics that clearly don’t apply with only 3 numbers (and may or may not be misapplied in climate science, for those who know), I understand simple science measurement principles and the rules of dropping insignificant digits. Best estimate is not the issue, nor helpful with these numbers. Regardless of how math tells us to calculate the average for a math quiz, rules for measurement require dropping insignificant digits, else they are meaningless surplusage. The reason is that the number 200 and the number 200. (with a decimal) are differently recorded, and have a vastly different range of error, as I am sure you noted since you recorded them above differently. The number 200 means that measurement could be 151 to 249, a 100 point swing, while 200. means the measurement was somewhere between 199 and 201, a 2 point swing. Thus for meaning, once math computes the average at 200.3333 give or take 50 pounds, science requires that the average measurement was really 200, plus or minus 50 pounds. Saying it was 200.3333 plus or minus 50 pounds is of no more help than 200.33333333333333333333 plus or minus about 50 pounds. Best estimate is meaningless, particularly if the next time your measurements are 200, 250 and 200.45789 and you then try to subtract the two averages down to the fourth decimal place and suggest that the difference means anything. It doesn’t. It’s meaningless, well within the margin of error.

      • I need to correct my response to the extent that your example assumed the scale was accurate to +/- 1 pound. So my example is not responsive to your particular data, but it still explains margin of error, and the meaninglessness of best estimates, particularly when they are within the margin of error, and then the meaninglessness of subtracting best estimates, each of which is within the margin of error, to come up with an increase or decrease that is itself meaningless.

      • You’re dead-on right, B. Your point goes to the difference between accuracy and precision. Accuracy never goes beyond the resolution of the device, no matter the precision of the measurements.

        Going back to your original question, the reason climatological temperatures are reported to ±0.01 C (or better) is that the entire field indeed does assume, and universally so, that all temperature measurement error is random, and the central limit theorem applies to it all. So, they just decrement the measurement error to approximately zero and then ignore it.

        Their assumption of random measurement error is completely unjustified. I’ve published on their negligence, and they’ve ignored that, too.

      • Your simple example does not illustrate what’s going on. Your simple example is the average of multiple measurements of the same measurand (object being measured). You are attempting to apply that method to single measurements (daily average, for instance) of multiple measurands (different weather stations, for instance).

        That you mistake one situation for another this way is just one reason why people think you, and those who do the kind of work you do, are wrong when you talk about these kinds of things.

      • a simple example.

        You have a scale. It reports your weight to (+-1 lb)

        You weigh yourself 3 times

        201, 200, 200.

        Provide the best estimate of your true weight given a perfect scale.

        To do this you average the three and get 200.33333

        This simple example is wrong. Suppose you have a scale that’s accurate to 0.2 lbs, and which is read to the nearest pound. Suppose, further, that you weigh exactly 160.2 lbs. How many 160 lb. readings would you need to average together to get your true weight?
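The question above can be checked with a quick simulation (a sketch; the 160.2 lb weight and the jitter model are illustrative assumptions, not anyone's actual data): with a noiseless scale read to the nearest pound, every reading is 160, so no amount of averaging recovers 160.2; add random jitter comparable to the quantization step and the average does converge.

```python
import random
import statistics

random.seed(42)

TRUE_WEIGHT = 160.2  # lbs, per the example above

def read_scale(jitter=0.0):
    """One reading on a scale that rounds to the nearest pound.
    `jitter` is the half-width of random variation (sway, placement) in lbs."""
    return round(TRUE_WEIGHT + random.uniform(-jitter, jitter))

# Noiseless scale: every reading is 160, so the average never leaves 160.
noiseless = [read_scale(jitter=0.0) for _ in range(10_000)]
print(statistics.mean(noiseless))

# With jitter on the order of the 1 lb step, the mean converges toward 160.2.
dithered = [read_scale(jitter=0.5) for _ in range(10_000)]
print(round(statistics.mean(dithered), 1))
```

The answer to "how many 160 lb readings would you need" is therefore: infinitely many, unless some noise spreads the readings across the quantization boundary.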

    • I’m not a climate science expert, but am I wrong in assuming the following: each station records temperature to the nearest whole degree F in the USA, or to the nearest whole or half degree C elsewhere, and the results are then entered into a spreadsheet set to average them to 3 decimal places?

      Is it a ‘measurement’, or a statistical construct?

    • David Livingstone got lost in Africa in the late 1860s; Burke and Wills made their epic attempt to cross Australia and perished in the process in 1861; Amundsen reached the South Pole in 1912; and 70% of the planet’s surface is covered by oceans. Yet the Climatic Research Unit claims to know the average global temperature anomaly since ~1845 to tenths of a degree C.

    • Most modern digital audio equipment relies on flipping a single value back and forth extremely fast, resulting in 2^24 possible values for a 20 Hz to 20 kHz (audio range) signal.

      That’s +/- 100% precision error for a single sample, but it is done so many times that the final error at the analog output to your amplifier is an extremely small 1/(2^24) error.

      I know quite a lot about this, because I helped write the software for a digital multimeter with similar capabilities as well as worked on an ECG of similar design.

      Thus I have no fundamental problem with a precision of 0.01 degC when averaged over tens of thousands of thermometers read at a resolution of +/- 1 degC. In fact the accuracy might increase as well, because miscalibration should be randomly distributed too, and the net accuracy should increase as different calibrations are run (calibration meaning calibrating to a known source). The known source, of course, is still the fundamental limit on accuracy.

      I do note that the audio equipment uses specially engineered noise shaping to get the dramatic increase in precision, better than sqrt(N). One might be able to argue that the “noise” of temperature measurement is not well distributed and thus sqrt(N) or better does not apply. I’d love to see a write-up on this.

      A reasonably good read on delta sigma converters: http://www.ti.com/lit/an/slyt423/slyt423.pdf

      Peter
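The oversampling principle described above can be illustrated with a toy model (my own sketch, not the circuit in the TI app note): a 1-bit quantizer plus random dither, averaged over many samples, recovers a DC level far finer than the quantizer step.

```python
import random

random.seed(0)

SIGNAL = 0.3  # DC input level, in units where the 1-bit quantizer outputs 0 or 1

def one_bit_sample(x):
    """1-bit quantizer with uniform dither: output 1 if (x + dither) > 0.5."""
    return 1 if x + random.uniform(-0.5, 0.5) > 0.5 else 0

# A single sample carries +/-100% error, as the comment above notes,
# but the average of many dithered samples converges on the input level.
n = 100_000
avg = sum(one_bit_sample(SIGNAL) for _ in range(n)) / n
print(round(avg, 2))  # near 0.3
```

Without the dither, every sample of a 0.3 input would read 0 and the average would stay at 0, which is the crux of the disagreement in the thread below.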

      • Where are you getting 2^24?
        Did you mean 65536 (16 bit) or 16777216 (24 bit)?
        Are you talking about PWM?

        The other obvious problem is that there is variation of thermometer accuracy to begin with, variation of people reading the thermometers, etc.
        Precision and accuracy in historical temperature records will always be an illusion.

        Compounding this problem (in addition to data tampering, which HAS occurred), is that actual “science” would calculate an average (and thus an anomaly) based on ALL available records, not a predetermined time slice.

      • You’re conflating precision with accuracy, Peter Sable. The multiple values just determine how closely your output gets to the resolution limit (accuracy) of your digital equipment.

        Likewise, in your ±1 C resolution thermometer example, zillions of readings just get you arbitrarily close to ±1 C; your limit of resolution.

        And that all assumes that your errors are normally distributed and stationary.

      • Likewise, in your ±1 C resolution thermometer example, zillions of readings just get you arbitrarily close to ±1 C; your limit of resolution.

        Nope, you are wrong. If you have thousands of thermometers, some might read 25 degC and some might read 26 degC. You will get resolution (precision) that improves as sqrt(N) with N thermometers. The actual measurement errors made by the operators (if they are uncorrelated) raise the noise level and improve the resolution as the number of thermometers goes up; the sqrt(N) improvement doesn’t happen without noise being present. You could argue that all thermometer readers of a particular time period round up when the reading is between two tics on the thermometer (a correlated error), but I don’t think you are arguing that. If you are, post the source please.

        You can actually perform a little experiment yourself to see how this works. If you hold your finger still on a surface, you can’t feel very much texture. If you move your finger around, however, you can feel lots of texture. This is mathematically similar to oversampling: the movement is the noise, and your nervous system is doing the sampling.

        On the accuracy side you also get some benefit with more thermometers. For example, if your calibration source has an accuracy of +/- 0.1 degC, and your thermometers have a resolution of 1 degC, some will read, say, 25 degC and some will read 26 degC at the calibration lab (both in spec at +/- 1 degC). On average they may read 25.5 degC, and with enough thermometers the average reading will converge on the accuracy and precision of the original calibration source. This particular side of the metrology argument (accuracy) is more prone to correlated errors, however (e.g. if one calibration lab is doing all the work for thousands of thermometers, there’s not much benefit). So this is more hypothetical; you’d have to back it up with data about who/what/when/where thermometers are being calibrated.

        I agree there are numerous sources of correlated errors such as UHI, bad site placement, and increasing adjustments all in one direction. I think you are on the wrong track with this particular argument regarding precision and accuracy, though. Metrology is a very well studied area, and the idea of increasing resolution by oversampling is the basis of all modern digital audio equipment and digital instrumentation. If it didn’t work, it would all sound like crap and we couldn’t manufacture anything that required high-resolution measurements (which would include all electronics).

        (I was a metrologist in a previous job)

        Peter

      • Peter Sable, your analysis is not correct.

        Let’s suppose you have a thermometer marked every 1 C, giving it ±0.25 C resolution. Calibration to ±0.1 C means that in some 25.00 C test water bath, the thermometer will read to within ±0.1 C of 25 C.

        In the field, however, temperatures are not controlled, and will typically read between the integer temperature marks. Any reading taken with that thermometer will be reported to ±0.25 C, the limit of resolution.

        Suppose ten people take a reading from that thermometer under field conditions, all within 1 minute, and suppose further that the air temperature is stable during that minute.

        Let’s further suppose the sighting error across the 10 individual readings is strictly random. Whatever the thermometer reads is within ±0.1 C of the “true” temperature. However, every measurement is recorded to ±0.25 C, the resolution of the instrument. The sighting errors are scattered about that T±0.25 C record. The total uncertainty intrinsic to each reading is T±(±0.25C + sighting error), ignoring the ±0.1 C calibration limit.

        Of the two uncertainties, resolution is constant with every reading but random error varies in some unknown way (In the field, the “true” temperature is never known.). The two uncertainties are treated differently. When the 10 temperature readings are averaged, the average resolution uncertainty is ±sqrt[10*(±0.25 C)^2/10] = ±0.25 C, because the ±0.25 C enters as a constant uncertainty with every reading.

        The standard deviation of the random error is the usual σ = ±sqrt[Σ(T_bar − T_i)^2/9], where T_bar is the average of the 10 readings, T_i is an individual reading, and one degree of freedom is lost by taking the average. This standard deviation is what decrements as 1/sqrt(N), where sqrt(N) ≈ 3.2 in this case. The 1/sqrt(N) is justified because the random sighting error is stationary (σ is constant) with a mean of zero.

        The 1/sqrtN decrement of the random sighting error then only converges the average of 10 readings toward the ±0.25 C limit of resolution.

        Resolution is a constant uncertainty. It is a physical limitation of the instrument itself. It does not randomly oscillate about a mean of zero. Stationarity with a mean of zero is the only justification for decrementing error toward zero as 1/sqrtN. The resolution limit of the instrument is a constant and therefore violates that condition.

        The average temperature will never be more accurate than ±0.25 C, no matter how many readings are averaged, because the resolution limit enters as a constant uncertainty into every single reading.

      • ±sqrt[10*(±0.25 C)^2/10] = ±0.25 C, because the ±0.25 C enters as a constant uncertainty with every reading.

        Where’d you get that formula? It’s wrong. Please cite your source.

        https://en.wikipedia.org/wiki/Standard_error#Standard_error_of_the_mean

        SE = s/sqrt(N). 0.25/sqrt(10) ~= 0.08.

        Let’s take a specific example. Assuming sighting errors on an analog thermometer with 0.25 tic marks, you take the following nine readings: 10.25, 10.0, 9.75, 10.0, 10.0, 9.5, 9.75, 10.25, 10.0. The standard deviation of the population is estimated at 0.24296, and the SE of the mean is 0.24296/sqrt(9) ≈ 0.081.
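For what it’s worth, the arithmetic in the example above can be checked directly (note that nine readings are listed):

```python
import math
import statistics

# The readings listed in the comment above (nine values, 0.25-degree tick marks)
readings = [10.25, 10.0, 9.75, 10.0, 10.0, 9.5, 9.75, 10.25, 10.0]

s = statistics.stdev(readings)     # sample standard deviation
se = s / math.sqrt(len(readings))  # standard error of the mean

print(round(s, 5))   # 0.24296
print(round(se, 3))  # 0.081
```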

        That 10 on the top of your formula doesn’t belong there. The standard deviation doesn’t increase with N; the standard error of the mean does decrease by sqrt(N).

        The biggest problem with your specific example is likely a lack of noise, which means the errors are not normally distributed and/or are correlated. However you get a much better noise distribution when the readings are scattered in time and space.

        Peter

      • The mathematics of uncorrelated error does not apply to the limit of instrumental resolution.

        Since I’ve designed and used electrical measurement instrumentation that uses this principle, which sells in the tens-of-millions-of-dollars range, has passed strict scrutiny by engineering fellows (i.e. expert metrologists), and is used in thousands of engineering labs around the world without complaints in this area*, and since you haven’t cited any sources or produced a correct simulation or example, I suspect you are probably wrong.

        Peter

        * Software bugs I’ve made plenty of, which is why I don’t trust models with millions of lines of code. I’ve won every technical metrology argument I’ve had in my professional career. Got some fun war stories…

      • I wrote a Monte-Carlo simulation of taking readings at a limited resolution and then averaging them together to see if resolution increases. I wrote this in less than an hour so it’s not pretty but it looks correct.

        The results agree with the s/sqrt(N) improvement in resolution as documented in numerous places such as Wikipedia.

        Here’s the result with a resolution of 1.0 averaged over 100 samples. As expected, the standard error of the mean is about 0.1 (1.0/sqrt(100)).

        Here’s the result with a resolution of 1.0 averaged over 1000 samples. As expected, the standard error of the mean is about 0.03 (1.0/sqrt(1000)).

        Since I now have simulations with open source code, references, and lots of experience here, I respectfully submit that I am more likely to be correct here.

        Peter

        octave/matlab source code: https://www.dropbox.com/sh/jzoxwyqbf3qs2j5/AAAysSOjhsYDuSvOu5_mbCiHa?dl=0
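Since the original source is Octave/MATLAB, here is a rough Python re-creation of the simulation described above (my own sketch; the true value 20.37 and the Gaussian noise level are illustrative assumptions): readings of a fixed true value are corrupted with noise, quantized to a resolution of 1.0, and averaged, and the spread of the trial means tracks s/sqrt(N).

```python
import random
import statistics

random.seed(1)

TRUE = 20.37    # arbitrary true value, deliberately off the 1.0 grid
NOISE_SD = 1.0  # Gaussian noise, comparable to the quantization step
N = 1000        # readings averaged per trial
TRIALS = 2000

def trial_mean():
    """Average N noisy readings, each quantized to a resolution of 1.0."""
    return sum(round(TRUE + random.gauss(0, NOISE_SD)) for _ in range(N)) / N

means = [trial_mean() for _ in range(TRIALS)]

# The trial means cluster around the true value, well inside the 1.0 resolution,
# with a spread of roughly sqrt(NOISE_SD**2 + 1/12) / sqrt(N) ~= 0.033.
print(round(statistics.mean(means), 2))
print(round(statistics.stdev(means), 3))
```

As in the audio case, the result depends on the noise: with NOISE_SD set to 0, every reading is 20 and the averaging gains nothing.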

      • Peter Sable, from “The Joint Committee for Guides in Metrology (JCGM/WG1) 2008 Guide to the expression of uncertainty in measurement. F.2.2.1 The resolution of a digital indication

        “One source of uncertainty of a digital instrument is the resolution of its indicating device. For example, even if the repeated indications were all identical, the uncertainty of the measurement attributable to repeatability would not be zero, for there is a range of input signals to the instrument spanning a known interval that would give the same indication. If the resolution of the indicating device is δx, the value of the stimulus that produces a given indication X can lie with equal probability anywhere in the interval X − δx/ 2 to X + δx/ 2. The stimulus is thus described by a rectangular probability distribution of width δx…” (my bold)

        That is, the lower limit of any instrument is the instrumental resolution. Instrumental resolution is a box (“rectangular”) distribution, not a normal distribution. It is constant, does not have a mean of zero, and does not average away. It is present in each measurement going into an average, and averages to its constant value no matter the number of repeated measurements.

        I.e., when resolution = constant = ±δx, then an average is 1/N[sum over(x_i ± δx)] = X_bar ± δx_bar, and ±δx_bar = ±δx.

        The JCGM discussion concerns the resolution limit for digital instruments, but applies equally to analogue instruments (such as a mercury thermometer).

        From Bevington and Robinson, “Data Reduction and Error Analysis for the Physical Sciences
        “p. 36, Section 3.1 Instrumental and Statistical Uncertainties

        “Instrumental Uncertainties

        “Instrumental uncertainties are generally determined by examining the instruments and considering the measuring procedure to estimate the reliability of the measurements. In general, one should attempt to make readings to a fraction of the smallest division on the instrument. For example, with a good mercury thermometer, it is often easy to estimate the level of the mercury to a least count of one-half of the smallest scale division and possibly even to one-fifth of a division. The measurement is generally quoted to plus or minus one-half of the least count [e.g., ±1/4 or ±1/10 degree, respectively, for a 1 C division — P] and represents an estimate of the standard deviation of a single measurement.

        …

        “If it is possible to make repeated measurements then an estimate of the standard deviation can be calculated from the spread of these measurements as discussed in Chapter 1
        [Ch. 1 introduces inter alia random errors — P]. The resulting estimate of the standard deviation corresponds to the expected uncertainty in a single measurement. In principle, this internal method of determining the uncertainty should agree with that obtained by the external method of considering the equipment and the experiment itself, and in fact, any significant discrepancy between the two suggests a problem, such as a misunderstanding of some aspect of the experimental procedure.” (original emphasis)

        To condense that last.

        external uncertainty: ±1/4 of the smallest readable division (typically). This is instrumental resolution.

        internal uncertainty: ±σ of the repeated measurements. This is experimental error, taken as random.

        Bevington and Robinson tell us that in a correctly done experiment with repeated measurements and random error (which averages away), internal uncertainty = external uncertainty, and (again) external uncertainty = instrumental resolution.

        So, there it is. When errors are random, repeated measurements reduce the uncertainty to the level of instrumental resolution. And no further.

        One might also consult the pragmatic discussion in Agilent Technologies’ “Fundamentals of UV-visible Spectroscopy” (2.3 MB pdf; free download).

        In the section “Key instrumental parameters”, p. 44ff, and especially the discussion of instrumental spectral band width vs natural band width, it’s made very clear that instrumental resolution is the lower limit of accuracy of any measurement.

        Here’s the relevant quote: “Resolution is closely related to instrumental spectral bandwidth (SBW). The SBW is defined as the width, at half the maximum intensity, of the band of light leaving the monochromator (see Figure 30). The accuracy of any measured absorbance depends on the ratio of the SBW to the natural bandwidth (NBW) of the absorbing substance. The NBW is the width of the sample absorption band at half the absorption maximum (see Figure 31).”

        And below that: “If an instrument with an SBW of 2 nm is used to measure samples with an NBW narrower than 20 nm (for example, benzene), an error in absolute absorbance measurements will result. This error increases as the NBW decreases (see Figure 32).”

        “nm” is nanometers, the wavelength unit of visible and ultraviolet light. Agilent is saying that the resolution of their instrument is 2 nm. If the natural band width of the material is less than 10x the 2 nm instrumental spectral band width, the accuracy of the measurement becomes seriously compromised.

        Again: the instrumental spectral bandwidth of Agilent’s UV-visible spectrophotometer is 2 nm. This 2 nm is the lower limit of resolution of the instrument. No wavelength measurement using their instrument can be read more accurately than that 2 nm.

        Any given wavelength increment within a spectrum obtained using that instrument can be any place within its 2 nm resolution box.

        Any wavelength obtained using that spectrophotometer must be quoted to no better accuracy than ±1 nm; i.e., ±1/2 the instrumental resolution. And that limit of resolution is constant in every spectrum. It enters with each spectrum into an average and remains constant no matter how many spectra are measured and averaged.

        And that’s the way of instrumental resolution. Any instrument has a limit of resolution. The same analytical logic applies to all of them. There are no magic thermometers. No number of measurements will ever produce a result more accurate than the resolution lower limit of the instrument.

      • Peter, every single one of your examples assumes random error. We all understand that random error diminishes as 1/sqrtN. There’s no point demonstrating it.

        The issue is instrumental resolution. Not precision; not random error.

        Your conclusion is present in your assumption. With that circularity, of course the outcome is unvaryingly 1/sqrtN.

        But your assumption (error is always random) is violated by the conditions of instrumental resolution. Resolution is pixel size. It’s constant. It’s a property of the instrument. It never gets smaller. It’s in every measurement and has the same magnitude in every measurement. It propagates unchanged into an average.

      • even if the repeated indications were all identical, the uncertainty of the measurement attributable to repeatability would not be zero,

        That should read “especially if the indications are all identical”; that indicates an absence of noise, which, as I’ve already agreed, means you’re stuck at the resolution of the device.

        The addition of random noise is what gets you the additional resolution as you add in more measurements. The sources you quote don’t discuss that aspect.

        There’s plenty of random uncorrelated noise in the reading of thousands of thermometers.

        Your second example suffers from the same issue – there’s little or no uncorrelated noise. In that case you are stuck with the resolution of the device.

        In short, your examples are about the dude in Maine reading the same thermometer many times over 5 minutes. We both agree that the precision is that of the instrument in that case.

        However, when there are multiple operators measuring multiple thermometers in multiple locales at multiple temperatures, calibrated from multiple sources, you are getting lots of uncorrelated noise and you get to use s/sqrt(N) to get an increase in precision (but probably not in accuracy…).

        The Agilent example is talking about a very complicated setup; I’d have to read it to see if it’s running into Nyquist or something like that. It’s 12:40 am, so I’ll have to do that later…

        The stimulus is thus described by a rectangular probability distribution of width δx…” (my bold)

        Easy code change to change to a rectangular distribution. I suspect the CLT will come into play and I’ll be fine. I’ll tweak my code and see what happens.

        Peter

        * I’m going to do some playing around with autocorrelation of different surface locales and see how well this holds up. See http://www.mysimlabs.com/surface_generation.html, I can generate some temperature surfaces with different correlations and see what happens. That’ll take a couple of days, and alas this forum isn’t good for long-winded conversations…

      • Easy code change to change to a rectangular distribution. I suspect the CLT will come into play and I’ll be fine. I’ll tweak my code and see what happens.

        I tried it, CLT still applies. Still drops by sqrt(N) with a rectangular distribution.

        I think the place where we are talking past each other is the addition of noise. In every example you’ve shown, there’s no added noise. I agree that in that case you are limited to the precision of the instrument. I think I’ve shown that when you add noise, you get precision beyond that of a single instrument’s precision for a single reading. The example of that physics lab with the pendulum is an excellent example of this working in the real world. I hope it’s more understandable than the Monte Carlo simulations…

        Peter

      • I don’t know if you’re still following this back-and-forth, Peter Sable, but I would like a clarification of what you’re trying to show. ISTM that you are trying to show that repeated (multiple) measurements of the same measurand will increase the precision of the measurement. Is that right?

        For instance, you say at August 4, 2015 at 11:16 pm, “If you have thousands of thermometers, some might read 25degc, some might read 26degC. You will get resolution (precision) that increases as the sqrt(N) thermometers.”

        That, however, is not the methodology being criticized here, as I understand it. The situation being discussed is single measurements of multiple measurands (that is, the temperature at different weather stations), most likely TAVG = (TMAX + TMIN) / 2. I am using TAVG here as the “measurement”, though it is a calculated average, because I think that’s what’s actually used when stating a day’s temperature at a weather station. From those measurements (TAVG), we get the monthly, then yearly, average for a weather station; from the accumulation of those, we get the monthly, then yearly, average for a region; from the accumulation of those, we get the monthly, then the yearly, average for the globe. (I am simplifying: I know weighting and other factors are applied, and I know some modern stations take the temperature more frequently.)

        To get back to your example, if you have thousands of thermometers, in the real-world situation we are talking about, some might read 15, others 23, others 24, others 25, others 32, others 33. See? They are thermometers measuring different things — the temperature at different places.

        ISTM that your analysis applies only to multiple attempts to measure the same thing. We are talking about single measurements (by way of daily averaging) of different things. Perhaps your analysis still applies, but, at least for now, ISTM that you are analyzing the wrong situation.

      • To get back to your example, if you have thousands of thermometers, in the real-world situation we are talking about, some might read 15, others 23, others 24, others 25, others 32, others 33. See? They are thermometers measuring different things — the temperature at different places.

        The sum of the squares of the errors still adds up whether it’s one location or thousands, and whatever the temperatures. Variance just adds up unless there’s covariance present. In terms of precision there’s probably very little covariance (in terms of accuracy there’s probably lots of covariance…).

        https://en.wikipedia.org/wiki/Variance#Basic_properties

        Var(aX+bY)=a^2*Var(X)+b^2*Var(Y)+2ab*Cov(X,Y)

        In our case a and b are 1 (there’s no scaling factor), and the covariance of random reading errors is likely zero, barring any argument that humans tend to round up or down in a biased manner (which nobody has made here yet). So the variances of random measurement error add up across multiple thermometers at different locations.

        Also, adding a constant value (say the different temperatures) doesn’t change variance:

        Var(X+a)=Var(X).

        Which means the variance is the same whether it’s 15degC reading or 33degC reading, unless you want to argue that reading errors are larger at 33degC than at 15degC. (maybe the operator is shivering with a -40degC reading?…)

        In your example, if your operators were off the true value by +1 on the first measurement, -1 on the second measurement, exact on the third, and so on (rectangular or normal distribution), then the precision of the average of those measurements is 1/sqrt(6) ≈ 0.4. The precision increases precisely because different operators are making different random errors. If they were making the same errors, the covariance term would not be zero. If you wish to argue that the covariance is not zero, then please do so (in the context of precision, please; I am quite sure that in terms of accuracy the covariance is not zero).

        In fact I argue it’s necessary that there be different temperatures, locations, and operators; otherwise you don’t have enough noise for the process of increasing precision to work (e.g. if you always read a temperature of 33 at your last location with no variance, you are thus stuck at 33 degC +/- instrument resolution).

        I’m quite aware that you don’t trust averaging of multiple locations. I don’t either*. However your reason for the distrust isn’t the correct one, and you do the lukewarmer’s side of the argument no favors if you use incorrect arguments.

        Peter

        * I think there’s spatial aliasing and other horrible Nyquist related problems going on in addition to all the microsite problems, post hoc corrections etc etc.. The satellite folks have a much better chance of getting this correct (far simpler system).
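Peter's variance argument can be checked numerically (a sketch with made-up station temperatures and a uniform ±0.5 reading-error model, both illustrative assumptions): adding a constant offset per station leaves the error variance unchanged, Var(X+a) = Var(X), so the error of the mean across stations still shrinks as 1/sqrt(N).

```python
import random
import statistics

random.seed(7)

# Made-up "true" temperatures at six different stations (different measurands).
station_temps = [15.0, 23.0, 24.0, 25.0, 32.0, 33.0]

def observed_mean():
    """One network-wide snapshot: each station read once, with an
    independent uniform reading error of +/- 0.5."""
    readings = [t + random.uniform(-0.5, 0.5) for t in station_temps]
    return statistics.mean(readings)

true_mean = statistics.mean(station_temps)
errors = [observed_mean() - true_mean for _ in range(20_000)]

# Per-reading error sd is 0.5/sqrt(3) ~= 0.289; across 6 independent stations
# the error of the mean shrinks by sqrt(6), to roughly 0.118.
print(round(statistics.stdev(errors), 3))
```

This bears only on the random-reading-error part of the dispute; it says nothing about correlated errors (UHI, siting, calibration), which both sides agree do not average away.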

      • Peter, if you treat a rectangular distribution using the math of random error, of course it will follow the same 1/sqrtN trend. Treating a constant resolution as though it were stationary high-frequency noise and then claiming it behaves as high frequency noise is just begging the question.

        In the JCGM quote, where do you see anything concerning noise in, “there is a range of input signals to the instrument spanning a known interval that would give the same indication.“?

        That statement indicates a limitation inherent in the instrument. The limit is resolution. An instrument cannot differentiate among signals that are more closely spaced than its resolution limit. Cannot differentiate means indistinguishable. The instrument cannot discern one magnitude from another and cannot differentiate among them. That will not change no matter how many times the measurement is repeated because it is a limitation of the instrument itself. This instrumental limit will not change no matter the averaging of any arbitrary number of measurements.

        A camera with the optics of your eye will never resolve anything at atomic scale, no matter how many photographs one takes. And yet, this is what you’re asserting: arbitrarily sharp resolution, indefinitely improving with the number of measurements. Atomic resolution with eyeball optics.

        Your position is not correct, Peter. Instruments are not capable of arbitrary resolution with an increasing number of measurements.

        With this comment, “The addition of random noise is what gets you the additional resolution as you add in more measurements. The sources you quote don’t discuss that aspect.” you’re supposing that a noise divergence added onto the constant limit of resolution, degrades the resolution of an individual measurement, but improves resolution when many measurements are averaged. Add a set of reduced accuracy measurements, obtain an increased accuracy measurement. Does that really seem reasonable?

        You’re claiming that stationary high frequency noise imposed on top of a constant uncertainty due to the limit of instrumental detection, improves a measurement past the detection limit of the instrument itself. The physical information is not present in any one measurement, but magically appears when many are averaged. How does that work? How does physical information appear from nowhere?

        Stationary noise diminishes with repeated measurements, true. A constant resolution limit does not diminish with repeated measurements, also true. Stationary noise imposed on top of a constant resolution limit diminishes with repeated measurements to the constant resolution limit, and no further. I showed that here.

        Your position is impossible. In an average, random noise imposed on top of a constant resolution limit decrements to the constant limit. The reason the sources do not discuss an aspect that, “The addition of random noise is what gets you the additional resolution as you add in more measurements” is because that aspect is not true. The adding together of measurements that do not contain information cannot create information.

        You asked for citations. I provided them. You’ve rejected them.

        Here’s another, again in the context of spectrophotometers. This one states, “the size of the resolution element (δλ) is set by the bandwidth limit imposed by the dispersing element.” where “λ” is wavelength. The “dispersing element” converts white light into a spread of wavelengths (energies, really). That bandwidth limit — the resolution limit imposed by the instrumental design — will never be reduced no matter how many measurements are taken and averaged.

        Adding random noise and then decrementing it away in an average does not change the capability of the instrument nor remove the effects of its resolution limit, nor modify or remove the instrumental δλ.

        Your position is an impossibility.

      • Peter, you wrote, “In your example, if your operators were off the true value by +1 on the first measurement, -1 on the second measurement, exact on the third and so on (rectangular or normal distribution)…

        But that’s not what the rectangular resolution limit means. The rectangular limit means that the instrument cannot distinguish anything within that limit. It’s not a distribution. It denotes a lack of physical information. The physical information is literally not present in the measurement. Nothing is there to extract. An average does not create information that is not present in any of the individual measurements.

        Your mistake is evident in your equating of “(rectangular or normal distribution)”. You’re making an equivalence that is not correct. The rectangular resolution limit is neither normal nor a distribution. The error in your analysis follows from that initial mistake.

        Instrumental resolution is not an error. It’s not treated as an error. It does not follow random error statistics (your variance calculation).

        Instrumental resolution defines the lower limit of the physical information content of the measurement. One cannot create physical information by averaging together measurements that do not have that information within their structure.

      • A camera with the optics of your eye will never resolve anything at atomic scale, no matter how many photographs one takes.

        You are confusing Nyquist problems with precision problems. Two different things. Precision averaged over the entire sample length is a DC value and nowhere near the Nyquist limit. The average color of all those atoms can be ascertained just fine even though the individual atoms can’t be seen.

        There are also sequential sampling oscilloscopes that sample at a rate of 200 kHz (5 microsecond sample spacing) yet achieve picosecond resolution – far, far past Nyquist. So your “impossible” was made possible on the order of 50 years ago. I used these systems all the time back in the 1990s at picosecond resolution. It’s the same principle as moving your finger across a surface to get a better feel for the texture. In the case of the oscilloscope, they move the trigger by a picosecond and sample the same signal again, thousands of times, before showing you what’s on the screen. (It has to be a repetitive signal, of course.)

        https://en.wikipedia.org/wiki/Oscilloscope_types#Digital_sampling_oscilloscopes

        http://cp.literature.agilent.com/litweb/pdf/5989-8794EN.pdf
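
        A minimal sketch of the equivalent-time sampling idea (the 1 GHz signal, 1 ps trigger step, and 1000 points are invented purely for illustration):

```python
import math

# Equivalent-time sampling sketch: the acquisition rate is slow (say 200 kHz),
# but because the signal repeats identically on every trigger, delaying the
# trigger by 1 ps on each successive acquisition builds up a trace with 1 ps
# effective resolution.
SIGNAL_FREQ = 1.0e9      # hypothetical 1 GHz repetitive input
TRIGGER_STEP = 1.0e-12   # 1 ps trigger delay increment

def signal(t):
    """The repetitive input: identical waveform on every trigger."""
    return math.sin(2.0 * math.pi * SIGNAL_FREQ * t)

# One point per (slow) acquisition; point k lands at offset k * TRIGGER_STEP.
reconstructed = [signal(k * TRIGGER_STEP) for k in range(1000)]

# The time axis of the reconstruction is set by the trigger step alone;
# the slow acquisition rate never appears in it.
print(len(reconstructed), "points spanning", len(reconstructed) * TRIGGER_STEP, "s")
```

        Note that the effective resolution here comes entirely from the trigger step, not from the sample rate – which is, in effect, the point both sides of this exchange end up making.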

        If you want to argue that sampling a surface of temperatures cannot resolve the mean temperature accurately – I agree! For the same reason as in your camera problem. It has nothing to do with instrument resolution, and everything to do with surface resolution (i.e. not enough measurement stations). My very early results with Matlab show that an autocorrelated random surface has outliers of the mean temperature that are far higher than the outliers of a non-correlated random surface. Since the surface of the earth is highly autocorrelated (it’s fractal), the surface temperature is also highly correlated, and this means that sampling a surface of temperatures accurately is extremely difficult. Again, nothing to do with the instrumentation precision, but everything to do with how many samples you are taking on the surface. Geez, I sure hope the satellite folks got this right. The surface station folks certainly did not.

        The entire modern world of measurement electronics relies on your “impossible”. I appear to be unable to teach you, so at this point, while I’ve been inspired to complete my Monte Carlo framework and my surface sampling test framework (I thank you for the inspiration), I’m at a loss as to how to get you to understand that your “impossible” happens daily. For example, the iPhone I’m listening to music on right now has an output DAC that has 1 bit of resolution, but is effectively PWM (really a more complicated feedback loop than that) so that the noise is pushed out beyond hearing range, and I hear fairly clean 16-bit* music out of that one bit of resolution on the comparator.

        Pete

        *Given it’s an iPhone, 16 bits may be pushing it. My high-end preamp has a 24-bit sigma-delta DAC, however, and sounds really beautiful. All from a 1-bit comparator…
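
        The 1-bit DAC principle is simple enough to sketch as a toy first-order sigma-delta modulator (an illustration of the idea only, not a real DAC design; the 0.3 input level and the stream length are arbitrary):

```python
def sigma_delta(samples):
    """Toy first-order sigma-delta modulator: 1-bit output stream of +/-1."""
    integrator = 0.0
    out = []
    for x in samples:                              # input assumed in [-1, 1]
        bit = 1.0 if integrator >= 0.0 else -1.0   # the 1-bit comparator
        out.append(bit)
        integrator += x - bit                      # feedback drives the running error to zero
    return out

# A DC input of 0.3, heavily oversampled: each output bit is as coarse as a
# measurement can possibly be, yet the average of the stream recovers the
# input to a small fraction of a bit.
bits = sigma_delta([0.3] * 10000)
recovered = sum(bits) / len(bits)
print(recovered)
```

        Low-pass filtering the bitstream (here, a plain average) is what pushes the quantization noise out of band, which is the mechanism the comment describes for audio DACs.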

      • But that’s not what the rectangular resolution limit means. The rectangular limit means that the instrument cannot distinguish anything within that limit. It’s not a distribution. It denotes a lack of physical information. The physical information is literally not present in the measurement. Nothing is there to extract. An average does not create information that is not present in any of the individual measurements.

        Yes, the information is there. Not much, but some.

        If the true temperature is 25.6 degC, you’ll get a reading of 26 degC. If the true temperature is 25.4 degC, you’ll get a reading of 25 degC. Do that with enough thermometers at randomly different temperatures and your precision exceeds the ±1 degC of each individual instrument. If you use the same instrument with non-varying temperatures then, yes, you are stuck with the resolution of that one instrument.

        What we’re talking about here is effectively quantization noise. A well studied area.

        https://en.wikipedia.org/wiki/Quantization_%28signal_processing%29#Quantization_noise_model

        http://classes.engineering.wustl.edu/ese488/Lectures/Lecture5a_QNoise.pdf

        “Low Pass Filtering the ADC output will reduce the noise power and yield more effective bits.”

        Averaging is a type of low-pass filtering.

        Peter
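
        The quantization-noise claim above can be sketched numerically (a minimal Python illustration; the 20 ±5 degC temperature spread, the reading count, and the ideal rounding are all assumptions made for the sketch):

```python
import random

# Many 1-degC-resolution readings of randomly varying true temperatures:
# the rounding errors are uncorrelated across readings, so the mean of the
# rounded readings lands far inside the 1-degree resolution of any single
# reading.
random.seed(1)
true_temps = [20.0 + random.uniform(-5.0, 5.0) for _ in range(100000)]
readings = [float(round(t)) for t in true_temps]   # nearest whole degree

true_mean = sum(true_temps) / len(true_temps)
read_mean = sum(readings) / len(readings)
print(abs(read_mean - true_mean))   # typically on the order of 0.001 degC
```

        The caveat stated in the comment is essential to this working: if the true value never moves, every reading rounds the same way and the averaging gains nothing.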

      • Peter, in your oscilloscope example, if they’ve got a trigger with picosecond time resolution, there is no mystery about resolving signal phase beyond a kilohertz sampling rate.

        The picosecond time step of your trigger determines the limit of resolution of your experiment, not the kilohertz sampling rate of the oscilloscope. Your example doesn’t evidence your claim at all.

        The source of resolution in your oscilloscope example was just obscured in the way you presented it. So, although your oscilloscope example is irrelevant to your point, it pretty much makes mine. That is, resolution is determined by the instrument — your picosecond trigger.

        The atomic resolution with eyeball optics is not a Nyquist problem. It’s a pixel problem. You’re continually asserting that multiple measurements increase accuracy past instrumental resolution. This is identical with claiming that information is discriminated within a single pixel.

        In terms of the eyeball optics, you’re claiming that multiple photographs averaged together will produce pictures of arbitrarily sharp resolution. An example of this claim is in your early post where you wrote, “with enough [±1 C resolution] thermometers the average reading will converge on the [±0.1 C] accuracy and precision of the original calibration source.” This is identical with an assertion that the pixel limit of a detector (eyeball optics) can be exceeded.

        You’ve already pretty much conceded my point by writing, “The average color of all those atoms can be ascertained just fine even though the individual atoms can’t be seen.” But your original claim is not about getting an average smear from a set of points. It’s about resolving a point by taking the average of a set of smears. You’ve now implicitly agreed that’s not possible.

        In your subsequent post you wrote that, “the information is there” below the detection limit of an instrument.

        I’m a physical methods experimental chemist, and don’t know a single experimental scientist who would assert that (except maybe consensus climate scientists who don’t seem to actually take measurements). Averaging improves signal/noise, never instrumental resolution.

        You continued, “If the true temperature is 25.6 degC, you’ll get a reading of 26 degC. If the true temperature is 25.4 degC, you’ll get a reading of 25 degC.”

        That’s not how it works. In real experimental life, you don’t know the true temperature. If the thermometer has ±1 C accuracy, and a resolution of ±0.5 C, then you’ll get the same reading no matter whether the true temperature is 25.6 C or 25.4 C. This is because a ±0.5 C thermometer is incapable of distinguishing those temperatures.

        Alternatively, if you get a reading of 25.6 C, you don’t know where the true temperature is within ±0.5 C. Neither does the thermometer, because it cannot distinguish among temperatures within that range.

        This limit is true no matter whether your thermometer is digital or analogue.

        That’s what the JCGM caution meant by, “a range of input signals to the instrument spanning a known interval that would give the same indication.” It remains true, and is the core of the issue, despite that you set it aside.

        Your example above again makes the random error assumption that produces your conclusion. You’ve continually assumed your conclusion, throughout.

        The physical limitation of instrumental resolution violates the assumptions of random error. Your continual recourse to random error statistics is entirely misplaced.

      • My very early results with Matlab show that an autocorrelated random surface has outliers of the mean temperature that are far higher than the outliers of a non-correlated random surface.

        Some more results. The mean of the temperature of an autocorrelated surface appears to have a std error that is about 1.7x that of a noncorrelated surface, and it doesn’t appear to matter what autocorrelation length I use. I’m not sure why it’s 1.7x, but this class (Penn State STAT 510) might show why it’s > 1.0:

        https://onlinecourses.science.psu.edu/stat510/node/60

        Var(x_t) = σ_w^2 / (1 − ϕ_1^2)

        The bigger the dependence on previous values, the more the true variance increases. In my case ϕ_1 appears to be about 0.65, so that formula gives about a 1.7x increase in the variance.

        I’m just going to assume that the people crunching the historical temperature records assume no correlation (almost everyone outside the stats fields makes this mistake), and thus any error bars I see I’m going to mentally multiply by 1.7x.

        Now off to find out why it was 1.7x. Probably something to do with the parameter choice for my surface…

        Peter
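
        The inflation of the standard error under autocorrelation is easy to reproduce in one dimension (an AR(1) series rather than a 2-D surface, so the exact factor will differ from the 1.7x reported above; ϕ = 0.65 is taken from the comment, everything else is illustrative):

```python
import math
import random

random.seed(2)

def ar1_mean(n, phi):
    """Mean of an AR(1) series x[t] = phi * x[t-1] + white noise."""
    x, total = 0.0, 0.0
    for _ in range(n):
        x = phi * x + random.gauss(0.0, 1.0)
        total += x
    return total / n

N, TRIALS, PHI = 500, 2000, 0.65
corr_means = [ar1_mean(N, PHI) for _ in range(TRIALS)]
white_means = [ar1_mean(N, 0.0) for _ in range(TRIALS)]

def std(values):
    m = sum(values) / len(values)
    return math.sqrt(sum((v - m) ** 2 for v in values) / len(values))

# The std error of the mean is inflated well above the uncorrelated case:
# asymptotically the variance of the mean grows by (1 + phi)/(1 - phi) on
# top of the marginal-variance factor 1/(1 - phi^2).
ratio = std(corr_means) / std(white_means)
print(round(ratio, 2))
```

        The point survives the change of setting: naive sqrt(N) error bars are too tight whenever the samples are correlated.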

    • And isn’t it necessary to report the margin of error if going beyond the significant digits, to give the data context?

      No, the rules don’t apply to Climate “Scientists.” Their sanctimonious efforts elevate them above the same lowly rules all other fields of science must follow. Climate “Scientists” are saving the world and cannot be questioned. This is science by polls, science by consensus, science by dictate, science by fiat.

    • TonyL…”Even the super El Nino of 1998 would be lost in the noise, but that is clearly not the case. So the extra resolution must be coming from somewhere.”

      Not necessarily. Just because a flawed method using blunt instruments showed a slight warming one time, when it was supposed to, doesn’t mean it wasn’t still just showing noise… after all, a flawed method showing noise still has a fifty-fifty chance of catching a warming or a cooling.

    • As to the law of large numbers, I understand the claim that if one takes a thousand (N) temperature measurements in a short five-minute span at one location in Maine with no other variables, you can average all N of them, and maybe claim an improvement in precision proportional to √N (with some exceptions).

      However I believe that is different than taking one measurement at a thousand different locations by a thousand different people using a thousand different blunt devices and somehow claiming the average to be a much higher degree of accuracy than the blunt devices allowed.

      Rules (or their misapplication) pertaining to the law of large numbers, normal distribution and the other assumptions necessary for statistics to trump the rules of science and measurement in climate ‘science’ are highly suspect, I would expect, and have given us nothing but ‘noise’ if they claim to be accurate to a hundredth of a degree over a hundred years. The law of Common Sense is of some assistance here. The law of Critical Thinking is also helpful.

    • PeterSable. “Metrology is a very well studied area and the idea of increasing resolution by oversampling is the basis of all modern digital audio equipment and digital instrumentation.”

      I don’t know about your meteorology analysis, but electronics I understand, and I think you are confusing concepts by comparing this to oversampling. In oversampling, an analog wave – a sine wave, for example – is best recreated digitally by sampling it, i.e. taking a voltage measurement, a hundred times instead of ten times during the course of the wave… i.e. OVER TIME.

      I believe you are confusing the number of measurements per second, which digital sampling is based on, with the number of measurements, period. I don’t see temperature measurements per second or any other unit of time as an issue in climate change data.

      Also at issue is how the sampling measurement is taken, with a blunt instrument. In electronics you can sample and copy an exact voltage measurement, a key difference. If you had to measure it manually as with temperature data, that sample measurement is at issue. For example, suppose a sine wave is millivolts peak to peak, but you take each voltage sample using a blunt voltmeter that measures volts instead of millivolts: you will get a lot of blunt numbers in volts. You can average as many of them as you like down to millivolts, but I guarantee you won’t get a meaningful reproduced sound at the output that is accurate down to the millivolts, no matter how many millions of blunt measurements you take and try to average. If all your measurements are either .2 or .3 volts because you used a blunt instrument, you can’t accurately reproduce a sine wave that was only .08 volts pp no matter how many blunt readings you take and average. Its margin of error will be high… noisy output.

      • Also at issue is how the sampling measurement is taken, with a blunt instrument. In electronics you can sample and copy an exact voltage measurement, a key difference.

        Sigma delta converters take the blunt instrument of a comparator (1 bit of resolution, i.e. “is it higher or lower than value X?”) and turn it into lots of bits. One at a time, but really fast, over a period of time. The output is literally a stream of 1s and 0s – each individually has as little precision as is mathematically possible, but taken as a group these individual 1s and 0s have amazing precision. I know it’s somewhat hard to understand, but there are lots of resources out there to explain it (did you try Wikipedia?), and if you can’t understand the finger explanation then I’m at a loss; that usually works on most people. (Note that in the finger example your nervous system is sampling at different times.)

        I believe you are confusing the number of measurements per second, which digital sampling is based on, with the number of measurements, period.

        There’s a continuous tradeoff between measuring amplitude, frequency, and time location. So I’m not confused; most people don’t understand signal processing, and it’s really hard to teach and learn. Feel free to accuse me of being a bad teacher :-). Note that sigma delta converters, which are just comparing the signal to a calibrated voltage source and producing a 1-bit output, lose precision at higher frequencies. Hopefully that’s obvious in the graphs of the paper I posted.

        The same principle applies to a thermometer. The thermometer might be producing a 5–6 bit output (e.g. 32–64 possible readings) instead of a 1-bit output, but the math still applies. This was the basis for the ECG system I was working on many moons ago.

        However I believe that is different than taking one measurement at a thousand different locations by a thousand different people using a thousand different blunt devices and somehow claiming the average

        Nope, the math for uncorrelated errors is identical. In fact the thousand people produce uncorrelated noise, which helps the resolution. Your hypothetical single thermometer in Maine sampled really fast over 5 minutes probably still has a resolution of +/-1 degC, because the output is always, say, -2 degC – it’s the same operator and the same value for 5 minutes. You might be able to argue there’s a higher chance of correlated errors, but please make that argument, not this one; you are making the warmists’ job easier with incorrect arguments.
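
        That parked-reading case (the output always, say, -2 degC) is the one situation where averaging buys nothing; a three-line sketch, with the -2.3 degC true value invented for illustration:

```python
# When the true value never moves and the instrument always rounds it the
# same way, the quantization errors are perfectly correlated, and averaging
# gains nothing: the error stays at the single-reading error forever.
true_temp = -2.3
readings = [float(round(true_temp)) for _ in range(100000)]   # always -2.0
mean = sum(readings) / len(readings)
print(mean - true_temp)   # 0.3 degC error, regardless of sample count
```

        Contrast this with readings of randomly varying true values, where the rounding errors decorrelate and the mean does tighten.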

        if they claim to be accurate to a hundredth of a degree over a hundred years.

        Now that’s confusion of accuracy and precision. If you want to argue that calibration methods and sources are not consistently accurate over the last 150 years, I suspect you might be correct. It’s a hypothesis that you could prove with data. There are also 10–15 other arguments about UHI, site placement, microsite problems, spatial resolution, etc. that change over time, and that truly represent accuracy errors that change over the 150 years. Measurement precision is not one of those problems, however.

        Just to be clear, the precision of a hundred +/-1 degC distributed thermometers measured with uncorrelated errors is +/-0.1 degC. The accuracy is God knows what, because of site placement, shade, spatial resolution, UHI, calibration consistency over the time period, etc., etc. Heck, as I’ve shown, just the adjustments from 2005–2015 for GISS account for a 0.2 degC/century trend change, so the analysis side is also introducing errors. I suspect the accuracy error of the trend is on the order of +/-1 degC. The precision error, however, is very small.

        https://en.wikipedia.org/wiki/Accuracy_and_precision

        Peter

      • We’re not talking accuracy and precision, Peter. We’re talking precision and instrumental resolution. The mathematics of uncorrelated error does not apply to the limit of instrumental resolution.

    • Someone who is now a well-known climate scientist, when a similar point was raised, told the engineer concerned to go back to fixing washing machines or whatever engineers do and leave science to scientists.
      The same engineer also later raised the question of the response times of electronic versus traditional thermometers, given that the electronic ones produce readings as much as a degree higher on transient peaks.

    • It is perfectly possible to take the average of 100 readings accurate to the nearest degree over a period of time and get an average quoted to one hundredth of a degree whose validity is probably better than a tenth of a degree.

      As a boy in physics class, we split into pairs and each constructed a pendulum, and measured the period as the average time for 300 swings to the nearest 1/10 second with stopwatches.

      Each of our averages gave us a figure for ‘g’ that was not hugely accurate (1-2 decimal places IIRC), but the average of the whole class (with the dunces outlier removed) was accurate to 3 decimal places.

      There are reasons to attack warmist data: This however is not one of them.
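
      A quick simulation of that classroom exercise (the pendulum length, timing scatter, and class size below are guesses, purely to show the mechanism):

```python
import math
import random

random.seed(3)
G_TRUE, LENGTH = 9.80665, 1.0                        # true g, 1 m pendulum
PERIOD = 2.0 * math.pi * math.sqrt(LENGTH / G_TRUE)  # about 2.006 s

def pair_estimate():
    """One pair: time 300 swings, read the stopwatch to the nearest 0.1 s."""
    t300 = 300.0 * PERIOD + random.gauss(0.0, 0.3)   # reaction-time scatter
    t300 = round(t300, 1)                            # 0.1 s stopwatch readout
    period = t300 / 300.0
    return LENGTH * (2.0 * math.pi / period) ** 2    # g = 4*pi^2*L / T^2

# Each pair is only good to a couple of decimal places; the class average
# of 30 pairs comes out considerably tighter.
class_g = sum(pair_estimate() for _ in range(30)) / 30.0
print(round(class_g, 3))
```

      Timing 300 swings is itself the averaging trick: the 0.1 s readout error is divided by 300 before it ever touches the period estimate.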

      • As a boy in physics class, we split into pairs and each constructed a pendulum, and measured the period as the average time for 300 swings to the nearest 1/10 second with stopwatches.

        Each of our averages gave us a figure for ‘g’ that was not hugely accurate (1-2 decimal places IIRC), but the average of the whole class (with the dunces outlier removed) was accurate to 3 decimal places.

        Thank you, a far better example than mine. Note that your stopwatches and fingers pressing the buttons are far more accurate (but not precise in the way you used them individually!) than thermometer calibration.

      • Leo and Peter.

        It still strikes me that all you are both saying is the same thing others have pointed out: that large sampling can, IN THE RIGHT SITUATIONS, increase accuracy for the intended measurement. But you have both qualified that with words like “can” and “possible” and “theory”, and others, which we already know.

        Neither of you has, if I understand your responses, addressed whether you agree that the correct assumptions and ground rules are sufficiently present to justify the accuracy claimed in climate science for climate data a hundred years old; you have just said it’s possible. Unless you are saying that ANY multiple readings in any situation can be averaged out to achieve accuracy much greater than the calibrated instrument allows. Others here believe that the principles that allow the use of large numbers have been misapplied, and in fact I would still suggest that some of your own criteria and analogies are not necessarily present in 100-year-old temperature data… but I’m not committed to that thought, and they may be harmless distinctions as well.

        Which is why the question I raised initially about climate data still is on the table. “Is there a ready and easy source explaining why that is good science for climate science in general, without getting into the footnotes of formal papers?” Or are there papers that take that issue to task?

        Peter, sorry, it was about 2 am when I typed “meteorology”… I’m familiar with metrology, but familiar is as far as it goes. Appreciate your weighing in, with others.

      • B, both Peter and Leo are making arguments about the decrease in random error that follows from averaging multiple measurements. Neither is relevant to the problem of instrumental resolution.

      • Neither of you has, if I understand your responses, addressed whether you agree that the correct assumptions and ground rules are sufficiently present to justify the accuracy claimed in climate science for climate data a hundred years old

        I don’t agree that the claimed accuracy of the 100-year-old data is valid. But not for the instrument resolution reasons being discussed here. The data is suspect for spatial and time sampling reasons (Nyquist), site placement reasons, UHI reasons, post-hoc adjustment reasons, and I’ve probably forgotten some of the reasons; there are lots.

        However I believe, and can show mathematically and numerically, that the precision of the data 100 years ago, taken as an aggregate, is better than a single instrument’s precision.

        It’s useless to have precision of +/- 0.1degC when the accuracy is +/- 1 degC. You’re just more precisely off target…

        At the same time, arguing about precision when there are tons of accuracy problems is not good; there are so many other, better ways to win the argument that are more likely to be correct.

        Peter

      • You’re the only one arguing precision, Peter, when the point is about instrumental resolution. Your invariable precision-based approach to the problem of resolution is entirely misplaced.

  8. Lord M, thanks for the thorough review.
    World government? Perhaps western hemisphere’s government; Russia, China, India and few others would never fully comply.
    Fiddling the data is nothing new; it happens all the time – the financial industry, company reports, all kinds of surveys and statistical reports. Even I encouraged the Met Office to warm up the CET, and they did it! I am surprised at the modest amounts for the global temperature data; many individual stations get far greater hits.
    Warming is good, if not in reality, then the virtual world is the next best thing.

  9. Brain, as you should know if you’d bothered to read (beyond picking out dates) any of Lord M’s previous postings on the pause, the great pause is determined by working from the present backward to find the earliest date from which the trend is non-positive (Lord M explains it better; just read, and for once try to comprehend, his many previous posts where he explains the concept).

    • Brian, if you’d actually bothered reading Lord M’s posts for comprehension instead of looking to pick nits based on ignorance, you’d already know the answer:
      On June 3rd 2015 the pause started in Dec 1996 because the slope from Dec 96 to May 15 is negative, whereas today it starts in Jan 1997 because the Dec 96–Jun 15 slope is positive but the Jan 97–Jun 15 slope is negative. Remember, the pause is the length of time over which the slope is non-positive, *starting from the present* and working backwards, as Lord M has explained many, many times, as you should well know by now.

    • Brian, the exact start date of the pause is really not that important. If one chooses a different metric to define the pause, one gets a slightly different answer. For example, the warming in the RSS data since June of 1996 (19 years) is at 0.1 C/century. I think most people would not call that “warming”. The lack of any statistically significant warming goes back much further.

      The main point is that any one of these metrics shows the planet is not warming dangerously. Take your pick. They all say basically the same thing and those who try to use deflection techniques only look foolish in the process.

    • Sorry Brain, your question was answered; you’ve merely chosen to be willfully ignorant on the subject.

    • Wrong Brian, the start date is part of the OUTPUT of the theory. Do you relish looking this dense?

    • Brain: “The problem is that Monckton uses the SAME method/metric and gets two different start dates. Doesn’t that suggest that his analysis is flawed?”

      Again, Brian, you are clearly showing your willful ignorance: despite Lord M explaining it on many past occasions (and myself and others explaining it to you here), you continue not even to attempt to understand what his analysis *IS*.

    • Brian says: “John says: ‘Brian, the exact start date of the pause is really not that important.’”

      Not content to fail to comprehend what Lord M has explained on numerous previous occasions, you can’t even comprehend who said what in this conversation. Perhaps I was being too generous in describing your continued ignorance as willful. (Hint: I didn’t say what you just quoted me as saying.)

    • Brain: “So, look at the two quotes from Monckton’s writings.”

      Again, you are showing your willful (I’m being generous) ignorance here. You are pointing to the OUTPUT/end point of Lord M’s calculation and pretending it’s the INPUT/start point.
      As Lord M has explained, and as I and many other posters here have repeatedly pointed out to you, the INPUT/start point is the present day, from which one calculates backwards to find the length of the pause based on *present day* data. Once you understand that, you’ll understand how much of an idiot you have been making yourself look with your willful ignorance/trolling.

    • Or to tackle your ignorance from another angle, Brian, you say:
      “One is wrong, and one is correct.
      …
      Which one of the two statements made by Monckton is correct?”

      The answer is BOTH statements are correct but only for the set of data against which they were made.

    • I think that what Brian is trying to point out is that, using this method, it appears that it “might” be possible to have a creeping start date for the pause, therefore allowing for a continued pause line that slides into the future even though temps are increasing behind the line.

    • Brian, “your world” apparently is one where willful ignorance is a virtue. You are comparing apples to oranges. Historical events (such as the bombing of Pearl Harbor) and analysis of data (such as temperature data – a set of data that is ever growing, so the results of the analysis change as the set changes) are two different things.

    • “Brian G Valentine August 4, 2015 at 10:45 am
      John, in my world, events that start in the past start on a specific date. I guess in your world the past depends on the present. Fine with me.
      …”

      How odd, git; just what world is that?
      To help you decide, here are some of the current situations:
      A) scientific; where science is forever updated and nothing is set in stone, ever!
      B) beliefs and faith; where science is avoided, but everything changes so long as the message remains the same. Perhaps this is the area you’re allegedly accustomed to?
      –Under beliefs;
      — a) propaganda and bad science are declared
      — b) People, ideas and science are declared bad and demonized aggressively.
      — c) Salvation is promised to those who are faithful, damnation for all others.
      — d) Anything and everything is subject to change as long as the damnation/salvation song remains the same. e.g. 2° C rise in the global average temperature means disaster to all. Meanwhile, behind the scenes, the land and ocean temperature databases undergo constant changes; all intended to assist bringing about an apparent rise in temperature.

      So which are you?
      Science, which Lord Monckton inhabits and actively supports,
      or beliefs, where the alarmists dwell, gnawing their bitter ends along with any friends who happen to follow any scientific path, such as open discussion?

    • Dear Brain(less),

      December 1996 ends at midnight on December 31. January 1997 starts at midnight on December 31. Somewhere in the indefinable space between midnight and midnight they come together. Just between us let’s settle on a one second gap between December 1996 and January 1997. So, to parody your screeching rants, “How can Monckton have two different starting points that are one whole second apart? It’s disingenuous, it’s dishonest, it’s splutter, splutter, splutter…I must be an idiot.”

      And I can only say…yes you are!

      pbh

    • LOL thanks brian. That was hilarious to see your “logic” persist even after you had a very simple and accurate explanation given.

    • The answer to your question Brian is: Yes it did.

      The “pause” runs from some current date back to the start of the period of no statistically significant trend other than zero. That would mean no statistically significant positive or negative trend.

      The starting date depends on the ending date, and Lord M of B has reported both Dec 1996, and Jan 1997 as start dates, for particular ending dates for which he has reported the answer. I haven’t checked the complete series, but I’d bet that M of B could give us a complete list of all of the months that have qualified as the start of the pause, depending on what end date month he reported on.

      If that is too esoteric for you to grasp, I would take up crochet instead.

      g

    • can you explain to me how events happening today determine when something started in the past?

      With pleasure. Let me use an example.

      Renewed warming started yesterday just after breakfast. You noticed, of course. After all, it is a definite event – the time at which the pause ended and warming resumed. And according to you, such events happen at a definite time without reference to anything that might happen afterwards. So I am sure you noticed when warming resumed yesterday. It was a big event, celebrated by climate activists everywhere. Lots of balloons and streamers and… well, actually, no. Because if they did that and next year’s temperatures were cooler than this year’s, they’d look pretty stupid, yes?

      The point is that things like the date of the start of the pause and the date of the end of the pause (when that happens), are not events that you notice at the time. The ONLY way to detect such events is by looking at what happens subsequently. They are defined by analysing the record.

    • brian says ” can you explain to me how events happening today determine when something started in the past? ”

      it might be an idea to ask steve mosher, he seems to know how to do this quite well ;)

  10. How is it possible for events happening today to alter the events of the past?

    From December 1996 to May 2015, the slope was negative. However, with a relatively high June anomaly, the slope from December 1996 to June 2015 is positive, while it is still negative from January 1997 to June 2015.

    By the way, UAH showed a huge drop in July. And if RSS shows a similar drop, then the pause for RSS will be 18 years and 7 months, from January 1997 to July 2015.
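
    The backwards-looking calculation being described can be written down directly; a sketch with a synthetic anomaly series (the real calculation uses the RSS or UAH monthly data, which are not reproduced here):

```python
def ols_slope(y):
    """Ordinary least-squares slope of y against 0, 1, ..., n-1."""
    n = len(y)
    xm = (n - 1) / 2.0
    ym = sum(y) / n
    num = sum((i - xm) * (v - ym) for i, v in enumerate(y))
    den = sum((i - xm) ** 2 for i in range(n))
    return num / den

def pause_start(anoms, min_len=12):
    """Earliest index whose trend to the present is non-positive, or None."""
    for start in range(len(anoms) - min_len + 1):
        if ols_slope(anoms[start:]) <= 0.0:
            return start
    return None

# Synthetic series: 100 months of warming ramp, then 100 flat months.
series = [0.01 * i for i in range(100)] + [1.0] * 100
print(pause_start(series))   # 100: the pause begins where the ramp ends
```

    This makes plain why the start date is an output, not an input: append one more month of data to the series and the earliest qualifying start month can move.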

  11. They are driving the clown car faster and faster as it goes down the falling temperatures road…boom.

  12. What should be done – but the conviction to do it is lacking – is to refuse to accept the data the AGW enthusiasts are putting out. End of story.

    For example, I would take the drastic action of not allowing their manipulated data to appear on this site.

    • Brian, your willful ignorance has no bearing on Lord M’s theories. He has explained numerous times how he determines when the pause starts; that you choose to ignore those explanations and insist on continuing forward in your ignorance is your problem, not Lord M’s.

    • Brian, do you realize how silly you look? Go ahead and use December 1996. You get a warming of 0.01 C/century from RSS data. Do you not think that qualifies as a pause?

    • Brian G Valentine: Your trolling is a bit too obvious. Did no one teach you the more subtle approach?

    • When new data justifies a change in theory, the theory should be adjusted

      …unless said adjustment challenges the job security, salaries, benefits or retirement plans of taxpayer-funded True Believers of catastrophic anthropogenic climate change.

    • Since Monckton’s exercise is looking for two points on a curve that are the same value with zero slope between them, with the point on the end of the curve defining the number, there is no need to change anything. When the end point changes, the corresponding point on the curve will necessarily change. The “distance” along the Y axis between the points can, and must, change; thus “the pause” gets longer and shorter. If you can’t follow his logic after reading how he defines it, you are acting like a willfully ignorant troll, not looking for an explanation at all, just like B. Valentine has been doing, because in all honesty, I don’t think it is humanly possible to have a head as dense as granite.

    • The new data definitely shows that Monckton’s analysis is flawed, since his analysis gives two different start dates for the pause.

      If you had asked me yesterday what the date was, I would have said August 3. But if you were to ask me today, I would say August 4. So what is the real date?

    • What a ridiculous back and forth argument. Even the modelers examine pauses defined as stretches of observational or modeled data that demonstrate a period of time that is characterized by a flat trend. Models, generally, do not show such flat trends that last longer than a decade and a half. If the other side uses the same technique to define pauses in their model output data (and they do), it appears that Brian is arguing using a dead horse. Beat it all you want. It ain’t gonna whinny. And please, catch up on your reading before bringing forth such an accusation. It is you who knows not what you speak of, not Mr. Monckton.

      • Actually, average temperature data are not precise enough to be sure the “pause” is not really a slight increase, or a slight decrease, of the temperature.

        NASA may claim accuracy of +/- 0.1 degrees C., and present average temperature in hundredths of a degree, but that does not make it so.

        We have “adjustment” after “adjustment”.

        We have thermometers in the 1800s that typically read quite low for the starting point.

        We have sailors throwing buckets over the side of ships in shipping lanes.

        We have thousands of surface stations no longer in use,
        and most that remain are improperly sited.

        We have a huge amount of data “infilling”, and NASA ignores their own satellite data, and PhD climate liars present bogus hockey stick charts.

        The temperature and CO2 data are flawed.

        The people who collect the data are biased.

        The modelers are bribed by government grants to predict catastrophes, to get more grants.

        I doubt if average temperature since 1880 has a margin of error less than +/- 1.0 degrees C.

        If so, that means +0.8 degree C. warming since 1880 is meaningless.

        And after all my talk about inaccurate data, and the bad character of the data collectors, I notice that many people with more science education than me continue to debate tiny changes in average temperature, as if that was important!

        Much more important is to step back and observe the actual climate during our lifetimes.

        If there’s more CO2 in the air than 50 years ago, that’s good news for plants.
        Give them even more CO2 !

        If it’s a little warmer than 50 years ago, that’s good news for humans.
        Give us some more warming !

        You don’t need a weatherman to know which way the wind blows. But you do need smarmy PhDs, and inaccurate computer games, as useful idiots / props for politicians seeking more power … by needlessly scaring billions of gullible people about a future CO2 climate catastrophe that doesn’t even matter …

        … because if we believe all the scary predictions, is it not true that we’ll all be dead from DDT, or acid rain, or the hole in the ozone layer, or any other of all those long forgotten false environmental crises that were going to get us … long before the CO2 can kill us?

        Only leftists could lead this climate scare — they are never happy with life on Earth, and always trying to scare people to gain power: “The big bad climate boogeyman is going to get you unless you do everything we say without question — the science is settled — ask no questions — just follow orders”.

        Only fools would fall for this climate scaremongering with the climate as nice as it is in 2015 … but I’m afraid even “deniers” / non-believers waste too much time debating those tiny 0.1 degree C. changes in the average temperature, and often miss the big picture:
        (1) More CO2 is good news for plants.
        (2) Warming is good news for people.

        If the estimated CO2 rise and temperature rise from 1880 to 2015 repeated in the next 135 years,
        then that would be wonderful news for plants and people.

        It’s amazing how good climate news can be twisted to scare people.

    • This Brain(?) is just a troll, let him rant, his question has been answered by a lot of people. Trolls are dense by definition and he is only here to occupy “blog space” and deny others the chance of conducting a meaningful conversation. Take no notice of “Brainless the Troll”!

    • @ Brian G Valentine.

      Brian, like the best way of announcing the result of Miss World, the pause is actually calculated in reverse. It *starts* from now – ie the most recently available complete monthly data, and works backwards into the past as far as the data is showing a negative trend signal. At the point the data shows a warming trend signal (ie warming when viewed in the right chronological direction!) then the pause stops.

      At *that* point we can then say, for example, that the pause in warming goes all the way back from today’s date to July 1997, which would be 18 years and 1 month. So having calculated from the most recent complete monthly data all the way back in time to the point where a warming trend signal is detected, then for the purposes of counting the pause forwards we START from that date and count forwards up to today’s data, thus establishing how long the pause is.

      That’s the reason it can appear to move the proper start date (ie the one 18 years or so ago). It’s due to today’s most up-to-date monthly anomalies. If this month is particularly hot, it will affect the overall signal and could shorten the length of time showing as being without a temperature increase.

      Brian, there are minds immeasurably better informed than mine on here who’ve explained it far better than I have just tried to, but if I can get my thick head around it, I’m damn sure you have no excuse for not getting it too.

    • “Half the harm that is done in this world is due to people who want to feel important. They don’t mean to do harm – but the harm does not interest them. Or they do not see it, or they justify it because they are absorbed in the endless struggle to think well of themselves.”

      -T.S. Eliot

  13. In fact HADCRUT does not generally adjust temperature data itself. Some, such as USHCN, have been previously adjusted by the suppliers.

    What HADCRUT does do is to extend its list of stations. I think they have increased their efforts since Cowtan and Way showed that their limited Arctic coverage was missing the warming there. This file shows data sites that have been added since April 2014. Some is a consequence of the ISTI initiative.

    • Sorry, Nick, are you trying to say that 16 months of actual measurements in a few sites in the Arctic justify all the warming hypothesis? Don’t you think you have to wait, what, 30 years, before you can talk about actuals in the Arctic?

      • No hypothesis, Harry, just numbers and measurement. It has warmed where I am over the last few days. Probably won’t last.

        The method Hadcrut uses assigns hemisphere average behaviour to grid cells with no data. If the Arctic has warmed, and you measure the past with more stations, the average will go up. That’s not a hypothesis about the future, it’s arithmetic about the past.

        And it isn’t 16 months. Those stations are included in the average for as far back as they have data.

      • How does that work, Nick? Does it require an assumption that these grid boxes all behave the same in relation to each other?
        Regional differences in the Arctic are huge on an annual timescale. Look at the amount of ice in Hudson Bay today compared to the East Siberian Sea. There is no relationship, temperature-wise, from one area to another in the Arctic, whether it be a large region or a grid box.

        I accept I am probably not understanding something here, so apologies if the above is nonsense. Despite my flippant comments above, I actually like Steve Mosher’s definition of best estimates for unmeasured areas. The Arctic is a massively under-sampled region, just like the Antarctic; therefore the best estimates for those regions are likely to be not as good as landmass estimates, where extrapolation over such large areas does not occur.

      • bc,
        It’s just the arithmetic of averaging. If you average a property for a set of things (eg grid cells), and leave out the ones for which you have no value, then you get just the average of the rest, which you can see as the same result as if you had assigned that average to the missing.

        Suppose you want an average of a year, but are missing July (summer). If you just average the 11 you have, it will come out too cool. It’s as if you had treated July as an average month, and it’s not. What you should do is use a better estimate – say the long term average for July. You still introduce error through not having the true number, but it is much less. But of course, it’s better if you can incorporate (with care) any info you have about July – eg if you know only half the days.

        You refer to individual differences. That always adds to the uncertainty of averaging, but there’s also the tendency of the effects to cancel.

        Hadcrut doesn’t attempt to estimate the missing cells (that’s where Cowtan and Way showed a better way). However, what Hadcrut is doing is simply to measure more places. That has to be better.

      • Thanks for the explanation, Nick; makes sense in terms of creating a best set of numbers. I will leave it up to others more informed as to how good those final numbers are.
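The averaging arithmetic in this sub-thread can be checked in a few lines: averaging only the grid cells that have data gives exactly the same result as infilling every missing cell with that average. A toy sketch (not HadCRUT’s actual code):

```python
# Toy check: "leave out the missing cells" and "infill the missing cells
# with the average of the rest" produce the identical overall average.

def average(values):
    return sum(values) / len(values)

cells = [1.0, 2.0, None, 4.0, None]   # None = grid cell with no station data

have = [v for v in cells if v is not None]
avg_have = average(have)              # average of the cells with data

# Infill each missing cell with avg_have, then average everything:
infilled = [v if v is not None else avg_have for v in cells]
avg_infilled = average(infilled)

print(avg_have, avg_infilled)         # the two averages are identical
```

This is why leaving cells empty is equivalent to an implicit assumption that they behave like the average of the rest, and why adding real measurements in those cells (as with the new Arctic stations) changes the result.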

    • Has Hadcrut attempted to extend its list of Antarctic stations, as well? It would be interesting to know how many stations per grid there now are in the Arctic and how many in the Antarctic. And, of course, the size of each grid.

  14. “World Government” indeed. If you hand an organisation $100bn a year, that is what you will get, with a large part of that money skimmed off just to preserve the organisation. The EU is probably the best example of this.

    The solution is for countries to not hand over any money, let the organisation approach you to fund specific actions, otherwise it will be you approaching the organisation to beg for some of your money back.

  15. As always, Monckton expresses himself beautifully, going right to the heart of the matter. Behind his clarity of thought and expression, I reckon, is a lot of hard work in marshalling the data and presenting it just-so. Thank you, Christopher. Keep up the good work. If I ruled the world (as Harry Secombe sang) you would be the EU’s climate change commissioner, and would probably do yourself out of a job by – what’s the metaphor I’m looking for? – ah, yes – doing a Canute.

      • I fear you are correct. They are going full bore Lysenko. A very disturbing thing to see unfold. We thought the Cold War was over when the Berlin Wall fell. But, it was just the beginning of the guerrilla campaign.

  16. The only real question is, how do we stop them getting “their world government” (it’s certainly not ours) in Paris?

    • How to stop them?
      Instruct your government to tell theirs to take a hike.

      Campaign against it, vote against it, and make sure your leaders do the same.

    • None of the major nations will ever sign themselves over to UN control. In fact, most of them cannot do it legally. They might sign some kind of treaty, but in the US that is meaningless without congressional ratification, and that will never happen.

  17. Not sure what your problem is with the concept, Brian. Each new month brings in a new data point. The time period from the present and looking backwards for which there is no warming trend – and hence the starting month of “the pause” – depends on the value of this new data point.

    • Again, it appears that Brian may have a point to the extent that if the start date were to keep sliding forward, then there could be the same pause for several additional months or years. But, strictly speaking, it would not be a pause, since the pause line, although flat, would climb. Now one can say that the increase is statistically insignificant and still therefore maintain that it is a (statistical) pause, but that should probably be said.

  18. Simple. The pause is calculated as the longest period back in time from today that shows no trend. A slight increase over the last month or two can reduce the length of that period. No rocket science.

    • To put it another way, the pause starts today and is calculated to see how far into the past it extends.
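The back-calculation described above can be sketched in a few lines. This is an illustrative reconstruction from the description in these comments, not Monckton’s actual code, and the anomaly series is invented:

```python
# Sketch of the pause back-calculation: scan candidate start months from
# the earliest forward, and report the earliest one whose least-squares
# trend to the present is not positive (i.e. the longest "pause").

def ols_slope(y):
    """Least-squares slope of y against its index 0..n-1."""
    n = len(y)
    xbar = (n - 1) / 2
    ybar = sum(y) / n
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

def pause_start(anoms, min_len=24):
    """Earliest index whose trend to the end is <= 0, or None."""
    for start in range(len(anoms) - min_len + 1):
        if ols_slope(anoms[start:]) <= 0:
            return start      # earliest qualifying start = longest pause
    return None

# Invented monthly anomalies: clear warming first, then roughly flat
anoms = [0.0, 0.1, 0.2, 0.3, 0.3, 0.29, 0.31, 0.3, 0.30, 0.29]
print(pause_start(anoms, min_len=3))   # prints 3: the pause spans the flat tail
```

Because the final months enter every candidate window, a warm new data point can disqualify earlier start months and shorten the reported pause, which is exactly the behaviour debated in this thread.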

  19. It is logically impossible for some people to be this stupid.
    The pause “starts” on the day that the temperature gets lower than it is today. Thus the start of the pause is determined by the current temperature and will change as that changes.

  20. They are not “Terrestrial” but “human-adjusted”.

    And it was only a matter of time before the climate extremists decided to throw caution to the wind and try to “prove” man-made warming by fabricating man-made warming.

  21. World governments are pretty hard to form/control. I am expecting this Paris UN Climate Change Conference to fall completely apart.

  22. The level of scientific objectivity of any scientist is inevitably inversely proportional to the level of activism in which he or she engages. The rate of publication of papers by the leading activist scientists is accelerating as ‘Paris’ approaches. These are at best scientifically suspect. At worst, they are probably scientifically worthless. Sadly, they will still be invaluable as propaganda in the MSM and in the halls of unthinking government.

    • No number of papers or evidence of cagw failure could in any way possibly, drive or control the outcome of Paris.
      Those people already have their objectives and they need no scientific support.
      The meet is to decide who pays and how much.
      Paris is about how to obtain and divvy up the spoils.

      • …They think they’ll get to write the history…..
        At the moment they are able to control the literature that children read in which science fiction disaster scenarios are presented as near unalterable fact. The BBC has a formal policy of brainwashing the public who innocently believe that because the charter demands objective and honest unbiased broadcasting the information they get on climate change is fact.
        I think it looks more and more as if they are right.

    • Not really; they know that the progressives will write the history, and they, with the help of the MSM, will blame the Tea Party and Ted Cruz or whoever is the scapegoat of the day.
      Did Dodd or Barney Frank ever get credit for the consequences of requiring the banks to make bad loans to people who could not afford the homes?

    • I have to wonder. Don’t these people realize that History will judge them??

      The pass that society and historians gave the coming-ice-age, ten-year-supply-of-oil, Jimmy Carter-era scaremongers has emboldened them. They feel beyond reproach, and having a clueless cheerleader in the White House makes them feel that they can act with impunity. The nice thing is that never in the history of the earth has CO2 caused catastrophic warming, not even when it was 7,000 ppm. The entire geologic record, even Al Gore’s chart, proves AGW to be a huge hoax. It is only a matter of time before the talk turns to another ice age, and then the questions will start to be asked. Can solar panels and wind farms keep us alive when they are covered in 3 feet of snow?

      • Can solar panels and wind farms keep us alive when they are covered in 3 feet of snow?….
        I got the figures for the 45 kW array at the National Trust building with just over half an inch of snow. It produced 800 W.

  23. I have not personally examined the datasets. Are the data available in adjusted and raw format? Do they provide itemized conversion factors breaking any adjustments out into different categories (sensor bias, regional corrections, missing data point, etc).

    Likely not but thought I’d ask.

  24. It took me a couple of readings of your post to truly grasp its insipid nature. Did you really mean to post this? Did you think about what you were writing? You do realize, don’t you, that January 1997 fell immediately after December 1996? Logically, one can very easily make the case that “since December 1996” and “from January 1997” mean the same thing. If an event were to occur on January 1st, it would be since December and from January. There really is no point to your post other than to grasp at straws in finding fault. If you cannot add to the discussion, then just go away.

    • Logically, one can very easily make the case that “since December 1996″ and “from January 1997″ mean the same thing.

      Perhaps, but that was not the original intention. With the May numbers, the start date was December 1. But with the June number, the start date changed to January 1.

      (I probably should not go there, but when the date was officially given as December 1, it could have been November 20 for example. But we only have monthly numbers from RSS and not daily numbers, however that is a different issue.)

  25. Honestly pal, is this sort of drivel the best you’ve got? The “pause” being referred to in the data is evidenced by a subset of the data that shows no uptrend. As you add new data, time marching on and all that, the set changes and you re-evaluate the subset which satisfies the qualifying criterion. Accordingly, the start date can vary with each re-evaluation (the end date potentially changing with each new data point added). It’s all down to the general volatility of the data, which is pretty plain to see.

    Does your ‘argument’ evidence the level of your CAGW belief?

    Sorry buddy, but it comes across as utterly cretinous.

  26. Yes and, when the BLS puts out numbers for, e.g., unemployment, it generally issues an updated and different value later as new information comes in. Does that mean we do not know what the unemployment numbers are, and they are just throwing darts at a board? Of course not.

    This is a stochastic variable. It cannot be determined with perfect accuracy, only estimated. So, the pause started Jan. 1997 +/- maybe a year. If you still do not understand, go take a class in statistics and then resubmit your question.

  27. Eureka, they have “found” the “missing heat”. It was there all along, hiding in plain sight in the data, which merely needed the proper “tweaking”, until it confessed. Just in time for the Great Paris Climate Gab-and-Shriekfest. How convenient.

    • Indeed. The adjusters have made fools out of a lot of their fellow scientists, many of whom have found reason(s) for lack of warming that actually hasn’t, supposedly, been lacking.

      It’s hard to figure out how to say that so it makes any sense.

      • Of course, every IPCC report will now be rewritten in light of Karl’s “discovery”, and all associated papers pre-Karl will be withdrawn?

    • An image appeared in my mind of an android called ‘Data’ on the rack of a mediaeval Inquisition whilst Torquemada cries ‘Torture that Data until it confesses’.

  28. At the end of the day, there is no “we” in we. The real “we” is a small group of believers from wealthier countries and the major media who are hailing the 1500 pages final regulations of the EPA’s Clean Power Plan as enlightened and forward-looking.

    It has taken me over an hour just to print this testament to environmentalism. It’s anti-fossil fuel and even worse it’s an economic disaster for the 4 billion people living in poverty.

    “Too many fall from great and good / For you to doubt the likelihood.” Robert Frost

  29. Despite all the fiddling and “adjusting”, the average temperature for July 2015 in the UK and Ireland was below the long-term (1981–2010) average by between 0.6 and 1.5 degrees Celsius. Likewise June and May were below average in the UK and Ireland. Once again it’s worth remembering the UK Met Office’s prediction for this period – above average temperatures and a possibility of it being the hottest summer evah! As I’ve said before, some people never get tired of being wrong – but why would you, when you never get called out on it.

    • I quite agree, Chris; they may be able to adjust the figures but they can’t adjust the weather.
      I for one will be watching the coming winter with interest, because there’s a chance that this winter could end up calling the warmists’ bluff. It’s still early days and things could change. But I think there is a chance that we could have below-average temps in an area that spans from the Eastern states to NW Russia this winter.

    • The negative temperature anomalies for the coastal UK waters have been remarkably low, despite being incredibly high the past winter. Cooling seas in summer do not bode well for the coming winter here in the UK.

  30. Back in the 1940-50s there were station keepers who went out dutifully to record the weather conditions for years without a gap. I wonder how they would react upon learning their direct readings had been so massively adjusted as to swamp any possible scientific value? “You have to understand, you think you were observing 74°F but we here seventy years later know it was cooler than that because well, because.” I say this because I am part of a long term medical study. I get my blood pressure taken by the same machine used on my grandparents in 1947. I suspect any attempt to lower their readings now would invalidate the entire study.

    • That’s often my point. These weathermen-of-old recorded data dutifully, I would venture to say with pride in their accuracy. Their data should have been left sacrosanct, absolutely. To change that data and call the new “data” better is nothing short of a dishonest act.

  31. So, inquiring minds want to know: how many variations of the multiple adjustments did they consider and then reject to arrive at the predetermined answer? And can we get access to those other runs to see if they overlooked something?

  32. “Who’s going to save the planet from those who are trying to save the planet?”

    Anonymous Heins

  33. Monckton’s approach works backwards from Today and asks: How far back can you go and still claim no warming?

    If today gets colder, that “start” date (which is really an end date, working backwards) changes.

  34. I see your point of confusion Brian. He frequently references two satellite data sets. Perhaps Monckton used RSS in one case and UAH in the other.

    • Mikey, Brian’s ‘point of confusion’ is deliberate. For the entire time of the ‘pause,’ my brother has been arguing using Brian’s ‘logic,’ which, at best, is illogic. But my thanks goes to Brian for his persistent obstinacy. In it I see my brother is not alone! He gets hold of a useless point and badgers it endlessly, to no purpose but wearying his opponents!

    • It’s not a point of confusion – Brian has focused in on this talking point in an effort to discredit Monckton, and he’s taking the Goebbels approach of constant repetition despite the number of times it’s explained to him. Maybe he’s taking Joe Romm’s denialist class on-line or something.

    • No I think Brian was just uncomfortable with shifting the start date without justification, because if that were to keep up, there could potentially be a forward (but rising) flat pause for years to come. He just didn’t express it well.

    • BFL: I think you’re being generous. I get Brian’s argument, but I disagree it was a point of ‘being uncomfortable’, nor was it a case of expression – he’s posted the same comment at least half-a-dozen times repetitively, seizing on one point in an effort to discredit the entire issue, because the Pause is obviously the biggest problem in the AGW playbook and therefore it must be eliminated.

      Monckton’s methodology aside, as well as distracting quibbling over the exact date as to where the flat line begins, I think it’s safe to say that we have at least a decade-plus of very little variation in defiance of model predictions, and there’s clearly a lot of CYA going on, trying to preserve AGW’s viability – with a lot of people whose livelihood – and in some cases, life’s work – depends upon its legitimacy. And in some cases, it’s just that people have expended a lot of emotional energy on the subject and it’s become a personal mission.

  35. “Even though the satellites of RSS and UAH are watching, all three of the terrestrial record-keepers have tampered with their datasets to nudge the apparent warming rate upward yet again. There have now been so many adjustments with so little justification – nearly all of them calculated to steepen the apparent rate of warming – that between a third and a fifth of the entire warming of the 20th century arises solely from the adjustments, which ought to have been in the opposite direction because, as McKitrick & Michaels showed in a still-unchallenged 2007 paper, the overland warming in the datasets over recent decades is twice what actually occurred.”

    The adjustments COOL the record.
    raw data for both land and ocean ARE COOLER before adjustments

    • The adjustments COOL the temperature records from the past to increase the RATE of warming. We all do understand this.

  36. “Climate science” is the most bizarre form of “science”:
    – The future climate is always “known” with great certainty, yet
    – The past climate is constantly changing, with “adjustments”, and new climate proxy estimates !

    What group of scientists is more certain about the future than about the past, other than climate modelers?

    Climate models, by the way, are not data, and with no data they are not science — the models are just mathematical representations of the personal opinions of the people who control the programming of the computers.

    Average temperature is most likely irrelevant on a planet where the climate is constantly changing — no one lives in the average temperature — ordinary people care about the climate where they live and work, not some average.

    Whether Earth is cooling or warming depends mainly on the starting and ending points of the period being examined:

    Earth has cooled a lot since the greenhouse ages.
    Earth has warmed a lot in the past 15,000 years.
    Earth has warmed slightly since 1850.
    Earth has cooled since 1998.

    Average temperature is always changing.
    Earth is always getting warmer or cooler.
    SO WHAT ?

    Discussions about very rough estimates of the average temperature — measurements that are frequently “adjusted” and “infilled” by smarmy people who are very biased toward showing warming on their charts — are irrelevant discussions, because the data involved only cover about 0.001% of our planet’s history — 150 years of data do not determine a long-term trend for a planet with 4.5 billion years of climate history.

    Ice sheets have come and gone, with scientists still not sure why.

    Warming in the past 15,000 years was certainly not caused by coal power plants and SUVs.

    Temperature changes of a few tenths of a degree are meaningless — they are likely to be measurement errors or random variations.

    That’s why charts designed to make temperature changes of a few tenths of a degree look HUGE grossly exaggerate the importance of those tiny changes … but those types of charts are very useful as climate scare propaganda.

    More CO2 in the air is good news for green plants and the animals / people who eat them.

    If CO2 causes any warming, then it will do so mainly at night, and mainly in the colder areas of our planet – the times and places where warming will be welcomed by the few people who live there.

    And warming is good news for everyone — don’t most people take vacations in warmer climates?

    If in 1850, I was somehow given the power to choose the climate in 2015, I would have added CO2 to the air to stimulate green plant growth, with less fresh water required for that faster growth, and I would have increased the average temperature by a few degrees (not for me — just to please the always cold wife).

    Now that I consider the actual climate in 2015, it’s very obvious the climate in 2015 is BETTER than the climate in 1850 — the green plants are happy, and the wives are not as cold.

    When I look at the human progress between 1850 and 2015, I can’t identify ANY bad news caused by more CO2 in the air, and the slight warming — in fact I see that 1850 to 2015 was the most prosperous and healthy 165-year period for humans so far !

    The huge reduction in poverty since 1850 was based on:
    (1) The use of cheap and dense sources of energy: Coal, natural gas, and oil, and
    (2) Inventions spurred by free-market economics, where smart people who develop better products and services that others want to buy, get larger financial rewards by making their customers happier!

    More CO2 in the air is good news.

    Slight warming since 1850 is good news too.

    The only bad news is leftists are taking over our economy, so poverty is no longer declining.

    But good climate news doesn’t sell newspapers.

    And politicians can’t scare people with good climate news.

    So they invent environmental crises out of thin air to scare people, and then they declare the government must be given more power to step in and “save the Earth” … or they just seize power through the EPA as Obama does.

    Remember:
    – The DDT crisis.
    – The hole in the ozone layer crisis.
    – The acid rain crisis.
    – The global warming crisis.

    All are nonsense — false boogeymen to scare people.

    When a “crisis” stops scaring people, it is forgotten, and a new “crisis” is invented.

    This has been going on since the 1960s.

    The politicians lead the way with their glorious “we must save the Earth” speeches, but they need a bunch of nerdy-looking “scientists”, frowning and looking serious, as props.

    And that’s where ‘useful idiots’ like Michael Mann, et al, come in — he is a prop for the ‘climate change play’, and gets paid well to make scary predictions about the future climate, which the press loves to quote.

    The coming climate change catastrophe is only a political game, a strategy with the goal of gaining power and money – the science is irrelevant, and that’s why:

    – Surface temperature data are “adjusted”, and “re-adjusted”, and “re-re-re-adjusted”,

    – Satellite temperature data are ignored,

    – Over 75% of raw CO2 measurement data from Hawaii since 1959 are thrown out to make a smooth CO2 curve on a chart,

    – Over 90,000 real-time CO2 chemical measurements from the early 1800’s to 1959 are ignored, and ice core proxy CO2 estimates are used instead, creating a suspiciously smooth CO2 curve on a chart.

    – The work of geologists and other scientists, concerning historical climate estimates, is generally ignored, or “adjusted” to create a bogus “hockey stick historical temperature chart”, and

    – Phony surveys are done to claim 97% of scientists agree, when in fact there is no consensus on a coming climate change catastrophe, nor would it matter if there was a consensus — a “consensus” on what is going to happen in the future is meaningless.

    In the history of science, a scientific consensus was often a good leading indicator that a theory was wrong, or would be significantly revised in the future!

    Most Important: Computer game predictions of the future climate are not science at all — they are climate astrology.

    My free climate blog, with no ads,
    and no money for me,
    designed for the average guy,
    who is not a scientist:

    http://www.elOnionBloggle.blogspot.com

  37. yes

    The right way to look for the beginning of the pause is to start at the beginning and look for a BREAKPOINT.

    That is you specify a data generating model ( say a linear one ) and then you identify the point where that model breaks down and you have to assume a zero trend to get the data to fit the model.

    When this El Niño hits, the pause will disappear using Monckton’s method.

    Using a breakpoint approach, if the pause exists it won’t disappear.
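
    The breakpoint idea described above (fit a linear model from the start, then find the point where that model breaks down and a zero trend fits better) can be sketched as a toy piecewise fit. This is an illustration on made-up numbers, not the formal structural-break tests a statistician would actually use:

```python
import numpy as np

def best_breakpoint(y, min_seg=3):
    """Toy piecewise fit: a linear trend up to candidate break k, then a
    zero-trend constant after k; return the k with the lowest total
    squared error over all candidate breakpoints."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    best_k, best_sse = None, np.inf
    for k in range(min_seg, n - min_seg + 1):
        x = np.arange(k)
        coef = np.polyfit(x, y[:k], 1)                  # trend before the break
        sse_trend = np.sum((np.polyval(coef, x) - y[:k]) ** 2)
        sse_flat = np.sum((y[k:] - y[k:].mean()) ** 2)  # flat model after it
        if sse_trend + sse_flat < best_sse:
            best_k, best_sse = k, sse_trend + sse_flat
    return best_k

# Synthetic series: a clean rise for 10 steps, then a flat regime.
series = list(range(10)) + [12] * 8
print(best_breakpoint(series))  # 10: the break lands where the rise ends
```

    On a series built this way the detected break does not move when one more flat point is appended, which is the point being made: a genuine regime change stays put.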

    • The time when a no trend series in the data starts can overlap the time when the previous trend-line ends. There is no reason that the end of one has to exactly coincide with the beginning of the next.

    • If you define a pause as a time interval during which there is no statistically significant increasing or decreasing trend, then the pause will start and end wherever it statistically wants to. If you fix the end point, as I believe Monckton has, then the start point is free to vary. The fun is seeing how far back it goes as time goes on. But you are right, this El Niño could break the pause just as easily as adjusting the past.

    • When this El Niño hits? Surely you mean if? It will be interesting to see what the following La Niña does, or does not, do to the surface temperature trend.

    • And when this little warming spree, caused by water, ends in three months and the temperature dips, the pause will return, much longer in its duration.

  38. Well it appears that the ‘winter gales’ have come early to the UK this year,
    because there is a ‘sea ban’ in place in SW England due to the large waves and bad weather.

      • Don’t know, not old enough to remember ;)
        But this July has been rather cool with unusually strong gales for the time of year in Scotland and now SW England. This is what happens with a cooler Atlantic and a more zonal, southern-tracking jet stream. It risks leading to cooling in northern Europe during the summer.

  39. Changing the reported temperature anomalies has not hidden the proof that CO2 has no effect on climate. Only existing data [Phanerozoic (last 542 million years) and current ice age] and a grasp of the fundamental relation between math and the physical world are needed or used. The proof is at http://agwunveiled.blogspot.com

  40. “Make what you can of them: but I, for one, will place no further reliance on any of the three terrestrial datasets..”

    Promise? Well that’s a relief!

    [Rick, what makes you think anyone cares about your opinion on this? -Anthony]

  41. You might consider this land based data/chart made by Michael Palmer, Department of Chemistry University of Waterloo, Ontario, Canada,

    https://wattsupwiththat.com/2011/10/24/unadjusted-data-of-long-period-stations-in-giss-show-a-virtually-flat-century-scale-trend/
    It would be nice to see that chart brought up to current time and compared to U.S. Climate Reference Network (USCRN)
    https://wattsupwiththat.com/2015/06/14/despite-attempts-to-erase-it-globally-the-pause-still-exists-in-pristine-us-surface-temperature-data/

    • Here is the longest continual thermometer record going back to the mid 1600s.

      How does CO2 cause such variation?

      • Yes!! I knew it!! I remember 1957 well, it was very hot, unbearably so, in southern England, so hot I went swimming in some very dirty southern English rivers to help stay cool, along with thousands of other southern English people. My dad’s boss’ house had a basement where it was pleasantly cool, and I wished that we could have been rich enough to have a basement of our own. So there you have it, 1957 was the hottest year ever in Central/Southern England (except for 2015 of course, and before that 2014, 2013, 2012, etc. all the “hottest years ever”).

  42. Thanks Lord M. Another solid contribution.

    Bearing in mind your central point in this post is the obvious data tampering that has now ruined 3 land-based data-sets, I thought you may like to consider the post below, which I put onto Bishop Hill earlier tonight. Just as a change of government here has resulted in ‘renewables’ subsidies and some green policies being axed, so too could a change of senior government personnel result in a potentially disastrous comeuppance for the climate data-snatchers. See below (slightly amended from my original post at BH):

    ***********************

    “Tom Hayes has just got 14 years in jail for fixing / rigging Libor rates across the globe. Ouch! Now ask yourself – away from the world of finance is there a more data-sensitive area that is completely dependent on accurate information than ‘climate change’? I doubt it. False data has cost implications running into hundreds of $billions, and that’s before social policy, commercial advantages / losses and personal career prospects are considered.

    Nobody has given this guy any sympathy whatsoever, so that means a clear precedent has now been set regarding public acceptance of severe penalties for manipulating public policy-influencing data. The guys who’ve been adjusting climate data – and they clearly have been – have left their handiwork right there in the public domain. Oops. They could be in for a few sleepless nights following this very interesting precedent, and I wonder if you think it’s worth a bit of sabre-rattling?”

    *********************

    So, there you have it. Following Hayes’ 14 year jail sentence for adjusting and rigging Libor data, it should now only be a matter of time before a similar fate befalls our friendly data-changing climate criminals. Tick tock.

    • The obvious advantage of climate change as the basis of this scam is that it’s notoriously hard to quantify. You can’t prove it is a problem, but at the same time you can’t prove that it isn’t. Even the most in-depth analyses always leave some nagging doubt that it m-i-g-h-t just be a problem after all. Plus you can even knock-down almost all of the postulates of the alarmists, but that still doesn’t entirely dismiss their argument.

      Thus, the scam has a long potential lifetime. If a more readily analysable scare had been chosen, then the question would have been settled too quickly for a morass of legislation, all beneficial to the scammers, to have built-up around it.

      The ‘millennium bug’ was an example of a similarly effective scare story which also persuaded firms to spend large amounts of unnecessary money on mitigation measures, but one with a too definite sell-by date after which it was not credible. The main worry of the climate alarmists is that the 21stC cessation of warming is their equivalent of going to work on Jan 3rd 2000 and finding that all is normal. Except, it will take more like a decade to sink in, instead of a day or two. By which time it may be too late to undo the damage.

    • Well the Crimastrologists have just endorsed this as their theme tune, changing ‘He’ to ‘We’: https://www.youtube.com/watch?v=C3r0CFKzILo

  43. BGV
    I’m no NFL statistician – this is a pure example – and, besides, I like the Bears.
    If on 1st January 2018 the Bears have won six Superbowls [so they’re fourth equal in all time SB wins] – that’s a fact.
    If they win the 2018 SB – on January 14th [say – my brother’s 60th birthday] they will then have won seven SBs.
    So you or I could say on 16th January 2018 ‘The Bears have won 7 SBs! They’re now third equal!’

    Simple things for a simple mind – or a determined deviant?

    Auto – I watched the ’46’ and the Refrigerator get a TD

    • Haha, got to laugh. Is Tim Tebow the new Refrigerator, the go-to man on the one yard line? I hope he makes the team. I want to see how he is used. The old Refrigerator was a lot of fun to watch — unstoppable force meets immovable object.

      Eugene WR Gallun

  44. Why is the most pristine data set being ignored?
    https://www.ncdc.noaa.gov/data-access/land-based-station-data/land-based-datasets/us-climate-reference-network-uscrn
    “Data from NOAA’s premiere surface reference network. The contiguous U.S. network of 114 stations was completed in 2008. There are two USCRN stations in Hawaii and deployment of a network of 29 stations in Alaska continues. The vision of the USCRN program is to maintain a sustainable high-quality climate observation network that 50 years from now can with the highest degree of confidence answer the question: How has the climate of the Nation changed over the past 50 years? ”
    ” These stations were designed with climate science in mind. Three independent measurements of temperature and precipitation are made at each station, insuring continuity of record and maintenance of well-calibrated and highly accurate observations. The stations are placed in pristine environments expected to be free of development for many decades. ”
    Here is the plotted data from this pristine un-adjusted data set:

    https://wattsupwiththat.files.wordpress.com/2015/06/uscrn-conus-plot-10years.png?w=780&h=450

    Where is the warming in the USA?

  45. Christopher Monckton said,

    “Once science was done by measurement: now it is undone by fiat. Let us hope that history, looking back in bafflement on the era when the likes of Mr Karl [of NCDC] were allowed to ru(i)n major once-scientific institutions, will judge him every bit as unkindly as he deserves.”

    There is an intellectual root that allows the likes of Mr Karl [of NCDC] to rationalize cutting science off from reality: the irrational reaction by Immanuel Kant to the developing modern skepticism/empiricism of his day, such as that expressed by Kant’s contemporary David Hume. Kant’s irrational reaction to skepticism/empiricism spawned the postmodern philosophy of science which Mr Karl uses to perform fiat science.

    John

  46. The McKitrick & Michaels 2007 paper was challenged in 2008 by Schmidt; a quick search will find it.

  47. I am afraid that poor old Obama and Strong, Figueres, Ban Ki-moon, Oreskes, Schellnhuber, Suzuki etc. and all their Liberal Luvvie mates are badly mistaken if they think that China or India are going to give a tinker’s fart about their plans to rule the world from Paris. They missed the boat by 20 years.

      • AndyG55:

        Perhaps that is the point. The Obaminator doesn’t want to deal with real problems so he is keeping things internalized with classic misdirection. Look how he has dealt with China, Iran, the ISIS threat, Russia …

        So he needs a straw man. Chamberlainesque. The US will need another Pearl Harbour to look globally. Meanwhile the BRICS countries are strengthening.

        To be fair, it may be several years too late to do anything about ISIS. They are already amongst us. While ‘we’ worry about CAGW, (Fiddling), the world (Rome) burns. I believe I have heard that somewhere before.

  48. [On June 3rd 2015 Monckton said, “For 222 months, since December 1996, there has been no global warming”

    Now on August 4th, 2015 Monckton says: “The third period, from January 1997, runs from the month in which the Great Pause of 18 years 6 months began.” ]

    I have no idea why this has caused an issue. Two different periods are shown for the pause because two different data sets don’t agree. If you don’t know which data set is best to choose, you can’t pick a single period for the pause.

    Scientifically we can pick which data set is best to choose because one covers the planet better than the other.

    UAH shows the pause 18 years 5 months.

    RSS shows the pause 18 years 6 months.

    UAH covers more of the poles than RSS, so I would always consider that product the best choice if you have to pick one.

    So the single period most likely correct for the pause would be 18 years 5 months.

  49. Even though the satellites of RSS and UAH are watching, all three of the terrestrial record-keepers have tampered with their datasets to nudge the apparent warming rate upward yet again.

    They have still failed to notice the law of holes.

    The fact that average surface temperature is rising faster than that of the bulk troposphere (as measured by satellites) is extremely damaging to the prevailing climate paradigm in general, and to computational climate models specifically.

    It should be the other way around. Global average temperature of the lower troposphere (~3.5 km elevation) is expected to change some 20% faster than that of the surface, and 40% faster in the tropics (between 20°N and 20°S), a.k.a. the “hot spot”.

    According to all available datasets, this is not the case.

    There can be two distinct solutions to this puzzle.

    1. The rate of surface warming is overestimated by 50-60%.
    2. Climate theory is flawed: average absolute humidity is actually decreasing in the upper troposphere.

    In either case, big trouble is brewing.
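
    The expected amplification described above is simple arithmetic. The 20% (global) and 40% (tropical) figures come from the comment; the surface trend below is a made-up illustrative value, not a dataset number:

```python
# Hypothetical surface trend, degC per decade (illustrative only).
surface_trend = 0.10

# The comment's expected amplification of the lower-troposphere trend:
lt_global = round(1.2 * surface_trend, 3)   # bulk troposphere, ~20% faster
lt_tropics = round(1.4 * surface_trend, 3)  # tropical "hot spot", ~40% faster

print(lt_global, lt_tropics)  # 0.12 0.14
```

    The claimed puzzle is that observed satellite trends sit below, not above, the surface trends, i.e. the measured ratio comes out under 1 instead of 1.2-1.4.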

  50. Brian’s ranting has prompted me to ask a more serious question. At some point the pause will presumably end and we will see either renewed warming or cooling. What will be the trigger for declaring the pause to be over, and how will the end date for the pause be identified? Monckton’s method doesn’t look well suited to identifying the end of the pause.

    • Natural oscillations with periods of around 30 years have occurred in the data for many decades: the planet cools for around 30 years, then warms for around 30 years, and this has happened since the last ice age. Nothing so far has been different from this natural oscillation. All periods of warming or cooling lasting longer than 15 years have ended up as part of this 30-year oscillation. Based on this evidence, the period is already long enough to show it is not a pause but simply part of the next 30-year cooling oscillation.

    • Ian, you raise excellent questions. As far as GISS and Hadcrut4 are concerned, the pause has indeed either ended or is less than a year, which I would not classify as a pause. In cases like this, we can point to the time for no statistically significant warming which always has some value. We can also point out how the small rise in temperature is way less than the models have predicted which is the case for NOAA, GISS and Hadcrut4, even though they no longer show a pause.

  51. Not only are the climate policies ramping up; fiscal instability in world economies is getting closer to crisis point. Puerto Rico is next on the basket-case list, and the USA is looking to raise rates, shut down Congress (again) and face a reduced credit rating. If One World Government is the goal, the conditions are ripe for seizing control.

  52. “Regardless, all global combined LSAT and SST data sets exhibit a statistically non-significant warming trend over 1998–2012 (0.042°C ± 0.093°C per decade (HadCRUT4); 0.037°C ± 0.085°C per decade (NCDC MLOST); 0.069°C ± 0.082°C per decade (GISS)).”

    Ooops, that is a problem. Instead of spending all this time exposing the fraud by highlighting all the statistical manipulations the climate “scientists” perform, I would use their manipulations against them: hoist them by their own petards. If temperatures were truly increasing at an abnormal and accelerating rate, sea levels would show the same acceleration. They don’t. There are countless dogs that don’t bark in the conclusions of the climate “scientists.” Climate “science” is a fraud, and the people perpetrating it know it is a fraud; that is exposed in the Climategate emails. Dr Thompson knows it is sublimation, not melting, that is causing the Mt Kilimanjaro glacier to disappear. They have to know they are wrong, or why else would they violate just about every commonly accepted scientific and statistical practice? You don’t commit this kind of fraud by accident; you have to do it in a premeditated manner. I would instead focus on the smoking guns and use their own data to hang them. The climate “scientists” claim that the oceans are warming, and they blame it on CO2. CO2 traps IR radiation between 13 µm and 18 µm. Changing CO2 from 250 to 400 ppm increases the modeled downward IR radiation from 346.7 to 347.6 W/m², or less than 1 W/m² (MODTRAN settings: looking up, tropical, no clouds or rain, 0.01K)
    http://climatemodels.uchicago.edu/modtran/

    The question then becomes: can 1 W/m² applied over 150 years warm the oceans? The answer is hell no. And if the answer is hell no, then the next question is: what is warming the oceans? Once that question is asked, the conclusion is that whatever is warming the oceans is also warming the atmosphere, and that something is the sun.

    Also, H2O absorbs the same spectrum as CO2, and much more besides. Change the humidity by 10% and see what happens. If the GHG effect could cause catastrophic warming, H2O would have done it a long time ago. A 10% change in humidity can increase the radiation by 7 W/m², from 347.6 to 354.5. Basically H2O renders CO2 impotent: H2O can be 4 parts per 100, while CO2 is 4 parts per 10,000.

    Once again, warming and energy are all math, and math provides the way to expose the fraud. The climate scientists have twined the ropes that will be used to hang them. Either the calculator is wrong, or the climate “scientists” are wrong. I bet on the calculator being right.
    http://climatemodels.uchicago.edu/modtran/
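
    The flux arithmetic in the comment above can be checked directly. The W/m² values below are the commenter’s quoted readings from the University of Chicago MODTRAN page, not fresh model runs:

```python
# Downward IR flux readings (W/m^2) as quoted in the comment
# (MODTRAN, looking up, tropical atmosphere, no clouds or rain).
flux_co2_250ppm = 346.7          # CO2 at 250 ppm
flux_co2_400ppm = 347.6          # CO2 at 400 ppm
flux_humidity_plus_10pct = 354.5 # 400 ppm CO2, humidity raised 10%

co2_effect = flux_co2_400ppm - flux_co2_250ppm           # CO2 alone
h2o_effect = flux_humidity_plus_10pct - flux_co2_400ppm  # extra water vapour

print(round(co2_effect, 1))  # 0.9, i.e. "less than 1 W/m^2"
print(round(h2o_effect, 1))  # 6.9, roughly the 7 W/m^2 the comment cites
```

    The comparison being drawn is simply that the quoted water-vapour change is several times the quoted CO2 change.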

  53. “Once science was done by measurement: now it is undone by fiat”.

    Sums it up perfectly.

    I don’t know what some of the giants of science, such as Newton, Copernicus, Galileo, Darwin, Faraday, Einstein, Feynman, Bohr, Turing, Curie, Watson, Crick, etc. would think about it all, but I strongly suspect they wouldn’t like it.

  54. Having ‘adjusted’ their way into World Governance, it will be a simple matter to ‘normalise’ the data over time to show how ‘successful’ eco-marxist policies are in general, and power impoverishment is in particular. The crushing cold will do the rest.

  55. The July anomaly for RSS came in at 0.289. As a result, the negative trend goes from January 1, 1997 to July 31, 2015, or a period of 18 years and 7 months.

    The following is for Brian G Valentine and anyone else who is interested.
    The slope from January 1, 1997 is -0.000252. The slope from December 1, 1996 is +0.000141. Note that the negative slope is larger in magnitude than the positive slope. This means that the “real” starting time for the period of zero slope is closer to December 1 than January 1. The difference between the two slopes is 0.000393. Let us now make an assumption that may not be true but which is the best we can make under the circumstances: namely, that the change during December 1996 was uniform. If we assume this, then the starting day of the pause is 0.000141/0.000393 × 31 = 11; in other words, December 11, 1996.
    Lord Monckton will rightly say the pause goes from January 1997 to July 2015, or 18 years and 7 months. However if he would say it went from December 1996 to July 2015, he would not be totally wrong. It is just that it is from December 11 and not December 1.
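
    The interpolation above can be written out in a few lines, using the two slopes quoted in the comment and its stated assumption of a uniform change through December 1996:

```python
# Slopes (anomaly units per month) quoted in the comment.
slope_from_jan_1997 = -0.000252
slope_from_dec_1996 = +0.000141

# Assume the slope changes uniformly through December 1996 (the comment's
# assumption); the zero crossing falls this fraction of the way through.
fraction = slope_from_dec_1996 / (slope_from_dec_1996 - slope_from_jan_1997)
start_day = round(fraction * 31)  # December has 31 days

print(start_day)  # 11 -> the zero-slope period starts about December 11, 1996
```

    The denominator is the 0.000393 difference quoted in the comment, so the result reproduces the December 11 figure exactly.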

    • Brian, by what mechanism does CO2 increase to warm us out of an ice age? Why didn’t we have catastrophic warming when CO2 was 7,000 ppm, how did we fall into an ice age when CO2 was 4,000 ppm? Lastly, CO2 increased from 250 to 400 over the past 150 years. The additional heat absorbed by CO2 is 1W/M^2, how is that enough to warm the oceans?

    • Werner Brozek, it doesn’t matter what it came in at this month.

      It could make a huge difference!

      When that was written, the slope was 0 (or slightly negative) from September 1996. The zero line would have been around an anomaly of 0.24. Had the anomaly stayed at 0.24 or very close to it since then, the pause would still start in September 1996. However, since it now starts in January 1997, that just means that more recent months were above 0.24. And a really strong El Niño over many months could make the pause disappear entirely, as has already happened with GISS and Hadcrut4.
      On the other hand, if we get a strong La Niña, with anomalies way below 0.24, then the start of the pause gets pushed back earlier.
      Today, the pause is 18 years and 7 months long on RSS, and it starts January 1, 1997, since that is the earliest month from which the slope is negative. The starting time could change next month, however, if the August anomaly is way above or way below 0.24.
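
      The method described here (fix the end point at the latest month and find the earliest start month from which the least-squares trend is zero or negative) can be sketched as follows. This is a simplified reading of the procedure, run on made-up anomalies:

```python
import numpy as np

def pause_start(anomalies):
    """Earliest index i such that the least-squares trend of anomalies[i:]
    is zero or negative, holding the end point fixed at the latest month;
    returns None if every start gives a positive trend."""
    anomalies = np.asarray(anomalies, dtype=float)
    n = len(anomalies)
    for i in range(n - 2):               # need at least 3 points for a trend
        x = np.arange(n - i)
        slope = np.polyfit(x, anomalies[i:], 1)[0]
        if slope <= 0:
            return i                     # first (earliest) qualifying start
    return None

# Made-up anomalies: a rise, then a flat stretch. The pause starts at the
# first month from which the onward trend stops being positive.
demo = [0.00, 0.10, 0.20, 0.31, 0.30, 0.30, 0.30, 0.29]
print(pause_start(demo))                  # 3
print(pause_start([0.0, 0.1, 0.2, 0.3]))  # None: warming throughout
```

      As the thread notes, appending a new high anomaly can push the qualifying start later (or remove it entirely), while a new low anomaly can push it earlier.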

    • Hello again Brian.
      You’re either being wilfully obtuse or are irredeemably thick. I’ll give you the benefit of the doubt about your intelligence, which leaves only one possible conclusion.
      Bye bye.

    • Brian, what do any of your questions have to do with Monckton’s methodology? You’ve yet to show you even understand his methodology, yet you keep asking the same questions for which you’ve been given answers by numerous people in this thread alone. Time to take your trolling elsewhere.

    • co2islife….what do any of your questions have to do with Monckton’s methodology?

      The absurdity of your finding fault with an immaterial one-second difference, while ignoring all the flaws routinely produced by the climate “scientists”, is plain on its face. Why don’t you apply the same standards to the IPCC and other climate “scientists”? My questions are intended to expose your epic level of hypocrisy. Clearly you are a student of Saul Alinsky, a deceitful propagandist, and not a student of real science.

  56. It is logically IMPOSSIBLE for the “pause” to start on two different dates. Either it started Dec 1996 or Jan 1997. How is it possible for events happening today to alter the events of the past?

    You should ask the people adjusting all the historical records. The pause can be defined in numerous ways.
    Here is the temperature record. Today’s temperature is at or below the level of 2002, 1998, 1996, 1991, 1988 and close to 1984. That is the unfortunate reality the pause deniers must cope with. BTW Brian, how does 1 W/m² warm the oceans? How does CO2 cause local warming in the Arctic? How does CO2 cause glaciers and ice to melt in sub-zero temperatures like those found on Mt Kilimanjaro’s peak?

  57. Lord Monckton, I would move beyond exposing the data manipulations and flaws in the AGW theory and science. It is a fraud; anyone with a 2nd-grade education in science can see that. The warmists will simply keep putting out crap in order to keep everyone chasing their tails. These are distraction tactics. Liberals perfected that concept; just read Rules for Radicals by Saul Alinsky. Deceit, deception, distortion and manipulation are their well-published MO. Don’t fall into their trap. What we need to do is start promoting a Science Verification and Validation Agency to verify the validity of any science paid for by the taxpayer and used to promote a public policy. This science is so bad because there are no watchdogs. People are being paid to lie, cheat and steal. We need an agency that does double-blind testing on all the data and conclusions. As long as people like Michael Mann can be exposed as frauds in the Climategate emails and then exonerated by an “internal” investigation, the corrupt cycle will continue. There is simply too much money at stake. Eisenhower warned us about it in his farewell address.

    The prospect of domination of the nation’s scholars by Federal employment, project allocations, and the power of money is ever present and is gravely to be regarded.

    Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite.

    It is the task of statesmanship to mold, to balance, and to integrate these and other forces, new and old, within the principles of our democratic system-ever aiming toward the supreme goals of our free society.

    We need to have an independent agency verify the data and the validity of the conclusions, and have it done in a double-blind manner. If you remove the titles from these climate “science” graphs and ask people to interpret them, they will reach the exact opposite conclusions of the climate “scientists.” If you put the data into a computer and run a stepwise regression, CO2 would never be selected as the most significant variable. Only when people with preconceived conclusions look at the data does CO2 become important. Remove the bias and the fraud is exposed.

    “To instill Socialism in a Capitalist Nation one needs only control the energy supply.” – V. Lenin

    This is evidence of a bias, not a valid theory. The only way you get something to err completely to one side is if you systematically fail. Climate “science” is a systematic failure because it is based upon CO2, and not the sun, water vapor and clouds. Once again, CO2 can’t warm the oceans with its 1 W/m² marginal energy input. The sun is heating the oceans, which in turn warm the atmosphere.

  58. Tom Karl — The Devil Is In The Size Of
    The Adjustment Between Data Sets

    In data given special care
    I proved the “Pause” was never there
    “It isn’t there!” is what I say
    My paper made it go away!

    The magic that my data gets?
    Adjustment twixt the data sets!
    That little number up or down
    Creates a slope that’s — up or down

    I admit a lousy poem — but it makes a true point. The slope is created by the size of the adjustment between the two data sets. Karl thinks he has found a way to compare apples with oranges. Doesn’t work.

    Karl used the rare instances where ship data and buoy data were close together in space and time (I hope) and used the difference between them to determine his “adjustment”.

    1) This adjustment is just “luck of the draw”. (Or did he cherry-pick which ships and buoys to use, leaving certain ones out?)

    2) Both ship and buoy data have margins of error that combined greatly increase the margin of error of his adjustment. The margin of error of the adjustment would be so large as to make it meaningless even if the whole concept was not flawed.

    Anyway that is my poetic take on Tom Karl’s inane paper.

    Eugene WR Gallun.

  59. As HH Lamb wrote in 1982:
    The cooling of the Arctic since 1950-60 has been most marked in the very same regions which experienced the strongest warming in the earlier decades of the 20thC, namely the central Arctic and northernmost parts of the two great continents remote from the world’s oceans, but also in the Norwegian-East Greenland Sea….
    A greatly increased flow of the cold East Greenland Current has in several years (especially 1968 and 1969, but also 1965, 1975 and 1979) brought more Arctic sea ice to the coasts of Iceland than for fifty years. In April-May 1968 and 1969, the island was half surrounded by ice, as had not occurred since 1888.
    Such sea ice years have always been dreaded in Iceland’s history because of the depression of summer temperatures and the effects on farm production….. The 1960’s also saw the abandonment of attempts at grain growing in Iceland, which had been resumed in the warmer decades of this century after a lapse of some hundreds of years…

  60. Have you ever tried to find out the exact start date of the “hundred months to irreversible climate change” prediction? The only thing I know for certain is that it is well before the time given on their web site.
    In a noisy signal it is difficult to define a precise starting point. On a more pedantic note, if the statement does not include the exact start timing, we are only talking about a microsecond as the timing quibble point.

  61. Ironically, these adjustments and time-series manipulations are also some sort of man-made warming. Sincerely, I wonder who is going to believe their forged evidence, deception made to impose a new world order, and who is going to save us from them. Climate science will need its Snowden moment someday. We can understand that the people involved in this don’t want to step forward and ruin their careers, but leaving evidence as a testament is also a possibility. Let’s hope that someone will step forward.

  62. If terrestrial datasets are continuously being adjusted then that means that they are deemed to be continuously inaccurate both now and in the past. If they cannot be deemed accurate then nor can predictions derived from them be deemed accurate.

    • Easy: those ‘measurements’ that support ‘the cause’ are accurate; any which do not are inaccurate.
      Reality, of course, has nothing to do with it.

  63. In order to satisfy yourself over the perplexing difference between Dec. 1996 and Jan. 1997, you can assume a starting point of midnight, 31 December 1996. Dong!! Happy New Year!

  64. One thing I’ve always wondered is, “Where are the papers that prove these statistical techniques are valid?” What is the name of the paper/s that proved the gridded method of deriving temperature up to 1200 km away actually works? Surely someone used known values from stations to derive values for other known values, and compared the results and proved the technique worked, e.g. used temperatures from stations across the US South to derive values for stations in North Dakota, and arrived at nearly the same numbers as ND’s weather stations.

    Similarly for the homogenization methods, and the time of observation adjustments. Surely someone did a paper using controlled station values to derive these methods and prove they are valid. What are their names, who did them, and when were they published?
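
    One way to run the validation the commenter asks for is leave-one-out cross-validation: hold out each station, estimate its value from the others, and compare against the held-out truth. The sketch below uses inverse-distance weighting on synthetic stations as a stand-in for the actual gridding methods, whose details differ:

```python
import numpy as np

def idw_estimate(coords, temps, target, power=2):
    """Inverse-distance-weighted estimate at `target` from nearby stations."""
    d = np.linalg.norm(coords - target, axis=1)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return np.sum(w * temps) / np.sum(w)

def leave_one_out_errors(coords, temps):
    """Hold out each station in turn and predict it from the rest."""
    errs = []
    for i in range(len(temps)):
        mask = np.arange(len(temps)) != i
        est = idw_estimate(coords[mask], temps[mask], coords[i])
        errs.append(est - temps[i])
    return np.array(errs)

# Synthetic network: 30 stations on a 1000 km square, with a temperature
# field that varies smoothly east-west plus measurement noise.
rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 1000.0, size=(30, 2))
temps = 15.0 + 0.01 * coords[:, 0] + rng.normal(0.0, 0.3, size=30)

rmse = np.sqrt(np.mean(leave_one_out_errors(coords, temps) ** 2))
print(f"leave-one-out RMSE: {rmse:.2f} degC")
```

    Applied to real station data, a test of this shape would give exactly the kind of published error figure the commenter is asking for; whether such papers exist for the 1200 km method is the question being posed, and this sketch does not answer it.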

  65. Perhaps the temperature datasets are showing an increase because the global average temperature is actually increasing.

    If you are genuinely “open minded” or a skeptic, then you have to consider that possibility.

    • Perhaps the temperature datasets are showing an increase because the global average temperature is actually increasing.

      The problem with that is that the satellite data sets contradict the others. That should not happen.

    • If global temperatures were actually increasing, then satellite data and balloon data would show it too. It is scientific evidence that tampering with surface data has caused this increase. The surface data did not show any warming while left alone with no adjustments.

      All of GISS’s and HADCRUT’s previous excuses were that they didn’t cover much of the poles. UAH covers more of the poles than GISS and HADCRUT ever will.

      Taking this into account, blaming the poles is not an excuse, as it is not backed up.

    • Svante Callendar

      Though we are incapable of measuring it, I will grant that at any given time there is such a thing as a “global average temperature”. We have had ice ages and warm periods — thus that “global average temperature” is always changing.

      Lately the earth has been coming out of “the little ice age”, so the “global average temperature” has been rising. This rise has not in any way been harmful — in fact, it has been quite helpful to mankind.

      The dispute with the temperature data sets is that their creators seem to deliberately want to increase the “global average temperature” by statistical manipulation of the data. They seem determined to make it higher and its rise quicker than it actually is.

      Suggesting that we be “open minded” and grant that the “global average temperature” could actually be rising as stated by the creators of the data sets is what is known as “a straw man argument”.

      Though their results spark the argument, it is HOW THEY MANIPULATE THE DATA that is being questioned. It is the methods used to arrive at their conclusions that are highly suspect.

      We cannot ever know what the “global average temperature” is at any given time — but we can certainly examine the means currently used to estimate it.

      These people have gone to great effort to hide the methods they use to manipulate the data. That is highly suspicious in itself. A dishonest accountant only shows you his results and hides how he obtained them. When you see the manipulators of the data sets doing the exact same thing doesn’t that make you suspicious?

      So in the future, remember, it is their methods we are arguing about, not the results per se. Though we can never know what the “global average temperature” is at any given time, we can determine the VALIDITY of the methods these people are using to calculate it.

      But as I said, they don’t want their work examined.

      Eugene WR Gallun.

      • Eugene WR Gallun.

        No, my point was that the global average temperature increase may also be real; to be honest, skeptics have to consider that possibility. Making up stories that the data are fabricated is just a convenient excuse for ignoring scientific evidence that some skeptics do not like.

        The scientists can indeed talk about a “global average temperature” as it is an estimate. ALL temperature measurements are estimates.

  66. Paul Homewood has also done his calculations on HadCRUT’s latest version, 4.4:
    https://notalotofpeopleknowthat.wordpress.com/2015/08/05/hadcrut-cool-the-past-yet-again/

    “However, we find that similar adjustments have now been made since the original version 4.0 was released in 2012. Note that the anomaly for 2010, (the last year that appeared on version 4.0), has increased by 0.004C on the latest revision, but by 0.029C over the four revisions since version 4.0.
    All of this, of course, is designed to remove the pause. Some would call it fraud.”
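The cumulative figures in Homewood’s quote can be checked with a trivial sketch. Only the two totals (+0.004 C on the latest revision, +0.029 C since version 4.0) come from the quote; the individual per-revision increments below are hypothetical values chosen solely so they sum to those totals:

```python
# Toy check of cumulative adjustments to the 2010 HadCRUT4 anomaly.
# The four per-revision increments are hypothetical illustrations;
# only their totals (0.004 C latest, 0.029 C since v4.0) are quoted.
increments = [0.010, 0.008, 0.007, 0.004]  # v4.1 .. v4.4 (assumed)

latest = increments[-1]
since_v4_0 = sum(increments)

print(f"latest revision: +{latest:.3f} C")      # +0.004 C
print(f"since v4.0:      +{since_v4_0:.3f} C")  # +0.029 C
```

The point of the sketch is simply that small per-revision nudges, each individually negligible, compound into the larger cumulative shift Homewood reports.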

  67. “Where are the papers that prove these statistical techniques are valid?”

    They don’t exist. That is how a real science would address the issue, but this isn’t a real science. People simply need to accept that and go on to the next step. How do you battle a fraud with such political support? How did Russia battle Lysenkoism? How did we battle the Piltdown Man? How did we battle eugenics? Eventually the truth will be exposed in the next cooling cycle; Mother Nature will debunk this “science” on her own. Meanwhile, we need to start battling this fraudulent political movement with messages that resonate with the public. Highly scientific arguments simply won’t work, because the truth doesn’t matter in politics. Here is a quote that would help in a political approach to this fraud.

    Climate Change Business Journal estimates the Climate Change Industry is a $1.5 trillion escapade, which means four billion dollars a day is spent on our quest to change the climate.

    http://joannenova.com.au/2015/07/spot-the-vested-interest-the-1-5-trillion-climate-change-industry/
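The per-day figure follows from simple division; a minimal sketch, assuming (as the quoted article does) that the $1.5 trillion is an annual estimate:

```python
# Convert an annual $1.5 trillion estimate to a daily rate.
annual_usd = 1.5e12           # Climate Change Business Journal estimate (per year)
per_day = annual_usd / 365    # dollars per day

print(f"${per_day / 1e9:.1f} billion per day")  # $4.1 billion per day
```

So the commonly cited “$4 billion a day” is a rounded-down version of roughly $4.1 billion per day.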

    $1.5 trillion, or $4 billion/day, could provide a lot of needy children with lunches, build schools and hospitals, repair degenerating bridges, pay down the debt, provide water for California, and provide medicines for the elderly and needy. Best of all, it could pay unemployment benefits for unemployed coal, gas and oil workers. The coal industry and John L. Lewis built the modern Democratic Party back in the 1930s, and now they are being thrown under the bus. $4 billion/day could fund sensitivity training for cops and promote “black lives matter” events. It could fund a large number of NEA projects, AIDS research, sanctuaries for lions, scholarships for illegals, breast cancer research, and more funding for Planned Parenthood; lower tuition for college; more funding for Obamacare; more foreign aid to Africa. $4 billion/day would build a lot of inner-city schools.

    Just think of all the liberal spending priorities that are being underfunded because we are wasting all this money on a hoax. I’m pretty sure Democrats don’t like paying higher utility and gas bills any more than anyone else, so they get hurt just as much as everyone else by these idiotic policies. We need to start a campaign that lists spending priorities and highlights what isn’t getting done because Climate Change is soaking up all the money and pouring it down a rat hole. The selfishness of these Climate Hoaxsters is of biblical proportions, and the human suffering they are willing to inflict on society because of their own misguided priorities is Stalinist in scale.

    We simply need to change course in how we battle and expose this hoax. We need to start talking to the American people, give them alternatives, and talk in cost-benefit terms. We need to make this hoax personal. We need to make the American people know they have been played for fools, and that they paid for the rip-off. We need to generate anger that will send people to the ballot box.

    America needs an enemy, and the fraudulent climate hoaxsters fit that bill. They have corrupted the educational system, the sciences, the NGOs, the media, the EPA and the government. We need to unleash the IRS on the Sierra Club and others that were working closely with the Administration. Talking in scientific terms won’t work; we need a political campaign to expose this fraud. We need coal miners being interviewed. We need “experts” put on the spot to explain how glaciers melt in sub-zero temperatures, how CO2 can result in a 20-year pause, and how ice ages occurred in the past when CO2 was 4,000 ppm. We need an angry electorate.

    Where is the IRS?

    Emails appear to show coordination between EPA, environmental groups on power plant rules

    http://www.foxnews.com/politics/2015/08/04/emails-appear-to-show-coordination-between-epa-environmental-groups-on-power/

    • co2islife: Well said! Here in Europe, we have more of the ‘climate change’ lunacy at work. I have in front of me a European Commission press release from Warsaw dated 19th November 2013, obtained via the internet.
      It says:
      “At least 20% of the entire European budget for 2014-2020 will be spent on climate-related projects and policies, following the European Parliament’s approval today of the 2014-2020 EU budget. The 20% commitment triples the current share and could yield as much as 180 billion Euros in climate spending in all major EU policy areas over the seven-year period.”
      180,000,000,000 Euros.
      A colossal waste of money which could be much better spent on improving people’s lives in the real world, as you point out.

    • What a fantastic post by co2islife. Well said, that man. The only part I would change is where you say Americans; I would say all people of the developed world, as this is a global fraud.
