UPDATE – BOMBSHELL: audit of global warming data finds it riddled with errors

I’m bringing this back to the top for discussion, mainly because Steven Mosher was being a cad in comments, wailing about “not checking”, claiming McLean’s PhD thesis was “toast”, while at the same time not bothering to check himself. See the update below. – Anthony

Just ahead of a new report from the IPCC, dubbed SR15, about to be released today, we have this bombshell – a detailed audit shows the surface temperature data is unfit for purpose. The first ever audit of the world’s most important temperature dataset (HadCRUT4) has found it to be so riddled with errors and “freakishly improbable data” that it is effectively useless.

From the IPCC:

Global Warming of 1.5 °C, an IPCC special report on the impacts of global warming of 1.5 °C above pre-industrial levels and related global greenhouse gas emission pathways, in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty.

This is what consensus science brings you – groupthink with no quality control.

HadCRUT4 is the primary global temperature dataset used by the Intergovernmental Panel on Climate Change (IPCC) to make its dramatic claims about “man-made global warming”. It’s also the dataset at the center of “ClimateGate” from 2009, managed by the Climatic Research Unit (CRU) at the University of East Anglia.

The audit finds more than 70 areas of concern about data quality and accuracy.

But according to an analysis by Australian researcher John McLean, it’s far too sloppy to be taken seriously even by climate scientists, let alone a body as influential as the IPCC or by the governments of the world.

Main points:

  • The Hadley data is one of the most cited, most important databases for climate modeling, and thus for policies involving billions of dollars.
  • McLean found freakishly improbable data, systematic adjustment errors, large gaps where there is no data, location errors, Fahrenheit temperatures reported as Celsius, and spelling errors.
  • Almost no quality control checks have been done: outliers that are obvious mistakes have not been corrected – one town in Colombia spent three months in 1978 at an average daily temperature of over 80 degrees C. One town in Romania stepped out from summer in 1953 straight into a month of spring at minus 46°C. These are supposedly “average” temperatures for a full month at a time. St Kitts, a Caribbean island, was recorded at 0°C for a whole month – twice!
  • Temperatures for the entire Southern Hemisphere in 1850 and for the next three years are calculated from just one site in Indonesia and some random ships.
  • Sea surface temperatures represent 70% of the Earth’s surface, but some measurements come from ships which are logged at locations 100km inland. Others are in harbors which are hardly representative of the open ocean.
  • When a thermometer is relocated to a new site, the adjustment assumes that the old site was always built up and “heated” by concrete and buildings. In reality, the artificial warming probably crept in slowly. By correcting for buildings that likely didn’t exist in 1880, old records are artificially cooled. Adjustments for a few site changes can create a whole century of artificial warming trends.

Details of the worst outliers

  • For April, June and July of 1978, Apto Uto (Colombia, ID:800890) had average monthly temperatures of 81.5°C, 83.4°C and 83.4°C respectively.
  • The monthly mean temperature in September 1953 at Paltinis, Romania is reported as -46.4 °C (in other years the September average was about 11.5°C).
  • At Golden Rock Airport, on the island of St Kitts in the Caribbean, mean monthly temperatures for December in 1981 and 1984 are reported as 0.0°C. But from 1971 to 1990 the average in all the other years was 26.0°C.

More at Jo Nova

The report:

Unfortunately, the report is paywalled. The good news is that it’s a mere $8.

The researcher, John McLean, did all the work on his own, so it is a way to get compensated for all the time and effort put into it. He writes:

This report is based on a thesis for my PhD, which was awarded in December 2017 by James Cook University, Townsville, Australia. The thesis was based on the HadCRUT4 dataset and associated files as they were in late January 2016. The thesis identified 27 issues of concern about the dataset.

The January 2018 versions of the files contained not just updates for the intervening 24 months, but also additional observation stations and consequent changes in the monthly global average temperature anomaly right back to the start of data in 1850.
The report uses January 2018 data and revises and extends the analysis performed in the original thesis, sometimes omitting minor issues, sometimes splitting major issues and sometimes analysing new areas and reporting on those findings.

The thesis was examined by experts external to the university, revised in accordance with their comments and then accepted by the university. This process was at least equivalent to “peer review” as conducted by scientific journals.

I’ve purchased a copy, and I’ve reproduced the executive summary below. I urge readers to buy a copy and support this work.

Get it here:

Audit of the HadCRUT4 Global Temperature Dataset


As far as can be ascertained, this is the first audit of the HadCRUT4 dataset, the main temperature dataset used in climate assessment reports from the Intergovernmental Panel on Climate Change (IPCC). Governments and the United Nations Framework Convention on Climate Change (UNFCCC) rely heavily on the IPCC reports so ultimately the temperature data needs to be accurate and reliable.

This audit shows that it is neither of those things.

More than 70 issues are identified, covering the entire process from the measurement of temperatures to the dataset’s creation, to data derived from it (such as averages) and to its eventual publication. The findings (shown in consolidated form in Appendix 6) even include simple issues of obviously erroneous data, glossed-over sparsity of data, significant but questionable assumptions and temperature data that has been incorrectly adjusted in a way that exaggerates warming.

It finds, for example, an observation station reporting average monthly temperatures above 80°C, two instances of a station in the Caribbean reporting December average temperatures of 0°C and a Romanian station reporting a September average temperature of -45°C when the typical average in that month is 10°C. On top of that, some ships that measured sea temperatures reported their locations as more than 80km inland.

It appears that the suppliers of the land and sea temperature data failed to check for basic errors and the people who create the HadCRUT dataset didn’t find them and raise questions either.

The processing that creates the dataset does remove some errors, but it uses a threshold derived from two values calculated from part of the data – and errors weren’t removed from that part before the two values were calculated.
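That trap can be illustrated with a toy example (the numbers are invented, and HadCRUT’s actual screening is more involved than a simple two-sigma test): when the threshold is computed from data that still contains the blunder, the blunder inflates the spread enough to pass its own screening.

```python
import statistics

# Five monthly means; the last is a Fahrenheit value filed as Celsius.
obs = [26.1, 25.8, 26.3, 25.9, 81.5]

mu = statistics.mean(obs)    # inflated by the bad value
sd = statistics.stdev(obs)   # hugely inflated by the bad value

# Screen with a threshold derived from the contaminated data itself.
kept = [t for t in obs if abs(t - mu) <= 2 * sd]
# The 81.5 C "monthly mean" survives its own screening.
```

With these numbers, the bad value drags the standard deviation to roughly 25 °C, so a two-sigma band spans nearly 50 °C and everything, including the blunder, is kept.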

Data sparsity is a real problem. The dataset starts in 1850 but for just over two years at the start of the record the only land-based data for the entire Southern Hemisphere came from a single observation station in Indonesia. At the end of five years just three stations reported data in that hemisphere. Global averages are calculated from the averages for each of the two hemispheres, so these few stations have a large influence on what’s supposedly “global”. Related to the amount of data is the percentage of the world (or hemisphere) that the data covers. According to the method of calculating coverage for the dataset, 50% global coverage wasn’t reached until 1906 and 50% of the Southern Hemisphere wasn’t reached until about

In May 1861 global coverage was a mere 12% – that’s less than one-eighth. In much of the 1860s and 1870s most of the supposedly global coverage was from Europe and its trade sea routes and ports, covering only about 13% of the Earth’s surface. To calculate averages from this data and refer to them as “global averages” is stretching credulity.
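The hemispheric-averaging point above can be sketched in a few lines (station values are invented for illustration): because the global figure is the mean of the two hemispheric means, a lone Southern Hemisphere station carries half the weight of the “global” number.

```python
def hemispheric_global(nh_anoms, sh_anoms):
    """Global mean as the average of the two hemispheric means,
    as the method described above does."""
    nh = sum(nh_anoms) / len(nh_anoms)
    sh = sum(sh_anoms) / len(sh_anoms)
    return (nh + sh) / 2

nh_stations = [0.40, 0.35, 0.50, 0.45]            # many NH stations (invented)
base = hemispheric_global(nh_stations, [0.20])    # lone SH station
bumped = hemispheric_global(nh_stations, [1.20])  # same station with a 1 C error
# A 1-degree error at the single SH site moves the "global" figure by 0.5.
```

This is just arithmetic on the averaging scheme the summary describes, but it makes the leverage of those few early stations concrete.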

Another important finding of this audit is that many temperatures have been incorrectly adjusted. The adjustment of data aims to create a temperature record that would have resulted if the current observation stations and equipment had always measured the local temperature. Adjustments are typically made when a station is relocated or its instruments or their housing are replaced.

The typical method of adjusting data is to alter all previous values by the same amount. Applying this to situations that changed gradually (such as a growing city increasingly distorting the true temperature) is very wrong, and it leaves the earlier data adjusted by more than it should have been. Observation stations might be relocated multiple times, and with all previous data adjusted at each relocation, the very earliest data might end up far below its correct value, so the complete record shows an exaggerated warming trend.
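A synthetic series makes the mechanism plain (this is a sketch of the effect described above, not actual HadCRUT processing): the real climate is flat, an urban bias grows gradually to 1 °C, and a single step adjustment at the relocation cools the entire earlier record by the full, final-sized amount.

```python
n_years = 101  # 1880 through 1980

true_temps = [10.0] * n_years                            # flat real climate
uhi_bias = [i / (n_years - 1) for i in range(n_years)]   # bias grows 0 -> 1.0 C
recorded = [t + b for t, b in zip(true_temps, uhi_bias)]

# Relocation in 1980: the site is found to read 1 C warm, so the whole
# earlier record is shifted down by that single constant amount.
adjusted = [t - 1.0 for t in recorded]

# The earliest (uncontaminated) years are now 9.0 C, a degree too cold,
# and the adjusted series shows warming that never happened.
spurious_trend = adjusted[-1] - adjusted[0]
```

The uncontaminated 1880 reading ends up a full degree below its true value, so the adjusted record gains a 1 °C/century trend out of nothing; repeat the relocation a few times and the effect compounds.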

The overall conclusion (see chapter 10) is that the data is not fit for global studies. Data prior to 1950 suffers from poor coverage and very likely multiple incorrect adjustments of station data. Data since that year has better coverage but still has the problem of data adjustments and a host of other issues mentioned in the audit.

Calculating the correct temperatures would require a huge amount of detailed data, time and effort, which is beyond the scope of this audit and perhaps even impossible. The primary conclusion of the audit is however that the dataset shows exaggerated warming and that global averages are far less certain than has been claimed.

One implication of the audit is that climate models have been tuned to match incorrect data, which would render incorrect their predictions of future temperatures and estimates of the human influence on temperatures.

Another implication is that the proposal that the Paris Climate Agreement adopt 1850-1899 averages as “indicative” of pre-industrial temperatures is fatally flawed. During that period global coverage is low – it averages 30% across that time – and many land-based temperatures are very likely to be excessively adjusted and therefore incorrect.

A third implication is that even if the IPCC’s claim that mankind has caused the majority of warming since 1950 is correct then the amount of such warming over what is almost 70 years could well be negligible. The question then arises as to whether the effort and cost of addressing it make any sense.

Ultimately it is the opinion of this author that the HadCRUT4 data, and any reports or claims based on it, do not form a credible basis for government policy on climate or for international agreements about supposed causes of climate change.

Full report here

UPDATE: 10/11/18

Some commenters on Twitter, and also here, including Steven Mosher, who said McLean’s thesis/PhD was “toast”, seem to doubt that he was actually allowed to submit his thesis, and/or that it was accepted, thus negating his PhD. To that end, here is the proof.

McLean’s thesis appears on the James Cook University website: “An audit of uncertainties in the HadCRUT4 temperature anomaly dataset plus the investigation of three other contemporary climate issues”, submitted for a Ph.D. in physics at James Cook University (2017).

And, he was in fact awarded a PhD by JCU for that thesis.

Larry Kummer of Fabius Maximus directly contacted the University to confirm his degree. Here is the reply.


For Mr Mosher,

I don’t insult and I don’t accuse without investigation. And if I don’t know I try to ask.

(a) Data files
If you want copies of the data that I used in the audit, as they were when I downloaded them in January, go to web page https://robert-boyle-publishing.com/audit-of-the-hadcrut4-global-temperature-dataset-mclean-2018/ and just scroll down.

Or download the latest versions of the files yourself from the CRU and Hadley Centre, namely https://crudata.uea.ac.uk/cru/data/temperature/ and https://www.metoffice.gov.uk/hadobs/hadsst3/data/download.html. (The fact that the file names are always the same, which is confusing, is one of the findings of the audit.)

(b) Apto Uto not used? Figure 6.3 shows that it is used; the lower-than-expected spikes are because of other stations in the same grid cell, and the value of the cell is the average anomaly of all such stations.
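McLean’s point can be illustrated with a toy grid cell (the anomaly values are invented): the cell value is the mean anomaly of its stations, so an Apto Uto-sized spike is diluted by its neighbours rather than removed.

```python
# Anomalies (deg C) for stations sharing one grid cell; "Apto Uto"
# carries a roughly 55 C anomaly from the Fahrenheit-as-Celsius error.
cell_stations = {"Apto Uto": 55.0, "station B": 0.3, "station C": -0.1}

# The cell value is simply the average anomaly of all its stations.
cell_value = sum(cell_stations.values()) / len(cell_stations)
# ~18.4 C: far above any plausible anomaly, yet well below the raw spike,
# which is why the spikes in Figure 6.3 look "lower than expected".
```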

(c) What stations are used and what are not?
The old minimum of 20 years of the 30 from 1961 to 1990 was dropped a few HadCRUT versions back. It then went to 15 years with no more than 5 missing in any decade. HadCRUT4 reduced it again to 14.

best wishes


512 thoughts on “UPDATE – BOMBSHELL: audit of global warming data finds it riddled with errors”

  1. The last time someone did a PhD thesis which showed up the Climate Change fraud (it was on some tree ring samples, I believe) all the data magically disappeared…..

    • It is nonsense to mix land and sea data as an “average”, especially if you think this may tell you something about the supposed heating effects of IR radiation.



      Land and sea water have heat capacities which differ by a factor of two, meaning land warms faster. Adding the two to get an average biases the result to warm faster than a proper energy-based calculation.

      Anyone who does not understand that should not be working on AGW.

      As a crude fix, land temps should be weighted 50% less than SST.

      Kudos to John McLean for doing this work and managing to get it accepted as his thesis. Well done.

      • Um, not to mention that “land data” isn’t the temperature of the land at a specific point, it is the temperature of the atmosphere approximately 1 meter above the surface while “sea data” is a measurement of the water temperature at or relatively near the surface.

        This isn’t “apples and oranges”, it is more “apples and apes”.

      • It’s also nonsense to average temperature data from different locations. Intensive properties.

        • In that case, there is no such thing as temperature, as even a single thermometer is averaging the temperature of millions of individual atoms and molecules.

          • No, temperature is a useful concept to express the energy density (due to molecular motion) of a substance. Temperature is inherently quite localized, but depending on the circumstances, a single measurement can represent a large volume. You just need to keep in mind that accuracy will drop as you move further from the measurement point, and that this will vary by substance and circumstance. Unfortunately, many scientists forget, or ignore, these important truths about temperature.

          • Paul, I was responding to the writer who claimed that any averaging of temperature was inherently invalid.

            It can be done, you just have to account for the uncertainty via the error bars.
            The way the climate scientists do it, though, amounts to claiming that two readings hundreds of miles apart are inherently more accurate than either reading individually.

          • @MarkW

            Except that the metrics are _not_ average temperature. They are the mean of the highest and lowest temperatures in a 24-hour period, which is most certainly not the average. The set of means is then ‘averaged’ to provide a monthly or annual average, by which time all sense is lost.

            Further, the enthalpy of the air is continually changing with its humidity: 100% humid air, say in a misty bayou in Louisiana at 75°F, contains twice as much energy in kilojoules per kilogram as close-to-zero-humidity air at 100°F in the Arizona desert. As it is ‘trapped energy’ that the concern is about, that is what should be measured. Temperature is the incorrect metric, and averaging atmospheric temperature is a nonsense.

            The entire meteorological exercise shows that climate ‘scientists’ have a very poor grasp of metrology – possibly deliberately so.

          • Well said Mark. You’re exactly right. There IS no such thing as temperature. It’s just a unitless index of heat. The ‘units’ are just the name of the guy who came up with the particular index. Sadly AGW only ‘exists’ in temperature measurements. And likewise is totally bankrupt because it doesn’t index back to ACTUAL heat. I get that you were being somewhat facetious but your point is not totally inane. It speaks directly to the lie of the Global Warming hypothesis, while simultaneously revealing why gullible twits who don’t understand the relationship between temperature and heat buy these lies wholesale. You can lie about temperature, you can’t lie about heat. Make it about the thing, in this case heat, and you can’t cheat. Make it about the measurement instrument, in this case a thermometer that tells you a temperature (not the real thing) and you can fudge, lie and mislead all day. Which is precisely what happened. And now we know that it did happen and how. Though I logically deduced all this years ago, as did most everyone here and probably you too. Good stuff.

          • IanW, I agree completely that the record in question is a real dog’s breakfast and isn’t fit for the purpose it is being used for.

            My point is just to argue against the claim that averaging a bunch of thermometers is a scientifically meaningful exercise. It can be done, but you need to have the proper error bars on the results.

        • the intensive argument is wrong.

          Essex fundamentally misunderstands what a spatial average of temperature is.

          Even more hilarious is that Essex thinks you can’t average color.

          guess he never worked in image recognition

          • Mosh,
            What is hilarious is that you think you know what temperature is a measure of. And if you say “heat” I’ll laugh even harder. Even more comical is the idea that the midpoint between the minimum and maximum temperatures for a month is the average temperature for that month.

          • Mosher, you don’t average color. You can average only a numeric representation of color, like RGB or hue/intensity/brightness vector.

            Take red and green, for example. Their average in RGB is bright yellow when you go around the hue axis, brown when you just average the numbers, but a rather dirty gray when you have pigments to mix. None of those is well defined as such – RGB, for example, is always a subspace of human vision with a crude, arbitrary metric.

            The basis of a color space can be selected in many ways and connecting color with a number is always a bit arbitrary.

            It’s not hilarious to see you think it is hilarious.

          • Here is a better question. If the globe is warming, then it should be warming everywhere. Why spend billions on thousands of thermometers, supercomputers, bureaucrats, etc. when you could get the result by opening your door, looking at the thermometer on your porch and recording the value? If you tell me that not every place is warming, then my next simple question would be what is the MINIMUM number of thermometers it would take to say the globe is warming.

            We are going at it the wrong way. I hear more and more stations allow a better and more accurate average. Or, we need to forecast the climate for precautionary reasons. On and on. HOGWASH. If better and more measurements are needed to forecast the weather, then let the meteorologists pay for them along with the studies they generate. More and more measurements really only allow for statistics to be used to generate numbers that are inaccurate and for more and more corruption in the data. Then those same inaccurate numbers are used in models that admittedly, I say ADMITTEDLY, can’t make predictions. They can only make projections about what may or might happen.

            As I sit here I can’t shake the vision that it is all a shell game with shills on every corner grubbing up money. Watch the pea! Watch the pea! Is the hand really faster than the eye? Where is the pea, sir? That’s not to say scientists are dishonest. I suspect they simply are being driven by the same desires and beliefs that old sailors were when they refused to sail past the horizon.

          • Hey Mosh – where’s your apology? Seeing as you were flat out wrong and thereby maligned McLean, perhaps you should preface every post with an apology. It is what an honest researcher would do.

          • I guess Mosh has never tried using paint. You mix red , blue and yellow and you get shit brown. If you divide by 3 you still have shit brown.

            HadCRUFT4 is climatologists’ equivalent of shit brown.

          • The intensive argument is correct. You could learn that if you would open up a thermodynamics and statistical physics book, but I reckon climastrology is much easier to ‘comprehend’. That’s why a Niels Bohr Institute researcher had the guts to put his name on an article pointing that (and so should have any honest physicist).

          • Spatial average?
            You mean like the air temp is x and the ground temp 6 inches down is y in one place and time and y2 in another and y3 in a third , etc., etc., etc. and in some places and times the ground absorbs heat from the air and in other places and time it warms the air and then there is a similar equation for all the areas covered by water of various depths?
            And then you write all those numbers down on a universe sized piece of paper and do infinite calculations and it so happens to turn out that the answer is exactly what you need it to be to justify firebombing the world’s economy?
            Science is amazing!

        • The temperature is not just changing during the day due to the Sun coming out. It’s a completely different mass of air for which the min was measured than for the max. There are other reasons not to treat it like a simple intensive property, but that is the big one, even if looking at min and max separately. If the measurements were well spaced you could assume all the movement cancels out quite well, but it’s far from it.

          • This whole issue is why I mostly ignore “surface temperature” and pay attention to the satellite readings.

            The method inherently averages the signal from an enormous volume of the atmosphere.

            It’s also why I believe Christy’s results over almost anything that comes out of nominal surface temp data, and probably why he tested his measurements only against balloon data and not any surface dataset.

      • Earth heats up quickly and also releases heat quickly. This is not so with a water body, which heats up slowly and releases slowly, and whose maximum and minimum times are different. The basic principle of land breeze and sea breeze follows from this.

        Dr. S. Jeevananda Reddy

      • Greg Goodman says:

        “As a crude fix, land temps should be weighted 50% less than SST.”

        His source says:

        “Several of the major datasets that claim to represent ‘global average surface temperature” are directly or effectively averaging land air temperatures with sea surface temperatures.

        These are typically derived by weighting a global land average and global SST average according to the 30:70 land-sea geographical surface area ratio.”


        • Exactly, Barry, but the 30:70 weighting assumes incorrectly that land temps and sea temps are fungible. They are not, for the reasons I stated. If you read the article to the end you would realise the 50% downgrading of land temps makes that a 15:85 land/sea weighting.

      • As I understand it, the temperature used as an ‘average’ for the day is actually the midpoint between the high and low temperatures recorded, and not indicative of the true mean unless you have a very symmetrical distribution of temperatures. The midpoint is yanked left and right by the points on either end of the distribution – and these are often fleeting and vary by season, cloud cover etc. Highs and lows have some meaning in terms of local weather, and that is why they have been recorded, but if they have anything to say about the heat content of the atmosphere, I can’t see it. No wonder no one seems to have a clue what may or may not be happening to climate.
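The midpoint-versus-mean distinction above is easy to demonstrate with a made-up hourly record: a brief afternoon spike drags the (Tmin + Tmax)/2 value far from the true 24-hour mean.

```python
# 24 hourly readings (deg C): a cool day with one brief sunny spike.
hourly = [10.0] * 21 + [18.0, 30.0, 18.0]

midpoint = (min(hourly) + max(hourly)) / 2   # the "daily average" as recorded
true_mean = sum(hourly) / len(hourly)        # the actual time average
# midpoint is 20.0 C while the true mean is 11.5 C: the fleeting spike
# pulls the recorded "average" 8.5 C away from the real one.
```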

        • DaveW:
          I have likened the GAT (or is it now the GAST?) to being about as useful as averaging lottery balls over a period in the hope they will predict the favourable numbers for the future.

      • But what about the poor Great Barrier Reef? It is subjected to local environmental phenomena. How will it know when the “Average” Global temperature reaches +1.5 – +2.0C? How will it know it is time to kick off this mortal coil?

    • I looked at HadSST3 ( the sea section of this data ) years ago.

      Temperatures actually recorded as engine-room intakes or over-the-side bucket dips were freely changed from one to the other when they did not match expected statistical quotas, i.e. if a sector did not have enough buckets, some of them would get arbitrarily changed to engine-room intake readings.

      This reveals a pervading attitude that if the data does not match what you expect, you can “correct” it. Changing the type of reading implies an adjustment, since the two are not the same.

      Proposed ‘corrections’ are compared to model output as part of the validation process. Again implying that if the data does not match the model it must be wrong.

      Adjustments for bucket measurements near the Japanese coast were validated by comparing to SST measured by Japanese fishing vessels … which used bucket measurements! i.e. Japanese buckets are fine, British buckets need correcting. This is considered part of the “validation” of the adjustments Hadley applies to the data.

      If John McLean wants details he can search for the (fragmented) exchanges I had with the Met Office’s John Kennedy in the comments below the Climate Etc. article, or drop me a note on my WP blog:

      • Slightly off piste but it may be of interest to you anyway. I was engineer for a mining company in the late 70s through to the end of the 80s. We mined coal in the UK. Part of the licence to do so issued by the NCB was a requirement to install a weather station on each site and record the readings of rain, temp and pressure. These readings were given to the NCB who forwarded them to the met office. I assume they used them.
        As far as we were concerned this was of no interest whatsoever to our business and the task of daily readings was given to the ‘chain-boy’. Usually a sixteen year old who worked for the surveyors.
        We were not the only mining contractors employed in this activity and I would estimate somewhere around a yearly average of about twenty sites across the UK for the period.
        It defies credibility if anybody thinks these figures were in any way accurate: equipment sited wherever, readings taken in every weather condition by an untrained teenager whose main interest was putting something on the paper and getting back to the warm – yet used as figures correct to a tenth of a degree.

        • Tom Malcolm

          This has been my enduring refrain since I started looking into climate change some years ago.

          The guys chucking the bucket over the side of a ship and taking a temperature reading was not the scientist on board (hah, hah) or a senior officer, it was the cabin boy or deck hand, when he had time/could be bothered. In many cases it would be judged on “is it colder/warmer today than yesterday”

          Similarly, the guy trudging out to a Stevenson screen in the wind, snow and rain was the tea boy when he went out for a ciggie. Again, if he could be bothered.

          The SST bucket measurements were largely along well plied trade routes, barely a ship would have been in the southern ocean to take a temperature in those days. And in much the same way, land temperatures were a local endeavour with no global implication so no one really cared what they were other than for academic purposes.

          Even satellite temperature observations have been fraught with problems. Calibration, drift, obsolete equipment, newer better equipment, clouds etc.

          Quite how we accept historic temperatures down to a tenth of a degree simply defies logic.

    • Add to that the incentive for the operating engineers to understate temperatures so they could justify working the engines harder than the manufacturer would recommend/warrant, and any integrity in the sea data was obliterated and replaced with a distinct downward bias – hence an apparent uptrend relative to the Argo buoy data used now.

    • Temperature isn’t even a measure of atmospheric heat content.

      The atmosphere has sensible heat—the temperature measured by a thermometer— and latent heat; the energy that was required to evaporate water and which is returned as heat when the water condenses.

      The total heat content of atmospheric air is called enthalpy. It’s measured in units of BTUs per pound (BTU/lb) or kilojoules per kilogram (kJ/kg).

      A summer afternoon in Florida, with a temperature of 90 °F, can have the same heat content as a 110 °F day in Arizona, because the air in Florida tends to have more latent heat in the form of water vapor or humidity.

      Where I live, near Los Angeles, a humid winter afternoon at 65 °F could have the same heat content (enthalpy) as a summer afternoon at 100 °F but the temperatures are 35 °F apart.

      For temperature to be a reasonable proxy for atmospheric heat content, the atmospheric water vapor content (relative humidity) would have had to be the same for every measured temperature used to compute the global average temperature. That assumption seems absurd.
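The sensible-plus-latent split described above can be put into numbers with the standard psychrometric approximation h = 1.006·T + w·(2501 + 1.86·T), where T is in °C and w is the humidity ratio in kg of water vapour per kg of dry air. The temperatures are from the comment; the humidity ratios below are illustrative guesses, not measurements.

```python
def moist_air_enthalpy(t_c, w):
    """Specific enthalpy of moist air in kJ per kg of dry air.

    t_c: dry-bulb temperature in deg C
    w:   humidity ratio (kg water vapour per kg dry air)
    Sensible heat of the dry air plus latent and sensible heat of the vapour.
    """
    return 1.006 * t_c + w * (2501.0 + 1.86 * t_c)

# 90 F humid Florida afternoon vs 110 F dry Arizona afternoon.
h_florida = moist_air_enthalpy((90 - 32) / 1.8, 0.020)   # ~32.2 C, muggy
h_arizona = moist_air_enthalpy((110 - 32) / 1.8, 0.005)  # ~43.3 C, dry
# The cooler but more humid air carries more energy per kg.
```

With these assumed humidity ratios the 90 °F Florida air comes out near 84 kJ/kg against roughly 57 kJ/kg for the hotter, drier Arizona air, which is the commenter’s point: temperature alone ranks them backwards.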

      • I have always thought this too.

        Having said that, I thought something like ‘wet bulb temperature’ (or something like that – I’m definitely no expert) was defined to fix this problem. If it does not, then yes, air temperature itself is utterly useless as a metric.

        • Zig Zag

          Temperature is not “useless” but it is only one of the two parameters required to get a meaningful metric, which is the energy content.

          We cannot say temperature is useless on its own; it has value, for example to indicate when freezing will take place, or to forecast melting. This is the current case in Alberta where farmers are on tenterhooks hoping for melting and ten days above zero. Something like 40-60% of the crops are in the fields and they have a foot of snow over them. It is a huge, potentially expensive issue. It started snowing in September, hard, and has not melted since. Massive losses loom. Bankruptcy threatens.

          This is what we can expect during a significant downturn in temperature, not enthalpy. Hunger follows cold, not heat (as much).

      • Temperature alone can’t tell you what the heat content of a volume of gas, liquid, or even a solid is. This is because temperature is a measure of energy density. This is why the temperature of a gas decreases when you lower the pressure – the density of the gas is lower, and so is the energy density (temperature). Because of the non-linearity of most substances around their phase change points, it is not a simple calculation to determine the total amount of energy in a mass. And this is even more difficult when you have a mix of different substances, unconstrained, and at a continuum of pressures.

        • Paul Penrose,

          You make some excellent points. Climate scientists seem to assume that these factors all average out, but I have never seen a detailed treatment of the subject that lends any credence to that assumption.

          I once asked Richard Lindzen why temperature is used as a metric for global warming since it doesn’t even measure heat content, which is all we’re interested in with regard to AGW. He said averaging temperature is like averaging all the numbers in your phone book, i.e. meaningless. But he didn’t otherwise address my core question other than to agree with my understanding of temperature and enthalpy.

          A few months later Dr. Lindzen was addressing Congress and made the same point that I had made to him (temperature is not a measure of heat content).

          Kevin Trenberth got all hot under the collar and said (yelled) something about the Clausius-Clapeyron relation, but nothing that amounted to a convincing rebuttal.

          If the world were in a petri dish, it would reach some equilibrium temperature, and the water vapor content in the air above the surface would be in accordance with the Clausius-Clapeyron relation. But the real world is much more complicated, with water existing in all three phases, in varying amounts, and with turbulent air/vapor and water flows.

          It seems to me that the main thing wrong with the surface-air temperature record is that it measures a parameter that is meaningless with regard to the AGW debate.

          It’s astonishing that so many scientists could have a debate, for so many decades, over the temporal and spatial variations in a meaningless parameter.

          Nevertheless, it does seem pretty obvious that the world has warmed since the little ice age. I suspect the argument will continue at least until it starts to cool again, which could be decades.

        • Hell, something a lot of people don’t get is that we can’t even measure temperature directly anyway – we use proxies like the expansion of a liquid or a solid relative to a reference point we assume is accurate. Hard to imagine that something as fundamental to science as temperature has never been directly observed. On the matter of heat capacity, yes, that’s something I’ve been saying for years, only to be met with blank stares. Explaining that air molecules themselves are actually in the thousands of degrees earns scorn, with no amount of explanation about compression or density getting through. Basically, most people’s attitude is that any questioning of anything by anyone Not Qualified, as they see it, is clearly mad. No further inquiry required.

          • Karlos said
            “Explaining that air molecules themselves are actually in the thousands of degrees”

            They aren’t hot as in temperature. What you are referring to is the amount of energy contained therein, which is related to mass by Einstein’s famous equation E = mc^2, where E is energy output, m is mass, and c is the speed of light.

            Karlos, the measurement of temperature has been done accurately since the invention of the mercury thermometer in 1714 and many equations in physics depend on temperature as an independent variable. Physics has enough problems without YOU questioning the use of thermometers.

        • It is even worse than that.
          Enthalpy = Internal energy + (pressure * volume)

          The atmosphere does not have a constant volume, nor is the pressure the same at any 2 altitudes. However, you can’t measure enthalpy directly because you cannot measure the internal energy directly. The best you can do is measure the change of enthalpy, if you can measure the heat gained or lost from the system and the work done by or on the system.

          The ideal gas law PV = nRT, on the other hand, is only applicable to a system that has a definite boundary, whether it is open or closed. However, it can be used as a loose approximation to figure out the average temperature at planet surfaces as long as the pressure is over 10 kPa.

          Temperature anomalies would be a much better indicator of real changes to the earth’s climate if we could be sure that, on the same date in different years at any particular place, the temperature should not vary due to natural causes. We know that that is false, so the real reason anomalies are used is to infill geographical areas (that have no temperature stations) with the anomalies of nearby temperature stations. GISTEMP defines “nearby” as within 1200 km.
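The infilling idea described above can be sketched in a few lines. Everything below is a toy illustration: the station list is invented, and the real GISTEMP procedure uses distance-weighted averaging over all stations within the radius rather than just the nearest one.

```python
# Toy sketch of anomaly infilling: a location with no station borrows the
# anomaly of the nearest station within some radius (GISTEMP allows up to
# 1200 km). The station records here are invented for illustration.
import math

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km via the haversine formula."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def infill_anomaly(lat, lon, stations, radius_km=1200.0):
    """Return the anomaly of the nearest station within radius_km, else None."""
    best = None
    for s_lat, s_lon, anom in stations:
        d = distance_km(lat, lon, s_lat, s_lon)
        if d <= radius_km and (best is None or d < best[0]):
            best = (d, anom)
    return None if best is None else best[1]

stations = [(51.5, -0.1, 0.8), (48.9, 2.4, 0.6)]   # (lat, lon, anomaly degC)
print(infill_anomaly(50.0, 1.0, stations))          # borrows the nearest anomaly
```

The obvious weakness, and the point of the comment, is that a single borrowed number is being treated as representative of everywhere within a 1200 km circle.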

      • This is exactly what happens with urban-heat island effect and rural-cold island effect.

        Dr. S. Jeevananda Reddy

      • Thomas: You raise an interesting point: Which is more relevant, sensible heat or total heat? And relevant to what?

        For radiative cooling (W = σT^4), only the sensible heat term matters.

        For human comfort, meteorologists have devised a composite scale (“real feel” temperature) that contains both measures. We use air conditioning to cool and lower humidity. For some reason, we feel most comfortable with an air temperature 15 degC cooler than our internal temperature (which is warmed by biochemical reactions). Most species have adapted to a particular environment. The physical properties (especially fluidity) of the lipid bilayers that surround all cells are critically dependent on temperature. (See Alkenone temperature proxy). Chemical and biochemical reaction rates modestly depend on temperature, but proteins denature when the temperature gets too high. Aquatic species don’t care about latent heat.

        The rate of evaporation is proportional to wind speed and the “undersaturation” of the atmosphere, not directly to temperature.

        Climate change is important. We wouldn’t want to be living during the last ice age. Why would total heat content be a better measure of climate change than temperature alone? We have better data about temperature change, so one would need a good reason to switch to a different metric.

        • Frank,

          You wrote, “Why would total heat content be a better measure of climate change than temperature alone?”

          Total heat is not necessarily a better metric of climate change but it is the only metric that can tell us if CO2 forcing is causing heat to accumulate in the system. Temperature alone will not tell us if heat is accumulating because temperature is not a measure of atmospheric heat content.

          • Thomas: Thanks for the reply, which makes scientific sense. However, heat from the putative radiative imbalance created by rising GHGs is accumulating in:

            1) the sensible heat of the atmosphere (temperature), but often only measured at 2 m over the land. SSTs are used to predict the sensible heat in the atmosphere over the ocean, which is not measured directly. Satellites and radiosondes measure warming at all altitudes, but orbital drift has damaged the validity of satellite data. Radiosonde technology has changed a lot and that data has been subject to a great deal of processing. UAH uses radiosonde data to correct for orbital drift.

            2) Latent heat (of water vapor) in the atmosphere.

            3) Melting of glaciers and ice caps and changes in seasonal snow cover.

            4) A little heat is raising the temperature of the land.

            5) Warming of the ocean. The vast majority of heat (ca 95%) from the putative radiative imbalance caused by rising GHGs is supposed to be accumulating in the ocean and the goal of the ARGO floats is to measure that change. The skeptic Roger Pielke Sr was a big proponent of the ARGO program to measure ocean heat uptake. From a practical point of view, one could forget about 1), 2), 3) and 4) above and simply focus on ocean heat content, which has been about 0.7 W/m2 over the last decade.

            As best I can tell from the sources linked below, there is about 2 g/cm2 of water vapor in the atmosphere and it has been growing at a rate of about 5%/decade over the past two decades. (The water in cloud droplets has already released its latent heat, so total column water is an inappropriate measure.) Latent heat is about 2500 J/g, so the column holds about 5000 J/cm2. The 5% change is 250 J/cm2 per decade, or 25 J/cm2/year. That is 250,000 J/m2/year. With 31.6 million seconds/year, that is 0.008 J/m2/s, or 0.008 W/m2. That is about 1% of the rate of heat accumulation in the ocean.


            The atmosphere weighs about 10,000 kg/m2 and has a heat capacity of 1 kJ/kg/K. That makes 10,000 (kJ/K)/m2. If the atmosphere is warming about 0.17 K/decade, that is 1,700 kJ/m2/decade, or 170 kJ/m2/yr, or 170,000 J/m2/yr, or 0.0054 J/m2/s, or 0.0054 W/m2. So the increases in sensible and latent heat in the atmosphere appear to be similar in magnitude, and each about 1% of the increase in heat in the ocean. So you are right to be worried about the increase in latent heat vs sensible heat, but both are trivial compared with ocean heat. (:))
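For anyone who wants to check the arithmetic in the two estimates above, here is the same back-of-envelope calculation in code, using the round numbers straight from the comment:

```python
# Back-of-envelope check of the two atmospheric heat-accumulation estimates
# above (same rough figures as in the comment).
SECONDS_PER_YEAR = 31.6e6

# Latent heat: ~2 g/cm2 column water vapor, ~2500 J/g, growing ~5%/decade.
column_vapor = 2.0 * 1e4        # g/m2  (2 g/cm2, 10^4 cm2 per m2)
latent = 2500.0                  # J/g
growth_per_year = 0.05 / 10.0    # 5% per decade
latent_flux = column_vapor * latent * growth_per_year / SECONDS_PER_YEAR
print(f"latent heat accumulation ~ {latent_flux:.4f} W/m2")

# Sensible heat: ~10,000 kg/m2 of air, cp ~ 1 kJ/kg/K, warming ~0.17 K/decade.
air_mass = 10_000.0              # kg/m2
cp = 1000.0                      # J/kg/K
warming_per_year = 0.17 / 10.0   # K/yr
sensible_flux = air_mass * cp * warming_per_year / SECONDS_PER_YEAR
print(f"sensible heat accumulation ~ {sensible_flux:.4f} W/m2")
```

Both come out a little under 0.01 W/m2, i.e. roughly 1% of the ~0.7 W/m2 attributed to ocean heat uptake, as the comment says.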

      • Thomas questioned why we use sensible heat (temperature) rather than sensible+latent heat (total heat) as a measure of “climate change”, but didn’t offer any compelling reasons why temperature was an inappropriate measure.

        A related question is whether we should expect there to be an important difference between trends in temperature and total heat. For total heat to go down as temperature rises, absolute humidity must go down. Saturation vapor pressure rises about 7% per degC, so relative humidity would fall even further. AFAIK, absolute humidity is not falling.
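The roughly 7% per degC figure can be checked against the Magnus approximation for saturation vapor pressure. This is a standard empirical formula, though the exact coefficients vary slightly between sources:

```python
# Quick check of the ~7%/degC rise in saturation vapor pressure using the
# Magnus approximation (empirical; coefficient sets differ slightly by source).
import math

def e_sat(t_celsius):
    """Saturation vapor pressure over water, in hPa (Magnus formula)."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

rise = e_sat(16.0) / e_sat(15.0) - 1.0
print(f"increase per degC near 15 degC: {rise * 100:.1f}%")
```

Near typical surface temperatures this gives an increase in the 6-7% per degC range, consistent with the figure quoted above.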

        • It should be obvious that it is the enthalpy that is the significant climate variable, not temperature, since temperature is only part of the energy content, and a variable part at that.

        • Frank,

          I did offer a compelling reason. Saturation vapor pressure rises as temperature rises, but absolute humidity rises only if there is water present. Absolute humidity is very low in the Sahara but very high around the Persian Gulf.

          I also didn’t know if there is any trend in absolute humidity so I looked it up. According to NOAA it’s going up.


          I emailed NOAA to ask where I can download the specific humidity data from. With that I can make a chart that shows annual changes in total heat (enthalpy). I’ll post it here on wattsupwiththat.com if I’m successful.

    • Concerning sea surface temperatures.

      Note that in the decades before the advent of the significant coverage of the oceans by the buoy networks, the ocean temperature data was acquired in the main by ship’s engine room water inlet temperature data or by measuring the temperature in buckets thrown over the side on a rope.

      Ship’s engine cooling water inlet temperature data is acquired from the engine room cooling inlet temperature gauges by the engineers at their convenience; there is no protocol for the recording of the temperatures.

      There is no standard for the location of the inlets, especially with regard to depth below the surface, nor for the position of the measuring instruments in the pipework, nor for the time of day the reading is taken; and the position of the temperature sensor may be anywhere between the hull of the ship and the engine cylinder head itself.

      The instruments themselves are of industrial quality; their limit of error per DIN EN 13190 is ±2 deg C for a class 2 instrument, or sometimes even ±4 deg C, as can be seen in the tables here: DS_IN0007_GB_1334.pdf. After installation it is exceptionally unlikely that they are ever checked for calibration.

      It is not clear how such readings can be compared with the readings from buoy instruments specified (optimistically, IMO) to a limit of error of tenths or even hundredths of a degree C, or why they are considered to have any value whatsoever for the purposes to which they are put, which is to produce historic trends apparently precise to 0.001 deg. C, upon which the spending of literally trillions of £/$/whatever is decided.

      But hey, this is climate “science” we’re discussing so why would a little thing like that matter?


  2. I wonder what his professor was thinking when he agreed that this would be a fit subject for a PhD. Had he no understanding of the political implications of work in this area?

    How long do you think he will remain in tenure? And for how long will Robert Boyle Publishing retain this file on their sales database?

    • According to James Delingpole, as reported at NOTALOTOFPEOPLEKNOWTHAT, his supervisor was Peter Ridd. So, the answer to your question is that he has already lost his tenure. Peter Ridd, formerly of James Cook University, was the scientist who pointed out that the Great Barrier Reef was not dying, a blasphemy for which the punishment was loss of his post.

    • Geezer, McLean’s paper will be there as long as it needs to be. We thought these results were important, and David and I set up Robert Boyle Publishing with John. We put in a lot of work to make sure these results would not disappear.


      Yes, his supervisor was Peter Ridd, famously sacked for saying that “the science was not being checked, tested or replicated” and for suggesting we might not be able to trust our institutions. John started the audit about 8 years ago. He’s done the last two years unpaid. He could have stopped after finding 26 issues.

      The wildest scandal is that some of these errors are so obvious – a monthly average hotter than the hottest day on Earth — and no one even noticed. A high school geek could write code to find that one.

      40 years after scientists switched to Celsius, HadCRUT still hasn’t got there. That’s 40 years of couldn’t-care-less from the same people who tell us what light globes to use.
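The “high school geek” check really is only a few lines. A minimal sketch with invented sample records (the 83.4 value is made up for illustration; the bounds are the commonly cited world-record surface temperature extremes):

```python
# Minimal sanity check of the kind described above: flag any monthly mean
# outside physically plausible bounds. Sample records are invented.
HOTTEST_DAY_EVER_C = 56.7    # highest reliably recorded surface temperature
COLDEST_EVER_C = -89.2       # lowest recorded surface temperature

def implausible(monthly_means):
    """Yield (station, year, month, value) for impossible monthly averages."""
    for station, year, month, value in monthly_means:
        if value > HOTTEST_DAY_EVER_C or value < COLDEST_EVER_C:
            yield station, year, month, value

sample = [
    ("Apto Uto", 1978, 4, 83.4),     # invented, but obviously impossible mean
    ("Somewhere", 1978, 5, 26.1),    # plausible
]
print(list(implausible(sample)))
```

A monthly *average* above the hottest single day ever recorded can never be valid, so a filter this crude already catches the class of error described.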


      • I strongly support the auditing of data to remove errors, and sceptical analysis of all scientific papers to uncover faults. Particularly in Climate Change, where the impact on individuals and society is so large.

        But the Climate Change scientists do not support this, have a lot of money available and no interest in behaving either fairly or legally. How well is Robert Boyle Publishing equipped to handle court cases intended to bankrupt you?

        • Climate science is the only science I’ve ever heard of that’s always right…
          even when they are proved to be wrong and have to correct it, they go right back to claiming they were always right

        • “Dodgy Geezer October 7, 2018 at 9:13 am
          I strongly support the auditing of data to remove errors…”

          If, by “auditing the data to remove errors”, you include data verification, on site equipment testing and certification at installation and regularly afterwards, metadata validation, methods evaluation and certification, etc. etc.; then I agree with you.

          Data can get disqualified, but should never be directly “adjusted”. Adjustments should be kept in separate files along with explicit metadata.

          • Agreed. You can’t “adjust” temperature readings that are deemed incorrect by any metric unless you can substitute another actual reading at the same place at basically the same time. Anything else isn’t “data,” it’s just guesswork. The response to any supposed inaccuracy or bias in the data should be to stretch error bars, not CHANGE the instrument readings.

      • A lot of data errors are blatantly obvious when you look at them, and trivial to write automated checks for, but you might not think of them unless you encounter them doing an audit or (maybe) are familiar with data collection procedures. Now that we know some of what to look for, it should be possible to automate the checks and find more obviously invalid data.
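One such automatable check, suggested by the audit’s finding of Fahrenheit temperatures reported as Celsius, might look like the sketch below. The thresholds are my own rough assumptions, not anything from the audit; real QC would compare against each station’s climatology rather than fixed limits.

```python
# Heuristic sketch (assumed thresholds): flag readings that are absurd as
# Celsius but become sensible once interpreted as Fahrenheit.
def looks_like_fahrenheit(value_c, plausible_max_c=50.0):
    """Flag a reading that is absurd as Celsius but sensible as Fahrenheit."""
    as_celsius_from_f = (value_c - 32.0) * 5.0 / 9.0
    return value_c > plausible_max_c and -40.0 <= as_celsius_from_f <= plausible_max_c

print(looks_like_fahrenheit(78.0))   # 78 degF is 25.6 degC: likely a unit slip
print(looks_like_fahrenheit(31.0))   # a hot but possible Celsius reading
```

A flagged value would then be queried with the supplying country rather than silently “corrected”, in line with the comments above about disqualifying rather than adjusting data.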

    • I just ordered my copy. Within a minute the computer at my credit union called to inquire about some suspicious charges.

      After some identity verification steps, it listed five transactions; I reassured it that they were all legitimate and it went away happy.

      It didn’t have the decency to explain what the red flags were. Perhaps it knew that Robert Boyle Publishing is a new company; perhaps I made the first ever transaction there from the DCU; perhaps it knows I’ve never set foot in Australia; perhaps someone is pressuring the computer to keep an eye on this Robert Boyle fellow.

      No biggie. However, it might be a good idea to be near your phone when you place your order.

      Perhaps someone is keeping an eye on this McLean fellow already.

      • I had similar problems with my CC transaction. I replied to an automated Text Message from my bank confirming the transaction was made by me and then the charge went through. It worked out. Now I just need to make sure I was not charged multiple times, because it took multiple attempts for it to go through.

        Glad to support the author. I’m looking forward to digesting the information.

        • Lewis P Buckingham
          October 7, 2018 at 1:28 pm

          I can confirm no problems here in NZ. Quick download with lots of good background information and graphs.

      • For what it is worth, I had no problem ordering, and then downloading, the paper.

        I used a [UK] credit card, and have – so far – had nothing to suggest I am under head-exploders’ surveillance.


        • No problem getting the paper here in Red State Arizona. Haven’t had a chance to read it all – I’m scanning websites to get the overall picture because I’m super busy. So far it looks like a blockbuster.

      • What country is Robert Boyle Publishing located in? If outside the US, that might explain why some US banks are cautious. Also, this report is the only thing they sell. They appear to have just started business and this paper is their first product. The lack of business history might trigger some banks and maybe the banks in the US are more cautious than in some other countries.

        • Mosh: It looks as if McLean never located HadCRUT4’s raw data AFTER quality control, which should have eliminated most of the problems cited above.

          “No open data. no open code. no science.” is a reasonable slogan. The question is who the slogan applies to.

        • You mean, like this open source code that recently produced climastrological propaganda about Earth going Venus or something?


          How does it feel to learn that computer models you have blind faith in have fundamental errors in them? Does it not matter because they’re written by the mighty MIT postdoc gods or something? Or does it just not matter if the errors support your cargo cult science dogma?

        • There are commercial agreements in place regarding the source of some data.
          We are unable to find out which data is covered by these agreements, so therefore we can’t give you any of it.
          Also, why should I give you my data, when all you want to do is find mistakes?

          Go on Mosher, I DARE YOU to say the above is unacceptable AND also suggest that HADCRUT is acceptable.

    • Dodgy Geezer,
      “I wonder what his professor was thinking when he agreed that this would be a fit subject for a PhD. Had he no understanding of the political implications of work in this area?”

      I think you have not really understood the implications of his results. He is saying, in the first place, that it cools the past too much, and in the second place, that data prior to 1950 is not reliable due to poor coverage. So if data prior to 1950 (i.e. prior to significant human influence through CO2, according to the IPCC) is not reliable, and the procedures are, in addition, cooling the past too much, then it could be claimed that the NATURALLY CAUSED warming prior to 1950 may have been exaggerated. Which could make the alarmists claim that our impact is much greater than previously thought. I can foresee a conspiracy here to remove the naturally-caused warming pre-1950 as a result of this audit.

      • Nylo, you must be new here. All Dodgy Geezer is saying is that both the professor and the pupil are extremely courageous men. The blowback from this report will be enormous.

        • Why do you think that I don’t understand it? I understand it very well, and I am explaining why it may not be so. The audit can be used against skeptics like me who believe that a significant part of the warming is most likely natural. And if it can be used against us, it’s good for alarmism, nothing to punish the author for.

  3. I’ve been reading WUWT comments and articles for years saying exactly what your essay points out. Not to be too trite, but what’s new?

    About the only database you can put any faith in is the UAH and even it is an indirect way of measuring temps. But no real issues here. Just publish numbers to the tenth or hundredth of a degree, including extremely high detail colorful maps in bright reds and purples, and darn they look like the real deal.

    I can’t ever remember an error bar on anything put out. Probably because publishing a number like 26.1 ± 3.0 just won’t get the message across.

    • 1) It’s not so easy to ignore an actual detailed PhD study of the subject — compared with the ease of ignoring commenters on a “Denialist” website. This study can be used by any skeptic to bolster his reasons for skepticism in an argument.

      2) I thought the explanation of the way they adjusted a relocated site’s data, creating artificial cooling in its past, was new — I haven’t seen that before, at least — and it explains that mystery. Clearly it is incorrect. If they were to un-adjust that data and re-adjust it more realistically, magically several tenths of a degree of warming would disappear.

      • More than that, TD… real people on the ground measure a 10 degree or more difference from UHI, yet they adjust for UHI by only 1-2 degrees

        …but then they can claim the adjustments lower the temp

    • While I am not arguing in support of the HadCRUT4 data set (I have not read McLean’s analysis yet), I will at least point out that HadCRUT4 comes with lower and upper 95% confidence intervals along with an explanatory paper at https://www.metoffice.gov.uk/hadobs/hadcrut4/HadCRUT4_accepted.pdf. The difference between L95 and U95 in January 1850 is 0.801 degrees, and in August 2018 it is 0.448 degrees. Over the last few decades the range seems to have fluctuated seasonally from about 0.22 degrees to about 0.45 degrees.

      • Randy,
        Those confidence intervals are, at best, estimates. Their “ensemble” technique is novel, but untested by professional statisticians, so I don’t think you can give it much weight. I also found this little gem in the paper:

        “This model cannot take into account structural uncertainties arising from data set
        construction methodologies. It is clear that a full description of uncertainties in near-surface temperatures, including those uncertainties arising from differing methodologies, requires that independent studies of near-surface temperatures should be maintained.”

        I don’t see how you can cite any kind of confidence intervals without taking these things into account.

    • “Yes sir it really is a bombshell and just in time for the IPCC Christmas Party.”

      ^A miracle has happened.^

  4. Not much of a surprise there.
    For at least 60 or more years the Met Office was using an incorrect formula to calculate the CET, the world’s longest set of temperature data. The Met Office corrected a long-standing error in compiling annual data from daily and monthly temperatures only after I alerted them to it in early August 2014, suggesting a method of recalculation which they have now adopted. Subsequently, from 01/01/2015, the Met Office recalculated the annual values for the whole set of data going back 350 years.
    For more see:

    • Difficult to see how the Met Office were getting CET wrong for 60 years, when they only started maintaining it in the 70s.

          • Makes no difference; the Met Office used the incorrect formula year after year, decade after decade, for half a century, and most likely they would still be doing so; there is no excuse for it.

          • By “incorrect formula” you mean the simplification of treating all months of equal length when averaging the annual figure?

            Yet you say it makes no difference whether it was for 60 or 40 years, or whether it was the Met Office or Professor Manley who committed this inexcusable mistake.

          • But you keep making the “sloppy” accusation that the Met Office were giving slightly inaccurate annual figures for over 50 years. They couldn’t have because they weren’t publishing CET 50 years ago.

          • The Met Office is the caretaker of the CET data, and it was, and is, their duty to the public to present that data in the most accurate form they could muster, regardless of whether that is today, a decade ago, or half a century ago.
            Are you actually suggesting that prior to 2015 the Met Office was not required to know that not all months of the year have the same number of days?
            No serious person could defend half a century of erroneous calculations by a multimillion-pound public institution financed by hard-pressed taxpayers, including four decades of my however modest contributions.
            Mr Bellman, if you happen to be here as an apologist for the Met Office’s sloppy work, and what you have shown above is your very best, you are not doing well, are you?

          • No, I’m trying to explain that the MO could not have been making this “mistake” for 60 or 50 years, because 50 years ago the CET was not produced by them. I have no idea how long they used the slightly simplified formula for calculating averages. I don’t know when they started giving the data to 2 decimal places. The current online page only goes back to 2011.

            When Manley published his final version of CET in the mid 70s, there is no indication of whether he weighted annual values by length of month. I’d expect he didn’t, as that would have made the process more time-consuming. But it is largely irrelevant, as he only gave annual averages to 1 decimal place.

            I have never worked for the MO and am not apologising for any simplifications they made in calculating the annual values. I just don’t think it is a serious problem, as the differences are minor and completely irrelevant to any long-term analysis of the data.

          • Thanks, I enjoy these chats myself.

            I’m well aware of the links to CET data thanks, but I note that you have so far failed to produce any evidence that the Met Office have anything to do with Manley’s original reconstructions. Therefore I still wonder if you accept that your opening statement that “For about at least 60 or more years the Met Office was using incorrect/wrong formula to calculate the CET” is wrong, or your later claim that “the Met Office used incorrect formula year after year, decade after decade for a half a century”.

            You are joking, aren’t you?
            What? Are you suggesting that the Met Office front desk receptionist is the one who was calculating the CET annual data?
            It was Manley, Parker, Legg, Folland, etc, they are all responsible for using “incorrect formula year after year, decade after decade for a half (or more) of a century”!
            I don’t think you will be on their Christmas card list, since your defence of the MO has badly misfired. As Mrs. May would say ‘a bad defence is worse than no defence at all’.
            good night, see you some other time, some other place.
            with best of regards to you

          • You seem to continue to miss my point, or maybe I’m missing yours.

            Professor Gordon Manley, the inventor of the CET, had nothing to do with the Met Office (apart from working for them for a year in the 1920s). The Met Office had nothing to do with his CET, published in 1953 or 1973.

            It’s largely irrelevant whether Manley in his 1973 paper calculated annual averages as a weighted average or not, as he only gave the figures to one decimal place.

            My only “defense” of the MO has been to point out that they couldn’t be guilty of a 60-year error when they had nothing to do with the tables until 40 years before the error was found.

  5. If only trees, preferably bristlecone pines, grew in the ocean then we would have a really reliable way of measuring past temperatures of 70% of the earth’s surface.

    • What? Mainstream media to skip a politically inconvenient piece of work? /sarc

      The data is riddled with errors. That’s why it needs adjustments. And only someone very biased in their presumptions would make the adjustments support the foregone conclusions.

  6. “… (HadCRUT4) has found it to be so riddled with errors and “freakishly improbable data” that it is effectively useless.
    My, oh my, imagine that. All those papers, projections, studies, etc. based on this dataset over the years, just USELESS.

    • I’m sure Nick will be along at any moment, explaining how the folks maintaining HADCRUT remove these outliers effectively and completely, so there is nothing to see here.

      • “explaining how the folks maintaining HADCRUT remove these outliers effectively and completely”
        OK. This is no BOMBSHELL. These are errors in the raw data files as supplied by the sources named. The MO publishes these unaltered, as they should. But they perform quality control before using them. You can find such a file of data as used here. I can’t find a more recent one, but this will do. It shows, for example
        1. Data from Apto Uto was not used after 1970. So the 1978 errors don’t appear.
        2. Paltinis, Romania, isn’t on that list, but seems to have been a more recently added station.
        3. I can’t find Golden Rock, either in older or current station listings.

        • But they perform quality control before using them.

          That’s a reasonable explanation. Still, removing significant noise from the underlying data, or trying to adjust it, introduces uncertainties. Furthermore, there are more issues highlighted, such as massive extrapolation of temperatures from small samples of records, an almost complete lack of historical records from “down under”, and “city adjustments” which may have introduced a significant and artificial cooling effect for many records.

        • “But they perform quality control before using them”

          Unfortunately, the quality control process is also riddled with obvious errors, so the net result is probably worse than the data.

          Also, “we do QA later” doesn’t explain why obvious errors are still in the source data.

          • “Also, “we do QA later” doesn’t explain why obvious errors are still in the source data.”
            Because it is source data. People here would be yelling at them if they changed it before posting. You take the data as found, and then figure out what it means.

            ” the quality control process is also riddled with obvious errors”
            You don’t know anything about the QC process.

          • “Because it is source data. ”

            That doesn’t explain why THE SOURCE didn’t correct the source data. Classic hiding the pea.

            “You don’t know anything about the QC process.”

            I know the quality control process is also riddled with obvious errors, so the net result is probably worse than the data.

          • This is just the internal crap: When a thermometer is relocated to a new site, the adjustment assumes that the old site was always built up and “heated” by concrete and buildings. In reality, the artificial warming probably crept in slowly. By correcting for buildings that likely didn’t exist in 1880, old records are artificially cooled. Adjustments for a few site changes can create a whole century of artificial warming trends.
            “It seems like neither organization properly checked the land or sea temperature data before using it in the HadCRUT4 dataset. If it had been checked then the CRU might have queried the more obvious errors in data supplied by different countries. The Hadley Centre might also have found some of the inconsistencies in the sea surface temperature data, along with errors that it created itself when it copied data from the hand-written logs of some Royal Navy ships.”
            And this is just the internal stuff. As many, many analyses have pointed out, what gets done later in the name of QC is even worse, and every such step tends to add its own new error in the course of addressing other error. The error bars should fill the graph to be accurate.
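            The “whole century of artificial warming” claim above can be illustrated with a toy calculation. This is only a sketch with invented numbers, not the actual HadCRUT4 adjustment procedure:

```python
# Illustrative sketch (invented numbers, not actual HadCRUT4 adjustments):
# a station moves away from a slowly urbanising site in 1980. The common fix
# subtracts the full present-day urban offset from ALL pre-move readings,
# even though the urban warming crept in gradually.

urban_offset = 0.6  # assumed present-day urban heat effect, degrees C

# Suppose the true urban contamination ramped up linearly from 0.0 in 1880
# to the full 0.6 C in 1980.
years = list(range(1880, 1981))
creep = [urban_offset * (y - 1880) / 100 for y in years]

# Step adjustment: subtract the full offset from every pre-move year.
# The residual error is (true contamination) minus (offset removed).
step_adjusted_error = [c - urban_offset for c in creep]

# In 1880 the record is cooled by the full 0.6 C even though the real
# contamination then was zero; by 1980 the error has shrunk to nothing,
# so the adjustment itself manufactures a century-long warming trend.
print(step_adjusted_error[0])   # error introduced in 1880
print(step_adjusted_error[-1])  # error introduced in 1980
```

The point of the sketch is only that a one-step correction for a gradual effect leaves a spurious trend behind, whatever the actual numbers are.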

          • ““Because it is source data. ”

            That doesn’t explain why THE SOURCE didn’t correct the source data. Classic hiding the pea.

            “You don’t know anything about the QC process.”

            I know the quality control process is also riddled with obvious errors, so the net result is probably worse than the data.


            You know no such thing.

            raw source data is riddled with errors, but skeptics LOVE THEIR RAW DATA.

            the data suppliers should never touch raw data.

            A) Data suppliers can apply QC and then document how they QCed. This is typically done with flags.
            B) Downstream users may apply their own QC and document it. NOAA does this; we do this.
            C) In the grand scheme of things, QC versus no QC makes less than 1% difference.

          • As usual, Mosh demonstrates that he has no intention of arguing honestly.

            Yes, we do insist on seeing the raw data, because very often the methods used to “clean” the data aren’t legitimate.

            Nobody ever claimed that the raw data was pure, just that it was better than the data post cooking.

            Regardless, the point is that this is an excellent example of why the climate scientists are so reluctant to provide raw data. It’s that bad.
            The claim is that special statistical techniques can change this sow’s ear into a silk purse. That’s BS.

          • “You know no such thing,”

            On the contrary, I know the quality control process is also riddled with obvious errors, so the net result is probably worse than the data. Yes, the raw data is also terrible, but that doesn’t excuse an even worse QC process. Other published datasets are also rife with this kind of thing — I wrote my own little 4GL because I couldn’t believe SG’s claims about how many in the record temperatures were being generated by models.

            Unfortunately the field is rife with this kind of behavior. It’s not just these shenanigans, re-running Hansen 1988 with new data and claiming the results vindicate his failed predictions, or the various Climategate plotting against skeptics. It’s just activist slop everywhere, and it’s an affront to good science.


        • The original certified data is rotting in a landfill in the Netherlands. What we have is adjusted data with no way of knowing what was adjusted. Which they admit was adjusted. That adjustment has since been adjusted several times. Additionally, the researchers are hiding behind ‘the work is confidential and not available to the public’, pay walled or not. They’ve created a moving wave of higher temperatures. The current data is correct but the former data always has to be corrected. In 30 years, today’s data will have to be adjusted. Why bother, it’s a belief system.
          All the arguments against AGW are based on assuming that the data is correct, and AGW cannot stand up to that either. AGW as a theory should have died a death 10 years ago. Only belief in outdated incorrect models keeps AGW alive.

          • “What we have is adjusted data with no way of knowing what was adjusted. Which they admit was adjusted. That adjustment has since been adjusted several times. ”

            Of course you do – it’s in the original files direct from national Met services.
            And no it hasn’t been adjusted several times – the continuing myth of the US GISS adjustments due to inadequate TOBs by weather observers and correct homogenisation to make apples = apples.
            The biggest “adjustment” is to warm the past and reduce the global warming trend.
            Or was that some kind of “double-bluff” conspiracy? (sarc)


        • Nick, I feel sorry for you. You believe what the CRU says despite the HadCRUT4 (and CRUTEM4) contradicting it.

          Not only have I shown a fully worked Apto Uto situation (see http://joannenova.com.au/2018/10/hadley-excuse-implies-their-quality-control-might-filter-out-the-freak-outliers-not-so/#comment-2060139) but I’ve also looked at the Golden Rock Airport data and worked through the calculations that incorporate it into the CRUTEM4 data. I haven’t documented it for wider reading because, unlike Apto Uto, the Golden Rock HadCRUT4 grid cell contains a lot of ocean, and since the SST figures are correct the relationship between the HadCRUT4 and CRUTEM4 values isn’t as obvious.
          From some notes in front of me right now, in December 1984 only Golden Rock Airport (St Kitts), Raizet (Guadeloupe) and Melville Hall Airport (Dominica) reported data. The anomalies from these stations (i.e. Dec 1984 values minus December averages) are respectively -23.4, 0.3 and 0.3, the average of which is -7.6, which matches what’s in the CRUTEM4 grid cell for that month. The HadSST3 value is -0.72 (i.e. the sea surface temperature anomaly) and therefore HadCRUT4 is -2.43, which is quite a large spike compared to normal values. If Golden Rock had been 0.3 then CRUTEM4 would have been just 0.3 and HadCRUT4 probably about 0.6 rather than -2.43.
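          The arithmetic above is easy to check in a few lines. Note that the land fraction of about 0.25 is back-solved here from the quoted -2.43; it is an assumption for illustration, not a weight taken from any HadCRUT4 documentation:

```python
# Back-of-envelope check of the December 1984 grid-cell numbers quoted above.

station_anoms = [-23.4, 0.3, 0.3]  # Golden Rock, Raizet, Melville Hall
land_anom = sum(station_anoms) / len(station_anoms)
print(round(land_anom, 1))         # -7.6, matching the CRUTEM4 cell value

sst_anom = -0.72       # HadSST3 anomaly quoted above
land_fraction = 0.25   # assumed: back-solved from the quoted -2.43

# Simple land/ocean area-weighted blend for the cell
blended = land_fraction * land_anom + (1 - land_fraction) * sst_anom
print(round(blended, 2))           # close to the quoted HadCRUT4 -2.43
```

One station reporting -23.4 is enough to drag the whole cell to a large negative spike, which is the commenter’s point.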

      • I’m seated and ready with my popcorn eager to watch Nick Stokes chewing on
        this giant chunk of indefensibility:

        The dataset starts in 1850 but for just over two years at the start of the record the only land-based data for the entire Southern Hemisphere came from a single observation station in Indonesia. At the end of five years just three stations reported data in that hemisphere.

        I’m betting he will attempt to wash it down with liberal amounts of Hansen’s Coherence! 😉


        • CRUTEM4 (and HADCRUT) are shown with uncertainties. By the time you get back to 1950, they are large (about 0.5°C). SH uncertainty is over 1°C. I personally don’t use HADCRUT back to 1850, and I’m sure many don’t. But that is no reason to suppress the information.

          • Nick Stokes said
            “I personally don’t use HADCRUT”

            However the IPCC does and that is the problem.

          • Nice chomp!

            So, HadCRUT – when shown with uncertainties – is edible but completely unpalatable! 😉

            Your response to a paper concerned with an audit of uncertainties is to point out that the dataset has uncertainties!
            That is an answer, not a good one but it is an answer!

            Given that governments are deciding energy and climate policies on claims based on the HadCRUT4 dataset, an independent audit of its accuracy and uncertainties was undertaken using data from 1850 to 2018. – John McLean

            But isn’t it even worse than you admit because the global mean is calculated as a land area weighted average – with the emphasis on land area!

            That lone Southern Hemisphere (SH) reporting station in 1850 only grew to nine by 1860, and to date 52.7% of all reporting grid cells have only 1, 2 or 3 observation stations, a figure that hasn’t improved beyond the minimum set in 1974.

            Clearly, the paper points out that the uncertainties are greater than acknowledged and that prior to 1950 HadCRUT4 is of limited value, because for almost all of the period from 1850 to 1950 the coverage of the Earth’s surface was less than 50%.

            This would seem important because the IPCC use the period 1850 to 1900 as a baseline.

            Interestingly, the paper also found that from 1850 to 1900 SH temperature anomalies were disproportionately represented by particular latitude bands and that this unequal contribution to the spatial coverage, did not stabilise until 2015, unlike the Northern Hemisphere which had stabilised in about 1950.

            The bottom line is that the uncertainties described in the paper preclude the calculation of any meaningful trend in the database as a whole.

          • “However the IPCC does and that is the problem.”

            Only if you think that because we don’t know everything (precisely) then we know nothing.
            If that’s the case then we will get nowhere in anything.

          • They are shown with “station” and “sampling” uncertainty. I didn’t see one for “quality control uncertainty” but maybe that was because it was larger than the graph it applied to.

          • A grand total of nobody has suggested that the information be suppressed. We just point out that the data is not fit for using to determine the world’s temperature.

          • OK, dumb question interval.

            Is there a recognised standard like ISO 9001 for data QC across the scientific world?

          • Yes they are Nick, but I think you’ll find that those uncertainties are calculated from the available data (e.g. 2 standard deviations from the mean).
            This method is only appropriate if the sample is representative of the whole, such as might be the case with political polls prior to elections, when a sample of let’s say 5,000 produces a result that is a pretty good estimate of tens of millions of votes.

            The problems exposed by the HadCRUT4 data audit are very extensive and we can’t say that they are all evenly balanced. While the error margin is probably rather evenly distributed between positive and negative for most issues, the “daylight savings” issue suggests that lower mean temperatures will result, the lack of adjustment for urbanised stations that closed would mean an upward bias, and the flawed site-move adjustments run right through the entire record and excessively lower earlier temperatures.
            On top of that, the composite of two independent error margins is calculated in quadrature, not by simple addition. With the many different, and not necessarily independent, errors exposed by the audit, the combined uncertainty would probably be larger still.
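            As a quick illustration of quadrature versus simple addition (the numbers are illustrative, not taken from the HadCRUT4 error budget):

```python
import math

# Combining two error margins: quadrature applies when the errors are
# independent; simple addition is the worst case for fully correlated errors.

components = [0.3, 0.4]  # two 1-sigma uncertainties, degrees C

in_quadrature = math.sqrt(sum(c ** 2 for c in components))
simple_sum = sum(components)

print(in_quadrature)  # 0.5 -- appropriate for independent errors
print(simple_sum)     # 0.7 -- worst case if the errors are fully correlated
```

The quadrature result is always the smaller of the two, which is why the independence assumption matters so much when quoting error bars.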

  7. I have said it before, but relying on data from cabin boys told to chuck a bucket over the side of a ship in the 1850s to judge SST, and expecting the tea boy not to be sent out in the wind, rain and snow to check a Stevenson screen instead of the scientist in charge, is simply a belief too far.

    Not to mention that SST would have been routinely measured along well worn trade routes, not in the Southern Ocean or large parts of the Pacific.

    Satellites couldn’t really be relied on in their early days, as there were calibration issues, breakdowns etc., and Argo buoys (unsurprisingly) didn’t conform to the alarmists’ expectations so have been quietly ignored.

    You’d have thought they’d have got it by now.

    • Indeed, given the scale of the ocean, even current levels of measurement are like looking at a single hair and claiming you know everything not only about the elephant it came from, but about the rest of the herd and the land it lives in.

      Let us be fair: the standard of much in this area is in reality ‘better than nothing’, and that is ‘settled science’ in action.

    • But – multiple temps along one well-traveled shipping route can cover a lot of ocean when “homogenized” over 1200km/sarc.

    • HotScot,

      Before the Panama Canal, the Southern Ocean saw a lot of traffic, but with its terrible WX, I doubt that many good observations were made of SST between South America and Antarctica.

      However the British coaling station at Sandy Point, today’s Punta Arenas, Chile, might have kept decent records.

      Ships stopping for water and provisions at Valparaiso after rounding the Horn were so important to Chile’s economy that its navy supported Colombia’s war against the Panamanian separatists. But neither Chile nor Colombia could keep Teddy Roosevelt from pushing through the canal.

  8. I can predict with a 99% probability that this story will not be covered by main media outlets.

      • 99% of papers are rarely covered by the media.

        On the other hand ground breaking papers on topics of interest to the media usually are.
        Unless the ground being broken is the ground the media has been standing on.

    • CBC’s feeble and lone attempt at science reportage is called Quirks and Quarks. That show almost exclusively interviews female PhD candidates. This story fails the CBC publication test in that the researcher isn’t female, the work doesn’t align with the official CBC CAGW narrative, and there is no gender preference angle. There isn’t a snowball’s chance in hell that CBC will cover it.

  10. Presumably this work will never be peer-reviewed, or published in an authoritative journal.

    So it will be completely ignored by Climate Change scientists, and rejected for consideration by the IPCC….

  11. The issue is of course to consider what makes ‘good data’, and for that you need to consider what the purpose of the data is, not its scientific or empirical validity.

    Once you understand that, you see how ‘bad data’ in an empirical sense becomes ‘good data’ from an ‘agenda’ point of view. And let us be fair: if your livelihood depends on results going a certain way, then when they go that way it’s tempting not to ask too many questions as to how valid they really are.

    Add to that the reality that this is an area that fails basic experimental design, for it lacks both the range and the accuracy to cover that which it claims to measure, and you can see the problem, despite the claims of ‘settled science’. And that is before we get to ‘adjustments’, which follow a pattern by ‘luck’ that should see the people behind them at the gaming tables of Vegas, where with that kind of ‘luck’ they could earn millions.

  12. “St Kitts, a Caribbean island, was recorded at 0°C for a whole month”

    This is a totally reasonable number. I was there once and bought an ice cream. (Ice cream is very expensive in the Caribbean.)
    But for a better island average, a thermometer in a frozen foods cooler should probably be averaged out with a thermometer placed just above a Weber barbecue grill. (As the Surface Stations Project shows, the Weber grill is the favorite brand for sites reporting temperature data.)

    In any event, this shows that sometimes thermometer placement can be significant, and inside an ice cream cooler may not be representative of a tropical island as a whole.

    • “the Weber grill is the favorite brand for sites reporting temperature data.”

      Don’t be naive. That’s what the fossil fuel companies want you to think.


  13. Bombshell? How many previous posts here and elsewhere have been headlined as bombshells? I’ll believe Global Warming/Climate Change will take a hit when the several billion in annual funding
    is reduced. Until then to paraphrase Admiral David Glasgow Farragut, the response from the scientists who have bet their careers on Climate Change will be, “Damn the bombshells full speed ahead!”

    • Armor matters.

      Even though mid- to late-19th century cruisers typically carried up-to-date guns firing explosive shells, they were unable to face ironclads in combat. This was evidenced by the clash between HMS Shah, a modern British cruiser, and the Peruvian monitor Huáscar. Even though the Peruvian vessel was obsolescent by the time of the encounter, it stood up well to roughly 50 hits from British shells. link

      The CAGW ship is heavily armored. It can withstand many direct hits from explosive shells. Yes, we have bomb shells. No, they don’t have the effect we might hope for.

      • commieBob … 8:24 am

        Thanks for reading my post – and the history lesson. Yes, it’s going to take more than science to sink the Climate Change juggernaut. Government funding is the mother’s milk of this insanity. Hit them in their pocketbooks and just maybe the hordes will start to seek employment elsewhere.

        • Science has nothing to do with it. CAGW is a religion. Debunking an article of faith is completely impossible.

        • There is also an evil rot at the heart of our universities. Eventually something will have to be done about it.

      • “The CAGW ship is heavily armored. It can withstand many direct hits from explosive shells. Yes, we have bomb shells. No, they don’t have the effect we might hope for.”
        Absolutely, this applies to many other subjects besides CAGW.
        Anyone who thinks otherwise should look at the facts versus the propaganda in the recent supreme court hearings and opinions for another example. Same tactics applied.
        I was once naive and thought facts and data will rule the day, boy was I wrong.

      • Huáscar was captured by Chile in the War of the Pacific (1879–83), and is now a floating museum in Talcahuano harbor in the Greater Concepción metro area.

        During the 1877 action against the Royal Navy in the Peruvian Civil War, she was the first ship ever attacked by self-propelled torpedoes.

    • mwhite,
      How about a clue as to what we are supposed to be looking for on the page for which you provided a link?

      • I believe you are supposed to be looking for an article which talks about the study mentioned in this post. With the point being you won’t find such an article.

    • The BBC don’t allow sceptics on air these days: to them, the science is settled even if it hasn’t been audited.

  14. This is excellent work and confirms many suspicions previously raised about quality of the “global” long term temperature measurements (or should I say adjustments). It would seem the only way to get a real understanding of what may have happened historically to global temperatures is to identify as many long term single site records without known artifacts from urbanization, station changes etc. and look at their trends over time. If there was global temperature change then the average trends for those stations should reflect both the direction and magnitude of that change. CET is a good example and as far as I know it is not very alarming.

    • Since 1970 CET has been warming at a rate of over 2°C / century. Faster than any global surface set.

    • What information?

      I presume that Dr McLean is not an accepted Climate Scientist, and he has no peer-reviewed paper published in an acceptable journal. So we can see no reason to read anything written by him.

      The BBC have already stated that ‘climate deniers’ must not be given any publicity. I assume that this audit counts as denial? Ergo – he will sink without a trace.

      • Dodgy Geezer

        But to be a little more positive, the hits on CAGW just keep coming and there are lots of young journalists and politicians waiting to pounce, and make a name for themselves.

        Public opinion is waning, the scandal of wasted money is becoming obvious, the under-performance of Germany’s energy policy is being recognised, the withdrawal of renewable subsidies is coming home to roost and the disregard of anything climate related by the Chinese by planning and building ~1,200 coal fired power stations is making people sit up and think.

        No one likes Trump (allegedly) yet his policies are seeing America grow, whilst the Paris agreement and the IPCC are largely recognised as excuses for a knees up for the bureaucrats we Brits hate so much (and most other countries). The Kavanaugh fiasco is recognised as a political hatchet job by the left that’s failed miserably and will, I’m sure, engender yet more support for Trump.

        In short, we sceptics just can’t stop winning and the levee will eventually break when someone recognises there’s a name to be made by vilifying the green blob for all the damage it’s done to the world.

      • I am not sure about that, DG. There is a 2009 paper from a J D McLean in the reasonably well esteemed AGU journal J Geophys Res-Atmospheres, with one coauthor who is at JCU, so I assume that this is the same person.
        It refers to the Southern Oscillation Index, one of the topics in the McLean thesis from JCU.
        According to the publisher the paper has a number of citations and an Attention Score (whatever that is) of 70, which seems to be quite good apparently.

      • There are a lot of things piling up right now that are dissolving any credit AGW may have. While the media has collectively avoided any serious reporting of counter-narrative news, global crop losses this year due to unstable and unexpectedly persistent cold weather, and consequent increases in the prices of staples like wheat, soy, and barley, will have many people asking where the warming went. The US actually saw “winter storm warnings” in the northwestern tier of states in late summer (https://agfax.com/2018/08/16/wheat-outlook-global-production-down-sharply-u-s-exports-lifted/). Serious losses have been experienced in both (all?) hemispheres. Russia has had serious crop reductions, and so have Australia and South Africa.

    • Since RICO is a US law, whom would you bring to book and in what jurisdiction? There does not appear to be any international mechanism to address the breathtaking scope of this scam.

  15. Let us not forget that nearly 40% of land temperature readings are estimated. This in addition to the bad data.

  16. “This process was at least equivalent to “peer review” as conducted by scientific journals.”

    In my experience the examination of PhD dissertations in British and Commonwealth universities is routinely done to a distinctly higher standard than peer review for journals.

    • “is routinely done to a distinctly higher standard than peer review for journals.”

      Precisely !!

      PhD reviewing is scientific, and very thorough.

      Journal peer-review , is for journal publication.

      • No, the PhD reviewing when it comes to climate science is a joke. Tell your professor and chairman of the Atmospheric science department that you want your PhD in atmospheric science/climate science and that your research has been proved rock solid but that you don’t believe in CAGW. Your chances of obtaining your PhD will be delayed until you are programmed the CORRECT way. They will find any excuse not to give you the certificate until you demonstrate adherence to the religion.

        • The procedure at JCU requires the candidate to submit a list of potential external examiners, from which the Advisory Panel chooses two who then independently examine the thesis.

  17. Considering that climate models have failed, and continue to fail, to show what temperatures are doing, climate modelers should embrace this research and publicize it far and wide.

    They can now claim their models are NOT wrong, but that the historical data used as input was (and they are not at fault). “Our output was wrong, but we are still right.” They could then re-run the models with different data that reduces the short-term temperature increases, but keeps the longer-term, steeper upward trajectory. They may even be able to claim, “it’s worse than we thought.”

    Frankly, though, I don’t see them as sufficiently intelligent to use this gambit to stay relevant in the debate.

  18. Now we know why Phil Jones didn’t want to release his data and methods… it’s so easy to find something wrong with them.

  19. As a general reader, I found the explanation of how making site adjustments resulted in lowering older temperature records incorrectly to be one of the most interesting points. Tony Heller has been printing graphs for years now that show how local records of past temperatures have been consistently adjusted downward. If the reason for those downward adjustments can be shown to be primarily due to the obviously incorrect process described in this thesis, then that should be a major story in itself.

    But is that the case, or are there many other reasons for adjustments always seeming to cool the past? If not, someone should write a paper exposing the fraud, for fraud it would be. Anyone thinking it happened as a result of an innocent mistake or miscalculation hasn’t been paying attention the past few years.

  20. Is this the same John McLean that predicted (in early 2011) “it is likely that 2011 will be the coolest year since 1956 or even earlier” ???

    • And your point is?

      That because you cannot attack the work you attack the man, thus proving once again the Ad Hom fallacy that alarmists love so much.


      • Once again, the warmists have to re-define the language in order to try and change the subject.

        Skeptical Science? Really, is that the best you can do? Might as well quote Dr. Seuss.

      • “Not sure if you know what “ad hominem” means, but it does not mean going after what someone has said or done.”

        Correct. Other misuses of philosophical terms are “begs the question” (misused 95% of the time) and “appeal to authority” (ditto).

      • ==>Whiskey

        Not sure if you know what “ad hominem” means, but it does not mean going after what someone has said or done. – Whiskey

        What? You went straight after the man and not his argument!

        Is this the same John McLean that predicted.. – Whiskey

        The only time criticism of the person is not an ad hominem argument is if a person’s merits are actually the topic of the argument! You went straight to his credibility and that is attacking the man! You have confused fallacious reasoning with criticism. You went straight to “going after what someone has said or done” and that is the very definition of argument ad hominem*.

        You unwittingly applied a typical form of psychological priming to “poison the well” a subtle use of ad hominem to influence the views of spectators.

        *A fallacious argumentative strategy whereby genuine discussion of the topic at hand is avoided by instead attacking the character, motive, or other attribute of the person making the argument, or persons associated with the argument, rather than attacking the substance of the argument itself. – Wikipedia

      • I find it a bit odd that of the eight links provided there under “Climate myths by McLean”, only one of those links (the last one) actually references him by name.

    • That was a pretty bad prediction if so, but not sure it was the same guy. Anyways, he’d be in good company with Hansen and others…

      Hubert Lamb, Director of CRU, Sep 8 1972: “We are past the best of the inter-glacial period which happened between 7,000 and 3,000 years ago… we are on a definite downhill course for the next 200 years….The last 20 years of this century will be progressively colder.” http://news.google.com/newspapers?nid=336&dat=19720908&id=AiwcAAAAIBAJ&sjid=0VsEAAAAIBAJ&pg=5244,2536610

      John Firor, Executive Director of NCAR, 1973: “Temperatures have been high and steady, and steady has been more than high. Now it appears we’re going into a period where temperature will be low and variable, and variable will be more important than low.”

      • Given that the prediction isn’t an actual quote from McLean, but rather came from the writer of a media statement (see below); and given the oddity of the prediction; and given that the quick-and-dirty review of potentially relevant documents I did didn’t turn up any evidence to back it up anyway, I have to assume that this was simply a misunderstanding on the part of the person who wrote the media statement. (Such a situation isn’t uncommon.) I didn’t dig too deeply into it, though, so I could be wrong.

        John McLean: Statement: COOL YEAR PREDICTED: Updated with LATEST GRAPH


    • Lordy, lordy, lordy! You mean that researchers can’t test hypotheses by making predictions and seeing if they come true or not?
      And please tell us all how the many predictions made using climate models have turned out.

  21. If the climatic alarm was true, the very first task of scientists involved would have been to set up a tight grid of new stations overlapping the best existing ones and let the data flow in for the past 30 years then start to make sense of temperature, pressure, humidity etc…
    Instead, algorithms, computer models, a complete dismissal of climatologists’ and geographers’ knowledge – see Leroux versus Legras and associates – and scientactivist media campaigns replaced the search for a diagnosis, away from politics.

    • No need. The UAH satellite temperature data set is the only one that both sides trust. Everybody drools near the end of every month waiting for it to come out on the 2nd day of the next month. This dataset is now where the climate wars are fought because the alarmists don’t have any other credible data that they can point to. Eventually even the UAH dataset will crumble the alarmist sand castle as the daily tides have to always come back in.

      • An outlier, by definition, is not the most likely…


        Especially as neither RSS nor UAH is consistent with the sensor on the previous satellite that was superseded in 1998.
        UAH says the present one is the correct one; pragmatically, RSS says we don’t know and splits the difference.

        It is one instrument having taken over from the previous one instrument, in any case measuring a depth of the troposphere and missing the surface, where the majority of warming is taking place over land.

        • Anthony,

          Don’t you think that a warming surface ought to warm the troposphere?

          The GHE hypothesis supposes that a troposphere warmed by slowing down the migration of heat toward space will warm the surface. If the surface is warming before and faster than the troposphere, then the GHE hypothesis is falsified.

          • Tty,

            I was going with the official US government version of the GHE. That doesn’t mean I agree with it. In fact, I agree with Lindzen’s version of the hypothesis. But my point was that, given this view of the GHE, observations don’t support the idea that whatever warming has occurred is due to such an effect.

            Lindzen says that water vapor and other greenhouse gases elevate “the emission level, and because of the convective mixing, the new level will be colder. This reduces the outgoing infrared flux, and, in order to restore balance, the atmosphere would have to warm.”

            NASA, for example, by contrast explains the GHE thusly:

            “A layer of greenhouse gases – primarily water vapor, and including much smaller amounts of carbon dioxide, methane and nitrous oxide – acts as a thermal blanket for the Earth, absorbing heat and warming the surface to a life-supporting average of 59 degrees Fahrenheit (15 degrees Celsius). Most climate scientists agree the main cause of the current global warming trend is human expansion of the “greenhouse effect” 1 — warming that results when the atmosphere traps heat radiating from Earth toward space. Certain gases in the atmosphere block heat from escaping.”


          • The official US government and IPCC hypothesis could I guess be called retarded, since it supposes that more CO2 retards the movement of heat from the surface to space.

  22. Welcome to the Adjustocene, where if we don’t know what the historical temperatures were, we make them up. This fact has to be driven home over and over again: we don’t have a very good, reliable data set for most of the 19th century and much of the 20th century. Knowing that, it is only fitting that we accept a wider margin of error for what that hypothesized data might be, with a caveat that going forward the error bars on newer data can be somewhat tightened as we gather more accurate data from more of the surface of the earth. That means that 19th century data worldwide is speculative at best, and manufactured at worst. It really doesn’t mean much other than that we know it was still fairly cold after the previous 500 years of a cooling trend through the LIA, which we know with some certainty was much colder than any previous historical normal. That makes 1850 colder than any historic normal as a starting point for this current exercise. Adding 1.5 C to a really cold beginning doesn’t even allow for much natural variability.

    If the IPCC wants credibility, then it should at least be honest with itself about the data it does have. Plus, it would be more reasonable if the threshold for dangerous warming were set at 1950 going forward, instead of some mythical temperature from 1850 at the tail end of the LIA, one of the coldest periods in the Holocene to date. That should be noted: it was a fairly cold time in the history of the world. If we do see long-term temperatures trending 2 C higher over the next 30+ years to 2050, then that should be the basis for taking any kind of action with regard to limiting economic output of the world by limiting CO2 and other GHG production in the future, if it is demonstrated that GHGs are indeed a significant factor.

    So far in the 21st century, temperatures seem to be within an acceptable range of error; in fact, a global hiatus or pause in any significant warming in these first 18-19 years of the 21st century indicates that any temperature increase is not linear with CO2 concentrations in the atmosphere. So let’s allocate resources to collecting honest and accurate weather and climate data so that wise decision-making can be implemented in the next 30 years. It is still very early to be declaring any emergency, and it hasn’t been demonstrated that any real significant threat has been identified, other than that much of the very populous world is just not ready for any kind of normal inclement weather, which is what leads to alarmism in general. Perhaps that is where any resources should first be spent: hardening our defences to inclement weather.

  23. Holy cow you weren’t kidding. This is HUGE news. I knew the dataset was sparse in the 19th century, but that is ridiculous! There’s really no reason to trust HadCRUT until 1950 at the earliest.

    I’m sure the response will be measured and sober.

    • …I’m sure the response will be measured and sober…

      What response? This will simply be ignored.

    • Delingpole says that McLean says that of the 0.6 warming since 1950, 0.2 is likely exaggerated.

  24. John Mclean has been a WUWT guest blogger or indirectly supplied article information a number of times before and has always been instructive.

    Some of his previous contributions:

    “Reckless commitments to the Paris Climate Agreement, November 10, 2017”
    “Friday Funny: more upside down data, March 25th, 2016; through an article by Bishop Hill where John McLean asked for a lookover”
    “Hadley Climate data has been “corrected” thanks to alert climate skeptic, April 11th, 2016″

  25. Back in 2005 McIntyre posted a comment from P. Jones.

    Why should I make the data available to you, when your aim is to try and find something wrong with it.

    Well yes, we do want to look at it, and of course McIntyre was completely correct about the need to look!

  26. It has been obvious for quite a while that the temp data is not fit for climate purpose. See, for example, the essay “When Data Isn’t” in the ebook Blowing Smoke.
    Good to have yet another detailed confirmation of that basic fact.

  27. It is easy to predict the responses…

    (1) errors are minor and make no difference
    (2) there are other data sets which independently verify the temperature record
    (3) examples of errors presented show readings both too cold and too warm, which would mostly cancel out as errors often do
    (4) McLean has misrepresented his qualifications previously
    (5) McLean’s prior works were heavily criticized and/or avoided rigorous peer-review
    (6) McLean is an industry shill


    • Yes, except that #4-5 will come first. The climate activists’ first impulse is generally the ad hominem attack. Any reference to actual data comes later, if at all.

    • Michael Jankowski
      October 7, 2018 at 10:18 am

      Yes, you’re probably right.

      Alternatively they could just say… Look, there are x hundred thousand measurements in the record… of course there are a few errors (we’re only human) – we have always known that but have concluded that they average out in the end. End of story.

    • “(2) there are other data sets which independently verify the temperature record”

      And there are other accusers of Kavanaugh. There is always some other line of evidence to confirm the current one, which turns out weaker than expected. Then the new line of evidence is even sillier, but there is always another one which has not yet been DEBUNKED.

      Debunking isn’t a word from the universe of science; it applies to fake sciences, because making up stuff is pure fraud and needs debunking, not just refutation.

  28. These results mirror Steve McIntyre’s US work and results. Why would anyone expect the UK’s work to be any better?

      • HotScot,

        I was referring to the HadCRUT4 data set, which I thought is managed/controlled by the Climate Research Unit at the U. of East Anglia (in the UK). The Aussies did the great work of researching and exposing the worthlessness of that CRU dataset. My hat is off to them.

        McIntyre exposed the poor quality of the surface stations network that is used by NASA/GISS to collect data for a comparable data set.

        Apologies for confusing antecedents.


        • Steve McIntyre is, among other things, a statistician. He’s best known for debunking MBH98, aka the Hockey Stick.

          Anthony Watts, before he started this blog, was looking at the performance of Cotton Region weather instrument shelters (type of paint, wear & tear, etc.). When he discovered problems with several stations that were part of the USHCN, he started http://surfacestations.org/ to recruit assistants to document more of them (82.5% were eventually logged). The network is operated by NOAA’s NCEI; NASA’s GISS does most of the adjusting and analysis to produce its GISTEMP database.

  29. How about someone doing an audit of the fundamental, underlying concepts.

    How accurate or consistent a method someone has devised to measure the global average length of unicorn hairs is of little consequence, when the reality of unicorns is nil.

    • I am going to post soon on the original scientific paper that Hansen did in 1976 on the temperature response to doubling CO2 and other greenhouse gases. It is called GREENHOUSE EFFECTS DUE TO MAN-MADE PERTURBATIONS OF TRACE GASES. Hansen had been publishing for 10 years before that, but the year 1976 was key because in the previous decade everybody was worried about global cooling.

  30. In the same way that the Villach conference stated that they should disregard all previous historical data, perhaps we should start again and disregard all data prior to 2018, and instead only look at data from satellites, balloons and Argo buoys. Disregard all ground-based readings.


  31. There is another gross error in palaeoclimate too.

    It has been assumed that ancient mountain treelines are controlled solely by temperature. But this has resulted in impossible lapse rates, and much head-scratching, because the treelines are far too low. Nevertheless, these low mountain temperatures and high temperature lapse rates have been used in all palaeoclimate models. And these get reflected in modern models too.

    In truth, those low mountain treelines were caused by low CO2, not by temperature. So all the historic temperatures used in glacial era climate models are all wrong, and so all those models are wrong too.

    But don’t worry, the science is settled…….!


  32. Even the sea-level measurement data contain obvious errors, though coastal sea-level measurement data are certainly much better quality than the temperature data. E.g., the great 9.2 magnitude Alaska earthquake was March 27, 1964. But NOAA’s monthly sea-level measurement data for Seward, AK jumped one meter in January, 1964, rather than April. That’s obviously wrong.

    Thank you for all you do for sound, trustworthy science, Jo, Anthony, and John McLean.

  33. I just purchased the report and downloaded it. I also downloaded all of the data files used in the analysis. 7 large .zip files.

  34. The following was excerpted from a January 21, 2015 published commentary by Dr. Tim Ball, which was titled …… “2014: Among the 3 percent Coldest Years in 10,000 years?”

    Challenges and IPCC Fixes

    Every alteration, adjustment, amendment and abridgment of the record so far was done to create and emphasize increasingly higher temperatures.

    1. The instrumental data is spatially and temporally inadequate. Surface weather data is virtually non-existent and unevenly distributed for 85 percent of the world’s surface. There are virtually none for 70 percent of the oceans. On the land, there is virtually no data for the 19 percent mountains, 20 percent desert, 20 percent boreal forest, 20 percent grasslands, and 6 percent tropical rain forest. In order to “fill-in”, the Goddard Institute for Space Studies (GISS), made the ridiculous claim that a single station temperature was representative of a 1200 km radius region. Initial claims of AGW were based on land-based data. The data is completely inadequate as the basis for constructing the models.

    2. Most surface stations are concentrated in eastern North America and Western Europe and became the early evidence for human induced global warming. IPCC advocates ignored, for a long time, the fact that these stations are most affected by the urban heat island effect (UHIE).

    Read more @ http://wattsupwiththat.com/2015/01/21/2014-among-the-3-percent-coldest-years-in-10000-years/

    • The only areas that come even close to being adequately monitored are also those areas that have been the most extensively modified by humans.

  35. Is freakishly improbable the same as obviously manipulated? I’m thinking yes. Yes it is. About gd time someone calls bs bs.

  36. “…an IPCC special report on the impacts of global warming… in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty.”

    For the IPCC to have multiple objectives, as stated above, seems problematic to me. How do you “eradicate poverty” or encourage “sustainable development” without providing poor countries with the money and technology to do so? And when you promise developing nations who sign on to the Paris Accords that wealthy nations will pay them climate reparations as the climate warms, how does that not put political pressure on them to make sure their temperature data shows warming? When payments depend on the climate warming, you can expect the climate to warm. That goes doubly for climate scientists whose paychecks and grants also depend on a warming climate and the predictions of negative consequences from such warming. So what incentive is there to remove errors in the temperature data if it reduces the warming trend?

    • How do you “eradicate poverty” or encourage “sustainable development” without providing poor countries with the money and technology to do so?

      “HA”, ….. American taxpayers have been trying to “eradicate poverty in the US” for the past 55+ years by providing trillions of dollars and technology to do so …… and the percentage deemed to be impoverished nowadays is far, far greater than 55+ years ago.

      And “HA, HA”, ….. for the past 50+ years, ….. American taxpayers have been trying to “eradicate poverty” and encourage “sustainable development” by giving the Palestinians TENS of BILLIONS of dollars and technology to do so, ……. and living standards there are still about the same.

  37. “one town in Columbia spent three months in 1978 at an average daily temperature of over 80 degrees C”

    Not too bad for holiday lovers. I anticipate twofold response to those findings:

    1. Ignore as long as you can.
    2. If you cannot ignore and findings are actually finding ways into the professional community/public and are gathering attention say that averaging process operating on large numbers will nicely cancel out all those unfortunate errors. Thus, we may not be able to figure out accurately actual temperatures but with appropriate statistics we can with certainty measure a warming trend. And all is fine.
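On the “errors cancel in the average” defence: averaging does shrink zero-mean random noise, but it cannot remove a systematic error such as Fahrenheit values recorded as Celsius. A minimal simulation of the distinction (all numbers are illustrative, not taken from HadCRUT4):

```python
import random

random.seed(42)

TRUE_TEMP = 15.0      # hypothetical "true" monthly mean, deg C
N_STATIONS = 10_000

# Zero-mean random measurement noise: averaging N readings shrinks it
# roughly as 1/sqrt(N), so the mean lands very close to the truth.
readings = [TRUE_TEMP + random.gauss(0.0, 2.0) for _ in range(N_STATIONS)]

# Systematic error: every 20th station reports Fahrenheit mislabelled
# as Celsius. This biases the mean, and no amount of averaging removes it.
corrupted = [
    (t * 9 / 5 + 32) if i % 20 == 0 else t
    for i, t in enumerate(readings)
]

mean_clean = sum(readings) / N_STATIONS
mean_biased = sum(corrupted) / N_STATIONS

print(f"random noise only:      {mean_clean:.2f}")   # close to 15.0
print(f"with 5% F-as-C errors:  {mean_biased:.2f}")  # biased upward ~2 C
```

This is exactly the distinction at issue in the thread: random errors are mostly harmless in a large average, while systematic ones (unit mix-ups, one-directional adjustments) are not.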

    • Should be Colombia. The country is spelled correctly farther down.

      HadCRUT is probably less of a mess than GISS and BEST.

    • The question then becomes, what do you do once the problem has been identified.
      First check the original logs, if they still exist.
      If that doesn’t work, an honest scientist would not use the data as it is obviously flawed.
      A climate scientist would declare that we can just model the data by taking the average value for that month and then adjust that average based on what has happened at stations up to 600 miles away.
      The climate scientist would then tell you that the data thus modeled is even more accurate than the original data could have been so you don’t need to worry about error bars.
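For what it’s worth, the infilling approach being mocked here is usually some form of inverse-distance weighting of neighbouring stations. A minimal sketch of that idea (the radius, weights and coordinates are all invented for illustration; this is not the actual CRU or GISS algorithm):

```python
import math

RADIUS_KM = 1000.0   # roughly the "600 miles" figure quoted above

def infill(target_xy, neighbours):
    """Estimate a missing value as the inverse-distance-weighted mean of
    neighbouring station readings within RADIUS_KM.
    Returns None if no neighbour is in range."""
    num = den = 0.0
    for (x, y), temp in neighbours:
        d = math.hypot(x - target_xy[0], y - target_xy[1])
        if 0.0 < d <= RADIUS_KM:
            w = 1.0 / d          # simple inverse-distance weighting
            num += w * temp
            den += w
    return num / den if den else None

# Two nearby stations contribute; the one 2000 km away is ignored.
est = infill((0.0, 0.0), [((100.0, 0.0), 14.0),
                          ((0.0, 300.0), 16.0),
                          ((2000.0, 0.0), 30.0)])
print(est)  # weighted toward the closer station's value
```

The error bars on such an estimate depend entirely on how well-correlated the neighbours really are with the missing site, which is the crux of the complaint above.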

  38. Global Temperature is as long as a piece of string.

    The string of assumptions, definitions, locations, errors, manipulations, bias, presentation, accuracy, interpretation and wishful thinking, – plus a few more.

    Currently it is a bit like a tangle of knitting.

  39. I’ve followed this issue for many years now and remember from way back, probably 10 years ago, that Prof Jones of CRU said in response to a FOI request (from Warwick Hughes, I think)

    We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it.

    Am I right in thinking this is that data?

  40. Was the recent Slowdown caused by the super El Nino of 1998?

    If you take the GISTEMP temperature series, and replace the 1998 temperature anomaly with a new value that is spot on the trend line, does the Slowdown disappear?

    Warning – the results of this article will be shocking, for some people.


  41. Support John McLean financially.

    Pay for MULTIPLE copies of his ebook ($8) and DON’T forward it on unless you’ve paid for every copy you’re sending out.

    (I’ve bought 4 so far)

    • Hi StefanL,

      I really appreciate what you’ve done. It’s nice to know that people support your work.

      All the best,


  42. Just purchased a copy.
    Probably won’t read it but John M deserves our money.
    Robert Boyle Publishing has a very clunky ‘shop’.
    Attention to detail is paramount in e-commerce.
    First issue is no auto-fill and it gets worse from there.
    RBP should have a ‘friend’ (who’s never used their ‘shop’ before) have a go at buying a copy under observation (without guidance). Things don’t quite happen the way one expects!

    • Warren
      October 7, 2018 at 1:56 pm

      I found the website OK…had to fill in all my details as this was my first purchase from them but download was very quick.

  43. “…in the context of strengthening the global response to the threat of climate change, sustainable development, and efforts to eradicate poverty.”

    🎵One of these things is not like the others,
    One of these things doesn’t belong.
    Can you tell which thing is not like the others,
    Before I finish my song?🎶

  44. It is good to see James Cook University at Townsville being involved and assisting some actual scientific work for a change and not simply promoting alarmism over the Great Barrier Reef to raise funds.
    I would guess that the Hadley data had been checked by the Warmistas there to ensure that improbable data, systematic adjustments, gaps with no data, location errors, Fahrenheit temperatures reported as Celsius, and spelling errors were left in only when they promoted and supported the Alarmist-Warmist cause. Any that did the opposite would have been ruthlessly rooted out already.

  45. Given the nature of the way the data is collected from such disparate sources, these findings are not really surprising. Other technical and scientific considerations aside, this is one of the points where satellite readings score so much better. The relative costs in man-hours taken to produce the respective competing data sets would also make for interesting reading.

  46. I wonder what would be found if the called-out problem locations/date values were checked in the other ‘global’ datasets. Do they show the same artefacts?

    • The Australian paper has included it in its latest news, so it will be in tomorrow’s news. Already the leftists are trying to discredit the report.

  47. As I recall, I found similar errors in the original Minnesota temperature records when trying to analyze min, max, mean, and kurtosis.

  48. You know, you call yourselves “skeptics” but no one here has challenged the article, in spite of its obvious shortcomings. Instead, you wait for the real skeptics, Stokes and Mosher, to say something. But they don’t, letting you relax in a puddle of embarrassment.

    • Whiskey,

      Please point out to readers here the paper’s shortcomings which you find so obvious. Why wait for others to do so?


      • Isn’t it obvious. The paper fails to agree with the models. That’s proof that the paper is flawed.

      • John Tillman says: “Whiskey, Please point out to readers here the paper’s shortcomings which you find so obvious. Why wait for others to do so?”
        Why? Because why should I do work for fake skeptics? If you are really skeptics, why are you doing this “pal review”? I mean, isn’t this what you are all against? You are a bunch of fakes.

        • Why not do such work in order to give us skeptics something to comment upon?

          If the shortcomings are obvious, how hard could it be for you to do this work?


          Anyone taking the most cursory glance at HadCRU’s “data” has found the same sort of issues. Phil Jones himself has admitted that they warmed the sea “surface data” to bring it in line with the land, which their adjustments had warmed so unphysically.

    • While it’s true that Nick did stop by, his complaints were easily dealt with.
      As of the time of my post, Mosh has not posted to this thread.
      You three really need to work on your co-ordination.

      • I’m sorry, there are no notes in Stokes’s comments that relate to Bennett’s comments. So you are misrepresenting facts, a common so-called fake “skeptic” ploy. Thank you.

        • Nick didn’t address the main point of the paper — the temp data is sparse, poor quality, poor precision, heavily adjusted, and not suitable for scientific purposes. He knows all of these things are true, so he ignores them.

          We don’t know the global temp in 1850. To suggest that we do with a precision of 0.1 C is simply absurd.

  49. Boy, almost 200 comments in and no one critical of the post. Not one skeptic among you. (Not that I really ever believed that, but I regress) I should get all your names and addresses for my next paper to enter as reviewers, I would love to have peer review as wimpy as this.

    • “Boy, almost 200 comments in and no one critical of the post.”

      It’s natural for people to jump the gun, based on a first look. But it’s been only 14 hours since publication here, not enough for skeptics to screw in their loupe. Give it time.

      • I think Whiskey has “regressed” further than he/she thinks.

        Maybe not all readers could afford to pay to buy the whole article before a free link was provided in the comments. However, I’m sure Whiskey will be delighted to learn that the word “measurments” on page 203 is widely considered to be an incorrect spelling.

        • I might have given him too much credit. I thought he intentionally said “regressed” instead of “digressed” in an attempt at humor.

    • Between all the posts in the skeptic blogs about HadCRUT and other databases having data with adjustments that all made warming look worse than it was, and all text in the README file, the result is not surprising.

      The only obvious shortcoming I see is that McLean didn’t find all the errors.

    • Boy, almost 200 comments in and no one critical of the post

      so where is your critical analysis of the post? Rather than bitch about everyone else not doing the work for you, why don’t you do the work yourself? hmmm?

  50. Our minders and binders,
    Whose blinders betray,
    Be fault-finders, stem-winders—
    No truth-finders they.

  51. My question would be “so what”? Certainly it is important to audit the quality of any dataset,
    but it is hard to see what is new here. Anybody who cared could have downloaded the data
    and realised that the coverage was sparse, especially in the Southern Hemisphere before 1950.

    However, what nobody has shown is that any of these issues change the results. The HadCRUT4
    data series comes with associated errors. Nowhere does McLean show that the error estimates given
    in the official database are wrong. In addition, McLean states in his thesis that the errors do not
    suggest a systematic bias and that they are probably normally distributed. So at most the conclusion
    would be that we are less certain about the warming trend over the past century than we thought. Which
    also leads to the possibility that the earth has warmed more than we think.

    • “However what nobody has shown is that any of these issues change the results? The HadCRuT4”

      No they haven’t, as Nick Stokes explained above ….
      “OK. This is no BOMBSHELL. These are errors in the raw data files as supplied by the sources named. The MO publishes these unaltered, as they should. But they perform quality control before using them”

    • No, he stated that:

      “The variation in coverage over time is a potential source of systemic (rather than random) errors that the process of averaging cannot remove. If temperature variation trends are not uniform across the globe changes in coverage will potentially cause a misrepresentation of the global average trend. ”

      They do “change the result” because the IPCC base period 1850 – 1900 is of limited use because for almost all of the period from 1850 to 1950 the coverage of the Earth’s surface was less than 50%. It matters specifically because the all important base period has zero reliability and anything before it even less. If you think one station is a perfectly reasonable way to estimate a hemisphere before 1850 then all is fine with the world.
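The coverage point can be illustrated with a toy model: two regions warming at different rates, with only one observed before 1950. All numbers here are invented for illustration:

```python
# Toy two-region world: region A warms 0.5 C/century, region B warms
# 1.5 C/century. Before 1950 only region A has stations; afterwards both do.

def temp_a(year):
    return 0.005 * (year - 1850)   # anomaly in deg C

def temp_b(year):
    return 0.015 * (year - 1850)

years = list(range(1850, 2001))

observed = [temp_a(y) if y < 1950 else (temp_a(y) + temp_b(y)) / 2
            for y in years]
true_global = [(temp_a(y) + temp_b(y)) / 2 for y in years]

# The change in coverage injects a spurious step in 1950 that no amount
# of averaging within each year can remove:
step = observed[years.index(1950)] - observed[years.index(1949)]
true_step = true_global[years.index(1950)] - true_global[years.index(1949)]
print(step, true_step)
```

Here the observed series shows a jump of ~0.5 C in 1950 when the real year-on-year change is 0.01 C: a purely systematic artefact of changing coverage, which is the kind of error McLean’s quoted passage is describing.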

      • Scott,
        One station is clearly not a perfect way to estimate temperatures but short of constructing
        a time machine and going back in time to install more that is all we have. So the options are to
        (a) make the best estimate possible with the data we have or (b) give up and make the illogical
        leap claiming that if we don’t know the temperature in the past then the earth isn’t warming.

        • No, Percy, the answer is (c): admit that we don’t know the temperature of the past (due to all the various ways in which past data is lacking) and make damn sure we are getting good data *now*, so that in the future we can have a better understanding of the temperature. Until we have good data, we can’t leap to the conclusion that you want to. Garbage in, garbage out applies.

        • The correct answer is that while it may be warming, there’s no way we can say by how much and absolutely no way we can prove that the warming is more than would naturally be occurring.

          I really find it fascinating how the best estimate after fixing all these errors and admitting that the coverage is woefully inadequate, somehow comes out to be 10 times more accurate than the best thermometer used.

        • (b) is a better choice than (a). And it’s hardly “illogical” at all. If you don’t know the temperature in the past, you CAN’T say the earth is warming.

          If policy decisions with trillions of dollars are at stake, it’s better to do nothing and keep on as is than to cripple the global economy chasing a ghost. Now that we know the IPCC is suggesting a carbon tax as high as $27,000/ton of oil equivalent, we can see the true lay of the land. At that level, there won’t be any effect on the global economy because there won’t be a global economy.

  52. I’m happy he got his thesis out the door and I’m happy with his choice in terms of datasets, but much of this is hardly news or indeed novel. I guess what I’m saying is that I’d have liked him to have found more. We already knew much of this and so far that hasn’t stopped them, so for those hoping for a smoking gun, I think the wait continues.

  53. The IPCC AR5 found it “extremely likely that more than half” the observed warming of 0.6°C during 1951-2010 was caused by human influences. If Dr McLean is correct in his generous estimate that “observed warming” was overstated by only 0.2°C, then AGW for that 60-year period was somewhere between 0.2°C and 0.4°C.

    The IPCC’s Special Report today, says that the world has warmed by 1.0° since about 1850. If so, and if AGW continues at the same rate as previously, then the overall rise will be 1.2° to 1.4°C in 60 years’ time. The IPCC won’t have to worry about hitting the 1.5°C target during this current century.

    • Barry Brill

      “The IPCC’s Special Report today, says that the world has warmed by 1.0° since about 1850. If so, and if AGW continues at the same rate as previously, then the overall rise will be 1.2° to 1.4°C in 60 years’ time.”

      They’re not saying that though. They’re saying that the current rate is different from the rate 1850-present, because the early part of the record to 1900 is largely unaffected by AGW as is much of the first part of the 20th century. They define the ‘current’ rate of global warming as 0.2°C (+/- 0.1 °C) per decade with ‘high confidence’ (see SPM A1.1).

      Using their best estimate figure of +0.2°C/dec, they would expect to see ~1.2°C within the next 10 years and ~1.4 °C within the next 20 years. In 60 years’ time, assuming a continued +0.2 °C/dec rise, temperatures would be ~2.2 °C relative to 1880. They could be wrong, of course, but that’s what they are saying as far as I can see.

      • Could you go through your calculations in more detail? 0.2C/decade should give 0.2C for the first ten years, 0.4C for 20 years, and 1.2C for 6 decades (60 years).

        • Jim Gorman

          Jim, the warming rate of 0.2C per decade is ‘in addition’ to the warming already experienced since 1901; the so-called ‘post-industrial’ temperature rise. This is estimated by the IPCC to be ~1.0 C at present (based on linear regression using an average of the GISS, HadCRUT4, NOAA and Cowtan & Way temperature data sets).

          If you add that additional 1.0C to the values you state you’ll get the same numbers I quoted.
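A quick check of the arithmetic being discussed (the 1.0 C baseline and 0.2 C/decade rate are the IPCC estimates quoted in this sub-thread, not figures of my own):

```python
POST_INDUSTRIAL = 1.0   # warming to date since ~1850, deg C (quoted estimate)
RATE_PER_DECADE = 0.2   # 'current' warming rate, deg C/decade (quoted estimate)

def projected_total(decades_ahead):
    """Total warming relative to the pre-industrial baseline,
    assuming the current rate simply continues."""
    return POST_INDUSTRIAL + RATE_PER_DECADE * decades_ahead

for d in (1, 2, 6):
    print(f"{d * 10:>2} years ahead: ~{projected_total(d):.1f} C")
```

This reproduces the ~1.2 C, ~1.4 C and ~2.2 C figures above: the 0.2 C/decade is additional to the warming already banked, which is where the two commenters’ numbers diverged.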

          • That’s not what you indicated. You mentioned relative to 1850, not 1901. In essence you’re saying 1850 – 1900 was flat, i.e. no warming, and the atmosphere started warming in 1901.

      • When the real error margins are at least 10 times greater than trends you are claiming to measure, then you really can’t say anything meaningful about either trend.

  54. “…one town in Columbia spent three months in 1978 at an average daily temperature of over 80 degrees C…”

    I’m guessing that the anomaly then would be at least ~40°C, assuming these were summer months. It seems extraordinary that such an anomaly would by-pass any quality control filter. If it did, then I’m asking myself why no-one has noticed this until now? Also, why publish this astonishing evidence online via a website rather than in a peer reviewed journal?

    Can anyone who has downloaded this publication confirm that the current published HadCRUT4.4 files contain monthly temperature anomalies for individual stations that are in the region of 40 °C for any month, let alone for a continuous 3-month period? Are we sure these are not just the ‘reported values’ rather than the quality controlled HadCRUT4.4 output? Are we sure that stations reporting these outlandish figures were even included in the HadCRUT4.4 database and not simply discarded?

  55. As a statistician I find the concept of recording a global average temperature absurd. There were no statisticians included (nor are there yet) in the IPCC process. The misuse of statistics is a travesty beyond belief.

    • It’s an oddity that climate ‘scientists’ claim they are the only people able to judge their work, as the ‘experts’, but regard it as pointless to ask those who are experts in other areas for their input, because climate ‘scientists’ are experts in everything.

      You may not need to be any good at science, but you certainly need a planet-sized ego and the ability to talk out of the side of your mouth to work in that area.

  56. This article should be saved under “Climate Fails” for future reference. It is big, notwithstanding Stokes’ contention that everyone who uses it cleans up all the failings of HadCRUT4, which is doubtful.

  57. “It is big”
    No, it’s a big fat zero.
    “notwithstanding Stoke’s contention that everyone who uses it cleans up all the failings of HadCRUT4, which is doubtful.”

    That’s NOT what Nick says …..

    “These are errors in the raw data files as supplied by the sources named. The MO publishes these unaltered, as they should. BUT THEY PERFORM QUALITY CONTROL BEFORE USING THEM. You can find such a file of data as used here. ”

    I.e., HadCRUT4 itself does not have the errors in it.

  58. So the net conclusion from this paper and the comments is this: the government and academics lied in exchange for money power and status.

    In other news, the sky is blue (isn’t it?)…

  59. Mr Banton: “Only if you think that because we don’t know everything (precisely) then we know nothing.
    If that’s the case then we will get nowhere in anything.”

    I prefer to think about it in the following manner: to make certain judgments we need data/measurements with acceptable accuracy. Instrumental temperature records from more than a few decades ago simply do not provide sufficient resolution to decisively answer whether changes in the global average temperature of +/- 0.5 C per half-century actually happened.

  60. Yeah, good work and all that, but doesn’t such a study belong in a bachelor’s-level thesis? It’s something that could have been done as a science fair project. I guess, though, that the people who put HadCRUT together all have PhDs too. Are there any climate scientists who DON’T have a PhD? Do experts nowadays skip BScs and MScs? Steve McIntyre’s famous quote comes to mind:

    “In my opinion, most climate scientists on the Team would have been high school teachers in an earlier generation – if they were lucky. Many/most of them have degrees from minor universities. It’s much easier to picture people like Briffa or Jones as high school teachers than as Oxford dons of a generation ago. Or as minor officials in a municipal government.

    Allusions to famous past amateurs over-inflates the rather small accomplishments of present critics, including myself. A better perspective is the complete mediocrity of the Team makes their work vulnerable to examination by the merely competent.”

    – Steve McIntyre, Climate Audit Aug 1, 2013 at 2:44 PM

    • Gary,

      I could be wrong, but IMO Gavin has more or less admitted that he was saved by the convenient emergence of CACA as a lucrative thing. His degree is in math, not any scientific discipline relevant to climatology. He wasn’t good enough to get a job as an academic mathematician, so computer gaming in NYC was just the thing for him.

      Now, as a legal alien and federal employee, he can’t be gotten rid of. Which is why I advocate shutting down the corrupt conspiracy which is GISS and sending its now unemployed and unemployable former denizens to the North Pole to gather real data rather than making stuff up.

  61. If the global warming nuts had based their speculation on regions where the data is plentiful, they might have succeeded in their efforts. Areas with poor data would be excluded from the analysis. Then they could have said that ‘40% of the regions show global warming’ and gotten their way. Instead they concocted a global temperature and botched the whole affair.

  62. Too bad there is so much politicization of climate data. “Bombshell” reports like this could be used to improve or correct errors in the dataset.

  63. Simple check.

    CRU in the end uses about 5000 stations.

    These are referred to as the USED stations. Stations can be dropped if they do not have enough coverage in the “baseline period”.

    To understand what data is ACTUALLY USED you go here:


    the researcher claims:

    “For April, June and July of 1978 Apto Uto (Colombia, ID:800890) had an average monthly temperature of 81.5°C, 83.4°C and 83.4°C respectively.”

    there is NO such station in the data that is used

    800010 126 817 1 SAN ANDRES/SESQUICEN COLOMBIA 19612011 101961 3646 1136
    800090 111 742 -999 SANTA MARIA COLOMBIA 19752011 101975 3647 1138
    800220 105 755 2 CARTAGENA/NUNEZ A COLOMBIA 19512011 301951 3648 1137
    800970 79 725 250 CUCUTA/DAZA A COLOMBIA 19712011 101971 3651 1210
    801100 62 756 1490 OLAYA HERRERA AIRPOR COLOMBIA 19412000 101941 3652 1209
    802220 47 742 2547 BOGOTA/ELDORADO A COLOMBIA 19232011 101923 3655 1282
    802410 46 709 171 LAS GAVIOTAS COLOMBIA 19712011 101971 3656 1282
    802590 36 764 961 CALI/BONILLA A COLOMBIA 19482011 101948 3658 1281
    803150 30 753 439 NEIVA/SALAS A COLOMBIA 19712011 101971 3661 1281
    803910 79 726 -999 CAZADERO AP. COLOMBIA 19481970 101948 3663 1210
    803920 76 726 1235 BLONAY COLOMBIA 19512000 101951 3664 1210
    803930 49 751 1495 EL LIBANO COLOMBIA 19521970 101952 3665 1281
    803940 50 757 1400 NARANJAL COLOMBIA 19512000 101951 3666 1209
    803960 44 744 1550 TIBACUY GRANGE COLOMBIA 19522000 101952 3667 1282
    803990 13 775 1700 OSPINA PEREZ COLOMBIA 19531970 101953 3668 1281

    The reason CRU does not USE Apto Uto is that it does NOT have the required number of years
    in the base period. For CRU this is 1951-1980, and a station MUST HAVE 20 of those 30 years.
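
    That qualification rule is easy to state in code. A minimal sketch, assuming each station is described by a simple list of years with valid data; the 1951-1980 window and the 20-of-30 requirement are taken from this comment, and the function name is illustrative:

```python
# Sketch of the baseline-qualification rule described above: a station
# is used only if it has valid data in at least 20 of the 30 years of
# the base period (1951-1980, per the comment).

BASE_YEARS = range(1951, 1981)   # 30-year base period, inclusive of 1980
MIN_YEARS_REQUIRED = 20

def qualifies(years_with_data):
    """Return True if the station has enough coverage in the base period."""
    available = set(years_with_data)
    covered = sum(1 for y in BASE_YEARS if y in available)
    return covered >= MIN_YEARS_REQUIRED

# A station reporting only 1971-2011 (like several in the Colombia list)
# has just 10 years inside 1951-1980, so it would be dropped:
print(qualifies(range(1971, 2012)))   # False
print(qualifies(range(1948, 2012)))   # True
```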

    But there is a way to use this data if you don’t use anomaly periods.

    OH LOOK! QC flags the data


    Take back this jerk’s PhD.

    • What is with you guys and misdirection? The audit found 70 areas of concern.

      .. a few people who know what they’re talking about… – Philip Schaeffer

      Yeah, yeah, they attacked one single issue! One down, 69 to go.

      Ever heard of the law of small numbers? It’s another name for secundum quid: the fallacy of hasty induction, generalization from the particular, illicit generalization, the blanket statement, leaping to conclusions, etc.

      You really should look into the weakness of the fallacy of the lonely fact!

      • Why do you think it is that they found this, but none of the so-called real skeptics here did?

        Were you all not looking, or is it an issue of technical ability? Can you point to anyone else here who is skeptically assessing the accuracy of this paper and its conclusions?

        • Read my next comment and my reply to Mosher; they didn’t find anything real! It was misrepresentation of the pertinent facts… again!

    • ‘stations can be dropped if they do not have enough coverage in the ‘baseline period”’

      and replaced by what?

      It’s a very easy game to drop data that does not support you and add in ‘model data’ which does, but that’s not ‘science’; it’s marketing, straight out of the ‘nine out of ten cats’ approach.

    • And what is your PhD in Mosh?

      if we are taking back the credentials of jerks, yours should be at the head of the queue.

    • Has anyone canvassed the potential for transcription and conversion errors in historical data? Historical temperature measurements in the British system, for example, would have been in Fahrenheit. Other countries’ observers may have used Fahrenheit or Celsius; who knows whether the high-80s records from Colombia were actually Fahrenheit numbers that were never converted to Celsius?
      I would place no credibility on historic temperature records, given the total lack of quality control in data recording. It’s rubbish!
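
      One cheap sanity check along those lines: a value that is impossible as a monthly mean in Celsius, but plausible once the same number is reinterpreted as Fahrenheit, is a candidate for an unconverted record. A sketch, where only the 81.5 figure comes from the Apto Uto example quoted above; the plausibility thresholds are illustrative assumptions, not anything from the audit:

```python
# Flag monthly means that are physically implausible in Celsius but
# plausible once the same number is read as Fahrenheit - the pattern
# suspected above for the Colombian records in the high 80s.

def looks_like_unconverted_fahrenheit(value, max_plausible_c=50.0):
    """Heuristic: impossible as a monthly-mean temperature in degrees C,
    yet plausible after an F-to-C conversion of the same number."""
    if value <= max_plausible_c:
        return False                      # already plausible as Celsius
    as_celsius = (value - 32.0) * 5.0 / 9.0
    return -40.0 <= as_celsius <= max_plausible_c

# 81.5 "degrees C" at Apto Uto converts to a tropical-looking 27.5 C
for v in (81.5, 83.4, 27.5):
    print(v, looks_like_unconverted_fahrenheit(v))
```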

    • I feel sorry for you. You have shown yourself gullible enough to believe what the CRU, home to the Climategate emails, says.
      The demonstrable fact is that obvious errors, including those for Apto Uto, have found their way into the HadCRUT4 dataset (and for that matter into the CRUTEM4 dataset).

  64. Poor guy.

    One check and his PhD is toast.

    Now some of you will pay for this report. But I won’t, because he failed the simple requirement of posting his data and code. And more importantly, he points to data

    THAT CRU DOESN’T USE!! For fuck’s sake, skeptics.

    CRU requires data in the period of 1950-1980. That is HOW they calculate an anomaly.

    And look: in 30 seconds I checked ONE of his claims.

    None of you checked.

    you spent money to get something that FIT YOUR WORLD VIEW

    you could have checked. but no.

    gullible gullible gullible

    • Pages of back-slapping and cheering, and a few comments about how Stokes and Mosher will be along with their usual derision… but does any of them actually bother to look at the study skeptically?

      All the usual carry-on, and here we are again, as usual, with you two actually examining and testing what was put forward, while the others, who didn’t investigate, cheer for the study and sneer bitterly at the few people who know what they’re talking about and actually bothered to investigate for themselves.

    • Again with the deliberate misdirection!

      CRU requires data in the period of 1950-1980. That is HOW they calculate an anomaly. – Steven Mosher

      He wasn’t talking about the calculation of the anomaly, was he, Steven!

      He was talking about the calculation of the normals and the standard deviations from which the anomaly is later derived.
      And there were two periods over which the long-term average temperatures and standard deviations are calculated for this location, the first from 1961 to 1990 and the second from 1947 to 1988; neither of which is the period 1950-1980.

      The author specified that his concern was for the inclusion of outlier locations – Apto Uto in this case – in the CRUTEM4 grid cell “Normals”:

      The concern at this point is the inclusion of outliers in the calculation of the long-term average temperatures or of the standard deviations. Outliers present in this subset of the data will widen error margins in long-term averages, distort temperature anomalies and, for standard deviations, potentially lead to the inclusion of further outlying data in the data record at other times.
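
      For concreteness, the effect described in that passage can be sketched numerically. This is an illustration with made-up annual April means for a tropical station; only the 81.5 outlier comes from the thread, everything else is assumed:

```python
# Sketch of how a single outlier inflates the "normal" (long-term mean)
# and the standard deviation, thereby distorting later anomalies.

import statistics

def normal_and_sd(values):
    """Long-term mean ("normal") and sample standard deviation."""
    return statistics.mean(values), statistics.stdev(values)

# Illustrative April means, ~27 C each year, plus one
# unconverted-Fahrenheit outlier (81.5) slipped in.
clean = [27.1, 26.9, 27.3, 27.0, 27.2]
dirty = clean + [81.5]

clean_mean, clean_sd = normal_and_sd(clean)
dirty_mean, dirty_sd = normal_and_sd(dirty)

# The one outlier drags the normal up by roughly 9 C and blows up the
# standard deviation, so any later QC gate based on that sd would
# accept further outlying values.
print(round(clean_mean, 1), round(clean_sd, 2))
print(round(dirty_mean, 1), round(dirty_sd, 2))
```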

      I love the smell of alliteration first thing in the morning:

      gullible gullible gullible – Steven Mosher

      Fire, ready, aim! 😉

    • Dear Steven Mosher:

      You say “he failed the simple requirement of posting his data and code.”

      You should be even more concerned about such failures in the climate establishment, which is promoting policies which could potentially require trillions of dollars in spending.

      Are you indicating you would be willing to vigorously support requests for data, associated code, etc. from researchers whose findings support the “climate disaster in the future” narrative? And would you be willing to testify in court, such as in the cases in Virginia and Arizona, regarding the critical importance of making such data available?

      • I thought his data was HadCRUT4. Isn’t that publicly available?
        What code? He visually inspected the data and reported problems with it.

        It really does seem that Mosh is just phoning it in these days.

        • This is exactly what I was thinking, Mark. The data is publicly available already; he didn’t modify it, from what I can find in the report.

          He may have written some code, but it would be pretty basic stuff for anyone familiar with statistical analysis. Furthermore, the code didn’t do calculations against the data to come up with some other number to report; he just looked for outliers, missing data, and adjustments that don’t make sense.

          I try not to be mean towards people like Mosher when they post here. In fact I look forward to their posts, because I learn as much or more from the replies and conversations that ensue (though the arguments are getting more than a bit repetitive these days). However, on this occasion I think it’s safe to say this was a swing and a miss.

    • Steven Mosher, forward this fine research of yours on to the IPCC; I think you’re a shoo-in for AR7.

    • Language, Mr Mosher!
      Despite what you claim (and what the CRU says), I can show that the obvious errors in the Apto Uto data are included.

    • “Has it ever been used?”
      Well, that seems to be a question that John McLean, PhD, did not bother to investigate, nor did his supervisor (nor any of his supporters here). But this 2011 post-QC data listing shows the station had its data truncated after 1970. And then, as Steven says, for use in a global anomaly calculation as in CRUTEM4, the entire station failed to qualify because of lack of data in the anomaly base period. That is not exactly a QC decision, but it doubly disqualifies the station from HadCRUT4.

    • Francis,

      Those hourly peaks and valleys in enthalpy are probably due to thunderstorms. Absolute humidity usually falls during a thunderstorm because cold air from the upper atmosphere falls with the rain and because cold rain dehumidifies the air it falls through.

  65. The usual suspects have come out with the usual line about how the errors have been found and fixed.

    The point is that lost data can’t be recovered. Sure they can make guesses about what the data should have been. However a sane person would never consider such guesses to be the same quality as real data.

    And this is on top of the many quality control problems with the sites themselves.

    The idea that we can use this data to figure out what the temperature of the earth is today, to within 0.1 C, is ludicrous. The idea that we can use the same data to figure out what the earth’s temperature was 100 years ago, with equal accuracy, is out-and-out insane. Only someone with no concept of how science works could make such a claim.

  66. The problem remains that you assume data being wrong is a ‘bad thing’, when in practice data that is wrong but ‘useful’ is a wholly ‘good thing’ in climate ‘science’.
    As ever, the trick is to think not ‘science’ but politics, religion or fanatical sports fandom, and you then understand how this works and why ‘faith’ is far more important than ‘fact’.

  67. I’m bringing this back to the top for discussion, mainly because Steven Mosher was being a cad in comments

    I’m shocked! Shocked to find that gambling is going on in here… that Steven Mosher was being a cad (with apologies to Casablanca).

  68. I just downloaded Dr. McLean’s thesis. If all of the readers here do that it will send a message of support.

  69. “Steven Mosher was being a cad in comments, wailing about “not checking”, claiming McLean’s PhD thesis was “toast”, while at the same time not bothering to check himself.”

    Not a surprise.
    Hard cheese to Mosher, for cad behavior; i.e. acting like any number of trollops.

    Perhaps it is time to give Mosher a rest from WUWT commenting?

    • Mosher may be rude and have trouble stringing his thoughts together in complete sentences (often due to using a phone to comment), but he has not violated site policies.

    • Nah, let Mosh keep being a cad. He helps to remind the world of the caliber of people who support the CAGW scam.

      • MarkW, you have no clue as to the caliber of people that study and accept AGW. You certainly can’t base it on the class of people that visit this site. Real climate scientists don’t waste their time here.

        • I base it on what they say and what they do.
          As always, the alarmist assumes that the only reason people don’t agree with it are because they are ignorant.

          • But MarkW, you don’t know what they say and what they do. You spend all your time here, and you never interact with them.

          • If their report has no “errors”, they should be able to easily prove so. Why hide behind a wall of obfuscation and denial? It seems quite deceptive on its very face.

          • A lot of commenters here do know what so-called “climate scientists” do and say. We can read their papers and communicate with Gavin on his blog, which he maintains on the taxpayers’ dime. Too much of what they do isn’t real climatology but GIGO computer gaming, and many aren’t even scientists but mathematicians and programmers. Some who are scientists, like Dr. Spencer, do comment here.

            Who knows better what “climate scientists” say and do than atmospheric physicist Dr. Lindzen, emeritus Alfred P. Sloan Professor of Meteorology at MIT? His conclusion from this close acquaintance is that 90% of “climate science” should be defunded.

            As the late, great “Father of Climatology”, Dr. Bryson, so eloquently stated, “You can go outside and spit and have the same effect as doubling carbon dioxide”. As you may know, Dr. Gray, the “Father of Hurricanology”, was also skeptical, to put it mildly, of catastrophic anthropogenic climate change.

            These and many other skeptical climatologists, meteorologists, physicists, chemists and scientists in other relevant disciplines know well what “climate scientists” say and do. And are horrified or disgusted.

          • Lindzen doesn’t believe tobacco causes cancer. He smokes during his lectures. How can you believe anything he says? If he can’t deal with the scientific evidence of the harms of smoking on his own health, how can he even think about the harms of human pollution?

            PS: he works for Heartland/Cato, outfits that are paid by fossil fuel interests. Follow the money, my friend.

          • Tillman, you and everyone else here who is skeptical of the current science of AGW are actually providing a great service to the theory of AGW. Your complaints and investigations only find the weak points in the theory, which are useful for modifying and improving it. The only problem you have is that no matter how hard skeptics have tried, they have never falsified AGW.

          • Let me simplify this for you, since clearly you have issues understanding simple things. The climate changes, constantly; humans are not causing it and cannot stop it. Period. Full stop. Your apocalyptic religious fixation on your own importance is not helping the human race overall, and is in fact hurting us. Let me guess: you support Planned Parenthood?

          • Paul,

            Well, smoking hasn’t caused cancer in his case yet. He was born in 1940.

            Lindzen works for Heartland because of his scientific conclusions. He doesn’t hold those conclusions because he works for that institution.

            Could be wrong, but IMO Michael Mann and other alarmists have gotten more money from Big Oil than any skeptic.

            In any case, yours is an ad hominem argument. The fact remains that Lindzen is an eminent, genuine climatologist, well acquainted with “climate scientists”, ie knowing what they say and do, which causes him to have a low opinion of their work. Can’t comment on his opinion of his lesser colleagues personally.

          • C. Paul Pierett October 11, 2018 at 5:28 pm

            AGW was born falsified by reality. Earth warmed coming out of the LIA from the mid-19th century, without benefit of greatly increased CO2. The early 20th century warming cycle was indistinguishable from the late 20th century cycle.

            For the first 32 years after CO2 took off after WWII, Earth cooled dramatically, indeed to such an extent that by the 1970s, scientists were worried about global cooling. Then, in 1977, the PDO flipped, and the planet warmed slightly for about 20 years, until the 1998 super El Nino, or shortly before it. Next, Earth’s temperature, to the extent that it can be measured, stayed flat for another ~20 years, until the 2016 super El Nino. Since it peaked, the planet is back to cooling. All these down, up and sideways trends while CO2 rose steadily.

            Sea level rose during the added CO2 interval at the same rate as it had since the depths of the LIA, c. AD 1690. While Arctic sea ice was in a declining cycle from its century high in the late ’70s, Antarctic sea ice was growing alarmingly. Hence, no CO2 signal.

          • Actually they would; they are not leftist scumbags. They pay for results, not predetermined outcomes.

          • C. Paul Pierett October 11, 2018 at 5:42 pm

            Lindzen formed his conclusions long before joining Heartland.

            Any honest atmospheric physicist would come to the same conclusions, or physicist in general, such as Will Happer, Freeman Dyson or Ivar Giaever, not beholden to the climatariat for career advancement.

            The human contribution to CO2-caused warming is negligible and more plant food in the air is beneficial to life on Earth.

          • 2hotel9, I’ll use “simple” language so you can understand what I’m saying. You are 100% correct when you say that “humans are not causing it and can not stop it.”
            However humans can influence the climate and that is exactly what AGW is saying. It says our emissions of CO2 are warming the earth. Humans do not “CAUSE” climate, they “INFLUENCE” it.

            Get it?
            Oh, and I have no idea what you mean by “stopping it.”

          • Yes, I get it! You are not going to forsake your religion simply because of facts. Again, since you are so dense, HUMANS ARE NOT CAUSING CLIMATE TO CHANGE, AND CAN NOT STOP IT FROM CHANGING. Period. Full stop. Destroying energy production, agriculture and manufacturing across the globe is simply, there is that word again, stupid. Only a leftist, well, idiot, would advocate such stupidity. And that is all the envirotard movement is about.

          • Paul,

            Hotel was referring to “climate change”, not climate. He wrote, “The climate changes, constantly, humans are not causing it and can not stop it. Period. Full stop.”

            By “it”, he clearly meant “changes”, although in his sentence, that’s a verb rather than a noun. But, still, his meaning was pretty plain to me.

          • Tillman says: “Earth warmed coming out of the LIA from the mid-19th century, without benefit of greatly increased CO2.”
            Why did it do that?

            How do you explain it?

          • C. Paul Pierett October 11, 2018 at 6:12 pm

            Thanks for asking.

            Real climatologists have observed that Earth has naturally occurring climatic cycles within secular trends that are also cyclic. This is true at many time scales from decades to tens of millions of years.

            The causes of some cycles are known fairly well, while others are less understood and controversial. For the centennial to millennial scale cycles, many propose periods of more or less solar activity.

            The LIA, for example, suffered three or four (depending upon when you date its start) solar minima. (Major volcanic activity has also been cited, but not convincingly.) So, by the solar hypothesis, all that the LIA needed to end and for the Modern Warming Period to begin was decades of solar maxima, without any minima.

            The Holocene, like other interglacials, enjoyed an early Climate Optimum of prolonged warming, followed for the past ~5000 years, by cyclic peaks of warming alternating with troughs of cooling, within a general cooling trend. Some would date the cooling from the end of the Minoan Warm Period (~3 Ka) rather than the end of the Holocene Climatic Optimum (~5 Ka), since the Egyptian WP (~4 Ka) reached about the same top temperature as the HCO.

            Peak Roman WP (~2 Ka) warmth was lower than for the Minoan WP, and the Medieval WP (~1 Ka) was cooler still. So far the Modern WP has also been cooler than the Medieval, but man-made CO2 might interrupt this trend.

            Between the warm periods are cool periods of approximately equal length. The LIA was probably colder than those which preceded it, although some say that the Dark Ages CP was cooler.

            But whatever the cause, it’s clear that prior warming cycles have lasted longer and gained more in temperature than the late 20th century warming. IOW, nothing unusual is happening with Earth’s climate. Hence, the null hypothesis can’t be rejected.

          • Tillman says: ” So far the Modern WP has also been cooler than the Medieval”

            You have a serious problem making that statement. Since thermometers did not exist during the Medieval period, you must base your assertion on proxy measurements of temperature. If you accept the validity of proxy measurements, then you must accept Mann’s hockey stick. If you disavow Mann’s hockey stick and all of the subsequent studies confirming his work, then you disavow any/all proxies that say the Medieval is warmer.
            You are caught between a rock and a hard place with that assertion.

          • C. Paul Pierett October 11, 2018 at 7:25 pm

            Nope. Seated quite comfortably, actually.

            The problem with Mann’s HS was with his misuse of proxies, not with proxies in general. Trees aren’t thermometers. Tree ring width is subject to too many variables beside T to be used as such.

            But that problem was only the beginning of all the things wrong with the HS.

            So far, no fifty year period in the Modern WP has equaled, let alone exceeded, the three warmest such intervals during the Medieval WP. The period 1951 to 2000 might have come close. Our current 2001-50 may or may not equal one of the peak heat intervals of the Medieval. It’ll depend of course on what happens over the next 32 years.

          • Tillman, there does not exist any reconstruction using any proxy that shows the Medieval period to be warmer than today. If you disagree please post a link to the global reconstruction that show this not to be the case.

            In fact carbon dating of exposed organic material at the terminus of melting glaciers DO NOT DATE back to the Medieval time period.

            Two strikes against you.

          • C. Paul Pierett October 11, 2018 at 7:56 pm

            You are mistaken. From 1994, but still relevant. There are lots of other such papers.



            Abstract. It is hypothesised that the Medieval Warm Period was preceded and followed by periods of moraine deposition associated with glacier expansion. Improvements in the methodology of radiocarbon calibration make it possible to convert radiocarbon ages to calendar dates with greater precision than was previously possible. Dating of organic material closely associated with moraines in many montane regions has reached the point where it is possible to survey available information concerning the timing of the Medieval Warm Period. The results suggest that it was a global event occurring between about 900 and 1250 A.D., possibly interrupted by a minor readvance of ice between about 1050 and 1150 A.D.

          • Tillman, your cited paper does not claim (as you do) that the Medieval period was warmer than present.

            However, it is apparent that you did not read the cited paper. Specifically section 5 (page 149) states: ” The bulk of detailed research has been carried out in Europe.”

            Now….got anything that is GLOBAL?

            For example below my quote(same page) they say: ” But regions such as the southern Andes or the Canadian Rockies contain thousands of glaciers which have never been examined”

          • C. Paul Pierett writes:
            October 11, 2018 at 5:22 pm

            > Lindzen doesn’t believe tobacco causes cancer. He smokes during his lectures.

            Do you have a better reference? I think high fructose corn syrup causes some health issues. I also drink Coca-Cola.

            The URL includes a letter from Lindzen on the topic:


            “I have always noted, having read the literature on the matter, that there was a reasonable case for the role of cigarette smoking in lung cancer, but that the case was not so strong that one should rule that any questions were out of order. I think that the precedent of establishing a complex statistical finding as dogma is a bad one. Among other things, it has led to the much, much weaker case against second hand smoke also being treated as dogma. Similarly, in the case of alleged dangerous anthropogenic warming, the status of dogma is being sought without any verifiable evidence.”

          • and CPP once again reveals himself to be a hypocrite of the first order.

            How do you know that this is the only place I ever frequent?

            Once again, the alarmist can’t help but assume that people disagree with it because of ignorance.

          • Once again CPP demonstrates that he is incapable of arguing honestly.
            Lindzen’s comment was regarding second hand smoke, not cigarettes in general.
            CPP, do you ever tire of being an A*hole?

          • Ah yes, CPP the hypocrite claims that anyone who receives even a penny of money from fossil fuel companies is completely tainted and can never be believed on anything.
            On the other hand, anyone who receives money from government, or from other groups who seek to gain from the power that control of fossil fuels would give them, is a saint and can’t be questioned.

          • CPP, the number of scientists who have been fired for not supporting the AGW myth is legion. As always you are a hypocrite.

          • CPP whines:

            Why did it do that?

            How do you explain it?

            Answer, there are a number of theories. In any case it doesn’t matter, it’s up to you to prove that whatever caused this other warming isn’t causing the current warming before you get to declare that the current warming must be caused by CO2.

            That’s how science works. Not that you care.

          • Once again, CPP demonstrates that
            1) He has no idea what the position of skeptics is.
            2) He has no idea how actual science works.

            The objection to Mann’s graph is not that proxies are no good; it’s that tree rings aren’t a proxy for temperature.
            It’s also that Mann used invalid statistical methods.

            If you knew half as much as you think you do, you might be able to claim to be intelligent.

          • “…Lindzen doesn’t believe tobacco causes cancer…”

            There is hearsay that he found the association to be “weak” 20 yrs ago or more and that he had issues with some studies. Where is a direct quote from Lindzen saying he doesn’t believe it causes cancer, period?

          • “How do you know that this is the only place I ever frequent?”

            Because your mindset doesn’t allow you to understand what is discussed at a real science site.

            And it’s apparent from the five or six replies to me that you are obsessed.
            Thank you, and welcome to my fan club.

          • So CPP the hypocrite is now able to read minds.

            As always, he’s convinced that everyone who disagrees with him is an idiot.
            He just can’t let go of that conviction.

            That just makes him a typical liberal.

          • Fascinating. You make over a dozen posts.
            I respond to about half those posts, and according to you I’m obsessed with you.
            It’s always about you, isn’t it.

        • Paul,

          What’s your opinion of Dr. Hansen’s claim that Earth is on the Venus Express, and that man-made global warming could cause the oceans to boil?

          • Paul,

            Thanks for your non-opinion.

            My opinion is that Hansen’s conclusion is, to say the least, not warranted and not supported by the evidence, hence invalid. Even most of his fellow alarmists don’t share his catastrophic conclusion.

          • You are welcome for my non-opinion. I would also like to point out that I don’t care what your opinion is. The reason I don’t care is that your opinion is worthless.

        • “MarkW, you have no clue as to the caliber of people that study and accept AGW” ~ C. Paul Pierett

          Can’t speak for Mark, but I can certainly speak for myself.

          The Climategate emails revealed that the people who accept this, the leaders in this field of “science”, were and are highly corrupt. Corrupted by politics: they conspired to ignore FOIA requests, conspired to delete emails, data and model programming code, conspired to corrupt the peer review process, and blackballed scientists who had the temerity to question the status quo.

          What caliber of people are they?

          And why would anyone in their right mind believe them and bankrupt their own future, and their children’s future, on such nonsense?

          • Basing your opinion on stolen property (emails) is not a recommended course of action. Of course you are entitled to your opinion, but seeing that the stolen emails didn’t prove anything, carry on.

          • OHOOOOOO!!!! So the Mafia can do what it wishes because intercepting their communications is “uncool”? Hahahahahaha!!!!!! Sweetheart, since their work and lines of communication are paid for by me, the taxpayer, we have full authority to audit every word and digit. Don’t like that? Don’t take my money and then lie to me. That just makes them prostitutes, same as lawyers.

          • 2hotel9, since the emails were stolen, how can you guarantee authenticity lacking a provable chain of custody? Do you know what “chain of custody” means?

          • C. Paul Pierett October 11, 2018 at 6:00 pm

            Again, it’s clear to me what Hotel is saying.

            He points out that emails among workers at public institutions should be public property. As you may know, the emails were assembled because the UEA was under a FOIA request, which they fought tooth and nail, yet apparently expected to lose the battle.

            Before HadCRU was ordered by a court to make the emails public, someone leaked them. They thus weren’t stolen but made public sooner than Phil Jones wanted, since he wanted to keep them secret in the first place. Just like his “data”. IOW, his attitude was antiscientific.

          • Tillman, the same people that have determined that the emails are “authentic” are the same people that have absolved the scientists from fraud, deception or any other irregularity. So, if you accept them as “authentic” you must also accept that they show no malfeasance.

            If you disagree with me, please post a link to any criminal/civil/administrative action taken against any of the email composers to punish them for their action(s).

          • C. Paul Pierett October 11, 2018 at 7:31 pm

            Nope. UEA said the emails were authentic. You can’t expect UEA and HadCRU, i.e. Phil Jones, to exonerate itself and themselves.

            The various inquiries and reports on the emails’ content were a whitewash, yet still found some serious problems:


            Mann’s HS however was thoroughly eviscerated by other analyses, such as those of McIntyre and McKitrick.


            I failed to mention regarding paleoclimatic data, that we don’t need to compare paleo proxies with thermometers. We can compare proxies with the same data for today.

          • Once again CPP demonstrates that he is so desperate that he will latch onto any excuse to dismiss data that doesn’t support his religion.
            1) There is no evidence that the e-mails were stolen. What little evidence does exist leans towards them being leaked by an insider.

            2) The e-mails have been confirmed as being authentic by many of the people named in them.

            Now deal with the facts.

        • You’ve got the perfect forum, light moderation if any, educate us.
          We are here to learn, so teach us.

  70. Whiskey’s recursive ‘Droste effect’-type argument that skeptics cannot be genuine skeptics because they are not skeptical of skepticism is reductio ad absurdum.

  71. I haven’t read all of the posts so this may have already been addressed.
    I mentioned this audit on another website and the response was that all of the corrections had been made prior to the temperature data being added to the data set. Therefore this audit was of no use, as all of the problems had been corrected.
    It was also stated that other temperature data sets were used and produced the same results. Therefore the problems pointed out could not have resulted in significant problems. Now I have learned quite a bit about climate science over the past several years but I will be the first to admit that the depth of my ignorance is still huge. Are these valid criticisms? They do not sound like valid criticisms to me.

    • No, it is not a valid criticism; that HadCRUT4 has errors and uncertainties is accepted by all sides of the debate.

      The recent “criticism” is simply misdirection because it didn’t address the specific issue raised in the audit (though it did concern the use of poor raw data), which is that although corrections were made and stations removed, major systematic errors* remain in the current 2018 database that cannot be corrected without creating even larger uncertainty!

      More importantly this problem is only one of the 25 major issues discovered!

      In short, the world may be warming since 1850 or not but HadCRUT4 has nothing to say about it except weak talking points** for the IPCC!

      *There is a real problem that remains in the database caused by the use of uncorrected raw data that is used in the correction process itself. A systematic error that continues to this day.
      **”Authors of the IPCC’s Fifth Climate Assessment Report (2013) admitted during the review process for that report that no audit of the HadCRUT4 dataset or any associated dataset had been undertaken.”

    • The point is that most of the problems detailed can’t be corrected.
      How do you correct for wrong or missing raw data?
      How do you correct for woefully inadequate surface area coverage?

      The claim is that this data can be used to determine the temperature of the entire earth to within 0.1C.
      Just examining the lack of coverage is enough to disprove that claim. These other problems just make the claim more ridiculous.

  72. JOHN MCLEAN here.
    For Mr Mosher,

    I don’t insult and I don’t accuse without investigation. And if I don’t know I try to ask.

    (a) Data files
    If you want copies of the data that I used in the audit, as they were when I downloaded them in January, go to web page https://robert-boyle-publishing.com/audit-of-the-hadcrut4-global-temperature-dataset-mclean-2018/ and just scroll down.

    Or download the latest versions of the files for yourself from the CRU and Hadley Centre, namely https://crudata.uea.ac.uk/cru/data/temperature/ and https://www.metoffice.gov.uk/hadobs/hadsst3/data/download.html. (The fact that file names are always the same, which is confusing, is one of the findings of the audit.)

    (b) Apto Uto not used? Figure 6.3 shows that it is used; the lower-than-expected spikes are because of other stations in the same grid cell, and the value of the cell is the average anomaly for all such stations.

    (c) What stations are used and what are not?
    The old minimum of 20 years of the 30 from 1961 to 1990 was dropped a few HadCRUT versions back. It then went to 15 years with no more than 5 missing in any decade. HadCRUT4 reduced it again to 14.

    best wishes
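    The evolving inclusion rule described in (c) can be expressed as a small check (a hypothetical sketch of my own, not code from the audit; the intermediate “no more than 5 missing in any decade” clause is omitted for brevity):

```python
def meets_base_period_rule(years_with_data, min_years=14,
                           base_start=1961, base_end=1990):
    """True if a station reports at least `min_years` of the
    1961-1990 base period (per the comment above, HadCRUT4
    lowered the threshold to 14)."""
    base = range(base_start, base_end + 1)
    return sum(1 for y in base if y in years_with_data) >= min_years

# A station reporting 1947-1988 clears the threshold even with gaps;
# one that stops reporting in 1960 does not.
print(meets_base_period_rule(set(range(1947, 1989)) - {1963, 1971}))  # True
print(meets_base_period_rule(set(range(1947, 1961))))                 # False
```

    A looser or stricter threshold changes which stations survive, which is why the version-to-version changes McLean describes matter for reproducibility.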


    • “If you want copies of the data that I used in the audit, as they were when I downloaded them in January”

      And what makes you think those files were included in Hadcrut without being QC’d first?
      – and are not the files as originally sent by the national Met service involved?
      And (correctly) recorded by the UKMO as being corrupted on receipt.
      IOW: Please show that a particular file was used for Hadcrut as sent.

      Staggering confirmation bias on display (yes I know of your “stance” regarding AGW)
      The UKMO is well used to QC procedures – it uses them routinely in its “day job” – the massive quantities of data that arrive continually 24/7 for inclusion in its NWP models.
      It’s just a basic requirement.

      It would be staggering incompetence on the part of the UKMO (as Nick says – full of actual PhD recipients) – and anyone with knowledge of NWP observational data knows that they are certain to have errors.
      So they don’t bother?
      Oh, well if you say so – and the echo-chamber agrees, so it will enter into the Naysayers bible of myths.

    • Had you read more of the comments, you’d have noted that the alleged “defect” wasn’t.

      But even had it been a fault, what about the dozens of other problems identified in the PhD thesis?

      Have you considered the possibility that there aren’t any defects to be found?

      But, again, please, if the defects are so obvious to you, share them with us.


    • You must not have actually read much of this post or you would be aware that those objections have been demonstrated to be invalid. But if you are like most alarmist that I have encountered you simply ignore facts which prove to be inconvenient.

    • “you would be aware that those objections have been demonstrated to be invalid”
      Where? The objection is that he is critiquing raw data, not originally CRU’s, which then goes through a QC filter, which he doesn’t investigate.

      But yes, I downloaded the thesis (some days ago). What stuck out for me was this quote:
      “This thesis makes little attempt to quantify the uncertainties exposed by this investigation, save for some brief mention of the impact certain issues might have on error margins, because numerous issues are discussed, and it would be an enormous task to quantify the uncertainties associated with the many instances of each. It has been left to others to quantify the impact of incomplete data, inconsistencies, questionable assumptions, very likely data errors and questionable adjustments of the recorded data.

      WTF? “Left to others”? How can you get a PhD saying that I did the proofreading, but the calculations were too hard? And if a PhD project can’t do it, who are those others?

      It isn’t an enormous task at all. HADCRUT isn’t rocket science. You just write a program that emulates it, and then see what happens when you correct what you think is wrong with the data. I wrote a program years ago which I have run every month, before the major results come in (details and code here). I have done that for seven years. They are in good agreement. In particular the simplest of my methods, TempLS grid, gives results which are very close to HADCRUT. If I used 5° cells and hadsst3 instead of ERSST, I could make the agreement exact. I wouldn’t expect to get a PhD from doing that, let alone saying it was too hard.

    • Well, some of us may prefer to carefully read through the entire report (all 135 pages of it), and maybe at least spot check some of the data for ourselves (there’s quite a bit of that data) before we make any comments about it. So there’s that. Also, while some here have pointed out potentially valid criticisms of the report itself, others here have pointed out that those criticisms may not actually be accurate. So there’s that, too.

    • Whiskey,

      It is true. And it still appears that you didn’t read the relevant comments.

      You missed the comments showing that HadCRU doesn’t do a quality control audit on the “data” before using them.

      Nick asserted that they don’t use two of the sites cited in the paper, and couldn’t find a third, but that doesn’t mean that all “data” used by HadCRU have been checked. Nick was also wise enough to say that he “personally don’t use HADCRUT back to 1850”.

  73. Your arguments are so weak, which prob’ly why Stokes didnt bother with them. 3 out of 3 cases are not found in the dataset, i forget which one. That’s 100%. So you should take it on yourself, oh mighty arguer, to come here with data, you know, sort of that sciency thing.

    Find some data used that is wrong. Do it. Do it!

    Then you won’t look so weak and irrelevant.

    And as for the older parts of the data set, before 1900, what does it really matter? Until someone comes up with a better data set, that’s all we have. You and your friends should put in a grant to do it better. The last one like that was BEST. How did that turn out?

    [fake- non functional email is against our WUWT commenting policy – mod]

    • Hard to jibe:

      “So you should take it on yourself, oh mighty arguer, to come here with data”


      “prob’ly” and “i forget which one.”

      Mad arguing skillz you got there. Did you get your name from imbibing?

  74. Granted I’m only 56 years old, but it just seems to me that most all the “problems” that the political left has tasked itself to solve don’t actually exist on a global basis or don’t exist at all…oh, but wait until you see their bill for services rendered. It seems that I may have accidentally stumbled upon a truly global problem; rent-seeking parasites masquerading as experts and leaders.

    • Mairon62,

      More vague hand-waving on WUWT. The usual proof by assertion, such as “…that most all the “problems” that the political left has tasked itself to solve don’t actually exist on a global basis or don’t exist at all.”

      • It’s not so much that the problems don’t exist. Global poverty obviously does exist.
        The problem is that the solutions pushed by liberals never solve these problems and almost always make them worse.

  75. JOHN MCLEAN – Update.

    CRUTEM4 documentation should apply to the CRUTEM4 dataset, not necessarily to HadCRUT4. But it seems it doesn’t apply even to CRUTEM4, because an extract of the grid cell for Apto Uto shows strange values too.

    The grid cell extract, where Apto Uto is in the central cell, is listed below, with Year and Month, then cells A to I as per above.

    1978 1 0.20 0.00 -0.05 0.25 0.93 0.23 0.17 0.13 0.20
    1978 2 0.80 0.70 0.78 0.95 1.10 1.43 1.13 0.87 0.20
    1978 3 0.50 0.40 0.65 -0.30 0.17 0.23 0.07 0.27 -0.30
    1978 4 0.50 -0.20 0.15 -0.35 8.65 -1.03 -0.83 -0.27 -0.50
    1978 5 0.10 0.30 0.30 -0.10 0.22 0.20 -0.57 0.10 -0.80
    1978 6 0.10 -0.10 -0.32 -0.25 9.02 -0.23 -0.50 -0.17 -0.60
    1978 7 0.10 -0.30 -0.05 -0.40 9.22 0.63 -0.13 0.07 0.00
    1978 8 0.40 -0.10 -0.37 0.40 -0.18 -1.00 -0.13 -0.27 -0.80
    1978 9 0.00 0.20 0.03 -0.10 0.14 -0.07 -0.03 0.20 -0.50
    1978 10 0.10 0.10 0.07 0.00 0.46 -0.30 0.27 -0.13 -0.40
    1978 11 0.20 0.60 0.43 0.25 0.62 0.30 0.27 -0.27 -0.20
    1978 12 -0.20 0.10 -0.15 -0.20 0.12 -0.00 -0.03 -0.33 -0.50

    Note the odd values in the central grid cell in April, June and July of that year. I’ve also checked the other stations reporting data for that grid cell in that month and none vary from their averages much more than 2C.
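    The pattern above lends itself to a simple automated check. Below is a minimal sketch (my own illustration, not part of the audit or of CRU’s processing) that flags a central-cell anomaly sitting far outside the spread of its eight neighbours; the 2 °C threshold is an assumption:

```python
# Rows from the 1978 grid-cell extract above: (year, month, cells A-I),
# with the cell containing Apto Uto in position E (index 4).
rows = [
    (1978, 4, [0.50, -0.20, 0.15, -0.35, 8.65, -1.03, -0.83, -0.27, -0.50]),
    (1978, 5, [0.10, 0.30, 0.30, -0.10, 0.22, 0.20, -0.57, 0.10, -0.80]),
    (1978, 6, [0.10, -0.10, -0.32, -0.25, 9.02, -0.23, -0.50, -0.17, -0.60]),
]

def central_cell_outlier(cells, threshold=2.0):
    """True if the central cell (index 4) differs from the mean of
    its eight neighbours by more than `threshold` degrees C."""
    neighbours = cells[:4] + cells[5:]
    mean = sum(neighbours) / len(neighbours)
    return abs(cells[4] - mean) > threshold

for year, month, cells in rows:
    if central_cell_outlier(cells):
        print(year, month, cells[4])  # flags April and June here
```

    Nothing this crude would survive as real QC, but it shows how cheaply the “odd values” above can be surfaced automatically.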

    It seems that Mr Mosher has assumed
    (a) that what’s said about CRUTEM4 definitely holds true for HadCRUT4 (okay, the doc for HadCRUT4 isn’t really clear)
    and (b) that documentation from the CRU correctly describes CRUTEM4.

    An apology seems to be called for.

      • Yes, there are two parts to a civil debate: expertise and good will.

        Somehow, climate alarmists, even when they can exhibit the first, can rarely claim the second.

  76. It is astounding to see some here questioning whether the author, John McLean, knew the difference between raw observations and adjusted observations; they are straining their credibility. A person invited to submit a Ph.D. thesis at a recognised modern university can probably be assumed to know enough of the chosen topic to avoid making stupid, elementary errors. Ref Nick Stokes, who brushed it aside by saying “These are errors in the raw data files as supplied by the sources named. The MO publishes these unaltered, as they should. But they perform quality control before using them.” and Mosh with “A) data suppliers can apply QC and then document how they QCed. This is done with flags typically”.

    You think John McLean does not know this?
    Yet, he proceeds to provide evidence of a significant problem. This is proper, because it is real.
    I can’t count how many times since 1992 I have written that the data in question is unfit for purpose, my main criticism being that it is used to construct a global temperature average when it cannot possibly do this accurately enough for most purposes.

    • “A person invited to submit a Ph.D. thesis at a recognised modern university can probably be assumed to know enough of the chosen topic to avoid making stupid, elementary errors.”
      So we have to assume JM is right, because he has been invited to submit a PhD? And so believe him when he says that HADCRUT, which is full of established PhDs, is making stupid elementary errors.

      • Nick takes the goal posts and proceeds to run with them.

        Geoff never said we need to assume that JM is right because he was invited to submit a PhD. What he said was that he could be assumed to know enough to not make the basic mistakes that you accused him of.

        Please go away until you are mature enough to argue honestly.

        • Nick is someone who’ll do anything to justify whatever [the] AGW crowd do. Honour, truth, a sense of shame etc. don’t appear in the picture for him. He’ll twist himself worse than a pretzel to move goalposts and discussions to avoid the truth. [pruned].

          • If Stokes did work for the CSIRO and is now retired, he can say what the h3ll he likes, he will still get his pension and perks (Govn’t agency). So he can’t be in it for the money.

    • Geoff, the situation is even simpler. It can be shown that obvious errors are included in the HadCRUT4 dataset. Figure 6.3 of the audit shows that Apto Uto is included.
      Also, if the CRU had integrity it would show the data files that it was sent AND it would show a file with revised value and explain what was adjusted and why.

  77. There have been many man-months of work over the years by a group of us who cannot see Australia’s land data showing warming at more than 0.5 deg C versus the official 0.9 deg C roughly for the century starting 1910.
    We are sticking by that.
    Australian data goes into HadCRUT4. It has a large influence on estimates of Southern Hemisphere temperatures. This estimate has errors that should be corrected.
    We have done several Australian land temperature studies showing data problems. One of them is here. Geoff.
    http://www.geoffstuff.com/explanation_chris_gilham.pdf http://www.waclimate.net/year-book-csir.html

  78. “You missed the comments showing that HadCRU doesn’t do a quality control audit on the “data” before using them.”

    There may be “comments”
    But that does not equal evidence.
    Except on WUWT of course.

    • Mr Banton, please show us the data to back up your statements.
      Disprove Mr McLean’s findings with actual data instead of snide remarks.

      • Mr Osborne:

        That is the job of the accuser (obviously).
        To provide evidence.
        And he patently hasn’t done it.
        A file stored from a Nat Met service is NOT evidence of it being included without being QC’d.

        There is an easy and straightforward way to check.
        To write to the UKMO and ask.
        Smacks of …. “I have found a smoking gun” … and the accusation is good enough.
        Lets not spoil it by actually getting to the truth.
        That you do not see that is of course a given.

        And – my “snide remarks” are well deserved here as yet again denizens are entirely unsceptical of sceptics while being entirely critical of the rest.

      • One other “common-sense” thought that is missing in Mclean’s “analysis”.
        As I’ve stated above.
        The UKMO is an organisation that checks ALL its data for errors (QCs) as a MUST requirement, else the outcome is fatally altered – weather data assimilation is their “day job”.
        They wouldn’t have in place a routine QC software? really? …..

        You really need to be ideologically motivated to jump to that conclusion.
        And not to do the easy thing and clarify with the UKMO that that is indeed what they do is the classic move: put an unsupported accusation into the naysayer book of myths by gaining uncritical hugs and kisses in the blogosphere.

        I also note that Mr Mclean has not answered my objections.

        • ==>Anthony Banton

          I’d also like complete clarification along with you and Dr Mclean, as he did ask the same question in his paper. Why are outliers included in the station data files, and why weren’t suitable quality control measures applied to the data?

          A greater problem to be addressed is why such obvious outliers are included in the station data files. It would seem that not all the national meteorological services that supply the data to the CRU, nor the CRU itself, apply suitable quality control measures to the data in question. – Mclean, John D. 2017

          I can confirm and have verified that outliers such as Apt(Airport) Otu are in the published files supplied in association with CRUTEM4. These files are included in the latest 2018 version and the implausible monthly mean above 80.0C is there for all to see.

          These station files list the normals and standard deviations which again backup the statement made in the Dr Mclean’s paper:

          The data files published in association with the CRUTEM4 dataset do not include a set of station data files that have been corrected or data removed so it is not possible to determine the changes that have been made or to verify that the data has [been] modified as described. What is clear however is that the inclusion of some erroneous values when calculating long-term average temperature and standard deviations has negative repercussions on the CRUTEM4 dataset. – Mclean, John D. 2017

          The questions to ask now are what station data has been modified and what those changes were… as it isn’t at all obvious to anyone who examines the dataset!

    • “Write up a clear and comprehensive response. I am sure that Watts will publish it (or JoNova, or me, whoever your choose).”

      No, how about Mr Mclean write to the UKMO and put questions to them.

      First question…. Do the files as sent by the Nat Met services go un-QC’d into Hadcrut?

      Is it me or is that not an obvious thing to do … and indeed an obvious thing for his Phd referees to ask of him.
      (being as there is a presumption of incompetence on his part)
      I can’t think of a better way to get the truth – unless of course….
      The “C” word comes into thinking.

      • UKMO were more gracious than you. They responded acknowledging mistakes. Whereas you were busy throwing mud at McLean without even bothering to read his thesis and acknowledge its merits. Who’s looking like a fool now?

  79. ==>John McLean

    Hi John,

    Firstly, thank you for commenting. It is a brave scientist who dares to show up at WUWT!

    As a layman I’m struggling to work out what data was used and when.

    On the CRU web site* under the heading – “Land Stations used by the Climatic Research Unit within CRUTEM4” – there is a link to the station files,** Below the link it says:

    The file gives the locations and names of the stations used at some time (i.e. in the gridding that is used to produce CRUTEM4) during the period from 1850 to 2010. All these stations have sufficient data to calculate 30-year averages for 1961-90…

    I downloaded all the relevant files and the station data is there for Apto Uto, but it is not included in this site list – as has been pointed out here. This has become the major bone of contention with your paper, and nobody seems capable of moving beyond this one issue, despite there being many others of major importance.

    It is interesting if confusing that even with the data provided, the exact averages for HadCRUT4 and HadSST3 can not be replicated!!

    The reason given in the FAQ is as follows:

    Both these are ensemble datasets. This means that there are 100 realizations of each in order to sample the possible assumptions involved in the structure of the various components of the error… All 100 realizations are available at the above Hadley Centre site, but we have selected here the ensemble median. For the gridded data this is the ensemble median calculated separately for each grid box for each time step from the 100 members. For the hemispheric and global averages this is again the median of the 100 realizations. The median of the gridded series will not produce the median of the hemispheric and global averages, but the differences…

    This seems absurd to me and incredibly opaque but what would I know!
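    Opaque as the FAQ reads, the arithmetic behind it is real: taking the median cell-by-cell is a different operation from taking the median of whole-globe averages, and the two need not agree. A toy illustration (numbers invented, 3 realizations instead of 100, 2 grid cells instead of thousands):

```python
import statistics as st

# Toy ensemble: 3 realizations, each a field of 2 grid cells.
ensemble = [
    [1.0, 0.0],   # realization 1: cell A, cell B
    [0.0, 3.0],   # realization 2
    [2.0, 1.0],   # realization 3
]

# Median gridded field: median of each cell across realizations,
# then a global average of that median field.
cell_medians = [st.median(r[i] for r in ensemble) for i in range(2)]
avg_of_medians = sum(cell_medians) / 2

# Versus: median of the per-realization global averages.
global_avgs = [sum(r) / 2 for r in ensemble]
median_of_avgs = st.median(global_avgs)

print(avg_of_medians, median_of_avgs)  # 1.0 vs 1.5 – they differ
```

    So the published gridded median and the published global-average median genuinely cannot be reconciled by recomputation, which is exactly the replication difficulty described above.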

    The other disclaimer is the admission of several “variance adjustment” (Monthly updates, NMSs and the moving 30-year baseline) that change the data from year to year.

    It would seem to be an impossible task to audit the provided data when even the provider has declared that their result can not be duplicated! ;-(

  80. My thanks to Mr. Watts and Dr. McLean. I am still studying the paper but even the little that I have followed makes it well worth while the $8.

  81. John McLean notes on page 7 of his report, ” The frequency of the upward or
    downward adjustments are irrelevant on these scales; it is the size of the adjustment that
    matters. For example, five adjustments downwards by 1.0°C are not cancelled out by five
    adjustments upwards by 0.2°C.”

    This is an important point that I do not have the computing power to deal with myself.

    An adjustment applied to a selected portion of a temperature/time series can have an effect depending on 3 main factors –

    1. Magnitude. How large the change is, e.g. delete 0.5 deg C, replace with 0.75 deg C.
    2. Duration. The duration of the change, e.g. a change to a time period one year long has less effect than a change applied to 10 years long.
    3. Leverage. How far from the pivot point the change is. Current methodology keeps the most recent observation as the fulcrum point, so a change to a block of data dated 100 years ago will swing the outcome more than a change of similar magnitude and duration made 1 year ago (much as a torque in foot-pounds can have the same pounds but many more feet).

    Of course sign is part of this, as noted.

    So, each adjusted time series needs to be examined for the total effect of the adjustment considering sign, magnitude, duration and leverage. This is what I see as an analog of torque. The program to do this is not daunting to write, but easy access to each temperature-time series used in HadCRUT4 has to be there, preferably cleaned of the other errors mentioned.
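    The torque analogy above can be sketched in a few lines (a hypothetical illustration only; the function name and the linear leverage weighting are my own assumptions, not an established method or anything from HadCRUT4):

```python
def adjustment_torque(adjustments, fulcrum_year):
    """Sum the 'torque' of adjustments, each given as
    (start_year, duration_years, delta_c): signed magnitude times
    duration, weighted by the distance of the block's midpoint
    from the most recent observation (the fulcrum)."""
    total = 0.0
    for start_year, duration_years, delta_c in adjustments:
        leverage = fulcrum_year - (start_year + duration_years / 2)
        total += delta_c * duration_years * leverage
    return total

# The same -1.0 C adjustment over 10 years swings the outcome far more
# when applied a century back than when applied a decade back.
print(adjustment_torque([(1920, 10, -1.0)], 2018))   # -930.0
print(adjustment_torque([(2008, 10, -1.0)], 2018))   # -50.0
```

    It also captures the sign point quoted from page 7: five -1.0 °C adjustments of a given duration and leverage outweigh five +0.2 °C adjustments of the same duration and leverage.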

    Running this quality control test would put to rest a whole lot of speculation about the effects of adjustments and homogenizations. It is a complete answer to Mosh’s claims that adjustments to BEST made the past warmer. They might have, but the full test does not seem to have been done yet.

    In my book, it is reprehensible for BEST and HadCRUT (and probably others like GISS) to have endured this long before they do the definitive demonstration using this methodology. Maybe they can redeem some reputation by doing it immediately. Geoff.

  82. If HadCRU uses different data to GISS and NOAA and BEST, if they each have different methodologies, and the results are very similar….

    If raw and adjusted data get similar results. If different datasets (GHCN daily – GSOD etc) get similar results…

    How bad is HadCRU really?

    Remember this skeptic effort?


    “First the obvious, a skeptic, denialist, anti-science blog published a greater trend than Phil Climategate Jones. What IS up with that?”

    We read from above (Stokes) that McLean is referring to the raw data, which gets stripped of these errors in the processing. I did not get that clarification in the post here or the abstract.

    We learn upthread that station data (from at least one station) that is faulty is not used.

    I’ve seen a lot of back-slapping and cheering, but not much of the skepticism that should be driving interest in scientific findings every time. Why is it left to Mosh and Stokes to cast a critical eye on this? And why are skeptics waiting for anyone else to do due diligence?

    • As I’ve already stated elsewhere in this thread, some of us may be carefully taking our time to review things before making comments either way. Meanwhile Mosher, for one, was apparently so eager to stick the knife in that it looks like he got a lot of his criticism wrong, as others in this thread have already pointed out, should you care to peruse the whole thing.

    • I didn’t see one person say they would digest it carefully before commenting (you mentioned this as a possibility). But there were a great many who took the conclusions as read, praising the author for getting to “the truth”, and scorning the Met Office.

      I doubt we’ll get any commentary from regulars who digest the report with a critical eye. I’d be delighted to be proved wrong, but there’s just no reason to believe it.

      “Meanwhile Mosher, for one, was apparently so eager to stick the knife in that it looks like he got a lot of his criticism wrong”

      There were only 2 points of criticism from him in this thread, so he couldn’t have got “a lot of” his criticism wrong.

      He said no data, no code. McLean THEN replied by offering links here, but not to code. Looks like it wasn’t supplied with the report.
      He said Apto Uto station not used. Stokes checked and says it’s not used after 1970. Mosh may have been wrong about that, then. But the faulty data was in the 1970s.

      The failure to mention that it was raw data that was so messy seems to be a pretty damning indictment of the report. I’m prepared to give some benefit of doubt, but I can’t see much changing with other temp records of different method and provenance having similar results.

      • Gawd I wish we could have a sensible discussion on that level. You’ve got right to the heart of the issue.

        I know I’m probably dreaming, but what I would love to see here is an article detailing exactly what the complaints about this paper are, in detail, and detailed responses to each bit from McLean.

        It would be a shame to let this just slip by when we actually have a chance to sort this out and get some clarity.

        • While you lot are moaning and throwing mud at McLean’s work, Met Office admitted that he flagged issues and they did corrections. So eat some crow, barry and Philip.

      • “He said no data, no code. McLean THEN replied by offering links here, but not to code. Looks like it wasn’t supplied with the report.”

        Links to the data can be found near the bottom of the following page. AFAIK these have been there since before the WUWT discussion even began, and I assume that they are also buried somewhere in the document itself, but I haven’t looked. Links to code may be in there somewhere, too, but again I haven’t looked.


        Concerning Apto Uto, Mosher said the following:

        “The reason why CRU does not USE Apto Uto is because it does NOT have the required number of years
        in the base period. For CRU this is 1951-1980 and a station MUST HAVE 20 of those 30 years”

        McLean responded thusly:

        “(c) What stations are used and what are not?
        The old minimum of 20 years of the 30 from 1961 to 1990 was dropped a few HadCRUT versions back. It then went to 15 years with no more than 5 missing in any decade. HadCRUT4 reduced it again to 14.”

        Now I noticed immediately that Mosher seemed to be using the wrong decade range, from what I’d seen so far of the report plus the relevant web sites, only I said nothing because I wasn’t quite sure. But McLean seems to confirm this and also points out that Mosher seems to be using completely incorrect (or at least outdated) criteria in his criticism. So it comes as no surprise to me that others note Apto Uto is actually in there somewhere, while Mosher insists that it simply can’t be, using his criteria.

        Before making any further comments here, you do know that we’ve been down this path at least once before, right, with HadCRUT3? While McLean may very well be the first person to do a deep quality analysis of HadCRUT4, others did similar work with HadCRUT3 years ago and found problems with it similar to what McLean has found. Those folks were generally ignored, though (except by sites like WUWT), but McLean’s work may be harder to ignore since it seems to be drawing a lot of attention.

        • Yes, the baseline period is wrong (it’s GISS), but I suspected brain fart, and it seems McLean did too judging by the update.

          Before making any further comments here, you do know that we’ve been down this path at least once before, right, with HadCRUT3?

          Yes. I’ve seen criticism of the temp records for over a decade: McIntyre noticing a problem with 2000s data in the US temp record and a correction from the institute acknowledging him; Anthony Watts’ surfacestations project and photographic evidence of site bias; endless focus on adjustments that cool the record for a particular weather station (but never, it seems, showing the adjustments that warm a particular station – it took Stokes to point out that there were as many cool as warm adjustments); criticism of the SST constructions, station ‘drop out’ and the rest.

          But I’ve also seen skeptics knuckle down and construct their own temp records – people who did more than notice problems, they acted on it. I’ve seen BEST produce a temp record much the same as the others. I’ve seen Jeff Condon and Roman M come up with a temp record from raw data that ran warmer than HadCRU. I’ve seen Anthony Watts publish a paper highlighting min/max biases in the US records, but that corroborated the US mean temp record.

          I also see that these developments are soon forgotten whenever a ‘bombshell’ appears. It seems that it is enough that someone has criticised for a wave of congratulations to appear, so that it seems that there is only ever one scandal after another.

          We’ve known for years that the data are not perfect. And for years the various compilers of the official records have said the same. This is old ground. What’s new in this report that will make a substantive impact? The Met Office replied to it that the small number of errors out of millions of data points would not significantly change the results.

          You have suggested that – maybe – some regulars are taking their time digesting the paper and will eventually comment. That there is a sober coterie doing due diligence.

          Whether or not that’s true, none have said so or recommended patience. If it was an ‘enemy’ paper being attacked there would already be specific comments on it.

          No, if any AGW skeptics are going through it, they’re not commenting, and I doubt we’ll see substantive commentary from them before this thread shuts down, leaving Mosh and Stokes as the few voices of doubt in a tide of approval.

          • Oh yeah, we’ve seen that. Every error won’t make a difference. Every adjustment won’t make a difference. But hey presto, the sum of all will be “It’s worse than ever”. That’s been your lot’s modus operandi.

        • Since I had the 2018 version of the data and the APTO_OTU (Otu Airport?*) file open on my desktop, here are some details:

          APTO_OTU has 41 years of data 1947 – 1988.
          The 80C outlier is from 1978.
          13 years are missing and 11 years are incomplete.
          The normals were calculated from 1961-1988
          The standard deviations are from 1947 – 1988

          Here is header above the observations:

          Number= 800890
          Name= APTO_OTU
          Country= COLOMBIA
          Lat= 7.0
          Long= 74.7
          Height= 630
          Start year= 1947
          End year= 1988
          First Good year= 1947
          Source ID= 79
          Source file= Jones
          Jones data to= 1988
          Normals source= Data
          Normals source start year= 1961
          Normals source end year= 1988
          Normals= 24.1 24.4 24.6 27.8 24.6 27.9 28.0 24.6 24.4 24.1 24.1 24.0
          Standard deviations source= Data
          Standard deviations source start year= 1947
          Standard deviations source end year= 1988
          Standard deviations= 0.6 0.6 0.5 11.9 0.5 11.8 12.0 0.6 0.5 0.6 0.6 0.7

          *Otu airport has that Lat/Long.
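          The pattern above speaks for itself: months 4, 6 and 7 have normals near 28 and standard deviations near 12, while every other month sits near 24 with a standard deviation around 0.6 – consistent with stray Fahrenheit readings (the head post mentions “Fahrenheit temperatures reported as Celsius”) mixed into a Celsius series. A minimal sketch of how such a contamination could be flagged and tested – this is an illustration of the hypothesis, not McLean’s actual method, and the sample numbers are made up:

          ```python
          # Sketch (assumption, not McLean's method): show how a few stray
          # Fahrenheit readings inflate the standard deviation of a Celsius
          # series, and how converting them restores a sane spread.
          import statistics

          def f_to_c(f):
              """Convert degrees Fahrenheit to degrees Celsius."""
              return (f - 32.0) * 5.0 / 9.0

          def clean_series(values, threshold=45.0):
              """Treat any reading above `threshold` degC as a suspected
              Fahrenheit value and convert it; tropical surface air never
              reaches 45 degC, so the cut-off is safe for this station."""
              return [f_to_c(v) if v > threshold else v for v in values]

          # Hypothetical month: mostly ~24 degC readings plus two stray
          # Fahrenheit entries (75 degF and the freakish 80 from 1978).
          raw = [24.1, 24.3, 23.9, 75.0, 24.2, 80.0, 24.0]

          print("raw stdev:    ", round(statistics.stdev(raw), 1))
          cleaned = clean_series(raw)
          print("cleaned stdev:", round(statistics.stdev(cleaned), 1))
          ```

          Run on the hypothetical sample, the raw spread is an order of magnitude larger than the cleaned one – the same signature as the 11.9/0.6 split in the station header.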

  83. I know of only one study on global warming based on temperature readings that I consider legitimate. Tony Heller at realclimatescience.com once looked at the temperature record from the continental US (the only data set he considers reliable). He computed the temperature trend at each station and then averaged the trends, finding an overall cooling trend in the data. This directly contradicts the government-sponsored rising temperature claims and is thermodynamically valid. Averaging temperatures from areas of different heat capacities is not.

    Regardless of heat capacity, heat does flow from something with a higher temperature to something cooler. This is why I claim Mr Heller’s averaging of temperature trends at different stations is valid. If global warming was real this analysis would show it and there is no legitimate reason not to process the temperature data by first computing trends for each station. Step changes at one site would actually be a good indicator of site changes.

    Now if we could just quit throwing half the data away and compute minimum daily temperature trends separately from maximum daily temperature trends. I have read here at WUWT that nighttime minimum temperatures are raised by buildings increasing mixing more than the daily highs are. Keeping both might allow some measure of changes in the Urban Heat Island effect at each site, as a check on site changes.
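    The trend-first averaging described above can be sketched in a few lines: fit a least-squares slope per station, then average the slopes. The same routine can be run separately on Tmin and Tmax series, as the comment suggests. Station ids and numbers here are hypothetical, purely for illustration:

    ```python
    # Sketch of trend-first averaging: fit an ordinary least-squares
    # slope (degC per year) for each station, then average the slopes.
    # All station data below is made up for illustration.

    def ols_slope(years, temps):
        """Ordinary least-squares slope of temps against years."""
        n = len(years)
        my = sum(years) / n
        mt = sum(temps) / n
        num = sum((y - my) * (t - mt) for y, t in zip(years, temps))
        den = sum((y - my) ** 2 for y in years)
        return num / den

    stations = {
        # station id: (years, mean annual temps in degC) -- hypothetical
        "A": ([2000, 2001, 2002, 2003], [12.0, 12.1, 12.0, 12.2]),
        "B": ([2000, 2001, 2002, 2003], [18.5, 18.4, 18.4, 18.3]),
    }

    slopes = [ols_slope(ys, ts) for ys, ts in stations.values()]
    mean_trend = sum(slopes) / len(slopes)
    print(f"mean per-station trend: {mean_trend:+.3f} degC/yr")
    ```

    Because each station contributes one slope rather than its absolute temperatures, a step change at a single site shows up in that station’s trend alone instead of contaminating a spatial average – which is the point the comment makes.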
