July 2017 Projected Temperature Anomalies from NCEP/NCAR Data

Guest Post By Walter Dnes

In continuation of my Temperature Anomaly projections, the following are my July projections, as well as last month’s projections for June, to see how well they fared. Note that as of the July 2017 projections I’ve changed to a different NCEP/NCAR reanalysis dataset. More details below.

Data Set   Month     Projected  Actual   Delta
HadCRUT4   2017/06   +0.583     +0.641   +0.058
HadCRUT4   2017/07   +0.680
GISS       2017/06   +0.81      +0.69    -0.12
GISS       2017/07   +0.77
UAHv6      2017/06   +0.384     +0.208   -0.176
UAHv6      2017/07   +0.253
RSS v3.3   2017/06   +0.486     +0.344   -0.142
RSS v3.3   2017/07   +0.354
RSS v4.0   2017/06   +0.539     +0.389   -0.150
RSS v4.0   2017/07   +0.446
NCEI       2017/06   +0.76      +0.82    +0.06
NCEI       2017/07   +0.85

The Data Sources

The latest data can be obtained from the following sources:

Switching to a different NCEP reanalysis data set

Up until now, I’ve been using air.sig995.YYYY.nc data files from the ftp directory

ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis.dailyavgs/surface

where YYYY is the year the data represents. As of this month, I’m switching to air.YYYY.nc files from the ftp directory ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis.dailyavgs/pressure/
(Citation: Kalnay et al., The NCEP/NCAR 40-year Reanalysis Project, Bull. Amer. Meteor. Soc., 77, 437–470, 1996.)

As its name suggests, the sig995.YYYY.nc data is valid at the 995 mb level, which is a good proxy for surface temperatures. Unfortunately, it has not worked well as a proxy for the satellite data sets. The air.YYYY.nc data has 17 pressure levels: 1000, 925, 850, 700, 600, 500, 400, 300, 250, 200, 150, 100, 70, 50, 30, 20, and 10 millibars. A bit of experimentation indicates a very good correlation between the satellite data sets and the 700 mb pressure-level data, when taking the appropriate global subset corresponding to the satellites’ coverage.
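As a rough illustration of that correlation check, the sketch below computes a Pearson correlation between two anomaly series. The numbers are synthetic stand-ins for the real 700 mb subset and a satellite series, and the variable names are mine, not anything from the actual workflow:

```python
import numpy as np

# Synthetic stand-ins for two monthly anomaly series (deg C). In practice
# these would be the area-weighted mean of the 700 mb reanalysis grid,
# subset to the satellites' latitude coverage, and the satellite anomalies.
rng = np.random.default_rng(0)
ncep_700mb = rng.normal(0.3, 0.15, 120)                    # 10 years of months
satellite = 0.9 * ncep_700mb + rng.normal(0.0, 0.05, 120)  # correlated series

# Pearson correlation between the two series
r = np.corrcoef(ncep_700mb, satellite)[0, 1]
```

The same calculation against each pressure level is what would let one pick the level that tracks a given satellite record best.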

The 700 millibar data will be used for the satellite projections, until/unless something better comes along. To reduce the number of files, downloading, etc., the 1000 millibar level data from the air.YYYY.nc files will be used as a proxy for surface temperatures. Thus, my surface data will no longer be identical to that on Nick Stokes’ web page, but it will probably still track it closely. As with the 995 millibar data, GISS has a good correlation (0.836) with the 1000 millibar data, but HadCRUT and NCEI are both below 0.45.

The Latest 12 Months

The latest 12-month running mean (pseudo-year “9999”, highlighted in blue in the tables below) ranks anywhere from 2nd to 4th, depending on the data set. The following table ranks the top 10 warmest years for each surface data set, as well as a pseudo “year 9999” consisting of the latest available 12-month running mean of anomaly data, i.e. July 2016 through June 2017.

HadCRUT4 GISS NCEI
Year Anomaly Year Anomaly Year Anomaly
2016 +0.775 2016 +0.992 2016 +0.948
2015 +0.761 9999 +0.911 2015 +0.908
9999 +0.700 2015 +0.871 9999 +0.868
2014 +0.576 2014 +0.752 2014 +0.747
2010 +0.558 2010 +0.714 2010 +0.703
2005 +0.545 2005 +0.696 2013 +0.673
1998 +0.537 2007 +0.659 2005 +0.667
2013 +0.513 2013 +0.658 2009 +0.641
2003 +0.509 2009 +0.648 1998 +0.638
2009 +0.506 1998 +0.639 2012 +0.628
2006 +0.505 2012 +0.637 2003 +0.619

Similarly, for the satellite data sets…

UAH RSS v3.3 RSS v4.0
Year Anomaly Year Anomaly Year Anomaly
2016 +0.510 2016 +0.573 2016 +0.778
1998 +0.484 1998 +0.550 9999 +0.616
9999 +0.354 2010 +0.474 1998 +0.611
2010 +0.333 9999 +0.411 2010 +0.555
2015 +0.265 2015 +0.382 2015 +0.513
2002 +0.217 2005 +0.335 2002 +0.422
2005 +0.199 2003 +0.320 2014 +0.411
2003 +0.186 2002 +0.315 2005 +0.400
2014 +0.176 2014 +0.273 2013 +0.394
2007 +0.160 2007 +0.252 2003 +0.385
2013 +0.134 2001 +0.247 2007 +0.333

January through June of 2017 were all cooler, in all 6 data sets, than the corresponding months in 2016. Therefore, July through December 2017 would have to be noticeably warmer than the corresponding months in 2016 to beat the 2016 annual values and make 2017 “the warmest year ever”. “Never say never”, but it’s looking more difficult with each passing month.

The Graphs

The graph immediately below is a plot of recent NCEP/NCAR daily anomalies, versus 1994-2013 base. The second graph is a monthly version, going back to 1997. The trendlines are as follows…

  • Black – The longest line with a negative slope in the daily graph goes back to late May 2015, as noted in the graph legend. On the monthly graph, it’s June 2015. This line is slowly growing ever longer, but nothing notable yet; reaching back to 2005 or earlier would be a good start.
  • Green – This is the trendline from a local minimum in the slope around late 2004, early 2005. To even BEGIN to work on a “pause back to 2005”, the anomaly has to drop below the green line.
  • Pink – This is the trendline from a local minimum in the slope from mid-2001. Again, the anomaly needs to drop below this line to start working back to a pause to that date.
  • Red – The trendline back to a local minimum in the slope from late 1997. Again, the anomaly needs to drop below this line to start working back to a pause to that date.
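For what it’s worth, the “longest line with a negative slope” can be found mechanically. The sketch below is my own illustration of one way to do it, not the author’s actual procedure:

```python
import numpy as np

def longest_negative_trend(anoms):
    """Earliest start index from which the OLS trend to the end of the
    series is negative, i.e. how far back a 'pause' extends.
    Returns None if the trend is positive from every starting point."""
    x = np.arange(len(anoms))
    for start in range(len(anoms) - 2):   # earliest qualifying start wins
        slope = np.polyfit(x[start:], anoms[start:], 1)[0]
        if slope < 0:
            return start
    return None
```

Run against a daily or monthly anomaly series, the returned index marks the date the black trendline starts from.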

NCEP/NCAR Daily Anomalies:

daily

NCEP/NCAR Monthly Anomalies:

monthly

Miscellaneous Notes
At the time of posting, the 6 monthly data sets were available through June 2017. The NCEP/NCAR reanalysis data runs 2 days behind real-time. Therefore, real daily data from July 1st through July 29th is used, and the 30th and 31st are assumed to have the same anomaly as the 29th. For HadCRUT, GISS, and NCEI, the 1000 millibar data is used as a proxy. For RSS and UAH, subsets of the 700 millibar reanalysis are used, to match the latitude coverage provided by the satellites. In all cases the projection for a specific data set is obtained by:

* subtracting the previous month’s NCEP/NCAR proxy anomaly value from this month’s value (1000 mb or 700 mb as appropriate)

* multiplying the result by the slope() of the specific data set versus NCEP over the previous 12 months

* adding that result to the previous month’s value of the data set
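As a sketch, the three steps above might look like this in Python (the function and variable names are mine; the author presumably does this with a spreadsheet slope() function rather than code):

```python
import numpy as np

def project_anomaly(dataset_12mo, ncep_12mo, ncep_this_month):
    """Project this month's anomaly for a data set from its NCEP/NCAR proxy.

    dataset_12mo    : previous 12 monthly anomalies of the target data set
    ncep_12mo       : matching 12 monthly NCEP proxy anomalies (1000 or 700 mb)
    ncep_this_month : NCEP proxy anomaly for the month being projected
    """
    # slope of the data set versus NCEP over the previous 12 months
    slope = np.polyfit(ncep_12mo, dataset_12mo, 1)[0]
    # change in the NCEP proxy from last month to this month
    delta_ncep = ncep_this_month - ncep_12mo[-1]
    # scale the NCEP change and add it to the data set's last value
    return dataset_12mo[-1] + slope * delta_ncep
```

If a data set tracked NCEP perfectly with a slope of 2, an NCEP rise of 0.1 would project a 0.2 rise in that data set.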


62 thoughts on “July 2017 Projected Temperature Anomalies from NCEP/NCAR Data”

  1. One prediction liable to be borne out is that adjustments to the so-called “surface data” sets, i.e. packs of lies and flights of science fantasy, will keep cooling the past and warming the present.

  2. This is so accurate, good job. If you’re interested in rising temperatures, I suggest you visit the IPCC site; they publish summaries in which the temperature variations are recorded.

  3. The Bureau of Meteorology in Australia is being forced to respond to Jennifer Marohasy’s allegations of temperature data tampering. It’s behind a paywall; hope some can access it. This article has been online for 9 hours and the story is being reported on at least one commercial radio station, yet no other MSM is carrying the news as yet!

    1 Aug: Australian: Bureau of Meteorology opens cold case on temperature data
    The Bureau of Meteorology has ordered a full review of temperature recording equipment and procedures after the peak weather agency was caught tampering with cold winter temperature logs in at least two locations. The bureau has admitted that a problem with recording very low temperatures is more widespread than Goulburn and the Snowy Mountains but …
    http://www.theaustralian.com.au/national-affairs/climate/bureau-of-meteorology-opens-cold-case-on-temperature-data/news-story/c3bac520af2e81fe05d106290028b783

    • You know this is 3 years ago, don’t you? The story was amped up by this science-denying newspaper owned by Murdoch. The then Prime Minister, Tony Abbott, another science denier, got his business adviser, not a scientist, to go and attack the Bureau of Meteorology. The result: there was no investigation, as there was no evidence to open an enquiry, and the data gathered is real; 2017 is heading towards the hottest year on record again in Australia. Nice try to muddy the waters …. result …. total failure!

      • Steve,

        The only “science d@niers” are the consensus Team.

        Mann, Jones, Hansen, Schmidt, Trenberth and their so far unindicted co-conspirators rank right up there with eugenics proponents as the leading enemies of humanity among alleged “scientists”, which of course they aren’t. To be a scientist, you have to practice the scientific method.

        The BOM are in the news again, Steve, and their day will come. I would have bet my left ?? on two bodies that would never distort the truth, i.e. the CSIRO and the BOM.

        I am incredibly disheartened to find lots of evidence that suggests they are no better than that East Anglia mob of rogues. Remember them??

        The BOM have written out hottest days on record, such as Bourke’s, and now they are trying to fiddle Goulburn’s coldest day on record; however, they have been sprung again.

        Surely you can’t ignore the stench of a rotting carcass. Or do you have a scientifically or statistically based answer for these actions?? I will listen.

    • “yet no other MSM is carrying the news as yet”
      Of course not. Goulburn is a minor station, not used by any major indices. There was some confusion about whether the minimum on a record cold morning was -10.0 or -10.4°C. Only in Lloyd/Marohasy world would that be any kind of news.

  4. 1 Aug: Australian Editorial: Bureau clouds weather debate
    In a time of climate change, it’s not surprising there is more interest in — and scrutiny of — the Bureau of Meteorology. A confident, outward-looking agency would seize this as an opportunity. Instead, as we report today, the bureau still struggles when called on to give a transparent account of its work. On July 2 in Goulburn, NSW, observant local Lance Pidgeon noticed the temperature on the bureau website had dropped to minus 10.4C. Next it read minus 10C, then the reading disappeared altogether. The original low reappeared after questions were put to the bureau.

    One explanation from the agency is that results below minus 10C are flagged as possible anomalies and checked before they are restored. Yet the same system applies at the alpine Thredbo top station where temperatures as low as minus 14.7C have been registered. Seemingly at odds with its first explanation, the bureau also says machines at several cold weather stations have failed to record below minus 10C and will be replaced. In any event, the bureau insists, these failures will not skew the national weather records because the Goulburn and Thredbo stations do not feed into this official dataset. However, results from Goulburn are used to adjust readings from Canberra, which are included in the national dataset…

    That adjustment process, known as homogenisation, has got the bureau in trouble in the past…READ ON
    http://www.theaustralian.com.au/opinion/editorials/bureau-clouds-weather-debate/news-story/defe9d457e78517992d7c90b1d2275fc

  5. Unfortunately, Newman’s piece does not include the breaking BoM news today:

    1 Aug: Australian: Maurice Newman: Media’s silence of the climate scams
    How lucky to have gatekeepers such as the ABC, SBS and Fairfax Media to protect us from the likes of Climate Depot founder Marc Morano, recently here promoting his documentary Climate Hustle?
    Thanks to mainstream media censorship, Morano’s groundbreaking film, which promised a heretical fact-finding journey through the propaganda-laced world of climate change, was denied publicity…

    Australian scientist Jennifer Marohasy recently outed the Bureau of Meteorology for limiting the lowest temperature that an individual weather station can record. If this is accepted practice, no wonder American physicist Charles Anderson declares “it is now perfectly clear that there are no reliable worldwide temperature records”…READ ALL
    http://www.theaustralian.com.au/opinion/medias-silence-of-the-climate-scams/news-story/b124752820c94822915f94917e6566b2

    however, you have to “admire” BoM for this coming out today!

    1 Aug: Townsville Bulletin: Helter swelter, it’s heating up
    by ANDREW BACKHOUSE
    Bureau of Meteorology senior climatologist Catherine Ganter said for the next three months there was a greater than 80 per cent chance of warmer days and nights compared to normal.
    “From what we can see there are much warmer than average sea surface temperatures all the way across the east coast and that’s partly behind the warmer outlook,” she said.
    Towns along the coast of Queensland will be most affected by the warmer sea temperatures.
    The forecast could mean warmer nights in particular.
    “It tends to affect minimum temperatures more,” Ms Ganter said…

    Ms Ganter said the chances of wetter and drier conditions in August and ­October were about equal…
    The average maximum temperature for July is likely to beat the previous record set in 1975 to be 2C above average.
    Official readings began in 1910.
    And the unseasonable warm conditions will continue.
    “In Townsville it’s likely to be a degree warmer than normal,” Ms Ganter said.
    http://www.townsvillebulletin.com.au/news/helter-swelter-its-heating-up/news-story/9aa228c912323d5578d0cad1e830e44e

  6. “… you have to “admire” BoM … ” Not many of us do that. Checking on all this fiddling gets tiresome.
    ““In Townsville it’s likely to be a degree warmer than normal,” Ms Ganter said ”
    Yep. Its what used to be called “spring”.

  7. Walter, a lot of work. I note that your temperatures turned out in most sets to have been too high. I would have counseled you to trim a bit off your temps by viewing the global temperature maps.

    I have for many months posted on the large cold Blobs in the temperate zone which have replaced hot blobs that had persisted for several years. Moreover, over several months, instead of cold water upwelling much in the eastern Pacific equatorial zone, it was slanting down (and ‘up’) from the cold patches north and south of the equator. Also a wide cold band in the equatorial Atlantic and no warm pool in the west Pacific. The cool temperatures are now less a factor of ENSO. This decoupling is querying up your forecasts and the formulae of several other analysts that use ENSO data to calculate.

    Personally, I think this unusual development presages worrying colder weather for a spell into the future. In the NH it has been a ‘year without (much) summer’ (personal experience with Canada and wife with Europe and Russia). I’m predicting a very cold NH winter this year, cool tropics (Oz has been so cold that the Oz met office has been caught clipping degrees off bitterly cold areas). There is too much cold water around to forecast a warm season ahead.

      • Fairly serious sea ice protrusion into the Indian Ocean this year. I consider north of 60 to be the Indian not Southern, YMMV.

    • The satellite data sets were the biggest problem recently. NCEP surface data is not a valid proxy for them. A quick check indicates that 700 mb data correlates better with the satellites, so I’ve switched to that for satellite projections.

      Arctic Ocean ice cover in winter is also important. Last boreal winter had record low ice cover. A million or two square km may not look like that much, but… normally ice-covered areas, which should be approximately -23°C, ended up at -3°C, the melting/freezing point of salty water. That’s 20 Celsius degrees (36 Fahrenheit degrees) above normal. This is where the scary American headlines about portions of the Arctic being 36 Fahrenheit degrees above normal came from.

      The good news is the Stefan–Boltzmann law https://en.wikipedia.org/wiki/Stefan%E2%80%93Boltzmann_law which says that the radiated energy of a black body is proportional to the 4th power of its temperature in kelvins. Dr. Trenberth is looking in the wrong direction: heat is being radiated up into space, not down into the oceans. How do you think satellites see it? Yes, ocean water has a much lower albedo than ice, but there isn’t a lot of sunlight coming in during the Arctic winter. Furthermore, water vapour pressure is very low near freezing, and H2O is a much stronger greenhouse gas than CO2.
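The 4th-power dependence the comment invokes is easy to put numbers on. This is illustrative only, treating the surface as an ideal black body (emissivity 1):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_flux(t_kelvin):
    # radiated energy per unit area of an ideal black body
    return SIGMA * t_kelvin ** 4

# ice-covered Arctic surface near -23 C versus open salty water near -3 C
flux_ice = blackbody_flux(273.15 - 23.0)    # roughly 222 W/m^2
flux_water = blackbody_flux(273.15 - 3.0)   # roughly 302 W/m^2
```

On these idealized numbers, the open water radiates on the order of a third more energy per unit area than the ice-covered surface.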

      I see a negative feedback mechanism here.

      1) In the late 1970s, Arctic ice area was at a peak. This insulated the polar regions, like a bald man with a toque on a cold day. The planet warmed up.

      2) Now, with lower ice area, more heat is radiating away into space. Think of a bald man, without a toque, on a cold day. The planet cools off.

      3) Constantly radiating away this much heat is not sustainable. The planet will cool off, and ice cover will build up again. As they say in FORTRAN…GOTO 1

    • Not so. Walter averages a bit too low on the satellites and a bit too high on the stations in general. Perhaps he builds their biases in unconsciously.

      • Amazingly Australia is claimed to have hottest July in a hundred years.
        Might mean there was a hotter temp 101 years ago.
        This BOM adjustment stuff is big news.
        The senior head of department has written a letter which makes a wild claim that a number of recording devices have mysteriously stopped working at exactly minus 10 degrees.
        This needs a post, if someone can bring it to Anthony’s attention.
        I presume they use the same thermometers over Australia,
        The bureau itself sent a text advising that they set 10 degree cutoffs for cold data in some sites.
        Where are ZEke and MOsher to explain……
        Firstly setting limits on supposedly tamper proof recording equipment.
        Would this not invalidate their use.
        Secondly if say 6 out of 800 approx thermometers mysteriously stop at -10 degrees exactly …
        What does this say about instrument reliability in general and the instruments themselves.
        This story could be big.
        It should lead to the head of department offering an apology for misleading and a slight drop in Aussie temps.
        Cheers.

      • angech, does anyone know if they do similar capping on the upper limits of temperatures recorded ? i strongly suspect not.

    • “Oz has been so cold”
      It hasn’t been cold at all. A few frosty mornings in the SE. The only places below average are in the SE – light green:

      Here is just July. Even warmer, especially in the north.

      • Except, of course, for the data which may or may not be included because BOM may or may not have deemed it reliable because their equipment may or may not be faulty….

      • It ought to be compared with the late 1800s. But of course, BOM has decided to exclude that warm period.

    • The summer so far in the UK has been on the warm side, and with August still to come, it is shaping up to be within the warmest 10 years out of the last 100.

      Meanwhile, most of southern Europe has been having a hot dry spell for several weeks, and Spain had its highest recorded temperature (possibly – depending on the validity of a couple of slightly higher claims in earlier years), of 46.9 at Cordoba on 14th July.

      Neither a “Year without summer” nor “worryingly cold”. Perhaps your wife spent the hot afternoons in air-conditioned restaurants?

      • i can assure you the scottish summer has not been on the warm side richard . last time i looked scotland was still part of the uk .

  8. “To even BEGIN to work on a “pause back to 2005”, the anomaly has to drop below the green line.”

    While I accept this in principle, I have a quibble with the claim that there is no ‘pause’. To qualify as an ‘increase’, the difference has to be statistically significant. It is not reasonable to have numbers and a calculation with an uncertainty of, say, 0.2 degrees and then claim to have a rise or fall of a lower value than that. It just cannot be justified if the measurement ‘error’ (uncertainty) is larger than the ‘effect’.

    No amount of statistical BS can hide this. Making thousands of different measurements with thousands of instruments does not qualify as a reduction in the uncertainty. This point is usually not grasped by the novice and no one promoting alarm is bothering to explain what constitutes a valid vs invalid claim.

    The tiny differences between most years 2000-2016 are not rankable in the normal fashion as many of them are indistinguishable by any standard approach.

    • Mostly agreed. But you want a distinction between statistically significant and statistically meaningful. Increasing the temporal sampling resolution would necessarily influence the former but not the latter. The latter is reflected in the magnitude of the slope, and the former can qualify the latter which I believe is the crux of your point.

      The higher sampling rate increases N, reduces the standard error (SE) of the regression cofficient (i.e. the slope, ‘b’), narrows the theoretical sampling distribution of estimated regression coefficients from samples of size N, increases the t-value for the t-test (against zero or ‘null’) of our sample regression coefficient [t=(b/SE)], increases the degrees of freedom (df) associated with the theoretical distribution of t-values we use to approximate the null sampling distribution of b estimates (df=N-2), narrows that distribution, and reduces the deviation from zero required to exceed some conventionally very large proportion probable t-values (usually 0.475 on either side of zero).

      Aside from the appropriateness or inappropriateness of using the t-distribution, there are a pile of linear regression assumptions that have to be met in order for the b estimate and SE estimate to be valid and, therefore, the aforementioned statistical inference to be valid.

      Such as independent residuals. So, the autocorrelation probably needs to be removed as well, because one measurement will depend on the preceding ones. To do this, additional regressors (predictors) are added that are themselves the measurements, lagged by 1, 2, 3, etc. time points.

      The process first outlined above is used by default. Checking the appropriateness of the assumptions is rarely reported, if it is even done in the first place, unfortunately.
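The default procedure the comment describes can be sketched on synthetic monthly data (a real anomaly series would also need the autocorrelation handling mentioned above; the series here is invented for illustration):

```python
import numpy as np
from scipy import stats

# synthetic 20-year monthly anomaly series with a small imposed trend
rng = np.random.default_rng(1)
months = np.arange(240)
anoms = 0.0015 * months + rng.normal(0.0, 0.1, months.size)

# OLS slope, its standard error, and the t-test against a zero slope
res = stats.linregress(months, anoms)
t_value = res.slope / res.stderr   # t = b / SE, with df = N - 2
```

With 240 points the t-value comfortably clears the conventional threshold, illustrating the comment’s point that a larger N makes statistical significance easier to reach regardless of how meaningful the slope’s magnitude is.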

    • “No amount of statistical BS can hide this. Making thousands of different measurements with thousands of instruments does not qualify as a reduction in the uncertainty. “

      As so often, leaving out what it is the uncertainty of. It is standard error of a mean, not the measurements, and in this case, the trend, which is a weighted mean. The standard error (uncertainty) of a mean is sample error – not measurement error. What might have happened if you had chosen a different sample. And that certainly does reduce with larger samples. That is why polls, drug trials etc spend money to get the largest samples they can afford.

      Basic stats.

      • What might have happened if you had chosen a different sample.

        And therein lies one of the significant problems with the so called time series.

        The sample in 1860 is different to that in 1880, which is different to that in 1900, which is different to that in 1920, which is different to that in 1940, which is different to that in 1960, which is different to that in 1980, which is different to that in 2000, which is different to that today.

        Throughout the time series almost every year involves a different sample, such that one cannot compare one year with another, or one year with the so called base period.

        There is no meaningful anomaly because of the different sampling.

      • Nick Stokes July 31, 2017 at 8:57 pm
        “Oz has been so cold”
        It hasn’t been cold at all. A few frosty mornings in the SE.

        ” temperature records have fallen as many Australians woke to freezing weather
        Snow, hail and rain have fallen across parts of Victoria, South Australia and Tasmania, as a band of cloud followed by a pocket of cold, dry air crosses the Tasman.
        As forecast, Sydneysiders shivered through the city’s coldest pair of mornings since 2008, with minimum temperatures plummeting to 5.4 degrees today and 5.8 degrees yesterday
        Goulburn, the coldest city in New South Wales’ south-west, reached -10.4 degrees today.
        The negative temperature was 12 degrees below Goulburn’s long-term morning average of 1.6 degrees. Canberra also experienced its coldest pair of weekend mornings in two decades Temperatures in the capital plummeted to -8.2 degrees today, following a biting reading of -8.7 degrees yesterday
        It was the city’s coldest pair of mornings since 1971.

        The last time Canberra experienced a weekend with mornings this cold, John Howard was in his first term as Prime Minister, Stuart Diver had just been rescued from a landslide at Thredbo and the film The Castle had just been released, Weatherzone meteorologist Ben Domensino said.

        Minimum temperatures will climb closer to the July average of zero degrees from tomorrow, as cloud and wind increase over the ACT early next week.

        In Melbourne, city temperatures dipped to 1 degree today, on par with yesterday morning.

        Biting record temperatures hit other parts of Victoria, with Mildura dropping to -2.1 degrees today, making it the coldest July morning since July 6, 2012.

        Shepparton shivered through its coldest morning since July 7, 2012, plummeting to -3.9 degrees
        Australia to shiver through the weekend
        Tasmania had another cold start today, with temperatures dropping to 1 degree in Hobart , -6 degrees at Butler’s Gorge and -1 degree in Launceston.
        Frost and ice is expected across much of Tasmania.
        The snap-freeze is on the move, with a shift in weather patterns expected this week.

        “We have a low pressure system and a cold front which has come across Western Australia and its coming into South Australia, bringing wind and showers in the south today,” a Weatherzone spokeswoman said “As the low weakens, it is pulling some cold air. By about Monday, we’re looking at showers across Victoria and also southern parts of NSW.”
        Moving into Tuesday, the low is expected to move across Victoria, and towards the New South Wales Coast “Most of the action this week will be throughout southern parts of Australia. A lot of cold air is wrapped up in these lows so hopefully we should see some snow this week,”

        I think Nick lives near me in central Victoria so would be well aware of the severe, repeat severe cold snaps we have had throughout July.
        Shame.

      • Nick

        While I agree that the sample error is additional, it is not reasonable to dismiss the measurement uncertainty which seems to be near-universal when discussing temperatures. People are taking the temperature readings as gospel, literally. The uncertainties of each measurement are not being propagated through to the final answer. Adding a huge number of additional samples does not make the instruments more precise or more accurate. It is simply not true that measuring 10,000 times as many points (which reduces the sampling error) will reduce the uncertainty of all the individual measurements. This is basic stats too.

        What I see repeatedly is people assuming the measurements are as utterly precise as their multi-decimal place calculator mantissa cares to display. There is (apparently) confusion about the difference between taking a single instrument to 1000 sites to make measurements at the same time, versus taking readings on 1000 different instruments, one at each site, at the same time. Further, taking 1 measurement with each instrument is not the same as taking 100 measurements at that moment on each of the 1000 instruments, one at each location.

        Daily temperature measurements are the lowest quality of all possibilities: one measurement recorded from each instrument, a different one at each site.

        Additional sampling at additional sites is a good idea of course, but each sample is of a ‘different thing’ so it cannot be treated the same as having used, for example, 10 instruments read once at each site, or 10 readings on one instrument at each site, to get a better fix on where the mean lies for each sampled location. Even estimating more exactly where the mean lies does not reduce the uncertainty about the measurements themselves which is an inherent property of the instruments.

        Finally, measuring the temperature at 1000 sites using 1000 instruments that are plus-minus 0.02 degrees C does not mean the final answer, the average temperature, is plus-minus 0.02. That’s not how it works. There is a formula for error propagation.

        The claim that 2015 was 0.001 degrees warmer than 2014 is not supported (or denied) by the evidence. No one knows because that is a value far smaller than the propagated uncertainties.

      • Richard V,
        “Throughout the time series almost every year involves a different sample”
        Yes. That is why anomalies are essential. Comparing averages of different samples is possible, provided that the expected values of each is the same (or nearly). If you toss a coin 100 times, you get usually from 40-60 heads. That is still true if you toss 100 different coins, provided they are fair (homogeneous).
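The coin-toss claim above is easy to verify by simulation (a quick sketch):

```python
import random

random.seed(42)
# 1000 experiments of 100 fair coin tosses each
heads = [sum(random.random() < 0.5 for _ in range(100)) for _ in range(1000)]
# fraction of experiments landing in the 40-60 heads range
frac_40_to_60 = sum(40 <= h <= 60 for h in heads) / len(heads)
```

The binomial(100, 0.5) probability of landing between 40 and 60 heads is about 0.96, so the “usually from 40-60 heads” claim holds.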

        An analogy is the DJIA. There is a series that has been tracked for many years. Not only do the companies change, but their weighting is constantly changing. This requires adjustments with similar effect to anomalies. Not many people think the DJIA is thus invalidated.

      • angech,
        “I think Nick lives near me in central Victoria”
        Well, I live in the big city. The daily numbers for July are here. Average max 14.5°, min 6.6. Long term averages (here) are 13.5 and 6.0, so it was warm on both counts. But there were some cold mornings.

      • Crispin,
        “Finally, measuring the temperature at 1000 sites using 1000 instruments that are plus-minus 0.02 degrees C does not mean the final answer, the average temperature, is plus-minus 0.02. That’s not how it works. There is a formula for error propagation.”

        Yes, there is. Deviations cancel, and the contribution to error of the mean is reduced. Broadly, variances add, so the sum of N is N*each. But then you scale each down by N, reducing the variance of the contribution to mean by N^2. So with error 0.02 and 1000 in sample, the contribution of that source of error to the mean is .02/sqrt(1000) ~ 0.00063. Correlations may increase that, but it’s pretty small. Sampling error is much larger.
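The arithmetic in that last step checks out numerically:

```python
import math

instrument_error = 0.02   # per-reading uncertainty, deg C
n = 1000                  # independent readings averaged

# variances add (N * each), then the 1/N scaling of the average divides
# the total by N^2, leaving error / sqrt(N) for the mean's contribution
error_of_mean = instrument_error / math.sqrt(n)
```

That gives roughly 0.00063 deg C, matching the figure quoted in the comment.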

      • Driving down the local rural roads around Goulburn in cooler months I have seen Jack Frost with my own eyes.
        At night the ice crystals he brings glisten like diamonds in a silver sky along vast tunnels of light picked out by the car headlights.
        Just think, the local BOM has entered this fantastic fairy land with fantasy data.
        Who would have believed that?

      • angech,

        I think Nick lives near me in central Victoria so would be well aware of the severe, repeat severe cold snaps we have had throughout July.

        I think he made it clear enough that he was responding to a comment claiming that ‘all’ of Australia was cold in July. The BOM July map above does indeed show below average temperatures in parts of Victoria; but it also shows above average temperatures in many other regions, including across much of the Northern Territory.

        Too early to specify lower troposphere temperatures above Australia, but UAH is suggesting there was a considerable month-on-month temperature rise across the Southern Hemisphere in general; the anomaly is up from 0.09C in June to 0.27C in July: http://www.drroyspencer.com/2017/08/uah-global-temperature-update-for-july-2017-0-28-deg-c/

      • Nick, you persist in your avoiding the measurement uncertainty.

        Emphasis added:

        “Deviations cancel, and the contribution to error of the mean is reduced. Broadly, variances add, so the sum of N is N*each. But then you scale each down by N, reducing the variance of the contribution to mean by N^2. So with error 0.02 and 1000 in sample, the contribution of that source of error to the mean is .02/sqrt(1000) ~ 0.00063.”

        You are referring to determining a better estimate of the true position of the mean, which is the middle of the range of uncertainty, but the measurement uncertainty remains unaffected. You get no cookie for that contribution. If you know the measurement uncertainty is ±2 degrees and that the middle of the range, the Mean, is 16.6616 degrees, you only know that the answer is between 14.6616 and 18.6616 degrees. A critical element of this is that we are not measuring the temperature of one place a large number of times; we are measuring the temperature in a large number of places, once each. There is no expectation that the readings will be the same in each location. Each measurement has an uncertainty, and together the uncertainty is larger than that of any one measurement. The average cannot be better known than the contributing components.

        The measurement uncertainties are irreducible when performing computations. I can suggest this very brief document for instruction:

        http://ipl.physics.harvard.edu/wp-uploads/2013/03/PS3_Error_Propagation_sp13.pdf

        An average is a sum of values divided by a number known exactly, so its uncertainty propagates in the usual manner:

        SQRT(uncertainty1^2+uncertainty2^2+uncertainty3^2+…uncertaintyN^2)

        “So with error 0.02 and 1000 in sample”, assuming all the instruments are identical, the propagated measurement uncertainty is:

        SQRT(0.02^2 * 1000) for an uncertainty of ±0.63 degrees. In theory the measurement errors could be as large as 20 degrees (0.02*1000) but that is extremely unlikely. It is just as unlikely that the true average lies exactly on the mean, however well its position is known.

        The true average temperature lies somewhere within a span of 1.26 degrees (68% confidence) and the center of that span is known to ±0.00063 degrees as you demonstrated. There is a 32% chance that the true average of the 1000 measurements lies outside the 1.26 degree range.

        The only way to reduce the final value of the propagated uncertainty is to make more accurate and precise measurements. If they were ±0.004, which is technically possible, the propagated uncertainty shrinks to ±0.127, a span of 0.253 degrees. Obviously, when averaging tens of thousands of measurements that are each ±0.2 degrees, the uncertainty of the final value is large. Climate scientists have been marketing the median as the ‘average temperature known with great precision’ by pretending that each measurement was perfect, which is not only untrue but impossible.
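        [Editor's note: the quadrature formula quoted a few lines up can be expressed as a small helper function. This is a sketch of that arithmetic only; the function name is mine.]

        ```python
        import math

        def propagate_sum(uncertainties):
            """Uncertainty of a sum of independent measurements,
            added in quadrature: sqrt(u1^2 + u2^2 + ... + uN^2)."""
            return math.sqrt(sum(u * u for u in uncertainties))

        # 1000 identical instruments, each with a ±0.02 degree uncertainty:
        u_sum = propagate_sum([0.02] * 1000)
        print(round(u_sum, 2))   # 0.63, matching SQRT(0.02^2 * 1000)
        ```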

      • Crispin,
        “If you know the measurement uncertainty is ±2 degrees and that the middle of the range, the Mean, is 16.6616 degrees, you only know that the answer is between 14.6616 and 18.6616 degrees.”
        No. These numbers, if based on sd, mean that for one reading of 16±2, there is about a 2/3 chance that the reading lies between 14 and 18 (and 1/6 that it is >18). But suppose you take the mean of 100 such numbers, in different locations (same EV). What would have to happen for the mean to be >18 (if the range is also 16±2)? Basically, almost all 100 errors would have to be positive, averaging +2. That is not a 1/6 chance; in fact it is very unlikely.

        The adding of variance, and the consequent discounting of the contribution to the mean, accounts for this cancellation.
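        [Editor's note: the claim above, that a mean of 100 readings of 16±2 almost never exceeds 18 even though a single reading does so about 1/6 of the time, can be verified with a short simulation. The seed and trial count below are arbitrary.]

        ```python
        import numpy as np

        rng = np.random.default_rng(1)
        trials = 50000

        # Single readings of 16 ± 2: about 1 in 6 exceed 18 (one-sigma tail).
        single = rng.normal(16.0, 2.0, size=trials)
        p_single = (single > 18.0).mean()

        # Means of 100 such readings: the spread of the mean is 2/sqrt(100) = 0.2,
        # so a mean above 18 would be a ten-sigma event.
        means = rng.normal(16.0, 2.0, size=(trials, 100)).mean(axis=1)
        p_mean = (means > 18.0).mean()

        print(round(p_single, 2), p_mean)   # ~0.16 and 0.0
        ```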

      • The fact that anomalies are used, as opposed to outright temperatures, is all people need to realise the level of obfuscation going on, Nick.

      • Nick,

        You are still evading the point:

        >>“If you know the measurement uncertainty is ±2 degrees and that the middle of the range, the Mean, is 16.6616 degrees, you only know that the answer is between 14.6616 and 18.6616 degrees.”

        >No. These numbers, if based on sd, mean that for one reading of 16±2, there is about a 2/3 chance that the reading lies between 14 and 18 (and 1/6 that it is >18).

        The point is they are not based on the SD. That is the uncertainty of the instrument’s readings. Look in the instructions. There is an uncertainty value provided by the manufacturer.

        There is no ‘measurement’ without an uncertainty attached. Nick, you are again completely missing (or evading) the point. A measurement taken with a properly calibrated 4-wire RTD usually has an uncertainty of 0.02 degrees, depending somewhat on the capability of the reading instrument. If it is a 6.5-digit device then the ±0.02 claim is correct. The reading precision is 0.01 but the uncertainty is 0.02, and if left uncalibrated for a year it is about ±0.06, because they drift randomly.

        A measurement error is not susceptible to diminution.

        >But suppose you take the mean of 100 such numbers, in different locations (same EV). What would have to happen for the mean to be >18 (if the range is also 16±2)? Basically, almost all 100 errors would have to be positive, averaging +2. That is not a 1/6 chance; in fact it is very unlikely.

        That comment does not address the measurement uncertainty at all. You are just repeating things about the calculation, with greater certainty, of the position of the center of the range of uncertainty.

        The magnitude of the uncertainty is an inherent property of the measurement apparatus. Making 10 million measurements will not make any one of them less uncertain. That uncertainty is an inherent property that propagates through all calculations using those measurements. It is that uncertainty which provides us the range limits within which the actual answer probably is to be found (with 68% confidence).

        For those who are appalled by the revelation that regional or global temperatures cannot possibly be calculated to a precision of 0.001 degrees using inputs that are uncertain by ±0.02 or ±0.5 degrees, I have a way to explain the difference between using a firm count and a measurement. This will help you understand why Nick’s avoidance is essential for those wanting to support the meme that the global average temperature is known with any precision.

        +++++

        Cafeteria Uncertainty

        Consider a school with students in 4 rooms and a large number of them in the largest, the cafeteria. What is the average number of students per room?

        Room 1 = 12 students
        Room 2 = 10 students
        Room 3 = 20 students
        Room 4 = 15 students
        Cafeteria = There are students coming in and out, and many of them are in motion, so it is not possible to count them exactly. You can do the next best thing, which is to count as accurately as you can, making various additions and subtractions and finally arriving at a best estimate of 255 ±8. That is the best you are able to do with the observers, time and methods you have (together, “the apparatus”).

        The number in each of the 4 small rooms is known exactly, so there is no uncertainty about the data. There is no “±” involved as students do not come in fractions.

        The average number of students in a room is therefore:

        (12+10+20+15+(255 ±8)) / 5 = 62.4 ±Some number.

        It is not “62.4” because there is uncertainty about exactly how many students are in the cafeteria. The true answer is literally not known because of a ‘measurement uncertainty’. The real answer is probably between 61 and 64. We don’t really know.

        Conducting this exercise in 1000 similar schools will not reduce the magnitude of the uncertainty. Calculating the global average temperature is like trying to count the number of students in cafeterias only. All the uncertainties from all the estimates have to be accommodated in the final report. They do not ‘average out’ or ‘diminish’. They are hard-wired into the data and retained by the calculations.

        The global temperature numbers you have been seeing pretend that all the measurements are ‘counted students’ and known absolutely, and that the average contains no “cafeteria uncertainty”. This is simply not true.

        We can be very certain about the number of stations reporting because they can be counted. But every single measurement reported by every single station has an uncertainty attached to it. These uncertainties are compounded based on the formula given a few posts up-list. The linked Harvard paper explains how to propagate these uncertainties through different types of calculations. They never get smaller. With each calculation, the uncertainty as to the true value of the answer grows.
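        [Editor's note: the cafeteria arithmetic above can be sketched in a few lines. Only the cafeteria estimate carries uncertainty; the four exact counts contribute nothing, and dividing by the exactly-known room count of 5 yields 62.4 ± 1.6, consistent with the “probably between 61 and 64” range stated in the comment.]

        ```python
        import math

        # Four exact head counts (zero uncertainty) and one estimate of 255 ± 8.
        rooms = [(12, 0.0), (10, 0.0), (20, 0.0), (15, 0.0), (255, 8.0)]

        total = sum(count for count, _ in rooms)             # 312 students
        u_total = math.sqrt(sum(u * u for _, u in rooms))    # ±8, in quadrature

        # Dividing by the exactly-known room count of 5 scales both the
        # sum and its uncertainty.
        mean = total / 5
        u_mean = u_total / 5

        print(mean, u_mean)   # 62.4 and 1.6
        ```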

  9. Temperature “anomaly” is meaningless without the “average temperature” used to calculate the “anomaly”. Having that average is the only way you can compare one data set with another. Since they keep changing the data, you end up in situations like the one we have now, where the “current warmest year ever” has a lower “average temperature” than years in the past.

    • I think it should be the other way round, if your aim is to assess whether it has been unusually hot or cold.

      If I told you the average temperature for July in my local town was 18 degrees, it wouldn’t mean all that much. If I told you it was 2 degrees above the average for the past 30, or 100, years, or whatever, you’d know that it was unusually warm.

      The anomaly is a more useful figure to give you that comparison at a glance.

      • You misunderstand. Let’s say you tell me the “anomaly” for your town in July 1934 was 2 degrees above average. But due to “improvements made to the data” you discover 80 years later that the “anomaly” for that same July is now, amazingly, 1 degree BELOW “average”, for whatever arbitrary portion of time you’ve chosen. Let’s say that you mistakenly also gave me that 18 degree average temperature. Happily we can grab the current “data set” and see that the “average temperature” for your town in July 1934 is now only 15 degrees! Wow! And now we can realize that today’s “hottest ever July” in your town is actually 1 degree cooler than it was in July 1934, even though the reported “anomaly” is “the warmest EVAH”. I’ve seen this in the data. It is a complete sham.

  10. Yes, Walter. Much closer this time!!

    And no dramatic reversal of any trends to get excited about. It looks as though 2017 is going to be rather cooler than both 1998 and 2016, and will be fighting 2010 for the bronze medal.

  11. Nick Stokes

    “As so often, leaving out what it is the uncertainty of. It is standard error of a mean, not the measurements, and in this case, the trend, which is a weighted mean. The standard error (uncertainty) of a mean is sample error – not measurement error. What might have happened if you had chosen a different sample. And that certainly does reduce with larger samples. That is why polls, drug trials etc spend money to get the largest samples they can afford.”

    Mr. Stokes’ comment as regards larger samples is accurate, but only when the samples are chosen at random and the population being sampled has not changed between samples. Sadly, there is reason to believe that neither of these criteria is met in adjusted temperature records.

    • Solomon Green

      Mr. Stokes’ comment as regards larger samples is accurate but only when the samples are chosen at random and the population being sampled has not changed between samples.

      Are you saying that opinion pollsters poll exactly the same people every time before an election? Clearly that’s not the case. The variation between temperature stations is bound to be far lower over time than the variation between punters interviewed by election pollsters.

      What matters in both cases is:

      1) sufficient sample size, and

      2) reasonable geographic (or demographic, in the case of polls) weighting.

      • DWR54

        No. I am not saying that.

        1). Pollsters should poll from the same population. There is no point in polling from different populations.

        2) They should have strict rules as to what proportions to sample but those who accord with those rules should be selected at random. The rules should be designed to ensure that the distribution of the samples is roughly proportionate to the distribution of the population.

        I suspect that we are probably on the same wavelength, but you do not make it clear that the samples must always be taken from the same population.

        If, for example, you are referring to opinion polls, the mean of a sample opinion taken on the first day of the month can be radically different from that of a similar (or even identical) sample taken on the eighth day of the same month.

        Hence to combine the two samples in order to obtain a larger sample is nonsensical. Similar logic has to be applied to all sampling.

      • Solomon Green
        “1). Pollsters should poll from the same population.”
        Yes, but they don’t poll the same people.

        “2) They should have strict rules as to what proportions to sample but those who accord with those rules should be selected at random.”
        That is the issue of homogeneity (of population). Ideally the proportions should be exact, but weighting can be used to correct if they aren’t. As long as you know.

        “Hence to combine the two samples in order to obtain a larger sample is nonsensical.”
        It’s a matter of degree and deciding what you are looking for. Polls are anyway taken over several days, which are averaged. You can average polls over a month or a year, as long as you’re clear that it is a year average that you are looking for. Then you have to work out how you sampled in time. That’s an integration issue.

      • I think recent results of opinion polls around the globe tend to support Solomon’s position. The poll results are junk, much like the temperature “data”.
        See polls on Trump, Brexit and the recent UK election as examples.

      • Greetings Nick & Solomon,

        Solomon Green: “1). Pollsters should poll from the same population.”
        Nick Stokes: “Yes, but they don’t poll the same people.”

        The main problem that pollsters have real trouble with is identifying the correct sub population to sample from in the first place, namely the actual voters who are motivated enough to show up and vote on election day (or by early-voting mail-in ballot), which is different than the population as a whole. It would be easy if everybody voted. But determining who all the motivated sub populations are, based on the current issues, is difficult and fluid, and little factors like the actual weather on election day can change the percentages of those sub populations that actually show up and vote.

        The second problem that pollsters have is actually reaching the interested sub populations. The traditional method is by hard-wired telephone, which is increasingly unpopular today. I know many people that only have cell phones, and my hard-wired line is throttled via caller-id whitelisted call screening, so most calls go directly to an answering machine.

        And as a resident of Chicagoland, I can tell you with absolute certainty that it is impossible, by any method, to reach the many thousands of the deceased voters that participate in elections every year. Why are they not represented in the polls?

  12. Anomalies in thousandths of a degree C??

    Predicting a month’s anomaly when it is 90% over,
    and still getting it wrong??

    This is brilliant climate science satire.

    I can barely stop laughing at the “precision”.

    This article completely refutes the global warmunists.

  13. Nick Stokes,

    “‘Hence to combine the two samples in order to obtain a larger sample is nonsensical.’
    It’s a matter of degree and deciding what you are looking for. Polls are anyway taken over several days which are averaged. You can average polls over a month or a year, as long as you’re clear that it is a year average that you are looking for. Then you have to work out how you sampled in time. That’s an integration issue.”

    I agree about polls in general but DWR54 was writing about opinion polls. Opinions are fickle, political opinions in particular.

    “Polls are anyway taken over several days which are averaged” and that explains why opinion polls have failed spectacularly in many countries over the last few years. The other main reason is that samples have not been properly constructed.

  14. Why use linear projections? Everything about the solar system is periodic. Is Fourier analysis used to estimate future temperatures in climate science? Just wondering. My back-of-the-envelope analysis of temperature trends combines probability estimates based on a sparse data set with the rate of change of temperatures to predict a likely range in which a future temperature will lie in the near term. All that can be hoped for is to narrow the likely range estimates with more data and better analyses. See http://www.uh.edu/nsm/earth-atmospheric/people/faculty/tom-bjorklund/ for a working paper, which is a few months out of date.

    A lot of weather events are cited as evidence of long-term climate change. What’s up with that? Of what value are weather observations in long term predictions?
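    [Editor's note: the Fourier suggestion above can be illustrated with a minimal sketch. The series below is entirely synthetic, invented for the example, and is not climate data.]

    ```python
    import numpy as np

    # Synthetic 30-year monthly series: a 12-month cycle plus noise.
    rng = np.random.default_rng(2)
    months = np.arange(360)
    temps = 15 + 8 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 1, 360)

    # The FFT of the de-meaned series picks out the dominant period.
    spectrum = np.abs(np.fft.rfft(temps - temps.mean()))
    freqs = np.fft.rfftfreq(len(temps), d=1.0)     # cycles per month
    dominant_period = 1.0 / freqs[spectrum.argmax()]

    print(dominant_period)   # 12.0 months
    ```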

Comments are closed.