July was also the 329th consecutive month of positive upwards adjustment to the U.S. temperature record by NOAA/NCDC

Andrew Freedman

I’ve noticed there’s a lot of frenetic tweeting and re-tweeting of this “sound bite” sized statement from this Climate Central piece by Andrew Freedman.

July was the fourth-warmest such month on record globally, and the 329th consecutive month with a global-average surface temperature above the 20th-century average, according to an analysis released Wednesday by the National Climatic Data Center (NCDC).

It should be noted that Climate Central is funded for the sole purpose of spreading worrisome climate missives. Yes, it was a hot July in the USA too, approximately as hot as July 1936 when comparing within the USHCN; no debate there. It is also possibly slightly cooler if you compare against the new state-of-the-art Climate Reference Network.

But, those comparisons aside, here’s what Climate Central’s Andrew Freedman and NOAA/NCDC won’t show you when discussing the surface temperature record:

Final USHCN adjusted data minus raw USHCN data (graph created by Steve Goddard)
It isn’t hard to stay above the average temperature value when your adjustments outpace the temperature itself. There’s about 0.45°C of temperature rise in the adjustments since about 1940.
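As a rough sketch of what the "final minus raw" comparison amounts to (all numbers below are invented for illustration; real USHCN files use a station-by-month layout, not a single annual series):

```python
import numpy as np

# Hypothetical annual-mean series (deg C), for illustration only.
years = np.arange(1940, 2013)
raw = 11.5 + 0.002 * (years - 1940) + np.random.default_rng(0).normal(0, 0.3, years.size)
# Assume adjustments grow roughly linearly to ~0.45 C by 2012, as the graph suggests.
final = raw + 0.45 * (years - 1940) / (2012 - 1940)

adjustment = final - raw  # the quantity plotted in the graph above
print(f"net adjustment 1940->2012: {adjustment[-1] - adjustment[0]:.2f} C")
# prints: net adjustment 1940->2012: 0.45 C
```

The point of plotting `final - raw` is that any shared weather signal cancels, leaving only the adjustments themselves.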

Since I know some people (and you know who you are) won’t believe the graph above, which was created by taking the final adjusted USHCN data used for public statements and subtracting the raw data straight from the weather station observers to show the magnitude of adjustments, I’ll put up the NCDC graph that they provided here:

http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif

But they no longer update it, nor provide an equivalent for USHCN2 (as shown above), because, well, it just doesn’t look so good.

As discussed in Warming in the USHCN is mainly an artifact of adjustments on April 13th of this year, this graph shows that when you compare the US surface temperature record to an hourly dataset (ISH) that doesn’t require a cartload of adjustments in the first place, and apply a population growth factor (as a proxy for UHI), all of a sudden the trend doesn’t look so hot. The graph was prepared by Dr. Roy Spencer.

There’s quite an offset in 2012, about 0.7°C, between Dr. Spencer’s ISH PDAT and USHCN/CRU. It should be noted that CRU uses the USHCN data in their dataset, so it is no surprise to find no divergence between those two.

Similar, though not identical, adjustments are applied to the GHCN, which is used to derive the global surface temperature average. That data is also managed by NCDC.

Now of course many will argue that the adjustments are necessary to correct the data, which has all sorts of problems with inhomogeneity, time of observation, siting, missing data, etc. But none of that negates this statement: July was also the 329th consecutive month of positive upwards adjustment to the U.S. temperature record by NOAA/NCDC

In fact, since the positive adjustments clearly go back to about 1940, it would be accurate to say that: July was also the 864th consecutive month of positive upwards adjustment to the U.S. temperature record by NOAA/NCDC.

Dr. Spencer concluded in his essay Warming in the USHCN is mainly an artifact of adjustments:

And I must admit that those adjustments constituting virtually all of the warming signal in the last 40 years is disconcerting. When “global warming” only shows up after the data are adjusted, one can understand why so many people are suspicious of the adjustments.

To counter all the Twitter madness out there over that “329th consecutive month of above normal temperature”, I suggest that WUWT readers tweet back to the same people that it is also the 329th or 864th consecutive month (your choice) of upwards adjustments to the U.S. temperature record.

Here’s the shortlink to make it easy for you:

http://wp.me/p7y4l-i66

richardscourtney
August 20, 2012 2:09 pm

A. Scott:
At August 20, 2012 at 1:39 pm you make a good argument about the importance of UHI. But I write to comment on the part of your post that says

IMO a paired-station approach – where known UHI-affected stations are compared to a group of high-quality stations outside the UHI area to determine the UHI effect and bias – is the better way. This should also be applied to any lower-quality station data. Then you get a true measure of the UHI effect and can compensate accordingly.

You may well be right, but such sets of stations don’t exist (especially not over long times), so your suggestion is of merely academic interest.
Richard

richardscourtney
August 20, 2012 2:51 pm

Atomic Hairdryer:
Thank you for your post addressed to me at August 20, 2012 at 12:42 pm.
I agree that there are specific examples where data from the past can be justified as needing correction for it to be compared with newer data obtained by an alternative method.
And I agree with your excellent example of such a case.
Importantly, I very strongly agree with the need to retain – and to report – the original and the corrected data.
However, with respect, your argument and your illustration support what I said.
You have discussed two data sets which are of similar kind except that they are obtained using different methods. In this case the two data sets can be intercalibrated and – when the calibration difference is known – then either data set can be adjusted to enable direct comparison of the data obtained by the different methods.
Now, imagine that such intercalibration were not possible for the two data sets and you did not know to which set each datum belonged. It would not then be reasonable to guess how to adjust each datum so it could be compared to the others. And the global temperature issue is two orders of magnitude worse than that.
In your example you have only two data sets. A global temperature time series of a century has 100 data sets because each year in the global temperature series has a unique data set (measurement locations, number of measurement sites, the measurement methods and measurement equipment all differ between years). And there is no possibility of knowing how to intercalibrate any of them.
The ‘adjustments’ to original measurement results are intended to overcome this problem but they cannot because the effects of sampling error are not known.
The sampling errors change from year to year as the number and locations of measurement sites change. Hence, any ‘adjustment’ to measurements at individual sites will affect the sampling error. It cannot be known whether the adjustment will increase or decrease the sampling error, which has unknown but probably large magnitude. Hence, adjustments intended to reduce one error (e.g. the UHI effect) may increase the effect of sampling error, thereby increasing the total error, and it cannot be known if and when this is true. Therefore, no adjustment can be justified.
This is basic stuff in measurement theory, and – as I said – it is clear that the compilers of the temperature time series are inadequately educated in measurement theory.
Richard

richardscourtney
August 20, 2012 3:00 pm

Peter Roessingh:
re. your post addressed to me at August 20, 2012 at 2:03 pm.
Please see the reply to Atomic Hairdryer which I have just posted. It explains the point which you seem to have not understood.
Richard

Jan P Perlwitz
August 20, 2012 3:03 pm

Smokey wrote:

I see the GISS climate charlatan didn’t like the chart I posted. Tough noogies, Perlwitz. Reality intrudes on your fantasy.

Why would you say that, Smokey? Are you playing games now? I very much doubt you don’t know that my comments to your “charts” have mysteriously vanished from here. And now you pretend I didn’t know what to reply to it.
Btw: Moderator “dbs” didn’t seem to see a problem with Smokey insulting some other person in his comment here.

August 20, 2012 3:09 pm

In politics, it is not who votes that counts; it is who counts the votes. In weather reporting, it is not what the thermometers show; it is who says what the thermometers show.

A. Scott
August 20, 2012 3:14 pm

Richard, I don’t disagree, however …
… where there are currently stations that could be used in a paired approach, there should IMO be a study done to quantify UHI and its effect on trends vs. paired stations.
… where there are no good paired stations outside the UHI areas, they should be added, so that going forward we can establish a meaningful comparison – using hard data to study the issue and trends, and over time accumulating accurate data that would allow a more accurate understanding to be developed.
I think use might possibly be made of the large number of home reporting stations for something such as this. Where you have strength in numbers of reporting stations, the accuracy of any one station becomes somewhat less important.
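A minimal sketch of the paired-station idea A. Scott describes (all station values invented; the 0.8 °C urban bias and the urban/rural labels are assumptions for illustration, not measurements):

```python
import numpy as np

# Hypothetical monthly anomalies (deg C): one urban station vs. three nearby
# rural reference stations assumed to sit outside the UHI area.
rng = np.random.default_rng(1)
true_climate = rng.normal(0, 0.5, 120)                 # shared regional signal
rural = true_climate + rng.normal(0, 0.1, (3, 120))    # rural stations track it
urban = true_climate + 0.8 + rng.normal(0, 0.1, 120)   # assumed 0.8 C UHI bias

# Paired-difference estimate: urban minus the mean of its rural references.
uhi_estimate = np.mean(urban - rural.mean(axis=0))
print(f"estimated UHI bias: {uhi_estimate:.2f} C")  # recovers roughly 0.8 C
```

The shared regional signal cancels in the pairwise difference, which is why the approach can isolate the UHI bias even when both station types see the same weather.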

Gail Combs
August 20, 2012 5:58 pm

Paul Homewood says:
August 20, 2012 at 6:01 am
….It is surely time for NCDC to publish each year a full comparison of raw and final temperatures, with a full explanation of the difference. I can’t imagine there is any other organisation, public or private, who could get away with massaging data in the way they do without full transparency and independent justification.
I would guess the public at large would be furious if they were told the truth.
___________________________________
I keep thinking of the IRS and “Creative Accounting.” Too bad we cannot sic the IRS auditors on the Climastrologists, although if any of them read this blog they may have a few doubts by now about how the Climastrologists treat other data sets, like their income tax records – all those conferences, for example.

Gail Combs
August 20, 2012 6:25 pm

KR says:
August 20, 2012 at 11:57 am
Anthony – Quite seriously, if you have evidence indicating incorrect handling of TOBS changes, by all means publish it….
_____________________________
Good grief, Anthony is ONE person working part time on his own dime. The amount of disentangling by UNPAID volunteers of the mess made of the climate records by Climastrologists is really quite astonishing, especially when you consider the active and malicious thwarting of FOIAs and even lawsuits, as I showed in my earlier comment.
Climastrologists are also among the highest-paid academics in the USA, so there is no excuse for the trash they are generating at the taxpayers’ expense.

…What about climate scientists? Well, university lecturers and professors earn an average of $49.88 an hour over a 1,600-hour work year, for a total salary of about $80,000. In the public sector, “atmospheric, earth, marine, and space sciences teachers, postsecondary” earn considerably more than the average university teacher ($70.61 per hour). They also work much less (1,471 hours each year), and despite their lower workload, they pull down about $104,000 a year….
So climate scientists are very well compensated, out-earning all other faculty outside of law in hourly-wage terms. What about the rest of the public sector? Astonishingly, only one other public-sector profession — psychiatrist — pays better than climate science, at just over $73 an hour. In other words, climate scientists have the third-highest-paid public-sector job, ranking above judges.
What about the private sector? That’s led by airline pilots, who earn about $112 an hour, but work for only 1,100 hours a year, followed by company CEOs at an average of $91 an hour. Physicians and surgeons earn almost as much as CEOs, at $89.51 an hour. Private-sector law-school professors, interestingly enough, earn far less than their public-school counterparts, at $82 an hour. After that come professor-level jobs in engineering, at $76.11, and dentists, at $73.19. These are the only private-sector professions that pay more than climate science….
http://www.nationalreview.com/articles/261776/all-aboard-climate-gravy-train-iain-murray

It should not be up to volunteers to scrutinize the weather stations and the data when we have paid trillions to have it done. Time to fire the lot of you and save ourselves the money and aggravation.

August 20, 2012 6:28 pm

A group of officials, from the Club of Rome in 1991 and earlier to Ottmar Edenhofer, head economist of the IPCC, has openly admitted: “But one must say clearly that we redistribute de facto the world’s wealth by climate policy. Obviously, the owners of coal and oil will not be enthusiastic about this. One has to free oneself from the illusion that international climate policy is environmental policy. This has almost nothing to do with environmental policy anymore”
Now once these open statements have been made, what is there left to argue on the climate? It is not, and never has been, about the climate, but about world government (again, Edenhofer): “Basically it’s a big mistake to discuss climate policy separately from the major themes of globalization. The climate summit in Cancun at the end of the month is not a climate conference, but one of the largest economic conferences since the Second World War. ”
Basically we have the top spokesman for the IPCC itself happily telling the world it is not about the climate. Now had anyone outside the internet bothered to print this (they didn’t) we wouldn’t get victims like the one today who posted on my Facebook about low lying cities being underwater after we’re dead. It’s not her fault, she is probably busy and trusts what the papers tell her as most people do. I like to check things for myself, and with the internet have absolutely no trouble doing so. Of course the longer this drags on the harder it’ll be to produce warming and its effect when the sea level drops and the ice caps are overall stable whatever they do locally. If I won the lottery (as was being discussed on the radio today after someone here won over £100 million) I’d buy a paper for a day and publish it myself, and commission an independent company for an hour long documentary.
We already have the data. If the media simply shared what we already know this would be over in a week, trust me.

Frank K.
August 20, 2012 7:31 pm

KR (again):
“TOBS changes have been primarily a US issue…”
Why? What evidence do you have for the rest of the world? Why not Canada, Mexico, Alaska, Hawaii (by the way, the last two are part of the U.S.)? Show us the metadata…
By the way, we really don’t know that the TOBS algorithm is being applied correctly at NOAA since no one can find the TOBS adjustment computer program. It’s not on their web site. Please let us know if anyone finds it.

August 20, 2012 8:13 pm

In 2007 I copied a list of record temperatures for my area. I did it again in April of 2012 and compared the record highs of the two. 21 of the records covering common years had been changed, some by as much as 5°F.
I can understand correcting typos from when the handwritten or typed paper records were entered into a database. (“OOPS! That 7 was really a 9.”) I can understand correcting problems that may have cropped up when, say, a DOS database was converted to a Windows database.
But to adjust a number in order to “correct” it? Put an asterisk on it with a notation explaining why you think it may be suspect.

Peter Roessingh
August 21, 2012 1:29 am

JP wrote on August 20, 2012 at 7:20 am
“Do you find it rather strange when the entire “signal” can be accounted for by subtracting the sum total of the adjustments?”
Not in this case, no. It depends on the nature of the correction. If instrumental changes cause a well-defined change in measured values that is two times as big as the signal, it is quite possible that the correction is double the signal magnitude, yet the signal is no less real because of that. If you want to argue that there is something wrong with the correction you need to point out what the problem is. Just stating the magnitude of the correction is meaningless.
Richardscourtney does indicate a problem by pointing out that sampling error is unknown and makes it impossible to do a meaningful correction. I disagree. In no way the presence of one error makes it impossible to correct for another well described one. In addition, the problem of sampling errors was investigated recently (Shen et al. 2012). Here is their conclusion:
“The sampling error analysis, the previous studies on observational errors, and the comparison between our current work and Menne et al. (2009) reveal the impact of station errors, sampling errors, and consequences resulting from different grid sizes and data aggregation methods. Although these errors may be of nontrivial magnitude and may influence the rank of the hottest or coldest years, they are not large enough to alter the trend of the [contiguous U.S. surface air temperature]”
In summary, the original post suggests a problem with the corrections, but the discussion here did not yield any evidence for that.
Shen, S.S.P., Lee, C.K. & Lawrimore, J. (2012)
Uncertainties, Trends, and Hottest and Coldest Years of U.S. Surface Air Temperature since 1895: An Update Based on the USHCN V2 TOB Data
Journal of Climate 25: 4185-4203. DOI: 10.1175/JCLI-D-11-00102.1

richardscourtney
August 21, 2012 7:54 am

Peter Roessingh :
Your post at August 21, 2012 at 1:29 am says

Richardscourtney does indicate a problem by pointing out that sampling error is unknown and makes it impossible to do a meaningful correction. I disagree. In no way the presence of one error makes it impossible to correct for another well described one. In addition, the problem of sampling errors was investigated recently (Shen et al. 2012). Here is their conclusion:

“The sampling error analysis, the previous studies on observational errors, and the comparison between our current work and Menne et al. (2009) reveal the impact of station errors, sampling errors, and consequences resulting from different grid sizes and data aggregation methods. Although these errors may be of nontrivial magnitude and may influence the rank of the hottest or coldest years, they are not large enough to alter the trend of the [contiguous U.S. surface air temperature]”

In summary, the original post suggests a problem with the corrections, but the discussion here did not yield any evidence for that.

I refuse to accept that you are sufficiently stupid as to believe what you have written, so I conclude that you are being disingenuous. I explain my conclusion as follows.
Firstly, you admit that “sampling error is unknown and makes it impossible to do a meaningful correction” but assert “In no way the presence of one error makes it impossible to correct for another well described one.”
Your assertion would be true if sampling error were a constant, but – as I explained – it is not.
(a) There is no way to discern the magnitude of sampling error in any one year.
and
(b) Sampling error differs from year-to-year.
therefore
(c) There is no way to discern if the observed changes are an effect of varying sample error.
And the quotation from the paper you cite states point (a) and dismisses it in the same sentence; viz.

“Although these errors may be of nontrivial magnitude and may influence the rank of the hottest or coldest years, they are not large enough to alter the trend of the [contiguous U.S. surface air temperature]”

So the error magnitudes are not quantified (i.e. they “may be of nontrivial magnitude”) but they are known to be too small to alter the results (i.e. “they are not large enough to alter the trend”).
You claim to swallow that pseudoscientific nonsense!?
Furthermore, the issue of (c) is not addressed at all.
(As an aside, I point out that in the same post I am answering, you dismiss JP’s observation asking: “Do you find it rather strange when the entire ‘signal’ can be accounted for by subtracting the sum total of the adjustments?” I stress that point (c) is: there is no way to discern if the observed changes are an effect of varying sample error.)
Papers like those of Shen et al 2012 and Menne et al. 2009 pass pal review and get published in climastrology. Similar papers in the real sciences are rejected by peer review and get rejected for publication.
As you say

the original post suggests a problem with the corrections, but the discussion here did not yield any evidence for that.

Similarly, when Nelson put his telescope to his blind eye he said, “I see no ships”.
Richard

phlinn
August 21, 2012 9:09 am

I’ve noted the raw vs. adjusted numbers before, mostly in Slashdot comments but at least once on Climate Audit. It’s good to see this problem being noted more widely. I’ve never seen any explanation for why TOBS adjustments fairly neatly follow a parabolic curve (Excel fitted a quadratic with r = .98). However, when I checked USHCN v2, the adjustments were negative, but larger. One of the graphs is wrong, or it’s using different data than USHCN v2 as it was in 2010.
GHCN is almost as interesting, and is actually where I started checking after the back and forth about Darwin. The last century has a definite upward trend in adjustments – nearly linear, in fact.
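For what it’s worth, the quadratic fit phlinn describes takes only a few lines (the adjustment series below is an invented placeholder, not actual USHCN v2 data):

```python
import numpy as np

# Hypothetical TOBS-style adjustment series (deg C) by year.
# The parabola-plus-noise shape is assumed for illustration only.
years = np.arange(1900, 2011)
adj = 0.00005 * (years - 1900) ** 2 - 0.3
adj = adj + np.random.default_rng(2).normal(0, 0.01, years.size)

coeffs = np.polyfit(years, adj, deg=2)   # quadratic least-squares fit
fitted = np.polyval(coeffs, years)
r = np.corrcoef(adj, fitted)[0, 1]       # goodness of fit, like phlinn's r = .98
print(f"r = {r:.3f}")
```

A correlation this high just says the series is well described by a smooth parabola; it doesn’t by itself say whether that shape is physically justified.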

phlinn
August 21, 2012 10:39 am

hmm… rereading my previous comment, I meant “negative, but with an upward trend”.

Peter Roessingh
August 22, 2012 12:42 am

richardscourtney August 21, 2012 at 7:54 am wrote:
“You claim to swallow that pseudoscientific nonsense!?”
I suggest you write up your critique of Shen et al 2012 and publish it, either in a peer reviewed journal or elsewhere on the internet. Publishing your data is how real science is done, not by making ad hominem attacks in a blog. I am (again) done here.

richardscourtney
August 22, 2012 1:52 am

Peter Roessingh:
Your pathetic post at August 22, 2012 at 12:42 am says in total;

richardscourtney August 21, 2012 at 7:54 am wrote:

“You claim to swallow that pseudoscientific nonsense!?”

I suggest you write up your critique of Shen et al 2012 and publish it, either in a peer reviewed journal or elsewhere on the internet. Publishing your data is how real science is done, not by making ad hominem attacks in a blog. I am (again) done here.

That was not ad hom.
You quoted Shen et al 2012 saying
1.
The errors may be large
2.
and are not known
3.
but don’t affect the result.
And I asked – with incredulity – if you accepted that pseudoscientific nonsense.
My incredulous question is warranted because if the errors are large but not known then it is impossible to determine whether or not they affect the result.
If you did not like what the quotation says then why did you present it?
And my post answered your evasion about “publishing your data is how real science is done” in my post where I commented on peer review. I add that in real science any pertinent information is considered. Only pseudoscience uses excuses to ignore pertinent data (e.g. only information published in particular places will be considered).
Richard

Peter Roessingh
August 22, 2012 4:37 am

Richard,
Regarding the ad hom: I was talking about your characterization of Shen et al. If you want to call their work pseudoscience (a fairly strong statement), you need to back that up with detailed criticism.
I did not say the errors are unknown; those are your words. I cited Shen et al. as a source that explores the magnitude of those errors, and cited their conclusion.
Neither did I say I did not like Shen et al.’s conclusion. Those are again your words. If *you* don’t like their conclusion then please detail the problems you see with their data treatment, either here, or better, in a paper.
Finally what makes you think I, or anybody else wants to ignore pertinent data? I certainly did not say so.
Peter.

richardscourtney
August 22, 2012 7:15 am

Peter Roessingh:
Your post August 22, 2012 at 4:37 am makes a series of untrue assertions.
You say to me:

Regarding the ad hom: I was talking about your qualification of Shen et al. If you want to call their work pseudoscience – a fairly strong statement-, you need to back that up with detailed criticism.

No, one example of pseudoscience is sufficient and I provided two.
I remind that
science consists of seeking the nearest possible approximation to ‘truth’ by formulating ideas and seeking information which refutes an idea then amending or rejecting the idea in light of the information
but
pseudoscience consists of deciding an idea is ‘truth’ then seeking information which supports the idea while ignoring or rejecting information which refutes it.
My first example of their pseudoscience was my having cited the statement of theirs which you had quoted and shown it to be logical nonsense used to support a contention; i.e. I said:

So the error magnitudes are not quantified (i.e. they “may be of nontrivial magnitude”) but they are known to be too small to alter the results (i.e. “they are not large enough to alter the trend”).

I could have used ridicule by saying something like this.
So, these ‘scientists’ admit the errors “may be of nontrivial magnitude”, admit they don’t know how big the errors are, but conclude the errors don’t affect their results. And that is what they call science! It is excusing information which refutes their adopted ‘truth’.
Secondly, I pointed out that they gave no consideration to the effect of the sample differing from year to year and, therefore, that there is no way to discern if the observed changes are an effect of varying sample error. Either their having ignored that was pseudoscience or it was incompetence. I say it was pseudoscience: are you saying it was incompetence? Either way, it demolishes their conclusion.
You say to me:

I did not say the errors are unknown, those are your words. I cited Shen at al. as a source that explores the magnitude of those errors, and cited their conclusion.

Er, Ahem. How can I put this? Ah, I know how, I say you are using sophistry.
I said the effects of sampling errors are unknown and you cited and quoted Shen et al. as saying those errors “may be of nontrivial magnitude” (which means they are not known). Shen et al. do NOT quantify the sampling errors. Indeed, they cannot because nobody knows how to quantify the error of a single datum when there is no independent calibration available: the sample of each year is a unique set (i.e. it is a datum) that cannot be assessed as part of a collection of different sets. (The diameters of an apple, a pear and a banana cannot be used to assess errors in the determination of the apple’s diameter in the absence of independent calibration.)
You say to me:

Neither did I say i did not like Shen at al.’s conclusion. Those are again your words.

Oh! Really? So you posted their conclusion because you did not like it?
I find that strange, especially when you only provided that in your post and you wrote on the basis of it

In summary, the original post suggests a problem with the corrections, but the discussion here did not yield any evidence for that.

That “discussion” consisted solely of your presentation of your quotations from Shen et al..
And you say

If *you* don’t like their conclusion then please detail the problems you see with their data treatment, either here, or better, in a paper.

I stated the problems with their analysis; i.e. it makes an illogical deduction and fails to address the major problem with the data. No more detail is needed. (Having proven the contents of a bin are garbage then one does not need to detail each item of rubbish in the bin.)
And you conclude

Finally what makes you think I, or anybody else wants to ignore pertinent data? I certainly did not say so.

Alarmists often try to ignore pertinent data and examples are legion. For example, they want everybody to ignore that statistically significant global warming stopped for the last 10 years but existed for each of the three previous 10-year periods.
However, since you don’t want to ignore pertinent data, I await your response to the fact that observed variations in global temperature may be induced by variations in the sample used to obtain global temperature because the sample changes from year-to-year and the sample error of each year cannot be determined.
Richard

Peter Roessingh
August 22, 2012 1:34 pm

The ball is clearly in your court. Shen et al. have written an 18-page paper addressing the topic of sampling errors and you call that pseudoscience. As far as I can see, you base that conclusion on my quote of the last 9 lines. That simply will not do.

KR
August 22, 2012 3:36 pm

Also relevant is Weithmann 2011 “Optimal Averages of US Temperature Error Estimates and Inferences” (http://sdsu-dspace.calstate.edu/bitstream/handle/10211.10/1792/Weithmann_Alexander.pdf?sequence=1).
Richard Courtney – Nobody who understands the science claims that we know everything exactly. But claiming, as you do, that “The effect(s) of sampling error are not known, and there is no way they can be known, so there is no known way to model them correctly.” is just nonsense, and appears to be an Argument from Complexity (http://www.don-lindsay-archive.org/skeptic/arguments.html#complexity) – a rhetorical trick.
Sampling errors are a quite well understood topic in science, and analysis of error sources allows determining the bounds, the possible range of errors.
A simple Reductio ad absurdum of your argument: If uncertainty was (as you claim) complete ignorance, we could never risk getting out of bed – because our sampling of personal experience is never random, never complete, and we cannot know everything about our circumstances. Yet somehow we manage in the face of that uncertainty…

richardscourtney
August 23, 2012 3:26 am

Peter Roessingh and KR:
Peter Roessingh, OK choose to ignore what I wrote if you want. Do whatever makes you feel comfortable. Truth is truth whether or not you ignore it.
KR, you (deliberately?) misrepresent what I have written.
At no time have I claimed “uncertainty is complete ignorance”. On the contrary, I have stated that (a) statistical uncertainty is easily assessed for the data in a data set
but
(b) the uncertainty of an individual datum cannot be determined unless it is part of a data set or has an independent calibration.
I am surprised that you claim to be ignorant of those truths.
The problem in this case is that each datum (e.g. annual or monthly) for average global temperature is an individual datum. It is NOT part of a data set. This is because the measurements used to provide each datum are a unique data set.
I will simplify to extreme so hopefully you understand the point.
In one year all the measurements are obtained in the tropics,
and
in the next year half the measurements are taken in the tropics and half in the Arctic.
An average temperature is obtained from the data in each of those years. And the averages show that year two has a lower average than year one. That does not indicate the globe cooled between the two years: it is an effect of different measurement sites.
A statistical analysis can be conducted on the data of each year. And it will give probability limits (e.g. 95% confidence) for the obtained average temperature of each year. However, it will not deconvolute the effects of the different measurement sites. Indeed, the calculated confidence limits will be misleading. The average in year one will provide narrower 95% confidence limits than the confidence limits of the average for year two. However, the average obtained for year one is a less accurate indication of global temperature than the average obtained in year two.
And a statistical analysis can be conducted on the total data of both years to obtain confidence limits of them both. But the result will be an error of unknown magnitude. This is because the calculation of confidence assumes a random sample, and the samples are NOT random. Indeed, they are not even consistent from year to year.
So, there is no known way to assess the effects on the average of changes to the measurement sites.
Any assertion (e.g. Shen et al.) that error limits (i.e. confidence limits) have been determined cannot be correct unless a revolutionary method of statistical analysis is also presented. Nobody has yet devised such a revolutionary method.
Richard

KR
August 23, 2012 7:04 am

Richard Courtney – Thank you for clarifying your point.
“…there is no known way to assess the effects on the average of changes to the measurement sites.”
Incorrect: the data from different (or changing) sites are not independent – we do know something about areas not sampled. See Hansen and Lebedeff 1987 (http://pubs.giss.nasa.gov/docs/1987/1987_Hansen_Lebedeff.pdf) on the correlation of temperature _anomalies_ (note: anomalies, changes, not absolute temperatures) out to quite large distances. Measuring the temperature anomaly at any point gives you considerable information about the surrounding area, albeit with correlation and certainty decreasing with distance (providing computable probability limits, dependent on sample distances).
Samples of temperature anomalies taken from different points within a range of correlation are related, they are all part of an interdependent data set: and hence your claim of independence and lack of relationships between different sample sites simply does not hold, and the uncertainties can be determined.
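A minimal sketch of what such distance-weighted use of anomalies looks like, in the spirit of Hansen and Lebedeff (1987), who weighted stations linearly down to zero at 1200 km; the station anomalies and distances below are invented for illustration:

```python
LIMIT_KM = 1200.0  # correlation length scale used by Hansen & Lebedeff

def weight(distance_km: float) -> float:
    """H&L-style weight: 1 at the point, declining linearly to 0 at LIMIT_KM."""
    return max(0.0, 1.0 - distance_km / LIMIT_KM)

def estimate_anomaly(stations):
    """Weighted mean of (anomaly_degC, distance_km) pairs within range."""
    num = sum(a * weight(d) for a, d in stations)
    den = sum(weight(d) for _, d in stations)
    if den == 0:
        raise ValueError("no stations within the correlation range")
    return num / den

# Hypothetical nearby stations: (anomaly in degrees C, distance in km).
# The 2000 km station falls outside the range and gets zero weight.
stations = [(0.8, 100.0), (0.5, 400.0), (0.2, 900.0), (1.5, 2000.0)]
print(estimate_anomaly(stations))
```

Nearby stations dominate the estimate and out-of-range stations contribute nothing, which is how a spatially scattered set of samples becomes one interdependent data set with computable weights.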

richardscourtney
August 23, 2012 12:16 pm

KR:
I am answering your post at August 23, 2012 at 7:04 am.
You say the data from measurement sites are not independent. So what? I did not say they are. I said the data sets from individual years are independent because the numbers and positions of measurement sites are unique for each year.
I am familiar with Hansen & Lebedeff (1987). But it proves my point.
As you and they say, information from immediately adjacent to a measuring station can be inferred from the data of the measuring station. But the quality of the inferred data degrades very rapidly with distance from the measuring station. Hence, assumptions are made as to the nature of the degradation.
If H&L had provided definitive information on the degradation, then different teams (e.g. GISS and HadCRUT) would all use the same assumptions for the degradation with distance from a measuring station. They don’t.
Indeed, H&L admit their system is merely a guess. They say their method is “designed to provide accurate long-term variations” and they assess whether it does that by comparison with performance of a climate model.
How does one know if their guess is a good representation of the model performance or if the model is a good representation of the guess? All one can say is that the model and the guess agree to described standards. That agreement is meaningless unless the climate model emulates real-world changes at a regional level, and no climate model does that.
Indeed, the fact that their guess emulates the climate model is indicative that their guess is wrong because the model does not emulate changes at a regional level: no climate model does.
So, the use of data measured at a site to infer data at another place is merely a guess. And the confidence in the accuracy of the inferred data is zero because one cannot determine the confidence of a guess.
Furthermore, data for regions far distant from any measurement station are a pure guess. This is so obvious that each team excludes regions far distant from any measurement site. But they use different assumptions concerning which areas not to include. And the effect of this is exactly the same as in the extremely simplified example in my explanation you answered.
The quality, the number, and the sites of measurements vary from year to year. Hence, the guesses vary from year to year. In other words, the application of the guesses makes no difference to the basic issue which I stated; viz.

The problem in this case is that each datum (e.g. annual or monthly) for average global temperature is an individual datum. It is NOT part of a data set. This is because the measurements used to provide each datum are a unique data set.

And all this brings us back to where I ended my last post; i.e.

Any assertion (e.g. Shen et al.) that error limits (i.e. confidence limits) have been determined cannot be correct unless a revolutionary method of statistical analysis is also presented. Nobody has yet devised such a revolutionary method.

Richard

KR
August 23, 2012 1:40 pm

Richard Courtney – You’ve digressed _hugely_ from the discussion, which (AFAICS) regards estimating temperature changes in observations. Climate models are a different topic entirely, and not at all relevant to estimating uncertainties in observations.
GISTEMP uses scaled anomaly correlations, HadCRUT uses spatial blocking for area weighting – different researchers, different approaches. While I have my personal opinions as to which approach works better, they have their reasons for their choices, their approaches; both methods are supportable.

“So, the use of data measured at a site to infer data at another place is merely a guess. And the confidence in the accuracy of the inferred data is zero because one cannot determine the confidence of a guess.”

False.

Using data to infer values at another place yields estimates, not guesses, supported by _measurements_ of local correlations. Not assumptions (as you claim), but measurements leading to correlations with confidence intervals. Local correlation (whether by distance-weighted averaging as per GISS or simple regional block weighting as per HadCRUT) means that location sampling at any time point constitutes a single, interrelated data set. Spatial sampling has calculable uncertainties, and hence error limits can be determined.

“The problem in this case is that each datum (e.g. annual or monthly) for average global temperature is an individual datum. It is NOT part of a data set. This is because the measurements used to provide each datum are a unique data set.”

False.

Hello? Year-to-year measurements are also part of an interdependent data set, as there is continuity in sampling across those years. Unless you can document a single moment when the entire set of weather stations was abandoned and a new set built, breaking continuity – one, mind you, with no calibration against previous data. Even if the entire network is replaced over time, continuity and cross-calibration ensure that there is only one data set. The “revolutionary method of statistical analysis” you seem to be demanding is called time series analysis (more literature than I care to quote); from that comes information about temporal trends. Claims that time series data cannot be acquired, dealt with, or have its error limits calculated are just nonsense.
Claiming that climate data points in time and space (in the presence of temporal continuity and spatial correlation) are not related is a specious argument. Error limits and confidence intervals _can_ be determined for both.
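As a sketch of the kind of analysis meant here – an ordinary least squares trend with an error limit on the slope – applied to a synthetic anomaly series (the data below are generated with an assumed 0.02 C/yr trend plus noise, purely for illustration):

```python
import math
import random

random.seed(0)

# Synthetic annual anomaly series: assumed 0.02 C/yr trend plus noise.
years = list(range(1950, 2011))
anoms = [0.02 * (y - years[0]) + random.gauss(0.0, 0.1) for y in years]

# Ordinary least squares fit of anomaly against year
n = len(years)
xbar = sum(years) / n
ybar = sum(anoms) / n
sxx = sum((x - xbar) ** 2 for x in years)
slope = sum((x - xbar) * (y - ybar) for x, y in zip(years, anoms)) / sxx
intercept = ybar - slope * xbar

# Residual standard error and approximate 95% CI for the slope
# (t is approximately 2.0 for ~59 degrees of freedom)
resid_ss = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(years, anoms))
se_slope = math.sqrt(resid_ss / (n - 2) / sxx)
print(f"trend = {slope:.4f} +/- {2.0 * se_slope:.4f} C/yr (95% CI)")
```

The recovered trend lands close to the assumed 0.02 C/yr, with a computable confidence interval – the routine output of time series regression, not a revolutionary method.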

I have to say that I find the _sheer variety_ of claims that amount to “There’s uncertainty, therefore we know nothing, therefore don’t believe what anyone says about the data” both astounding and appalling. There are limits to our knowledge, uncertainties – but uncertainty does not mean ignorance.
Adieu