
I’ve noticed a lot of frenetic tweeting and re-tweeting of this sound-bite-sized statement from a Climate Central piece by Andrew Freedman:
July was the fourth-warmest such month on record globally, and the 329th consecutive month with a global-average surface temperature above the 20th-century average, according to an analysis released Wednesday by the National Climatic Data Center (NCDC).
It should be noted that Climate Central is funded for the sole purpose of spreading worrisome climate missives. Yes, it was a hot July in the USA too, approximately as hot as July 1936 when comparing within the USHCN; no debate there. It is also possibly slightly cooler if you compare against the new state-of-the-art Climate Reference Network.
But, those comparisons aside, here’s what Climate Central’s Andrew Freedman and NOAA/NCDC won’t show you when discussing the surface temperature record:
![USHCN-adjustments[1]](http://wattsupwiththat.files.wordpress.com/2012/06/ushcn-adjustments1.png?resize=640%2C465&quality=75)
Since I know some people (and you know who you are) won’t believe the graph above, which was created by taking the final adjusted USHCN data used for public statements and subtracting the raw data straight from the weather station observers to show the magnitude of the adjustments, I’ll also put up the NCDC graph that they provided here:
http://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_urb-raw_pg.gif
But they no longer update it, nor provide an equivalent for USHCN2 (as shown above), because, well, it just doesn’t look so good.
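For anyone who wants to reproduce this kind of chart themselves, the arithmetic is just a subtraction of two series: final adjusted minus raw. A minimal sketch (the arrays here are invented placeholders, not actual USHCN values):

```python
import numpy as np

def adjustment_series(adjusted, raw):
    """Element-wise adjustment (final minus raw) for two equal-length
    arrays of annual-mean temperatures."""
    return np.asarray(adjusted, dtype=float) - np.asarray(raw, dtype=float)

# Toy example: three years of adjusted vs. raw annual means (degrees F)
final_adj = [52.1, 52.4, 52.9]   # hypothetical adjusted values
raw_obs = [51.8, 52.0, 52.3]     # hypothetical raw observer values
print(adjustment_series(final_adj, raw_obs))  # the adjustment magnitude
```

Plotting that difference against year is all the graph above does; any trend in it is a trend in the adjustments themselves.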
As discussed in Warming in the USHCN is mainly an artifact of adjustments on April 13th of this year, this graph shows that when you compare the US surface temperature record to an hourly dataset (ISH) that doesn’t require a cartload of adjustments in the first place, and apply a population growth factor (as a proxy for UHI), all of a sudden the trend doesn’t look so hot. The graph was prepared by Dr. Roy Spencer.
There’s quite an offset in 2012, about 0.7°C, between Dr. Spencer’s ISH PDAT and USHCN/CRU. It should be noted that CRU uses the USHCN data in their dataset, so it is no surprise to find no divergence between those two.
Similar adjustments, though not all of them, are applied to the GHCN, which is used to derive the global surface temperature average. That data is also managed by NCDC.
Now of course many will argue that the adjustments are necessary to correct the data, which has all sorts of problems with inhomogeneity, time of observation, siting, missing data, etc. But none of that negates this statement: July was also the 329th consecutive month of positive upwards adjustment to the U.S. temperature record by NOAA/NCDC.
In fact, since the positive adjustments clearly go back to about 1940, it would be accurate to say that: July was also the 864th consecutive month of positive upwards adjustment to the U.S. temperature record by NOAA/NCDC.
Dr. Spencer concluded in his essay Warming in the USHCN is mainly an artifact of adjustments:
And I must admit that those adjustments constituting virtually all of the warming signal in the last 40 years is disconcerting. When “global warming” only shows up after the data are adjusted, one can understand why so many people are suspicious of the adjustments.
To counter all the Twitter madness out there over that “329th consecutive month of above normal temperature”, I suggest that WUWT readers tweet back to the same people that it is also the 329th or 864th consecutive month (your choice) of upwards adjustments to the U.S. temperature record.
Here’s the shortlink to make it easy for you:
![ts.ushcn_anom25_diffs_urb-raw_pg[1]](http://wattsupwiththat.files.wordpress.com/2012/03/ts-ushcn_anom25_diffs_urb-raw_pg1.gif?resize=640%2C494)

Frank K. quotes me and replies:
Since I had already said something about this in the comment to which Frank K. replies, I guess he is overburdened with reading more than one sentence. /sarc
Daveo says:
August 19, 2012 at 7:35 pm
Smokey on August 19, 2012 at 7:12 pm
No, Smokey, the adjustments are there for a reason and are clearly explained.
What is dishonest is the TLS graph you put up earlier in this thread, and called it global temps.
___________________________________
So how come when these “Adjustments” were taken to a court of law in New Zealand, we had so much dancing around?
From the “A goat ate my homework” excuse book: More major embarrassment for New Zealand’s ‘leading’ climate research unit NIWA
NZ sceptics v. NIWA – summary of case
Affidavits are for ever
Don’t mention the Peer Review! New Zealand’s NIWA bury the Australian review:
The Australians must have said something awful.
And the same dancing around happened in Australia –
Senator Cory Bernardi put in a Parliamentary request to get our Australian National Audit Office to reassess the BOM records.
Threat of ANAO Audit means Australia’s BOM throws out temperature set, starts again, gets same results
And of course there is the Phil Jones of the CRU excuse
The Dog Ate Global Warming
The Climategate e-mails shed New Light on Jones’ Document Deletion Enterprise
Now you want us to believe a rabid Luddite activist like Hansen would not tinker with the US temperature data to support his obsession with banishing coal “Death Trains” and all other carbon-based fuels?
The headline looks wrong. “July was also the 329th consecutive month of positive upwards adjustment to the U.S. temperature record by NOAA/NCDC”.
Certainly the TOB corrections are mostly upwards, but looking at the NOAA chart there were drops in the early 90s, a couple in the late 90s and also one in the late 80s.
I mean I understand that you need a soundbite that’s as strong as the science-based climate side, but just using a false one doesn’t bring any reason to your cause.
I guess you figured that wasn’t important.
TonyM says:
August 19, 2012 at 8:54 pm
It is misleading to put up a headline which in effect questions the authenticity of the method but gives no explanation of the reasons.
I suggest the reason the adjustments are positive is that many stations have been, or are being, moved to areas with less localized influence and hence are cooler.
__________________________
No, that is incorrect. They tossed out a lot of stations but kept airports, which I guess they call “Rural”.
The ‘Station drop out’ problem
Graphs of stations dropped by region.
Graph 1
Graph 2
These individual weather stations show how really bad airports are:
Here is a quick look at the only city & close-by airport listed for North Carolina. The city is on the North Carolina/Virginia border and right on the ocean. Take a look at the city vs. the airport: Norfolk City and
Norfolk International Airport
North to south through the middle of the state
Raleigh NC
Large city in the middle of NC – Fayetteville NC
South – Lumberton NC
Other Coastal Cities:
Middle – Elizabeth City
South – Wilmington NC
“small cities”
North – Louisburg
South – Southport
Note that Raleigh/Durham airport at 4:12 AM is 2-4°F warmer than the surrounding stations. A closer look is at WUWT: RDU’s paint by numbers temperature and climate monitoring
Here is the raw 1856-to-current Atlantic Multidecadal Oscillation. Amazing how the temperatures follow the Atlantic ocean oscillation as long as the weather station is not sitting at an airport, isn’t it?
Jan P Perlwitz says:
August 19, 2012 at 4:12 pm
What he is saying is that the adjustment of past, present or future data is an unforgivable sin. In any of the real sciences, the adjustment of recorded data without clear, calculated and reasoned judgment would be a dismissible offence. However, it seems that in climate scamming anything goes as long as the money keeps rolling in.
I hope this helps you to understand. THE ADJUSTMENT OF RECORDED DATA IS CHEATING !!!!
[snip . . you may wish to reconsider and rephrase your comment, thanks . . kbmod]
Friends:
Let us be clear.
There is no justifiable reason to alter values measured decades in the past.
For example, if temperature measurements were taken at different times of day then that is a cause of uncertainty (i.e. inherent error). Using assumptions to adjust for these times of measurement does not reduce uncertainty: it introduces additional unquantifiable error from the assumptions.
The entire subject of the surface temperature data sets shows that the compilers of these data sets are ignorant of basic measurement theory.
For example, the plotted temperatures are averages (i.e. means) intended to show trends over a region (e.g. the contiguous US, a hemisphere, the globe, etc.). But if such trends are to be meaningful indications of changes over the region then the mean has to be obtained from
(a) a statistically random sample
or
(b) the same population used as the sample for each datum.
However, (a) is not possible because the measurement sites are not randomly distributed. And (b) is not possible because the measurement sites differ from year-to-year (e.g. individual sites ‘move’ or close).
The adopted solution has been to compensate for the lack of a random sample by adjusting the available data. In principle this can be correct. The lack of randomness is a distortion to what is being measured (this is like viewing an image at an angle: the angle distorts the image). So, a model of the distortion is obtained and the data is adjusted according to the model (this is like determining the wrong viewing angle for an image and adjusting the image by that angle).
However, the distortion created by the lack of a random sample is not known and cannot be determined (this is like not knowing the angle at which an image is viewed). Therefore, any model of the distortion is a guess: and any compensation for the distortion by use of the model is a guess.
So, the ‘adjustments’ to the data sets are mere guesses. And these guesses have no validity because there is no calibration possible for determination of their validity. Arguments about UHI magnitude – and similar issues – do not change this because the problem is a sampling problem.
Re-adjusting data from decades in the past can only be an alteration to the compensation model: i.e. use of a different guess.
There is no justifiable reason to alter values measured decades in the past.
Richard
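Richard’s sampling argument can be illustrated with a toy simulation (all numbers invented): hold every site’s temperature fixed, so the true regional mean cannot change, but let the reporting network drift toward the warmer sites each year. The naive network mean then shows “warming” that is purely a sampling artifact:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1000 sites with FIXED temperatures: the true regional mean is constant,
# so any trend in a network average is pure sampling artifact.
site_temps = rng.normal(loc=15.0, scale=3.0, size=1000)

network_means = []
for year in range(30):
    # Non-random, drifting sample: each year the network discards more
    # of its cooler sites (imagine stations migrating toward cities).
    cutoff = np.percentile(site_temps, 50 + year)
    surviving = site_temps[site_temps > cutoff]
    network_means.append(surviving.mean())

print(f"true regional mean (constant): {site_temps.mean():.2f}")
print(f"network mean, year 0:          {network_means[0]:.2f}")
print(f"network mean, year 29:         {network_means[-1]:.2f}")
```

A spurious upward drift appears in the network mean even though not a single site’s temperature changed; this is the sense in which a non-random, changing sample distorts a regional average.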
Wombat says:
August 20, 2012 at 12:40 am
The headline looks wrong. “July was also the 329th consecutive month of positive upwards adjustment to the U.S. temperature record by NOAA/NCDC”.
Certainly the TOB corrections are mostly upwards, but looking at the NOAA chart there were drops in the early 90s, a couple in the late 90s and also one in the late 80s.
___________________________________
“Drops” are NOT the same as going below zero (negative), so pull another rabbit out of your hat. There is a positive upwards adjustment added to everything; some are just not as big an adjustment.
Also, thanks to the drop in “Official” temperature stations, we now have 92% of GHCN stations in the USA sited at airports. A discussion of one airport site that illustrates the problems with weather stations at airports is RDU’s paint by numbers temperature and climate monitoring. Despite the fact that the station is, at 4:12 am, 2-4°F warmer than any of the surrounding area temperatures, it is adjusted UP!!! This also supports the “2F warmer problem” link
If you look at the photos you can see that the area is surrounded by blacktop.
So since around 1989 stations were being dropped, and now the USA is represented by airport stations (92%) KNOWN to have higher readings than the surrounding areas, and on top of that we add a steadily increasing “adjustment”.
Is Hansen still going to be steadily increasing the “adjustment” as a glacier builds over NYC?
Peter Roessingh says “There is a very clear description of the reasons behind the adjustments here:
http://www.ncdc.noaa.gov/oa/climate/research/ushcn/ushcn.html“.
Others have commented – and I haven’t read all comments – but to put it very simply:
They describe the process, not the reasons.
The “adjustment” is even worse than you thought:
http://stevengoddard.files.wordpress.com/2012/08/screenhunter_182-aug-19-22-55.jpg
richardscourtney says:
August 20, 2012 at 1:32 am
Friends:
Let us be clear.
There is no justifiable reason to alter values measured decades in the past.
For example, if temperature measurements were taken at different times of day then that is a cause of uncertainty (i.e. inherent error). Using assumptions to adjust for these times of measurement does not reduce uncertainty: it introduces additional unquantifiable error from the assumptions.
The entire subject of the surface temperature data sets shows that the compilers of these data sets are ignorant of basic measurement theory…..
_______________________________
AMEN!
The manipulation, massaging and adjustment of the data to fit the preconceived notions of CAGW is the common thread throughout Climastrology. From Callendar’s tossing out all the historical results for CO2 above 280 ppm, to Mauna Loa tossing out “outliers”, to the burying of ice core results showing CO2 in the past was well over 350 ppm, to denying the existence of the historically documented LIA and Medieval Warm Period, it is data manipulation all the way down.
Any hypothesis that requires this much data manipulation should be shot and buried with a stake in its heart!
@Mike Jonas
Did you actually look at it? The reasons are clearly stated: changes in the design of thermometer housings, changes in the type of thermometers, the urban heat island effect, and so on. And each step is backed up by peer-reviewed papers for the details.
You asked me to rephrase. I’m not sure how to make my meaning more clear but I’ll have a go.
The mod should not edit my post without making it clear it was a mod’s edit not mine. If he or she doesn’t like my observation, then he or she can say so using their own nic. Either that or not publish my post at all. Mods shouldn’t be replacing commenters’ words with their own preferred wording. Hope my meaning is clear now.
(My comment in question should be removed from this thread in any case, since as I already said in a follow up comment I inadvertently sent it to the wrong thread.)
“There’s quite an offset in 2012, about 0.7°C between Dr. Spencer’s ISH PDAT and USHCN/CRU.”
There’s also quite an offset between Dr Spencer’s ISH PDAT and Dr Spencer’s UAH USA48 data. His +0.013C per decade trend in ISH PDAT since 1973 contrasts against his +0.23C per decade trend in UAH USA48 since 1979. The UAH USA48 trend is 18 times faster. Not an inconsiderable difference.
Perhaps the proof of the pudding is in the eating? For example, is the observed rate of glacier retreat in Glacier National Park, Montana over the past 30 years more suggestive of a +0.013C or a +0.23C per decade rate of temperature increase?
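For reference, a per-decade trend figure like the ones quoted above is just the slope of an ordinary least-squares fit to a monthly anomaly series, scaled to ten years. A minimal sketch using synthetic data (not Dr. Spencer’s actual series):

```python
import numpy as np

def trend_per_decade(monthly_anomalies):
    """Least-squares linear trend of a monthly anomaly series,
    expressed in degrees per decade."""
    y = np.asarray(monthly_anomalies, dtype=float)
    t_years = np.arange(y.size) / 12.0        # time axis in years
    slope_per_year = np.polyfit(t_years, y, 1)[0]
    return slope_per_year * 10.0              # scale to per decade

# Synthetic 40-year series with a built-in 0.2 C/decade trend plus noise
rng = np.random.default_rng(1)
t = np.arange(480) / 12.0
series = 0.02 * t + rng.normal(0.0, 0.1, t.size)
print(f"{trend_per_decade(series):+.2f} C/decade")  # close to +0.20
```

Comparing two datasets this way only makes sense over a common period; the ISH PDAT and UAH USA48 figures quoted start in 1973 and 1979 respectively, which accounts for part (though not 18x) of the difference.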
DWR54:
At August 20, 2012 at 2:41 am you ask:
I answer, neither. It is mostly indicative of precipitation changes.
Richard
@richardscourtney (August 20, 2012 at 1:32 am)
You are assuming the distortion cannot be determined, but that is not correct: the nature of the main changes (measurement time, switching housings, use of modern electronic sensors) can be modeled very well. See also KR’s response above (August 19, 2012 at 8:32 pm). To call them “mere guesses” has no basis. Read the related papers; you will see they all have adequate discussion of error margins.
Peter Roessingh says:
“Read the related papers…”
I think Richard Courtney has read more learned papers than Peter Roessingh is even aware exist.
If this summer in the US is considered not to be climatically ‘normal’, then most likely it is not caused by the CO2 concentration, since there is not much abnormal about CO2 this year compared to last year or the one before. Instead, this summer should be compared to 1934 and 1953/54.
This map shows:
http://weather.unisys.com/surface/sst_anom.gif
The California Current appears to be ‘much cooler’ than normal (less evaporation), and by the ocean current loop’s appearance that has been the case for some time.
Less evaporation in the eastern Pacific means less rain in the US mid-west.
One likely reason why that could happen is a turn-over in the Kuroshio-Oyashio current system. These currents were temporarily disturbed by the tectonic movements of Honshu in March 2011 (the M9 earthquake and many subsequent strong aftershocks), with the powerful tsunami causing a break in the thermohaline layering.
Wikipedia’s list of Japan’s M8+ earthquakes:
http://en.wikipedia.org/wiki/List_of_earthquakes_in_Japan
September 1, 1923 M8.3
March 2, 1933 M8.4 Major drought 1934
December 20, 1946 M8.1
March 4, 1952 M8.1 Major drought 1953-4
May 16, 1968 M8.2
September 25, 2003 M8.3
March 11, 2011 M9.0 Major drought 2012
A major Japanese earthquake in the month of March appears to have a high probability of causing major drought in the USA (3 of the 7, and all three in March). The current takes one year to reach Canada and a few more months to get down to the California coast, coinciding with the time when strong evaporation is needed to provide summer rains across the mid-west.
A Kuroshio turn-over caused by the two September quakes would reach California 15 months later, in mid-winter, so by the summer the effect may peter out, causing only a minor drought.
I have very little confidence in Dr. Spencer’s ISH PDAT. Or rather, I have none – the whole way it was created is fishy. First of all, it is not based on population changes; it is based on a single population snapshot from 2000 and a number of assumptions that this single snapshot will somehow be reflected in a linear temperature trend over the whole period for that site. Apart from that, it basically declares all stations with low population to be the only correct ones and adjusts all the rest to match them. Not to mention that the overall trend produced by the processing is even lower than the average trend from the lowest-population stations, so I’m afraid there are some more errors in the math.
That being said, I don’t think GISS is OK either. My personal guess is that there is some tiny rounding error somewhere in their homogenization procedure which leads to accumulating slightly more positive than negative adjustments, resulting in the net change we can see today. But that’s nothing more than a guess; I would need to spend a lot of time looking into their code and methods to confirm or disprove it.
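The rounding-error guess above is at least easy to illustrate in the abstract (this is a toy model of the hypothesis, not of any actual GISS/NOAA code): a “round half up” step applied to symmetric corrections yields a small net-positive bias. Integer arithmetic in hundredths of a degree keeps the ties exact:

```python
import numpy as np

# Symmetric set of corrections, -0.25 C .. +0.25 C in 0.01 C steps;
# their true mean is exactly zero.
corrections = np.arange(-25, 26)        # units: hundredths of a degree

# Store them rounded to tenths using "round half up": every exact .x5
# tie is pushed toward the positive side.
stored = (corrections + 5) // 10        # integer floor division

residue = stored * 10 - corrections     # what rounding added, in 0.01 C
print(f"mean of raw corrections: {corrections.mean() * 0.01:+.4f} C")
print(f"mean rounding bias:      {residue.mean() * 0.01:+.4f} C")  # net positive
```

Whether anything like this occurs in the real homogenization code is, as the commenter says, only a guess; the point is merely that an asymmetric rounding rule is sufficient to produce a one-sided drift.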
NOAA habitually (monthly) makes adjustments to their entire global temperature dataset. So far in 2012, we have downloaded 21 different versions of their dataset. They appear to be adjusting the temperatures several times per month. In fact, since we only check once every 6 to 7 days, the number ’21’ might be low since we could have missed a few over the last 7.5 months.
To give you an example of what this means, their record for January 1880 (yes, 1880) has been adjusted 21 times – that is, they have reported 21 different temperatures for January 1880 since December 31, 2011. And remember, these are only the adjustments applied in 2012 so far, and they have been doing this for years. (These are global temperature adjustments, not U.S. alone.)
We do have a wide variety of fake temperature charts that may be of interest: http://www.c3headlines.com/fabricating-fake-temperatures.html
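Detecting the kind of silent revisions described here doesn’t require anything fancy: hash each downloaded copy of the dataset and count the distinct fingerprints. A sketch (the file paths are hypothetical stand-ins for periodic downloads):

```python
import hashlib
from pathlib import Path

def fingerprint(path):
    """SHA-256 of a file's contents; identical data gives an identical hash."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def count_versions(paths):
    """Number of distinct dataset versions among the downloaded copies."""
    return len({fingerprint(p) for p in paths})

# Usage: after saving each weekly download to its own file,
# count_versions(list_of_downloads) reveals how often the data changed.
```

This only tells you *that* the file changed, not which values changed; a row-by-row diff of the two files would pinpoint, for example, a revised January 1880 entry.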
Was the following what the mod took exception to, since the other part of my post was published? Not sure why that would be. This is the gist of it:
People may want to come back to this article after the promised adjustments have been made to the data in the recent unpublished Watts et al paper. This includes adjustments such as TOBs, UHI and other relevant changes to raw land surface temperature records in the USA.
Probably worth noting that had the USA had a standard time of observation over the years, like many other countries, a lot of confusion would have been avoided (except by those who intend confusion).
Peter Roessingh:
At August 20, 2012 at 3:52 am you say to me
Firstly, thank you for your advice, but I assure you that I have read the related papers and have had direct communication with compilers of the data sets. In case you doubt this, I draw your attention to what I wrote in my post in this thread at August 19, 2012 at 2:32 pm, and I ask you to read the document (especially its Appendices) linked from that post.
I point out that the link is to the UK Parliamentary Record and it shows my submission to the Select Committee of the UK Parliament that ‘investigated’ (i.e. whitewashed) ‘climategate’. If that submission were false then I would be guilty of perjury.
Appendix A is an email from me which was leaked from CRU as part of ‘climategate’ and – as is clear from what it says – it is part of discussion of these issues with compilers of the global temperature data sets and their major associates. Appendix B is a draft paper (of which I was Lead Author) which fully justifies what I have written in this thread. And the document itself is an explanation of how the frequent – and unjustifiable – changes to the data sets were used to prevent publication of that paper.
Secondly, the adjustments are pure “guesses” for the reason I explained in my post at August 20, 2012 at 1:32 am. Simply, there is no way to validate the correction model for sampling error, so it cannot be known whether any ‘adjustments’ will increase or reduce the effects of the sampling error. In other words, you are plain wrong in what you write to me.
The effect(s) of sampling error are not known, and there is no way they can be known, so there is no known way to model them correctly.
Any ‘adjustments’ to address these effects, or anything else, are assumed to improve the accuracy of the determination of global temperature. But they may be compounding the major effect of sampling error, and there is no way to determine whether that is the case. Simply, each time an ‘adjustment’ is made, it is a guess that the ‘adjustment’ will not make the total error larger. “Explanations” of how and why the ‘adjustments’ are conducted cannot alter that.
My email in the link was discussion of this very problem with specific reference to masking.
Richard
Eventually they’ll be asserting boiling at room temperature. Anything to support an agenda.
@Jan Perlwitz
And why is Mr. Watts talking about the US surface temperatures and the adjustments in the USHCN data, although the quote he cites is about the globally averaged surface temperature anomaly?
If a high-quality USHCN dataset needs such heavy adjustments, there is not a cat in hell’s chance that the vast majority of low-quality global stations can provide reliable information.
@Ian W
It is truly amusing that these adjustments are somehow causing glaciers to melt. How do they do that?
Glaciers have been melting back since the late 1800s, and this reflects the fact that temperatures are higher now than during the LIA, when glaciers expanded hugely. It does not mean that temperatures are continuing to increase.
I’ll give you a clue. Take an ice cube out the freezer and it will start to melt. If it is still melting 10 minutes later, does this mean the temperature in the room is increasing?