"Thorough, not thoroughly fabricated: The truth about global temperature data"… Well, not *thoroughly* fabricated.

Guest post by David Middleton

Featured image borrowed from here.

Ars Technica

If you can set aside the smug, snide remarks of the author, this article does a fairly good job of explaining why the surface station temperature data have to be adjusted and homogenized.

There is just one huge problem…

US Adjusted
“In the US, a couple systematic changes to weather stations caused a cooling bias—most notably the time of observation bias corrected in the blue line.
Zeke Hausfather/Berkeley Earth”… I added the natural variability box and annotation. All of the anomalous warming since 1960 is the result of the adjustments.

 

Without the adjustments and homogenization, the post-1960 US temperatures would be indistinguishable from the early 20th century.

I’m not saying that I know the adjustments are wrong; however, any time an anomaly is entirely due to data adjustments, it raises a red flag with me. In my line of work, oil & gas exploration, we often have to homogenize seismic surveys that were shot and processed with different parameters. This was particularly true in the “good old days” before 3D became the norm. The mistie corrections could often be substantial. However, if someone came to me with a prospect and the height of the structural closure wasn’t substantially larger than the mistie corrections used to “close the loop,” I would pass on that prospect.
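That loop-tie sanity check can be written down explicitly. A minimal sketch of the decision rule (the function name, the numbers and the 2x safety factor are my own illustration, not anything from actual exploration practice):

```python
# Hypothetical decision rule illustrating the mistie argument above: the mapped
# structural closure must be comfortably larger than the tie corrections.

def worth_pursuing(closure_height_m: float, misties_m: list,
                   safety_factor: float = 2.0) -> bool:
    """True only if the closure clearly exceeds the largest mistie correction."""
    largest_mistie = max(abs(m) for m in misties_m)
    return closure_height_m > safety_factor * largest_mistie

# A 40 m closure mapped after applying 25 m misties fails the test...
print(worth_pursuing(40.0, [25.0, -10.0]))     # False
# ...while the same closure with only 5 m of mistie passes.
print(worth_pursuing(40.0, [5.0, -3.0, 2.0]))  # True
```

The same logic is what makes the adjusted temperature record suspicious to the author: the "signal" is not much larger than the corrections.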

 

Just for grins, I plotted the UAH and RSS satellite time series on top of the Hausfather graph…

 

US Adjusted_wSat
US raw, TOBs-adjusted and homogenized temperatures plotted along with UAH and RSS global satellite temperatures.  Apples and oranges? Sort of… But still very revealing.

 

I think I can see why the so-called consensus has become so obsessed recently with destroying the credibility of the satellite data.

 

Addendum

In light of some of the comments, particularly those from Zeke Hausfather, I downloaded the UAH v5.6 “USA48” temperature anomaly series and plotted it on Zeke’s graph of US raw, TOBs-adjusted and fully homogenized temperatures.  I shifted the UAH series up by about 0.6 °C to account for the different reference periods (datum differences)…

USA48

I used a centered 61-month average as a 5-yr running average. Since there appears to be a time shift, I also shifted the UAH series ahead a few months to match the peaks and troughs…

USA48x.png

The UAH USA48 data do barely exceed the pre-1960 natural variability box and track close to the TOBs-adjusted temperatures, but remain well below the fully homogenized temperatures.
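For readers who want to reproduce this kind of rebaselining and smoothing, here is a minimal sketch using synthetic data; the function names, the synthetic series and the 1981-2010 baseline choice are illustrative assumptions, not the exact offsets used above:

```python
import numpy as np
import pandas as pd

def rebaseline(series: pd.Series, ref_start: str, ref_end: str) -> pd.Series:
    """Shift an anomaly series so its mean over the reference period is zero."""
    return series - series.loc[ref_start:ref_end].mean()

def smooth_61mo(series: pd.Series) -> pd.Series:
    """Centered 61-month running mean, i.e. roughly a 5-yr running average."""
    return series.rolling(window=61, center=True, min_periods=31).mean()

# Synthetic monthly anomaly data standing in for the UAH USA48 series.
idx = pd.date_range("1979-01-01", periods=444, freq="MS")
rng = np.random.default_rng(0)
uah = pd.Series(0.01 * np.arange(444) / 12 + rng.normal(0.0, 0.2, 444), index=idx)

uah_common = rebaseline(uah, "1981-01", "2010-12")  # match reference periods
uah_smooth = smooth_61mo(uah_common)
uah_aligned = uah_smooth.shift(3)  # nudge the series a few months, as in the post
```

The same two steps (subtract a common-baseline mean, then smooth) are all that is needed to overlay any pair of anomaly series with different reference periods.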

 

212 Comments
Anthony
January 21, 2016 2:14 pm

Is that Dick Dastardly from Wacky Races?

Paul Mackey
Reply to  David Middleton
January 22, 2016 7:39 am

Also Muttley was the intelligent one…..

Richard G.
Reply to  Anthony
January 21, 2016 2:20 pm

Snidely Whiplash’s evil twin?

Marcus
Reply to  Richard G.
January 21, 2016 3:10 pm

Dudley Do Right’s evil adversary…Snidely Whiplash…I still have his poster on my wall !! Bwa ha ha !

Dawtgtomis
Reply to  Richard G.
January 21, 2016 9:46 pm

I made a ringtone of the theme from Dudley Doright to announce calls from a Canadian friend.

Dr. S. Jeevananda Reddy
Reply to  Anthony
January 21, 2016 4:07 pm

Anthony — my opinion is that, from figure 2, the satellite data are in line with nature. The surface temperature rarely accounts for the “climate system” and “general circulation pattern” impacts. The satellite data account for these. Because of this, the satellite data series are below the raw surface data.
Dr. S. Jeevananda Reddy

Reply to  Dr. S. Jeevananda Reddy
January 21, 2016 9:21 pm

The satellite will always be “below” the surface data because the satellites measure an average weighted to the surface but which contains data to the tropopause at ~217K. The surface thermometers measure a nano layer at about the height of a human nose. What is important is when the trend is different, and it is.

Dr. S. Jeevananda Reddy
Reply to  Dr. S. Jeevananda Reddy
January 21, 2016 10:22 pm

gymnosperm — it is o.k. when we are dealing with single-station data, but it is not o.k. when we are drawing a global average. The ground-based data do not cover the globe, covering all climate systems and general circulation patterns. Those are ground realities. This is not so with the satellite data. They cover all ground realities around the globe. If there is some mix in terms of ground and upper-layer contamination in the satellite data, this can easily be solved by calibrating the satellite data against good ground-based met station data that is not contaminated by the urban effect. Calibration plays the pivotal role.
Dr. S. Jeevananda Reddy

Dr. S. Jeevananda Reddy
Reply to  Dr. S. Jeevananda Reddy
January 21, 2016 10:57 pm

cont— In previous posts under the discussion section, some argued that the atmospheric temperature anomalies are necessarily different from surface anomalies. Usually, atmospheric anomalies are less than the surface maximum in hot periods and higher than the surface anomalies in cooler periods. It is like night and day conditions. We need to average them, and thus the surface and satellite measurements should present the same averages.
Dr. S. Jeevananda Reddy

JJB MKI
Reply to  Dr. S. Jeevananda Reddy
January 22, 2016 5:26 am

Shouldn’t the warming effects of CO2 be most apparent as anomalies in the mid-troposphere – exactly where the satellites (and balloons) measure?

AndyG55
January 21, 2016 2:23 pm

“I think can see why the so-called consensus has become so obsessed recently with destroying the credibility of the satellite data.”
And now that the El Nino is starting to ease, that desperation will become MANIC !
And great fun to watch….. as the dodgy bros salesmen go to work !

AndyG55
Reply to  AndyG55
January 21, 2016 2:29 pm

Marcus
Reply to  AndyG55
January 21, 2016 3:11 pm

..WTF ?????

AndyG55
Reply to  AndyG55
January 21, 2016 3:23 pm

Sorry Marcus, if you don’t get the link to a certain pair of the “best” salesmen.

Reply to  AndyG55
January 21, 2016 7:13 pm

Thanks, Andy. That gave me a chuckle. Love the ending. 😀

AndyG55
Reply to  AndyG55
January 21, 2016 9:32 pm

“Love the ending”
Soon……. soon !! 😉

jclarke341
January 21, 2016 2:41 pm

“We must get rid of the Medieval Warm Period!” “We must get rid of the pause!” “We must get rid of the satellite data!” “We must discredit anyone who would question us!” “We must exaggerate the threat so that people will listen!” “We must get the people to do what we tell them to do!”
Does any of that sound scientific in the slightest? No, of course not. In order for the AGW myth to continue, science itself must be redefined or discredited, and that is exactly what is happening.

Reply to  jclarke341
January 21, 2016 3:11 pm

Indeed. From the Climategate emails, one Phil Jones, a formerly respected scientist:
“I can’t see either of these papers being in the next IPCC report. Kevin [Trenberth] and I will keep them out somehow — even if we have to redefine what the peer-review literature is!”

Curious George
Reply to  Michael Palmer
January 21, 2016 3:58 pm

They did.

Marcus
Reply to  jclarke341
January 21, 2016 3:13 pm

Don’t forget ” Climate deni@rs should be charged under RICO laws ” !!!!

Reply to  jclarke341
January 23, 2016 8:31 am

The IPCC redefines “science” in AR4 ( https://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch1s1-2.html ) with an argument of the form of proof by assertion: the conclusion of the argument is true regardless of contradictions. The conclusion is that it is OK to replace falsifiability with peer review. Among other contradictions to this conclusion is the motto of the Royal Society: nullius in verba (take no one’s word). To sustain its bias toward curbs on greenhouse gas emissions, the IPCC is forced to try to trash falsifiability, as the projections of its climate models are not falsifiable.

Latitude
January 21, 2016 2:51 pm

The satellite data was right…until it was wrong…because they are falling out of orbit….even though they have been adjusted for that since day one…………

Reply to  Latitude
January 21, 2016 3:19 pm

Well hopefully they’ll finish falling out of orbit soon so nobody will have to listen to them anymore.

January 21, 2016 2:58 pm

What would you get for a global mean if you used the temperature data as concentration for mineral X using the tools that you use? ± 1°C?
I don’t know how difficult it is to homogenise the data in oil and gas exploration but the data is simply the mass of mineral divided by the sample mass for many samples taken in places for that specific purpose. You are then using this to get the mass of product in a large volume of the Earth’s crust. This is a lot different to using means of max and min temperatures that are affected a lot by very localised conditions other than the change in amount of thermal energy in the atmosphere in the region so that data from even nearby stations rarely look alike (if you expand the axis to the change in GMTA since 1880). On top of that, would you base your decision on a result that is merely a few % of the range of values that you get in the one spot?
The changes to the global temperature anomaly are the very suspicious ones. Just what was needed to reduce the problem of why there was a large warming trend in the early 20th C that wasn’t exceeded when emissions became significant. And this is the difference between data homogenised in 2001 and now. The problem is not (just) homogenisation but the potential to adjust the data to what you want to see.

Reply to  David Middleton
January 21, 2016 3:34 pm

Should have left it as the last paragraph. Just hate how temperature is treated like a simple intensive property.

Tom Halla
January 21, 2016 3:00 pm

OMG!

Science or Fiction
January 21, 2016 3:02 pm

As far as I can see they will have to do a major adjustment to the ocean temperature record as well, to get the observations in line with the predictions: Have you ever wondered if any warming is really missing?
In terms of temperature:
For 0 – 2000 m ocean depth, a temperature increase of 0.045 K is reported for the period from 2005 – 2015:
The temperature increase deduced from the theory put forward by IPCC is:
0.064 K for the lowest limit for radiative forcing (1.2 W/m²)
0.13 K for the central estimate for radiative forcing (2.3 W/m²)
0.19 K for the highest limit for radiative forcing (3.3 W/m²)
Hence, the observed amount of warming of the oceans from 0 – 2000 m is also far below the lowest limit deduced from IPCC’s estimate for anthropogenic forcing.
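The arithmetic behind those deduced numbers can be sanity-checked with a back-of-envelope sketch. The Earth surface area, 0 – 2000 m ocean mass and seawater heat capacity below are round-number assumptions of mine, not figures from the comment; with them, the three forcings give roughly 0.065 K, 0.13 K and 0.18 K over ten years, close to the values quoted:

```python
SECONDS_PER_YEAR = 3.156e7
EARTH_AREA_M2 = 5.1e14   # total Earth surface area (assumed)
OCEAN_MASS_KG = 7.4e20   # ~3.6e14 m^2 of ocean x 2000 m x 1025 kg/m^3 (assumed)
CP_SEAWATER = 3990.0     # specific heat of seawater, J/(kg*K) (assumed)

def implied_warming(forcing_w_m2: float, years: float = 10.0) -> float:
    """Warming of the 0-2000 m ocean if all of the forcing ended up there."""
    energy_j = forcing_w_m2 * EARTH_AREA_M2 * years * SECONDS_PER_YEAR
    return energy_j / (OCEAN_MASS_KG * CP_SEAWATER)

for forcing in (1.2, 2.3, 3.3):
    print(forcing, round(implied_warming(forcing), 3))
```

This is an upper-bound sketch: it deliberately puts every joule of the forcing into the 0 – 2000 m layer, which is the most generous assumption for the comparison being made.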

Jjs
January 21, 2016 3:05 pm

Can you show the balloon data also with the sat data? Always makes for a better conversation with the AGW-religious brother-in-law.

Alan Robertson
Reply to  Jjs
January 21, 2016 3:08 pm

That would be the UAH 5- yr mean on the graph.

Alan Robertson
Reply to  Alan Robertson
January 21, 2016 3:10 pm

I just gave you some incorrect info…not UAH data, sorry.

Marcus
January 21, 2016 3:07 pm

I think can see why the so-called consensus has become so obsessed recently with destroying the credibility of the satellite data….
Are you missing an ” I ” ??…..I think ” I ” can see why the so-called consensus has become so obsessed recently with destroying the credibility of the satellite data.

Marcus
Reply to  David Middleton
January 21, 2016 3:20 pm

Hmmmmm, is that a Marcus D’oh or the author’s D’oh ??

Marcus
Reply to  David Middleton
January 21, 2016 3:29 pm

Ok, you corrected it, so it wasn’t me !! LOL

Marcus
Reply to  David Middleton
January 21, 2016 3:46 pm

Thanks for the great article !!

January 21, 2016 3:07 pm

The divergence of the raw data from the processed curves in the first graph seems to coincide with the Great Thermometer die-off. Raw data are thinned out to provide more scope for adjustments. Anyone who thinks that these wildly interpolated and thrice-processed data have any useful degree of accuracy should just hand back their science diploma.

jayhd
January 21, 2016 3:07 pm

“NOAA climate scientists”?
Excuse my skepticism, but anyone who works for NOAA and also calls himself, or herself, a climate scientist has two strikes against them as far as credibility is concerned. As for adjusting past temperature data, that should be a no-go. Too many questions can be raised. Just graph the data as it was measured. When collection instrumentation and/or methods change, plot the new data as a new line, and note what changed and why.

Doug
Reply to  David Middleton
January 21, 2016 9:57 pm

I found a lot of oil on old six fold data….and learned even more about data corrections. Corrections need to be done, but the potential for abuse is ever present. It all pales in comparison to the abuse one can conjure up with a modelling program. What answer do you want?

Notanist
January 21, 2016 3:09 pm

I would suggest that what needs adjustment the most is their willing suspension of critical thinking, but then I remember they’re getting paid to put this stuff out. Sad that to make the necessary adjustments to science to get it back on track, we’ll first have to make a major adjustment to the political climate.

January 21, 2016 3:10 pm

You know it is political when Carl Mears of RSS starts to disown his own work. No matter that the satellites agree with the radiosondes (weather balloons).
The consensus will have a harder time disowning them. I imagine that attack will soon come along the lines of mostly land. We already know NOAA chose to fiddle SST rather than land to ‘bust’ the pause via Karl 2015. Much easier to fiddle sketchy ocean data pre Argo. And Argo aint that great.

Reply to  ristvan
January 21, 2016 3:13 pm
KTM
Reply to  ristvan
January 21, 2016 3:33 pm

His quotes in this story were typical. He also cited his collaboration with a Warmist scientist to show that you need CO2 to make their models work.
This is one of the most bass ackwards zombie arguments that keeps coming back no matter what. Just because they can’t make their model work without leprechauns doesn’t mean leprechauns exist.

January 21, 2016 3:14 pm

“There is just one huge problem…
Without the adjustments and homogenization, the post-1960 US temperatures would be indistinguishable from the early 20th century.”

It’s not a huge problem. The US is not the world. It did indeed have a very warm period in the 1930s, whereas the ROW had a much smaller variation.
And of course, comparing US and global RSS etc is also pointless. Regional averages are usually much more variable than global.
In the US, TOBS adjustment has a particular trend effect, for well documented reasons. But globally, adjustment has very little effect.

Marcus
Reply to  Nick Stokes
January 21, 2016 3:31 pm

Most of the data pre-1960 is from the U.S. and U.K.

AndyG55
Reply to  Nick Stokes
January 21, 2016 4:53 pm

Perhaps Nick can help out.
Can you find and put pictures up for the GHCN station in Addis Ababa.
Thanks.

Reply to  AndyG55
January 21, 2016 8:59 pm

Complete story and pictures here.

AndyG55
Reply to  AndyG55
January 21, 2016 9:43 pm

Love the graph on p29, shows the central urban station COOLING since 1980 despite being urban, while the airport temp continues to increase.
Ps.. no pic of the actual urban Addis Ababa station though.. part of GHCN.

AndyG55
Reply to  AndyG55
January 21, 2016 10:02 pm

ps.. I hope you don’t mind if I use it as a classic example of just how bad airport weather stations can be, compared even to urban stations. 🙂 Thanks Nick 🙂

AJB
Reply to  AndyG55
January 22, 2016 11:32 am

Interesting stuff here too. And here. Relative dates a bit vague though.

Reply to  Nick Stokes
January 21, 2016 5:05 pm

And who says that the ROW had a smaller variation? I call BS on that one – the data coverage is just not there. Nobody knows.

Reply to  Michael Palmer
January 22, 2016 12:21 am

“And who says that the ROW had a smaller variation?”
I do.

Khwarizmi
Reply to  Nick Stokes
January 21, 2016 6:03 pm

The US is not the world. It did indeed have a very warm period in the 1930’s..
===========
Nick forgot to mention that his region in Australia also experienced a very warm period at the same time.
Argus, November 29, 1937 – “Bushfires menace homes at the basin”
January 13, 1939 – Black Friday bushfires in Victoria:
* * * * * * * * *
“In terms of the total area burnt, the Black Friday fires are the second largest, burning 2 million hectares, with the Black Thursday fires of 1851 having burnt an estimated 5 million hectares.”
– wikipedia
* * * * * * * * *
Argus, February 14, 1939 “Incendiary Peril” & “Sweltering Heat: 106.8 Deg. In City”
Argus, February 14, 1939 “Hose ban soon”
Current weather + 7 day forecast for Melbourne in the middle of a super-Nino “global warming” summer 80 years later:
http://www.bom.gov.au/vic/forecasts/melbourne.shtml

Reply to  Khwarizmi
January 21, 2016 8:53 pm

Yes, January has been not too hot, though with some lapse. Pretty warm last quarter of 2015. If you are interested, the history of hot days in Melbourne is here. You can look up the summers (incl 1939) in unadjusted detail. 114.1F on Jan 13. But that was just one hot summer. They have been getting more frequent. 115.5F in 2009, Black Saturday.

AndyG55
Reply to  Khwarizmi
January 21, 2016 9:45 pm

2015 was 11th in the only reliable data set, ie UAH Australia.
and of course a massive warming trend
http://s19.postimg.org/539zid2yb/Australia.jpg

Patrick MJD
Reply to  Khwarizmi
January 22, 2016 12:57 am

“Nick Stokes
January 21, 2016 at 8:53 pm”
Nothing unusual Nick, nothing unusual for summer!

Lewis P Buckingham
Reply to  Khwarizmi
January 22, 2016 2:28 am

There was an interesting letter in today’s Australian commenting on the suggestion that rising temperatures might jeopardise the future of the Australian Open.
‘..is not supported by data. Melbourne maximum temperature records for January, readily available from the Bureau of Meteorology website, show no long-term trend, and the warmest January in the series was 1908. Unfortunately, in an act of scientific vandalism, in January 2015 the BoM closed the Melbourne observing site that had operated for more than 120 years.
Future trends will be contentious.’
The problem for Australians is to know what the data is and the reasons for homogenization.
So what was the temperature in Melbourne after Jan 2015?
How was it calculated?
Are the only ones to know this those who hacked the BoM?

Reply to  Khwarizmi
January 23, 2016 1:53 pm

Stokes
You should be aware of how poor the former La Trobe site was,
http://climateaudit.org/2007/10/23/melbournes-historic-weather-station/
while the 19thC temperatures came from a site in the Botanic Gardens and how the automated stations read higher. Sometimes there is over a degree difference between a short spike in temperatures and the half hour readings. What are the chances such spikes would be picked up as someone popped out to take a reading?
http://www.bom.gov.au/tmp/cdio/086071_36_13_2020785628048329213.png
You then cherry-pick this one station and look at extreme temperatures to highlight that global warming is real, even though there are only a few degrees difference between the 20 hottest days recorded (pretty obvious that, if taken under the same conditions in the 19thC, the readings could be >2°C more), then claim that 7 of the 20 being in the last 20 years is meaningful.
Then you ignore that both the highest monthly mean for Jan and Feb at the site were over 100 years ago, with the Feb readings taken in the gardens.

Robert Austin
Reply to  Nick Stokes
January 21, 2016 6:05 pm

Ah yes. In the small part of the world where there is extensive if not pristine temperature data, there is a cooling. But this cooling is overwhelmed by warming in regions of the earth where temperature data is scarce to non existent. Nevertheless, through the prestidigitation of adjustments and homogenization, these wizards of climate science can determine the earth’s temperature anomaly to the hundredth of a degree. I stand slack jawed in amazement.

MarkW
Reply to  Robert Austin
January 22, 2016 6:44 am

100 years ago, data was recorded to the nearest degree.
Yet through fancy statistical manipulations, they claim to be able to know the actual temperature at the time to 0.01C. (And that’s without getting into data quality and site management issues.)

GeeJam
January 21, 2016 3:15 pm

March 13th 2016. World Wide Discharge a CO2 Extinguisher Day.
I’m up for it – might even open a few 2L bottles of Lemonade simultaneously – and make some bread – and . . . .
Cannot wait to see how unprecedentedly hot April will be as a result of all that ‘harmful’ gas.

Marcus
Reply to  GeeJam
January 21, 2016 3:32 pm

WHAT !! No beer ??

GeeJam
Reply to  Marcus
January 21, 2016 3:39 pm

Beer, Sodium Bicarb, Limescale Remover, you name it . . . . It’s gonna be fizzing on the 13th March.

Duncan
Reply to  GeeJam
January 21, 2016 3:40 pm

Ha, Ha, I envisioned a similar stunt at the next World Climate March, fake a Liquid CO2 truck crash (use some benign substance) where multiple ‘leaks’ have to be plugged. It would be entertaining to see the crowd go into full crisis mode.

GeeJam
Reply to  Duncan
January 21, 2016 3:58 pm

I can see the headlines . . . . “185 Million CO2 Fire Extinguishers were discharged simultaneously around the world yesterday in a silent protest by the sceptic community and, surprisingly, the expert’s prediction of Armageddon hasn’t happened after all.”

highflight56433
Reply to  GeeJam
January 21, 2016 4:42 pm

Consider completely replacing water vapor with CO2 and temperatures do what? Now consider diluting water vapor with CO2 and temperatures do what? Consider that solar heat is constant, thus there is a fixed number of photons that can heat the atmosphere; therefore, the higher the concentration of CO2, the higher the number of photons that “heat” CO2 rather than heating water vapor. It’s a duh moment. Cheers!

Moa
Reply to  highflight56433
January 21, 2016 5:24 pm

solar luminosity is approximately constant (there is low-level variability, but let’s ignore that for now).
solar *magnetic activity* is far from constant. And solar magnetic activity affects the greatest greenhouse gas, ‘water vapor’ via mediation of cosmic ray flux. See the work by Nir Shaviv et al.

Dawtgtomis
Reply to  GeeJam
January 21, 2016 10:26 pm

I own a couple of 20# CO2 extinguishers that have the original seals and still weigh out. Sequestered gas from 1945. Maybe that’s a good enough reason to ice down some lovely beverages with them at the solstice.
[Keep the bottles for the next hot day: Jan 22, Feb 22, March 22, April 22 …. or June 22. Might be warm again by Sept 22, since we believe in equal opportunity hemispheres. .mod]

Dawtgtomis
Reply to  Dawtgtomis
January 21, 2016 10:29 pm

Ouch- “sequestered liquid carbon pollution” -I should have said.

Dawtgtomis
Reply to  Dawtgtomis
January 21, 2016 10:32 pm

And while I’m being dyslexic, I meant the equinox. Bed time I guess.

john harmsworth
Reply to  GeeJam
January 22, 2016 1:56 pm

Better eyeball that lemonade closely! Wouldn’t want to be paying for 500mL when it’s .06 short!

KTM
January 21, 2016 3:20 pm

One key point they didn’t discuss was that the high quality (class 1/2) stations have a smaller warming trend than the low quality stations. They say all these corrections are eliminating the biases, yet the biases clearly remain.
And then they set their biased data on a pedestal to undermine all other datasets. In the case of the Pause-buster data, it sticks out like a sore thumb against many other curated datasets put out by establishment groups, yet they insist it’s the new standard.
For the two highest quality datasets, USCRN and ARGO, they have now adjusted BOTH of them to match low-quality data. They set the initial conditions for USCRN at an anomaly of almost +1 degree, based on historic USHCN data. And they adjusted the ARGO data to match the ship intake data, rather than doing the opposite.
It’s as if they don’t WANT high quality data as a solid reference point.

Brian H
Reply to  KTM
January 21, 2016 6:12 pm

No “as if” about it.

Reply to  KTM
January 21, 2016 10:31 pm

Low resolution information is their ally. That allows them to infer that large areas surrounding a favorable warm reading can then be said to match that warm reading. I think that I see that same method at play at NCEP with their ENSO region data as compared to the other data sets for the ENSO regions. They are then able to legitimately state that their “picture” of the regions is correct according to their rules. To see what I am referring to, click on the current NCEP ssta graph, and then compare that to what Weather Zone or Tropical Tidbits show.

Reply to  KTM
January 22, 2016 10:14 pm

No. The USCRN stations (110 in the USA) match the other thousands of “bad” stations. Which means that the science of “rating” sites isn’t settled.

January 21, 2016 3:22 pm

Global temp: it’s like a boring two horse race you can’t see properly, if you can see it at all.

January 21, 2016 3:23 pm

“I think I can see why the so-called consensus has become so obsessed recently with destroying the credibility of the satellite data.”
You should have plotted the RAW RSS data.
Nobody is obsessed with destroying the credibility of the modelled data produced from satellites.
People are interested in what the actual adjustments are and the uncertainties.
But Look Dave..
You like RSS. RSS has to perform a TOBS correction.
Do your trust the physics of that TOBS correction?
If yes… Then you just trusted a GCM.. because RSS corrects its data with the aid of a GCM

Marcus
Reply to  Steven Mosher
January 21, 2016 3:35 pm

Are you really that stupid or are you just practicing to be a liberal politician ??

Reply to  Steven Mosher
January 21, 2016 3:36 pm

And UAH uses radiosondes. So attack UAH first. Got it. Thanks for the little additional insight.

Bear
Reply to  David Middleton
January 21, 2016 4:15 pm

By that logic the surface measurements aren’t temperatures. They’re analogs based on the expansion of liquids in a tube or the change in electrical current in a circuit.

Reply to  David Middleton
January 22, 2016 10:16 pm

RSS produces temperatures. UAH does not.

Curious George
Reply to  Steven Mosher
January 21, 2016 4:05 pm

Steven, please link to how a TOBS correction comes from a GCM.

Reply to  Curious George
January 22, 2016 10:17 pm

Read the rss ATBD. I have linked to it

Curious George
Reply to  Curious George
January 23, 2016 2:18 pm

Dear Steven, I just spent 15 minutes trying to find your link. Did you link to it here, or elsewhere? In 2014, 2015, or 2016? What the hell is RSS ATBD? Googling it yields “Did you mean:
rss tabs or rs qbd or airs atbd or rss tcd”
Thank you very much for wasting my time. Are you unhelpful on purpose? What GCM did you mean?

AndyG55
Reply to  Steven Mosher
January 21, 2016 4:30 pm

Hey Mosh, can you find and put pictures up for the GHCN station in Addis Ababa,
Thanks.

Patrick MJD
Reply to  AndyG55
January 21, 2016 7:09 pm

I think it is at Bole airport, but I can’t find any pictures. I could of course ask one of my ex-wife’s family or friends there to take a picture. Either way, the air quality is fairly bad given most people use open charcoal fires for cooking and making coffee, and Addis is at about 2500m above sea level. There has been a massive build-up in the city of hi-rise buildings, other dwellings and roads, so UHIE would be significant, I would say.

AndyG55
Reply to  AndyG55
January 21, 2016 7:19 pm

The point is, Patrick, that they are using the data.
They SHOULD know exactly where it is coming from.
As far as I can determine it might be at the PO right in the middle of the city, with massive UHI effects…
and it would be one of 5 or 6 stations smeared across an area the size of the USA.
But I bet THEY DON’T KNOW !!! and certainly won’t account for it.

AndyG55
Reply to  AndyG55
January 21, 2016 7:25 pm

ps. If you go to http://www.ncdc.noaa.gov/cdo-web/datatools/findstation
and type in addis ababa, with a bit of zooming you should be able see the location on a street named Cunningham St.
Then go to Google Earth and have a look at its situation !
Our esteemed host goes up to class 5 in his surface station set.. I think this would be one of those.

AndyG55
Reply to  AndyG55
January 21, 2016 7:26 pm

forgot.. you need to pick a daily or monthly dataset

AndyG55
Reply to  AndyG55
January 21, 2016 7:27 pm

pps.. there may also be another one at the airport.. always a really good place for a weather station… NOT !

Patrick MJD
Reply to  AndyG55
January 21, 2016 7:45 pm

And here in Sydney, Australia, whenever the station at the airport reads higher than AVERAGE (FFS), it’s trotted out as proof of global warming leading to climate change. Ethiopia is a great place to visit BTW.

ralfellis
Reply to  AndyG55
January 22, 2016 1:21 am

If it is in Cunningham St, then it is right in the middle of the mad-cap city. There are two small parks just north of Cunningham St, so one presumes that the met station is in one of those parks. But the Google image is not very clear for that region.
http://s8.postimg.org/yswry8nat/addis_ababa.jpg

AndyG55
Reply to  AndyG55
January 22, 2016 2:15 am

As far as I can tell, the GHCN map puts it at the post office, bottom left.
Again the point is.. THEY SHOULD KNOW.,
but the likes of Mosh, Zeke, who work as salesmen for BEST, have not responded.

Reply to  Steven Mosher
January 22, 2016 10:19 pm

Read the rss ATBD. I have linked to it. It’s a global climate model.
Sorry you lose.

January 21, 2016 3:37 pm

I fail to see the point in debating the various temperature effects (cart) of climate change before the CO2 cause (horse) has been solidly demonstrated. Anthro CO2 is trivial, CO2’s RF is trivial, GCM’s don’t work. Trust the force, Luke!
Prior to MLO the atmospheric CO2 concentrations, both paleo ice cores and inconsistent contemporary grab samples, were massive wags. Instrumental data at some of NOAA’s tall towers passed through 400 ppm years before MLO reached that level. IPCC AR5 TS.6.2 cites uncertainty in CO2 concentrations over land. Preliminary data from OCO-2 suggests that CO2 is not as well mixed as assumed. Per IPCC AR5 WG1 chapter 6 mankind’s share of the atmosphere’s natural CO2 is basically unknown, could be anywhere from 4% to 96%. (IPCC AR5 Ch 6, Figure 6.1, Table 6.1)
The major global C reservoirs (not CO2 per se, C is a precursor proxy for CO2), i.e. oceans, atmosphere, vegetation & soil, contain over 45,000 Pg (Gt) of C. Over 90% of this C reserve is in the oceans. Between these reservoirs ebb and flow hundreds of Pg C per year, the great fluxes. For instance, vegetation absorbs C for photosynthesis producing plants and O2. When the plants die and decay they release C. A divinely maintained balance of perfection for thousands of years, now unbalanced by mankind’s evil use of fossil fuels.
So just how much net C does mankind’s evil fossil fuel consumption (67%) & land use changes (33%) add to this perfectly balanced 45,000 Gt cauldron of churning, boiling, fluxing C? 4 GtC. That’s correct, 4. Not 4,000, not 400, 4! How are we supposed to take this seriously? (Anyway 4 is totally assumed/fabricated to make the numbers work.)

JohnWho
January 21, 2016 3:38 pm

Is that “raw” data the data from all surface stations including those that are not properly sited and those that are in recognized Urban Heat Islands BEFORE any adjustment(s)?

Marcus
Reply to  JohnWho
January 21, 2016 3:45 pm

Logically, you would think that making “adjustments” to the data to compensate for “Urban Heat Islands” would adjust the temps down… but every adjustment ALWAYS goes up !! Weird, eh !

JohnWho
Reply to  Marcus
January 21, 2016 3:50 pm

Well that and the fact that virtually all the stations not sited properly also need their temps adjusted down.

Bruce Cobb
Reply to  Marcus
January 21, 2016 3:51 pm

But, they also adjust the past down, so that works out.

Reply to  Marcus
January 21, 2016 5:42 pm

This is Berkeley Earth’s take on Adelaide:
http://berkeleyearth.lbl.gov/stations/151931
Adelaide West Terrace provides the most reliable data for the State from the 19th C to 1979.
Notice the cooling in the data from 1940 to the mid 1960s in both the max and min.
http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=023000&p_nccObsCode=36&p_month=13
http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=023000&p_nccObsCode=38&p_month=13
The minimum shows a large rise after 1960 because West Terrace was changed during the sixties from a mainly residential street to a thoroughfare with car dealerships lining the street.
The Airport shows a steady increase from 1950 in the minimum temperatures.
http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=023034&p_nccObsCode=38&p_month=13
West and NW of the airport (north of the current station) was swamp. The local river emptied into this and slowly drained south and north through swamps, with overflow through Breakout Creek, which was merely a drain. The area that was swamp was partially drained for market gardens. Breakout Creek was then turned into a diversion of the river to flow directly to the sea in the late 60s, and the area was then gradually built up into suburbs.

AndyG55
Reply to  Marcus
January 22, 2016 1:13 pm

Yep, their “regional expectation” (lol) gives them carte blanche to hack and adjust at will.
They have managed to CREATE a nice warming trend since about 1960, where little exists in the real data.

Reply to  Marcus
January 22, 2016 10:21 pm

No, half of our adjustments go down.

JohnWho
Reply to  JohnWho
January 23, 2016 6:02 am

Back to my original question:
“Is that “raw” data the data from all surface stations including those that are not properly sited and those that are in recognized Urban Heat Islands BEFORE any adjustment(s)?”

Just Some Guy
January 21, 2016 3:48 pm

This data “adjustment” stuff reminds me of contract claims disputes in the construction industry. You have raw data, which typically everyone can agree on. Then you have methods and assumptions for calculating the claim amount, which are disputed almost 100% of the time. I’ve seen cases with highly credentialed experts on both sides of a contract claim coming up with widely differing cost analysis, depending on whether they represent the owner, or the contractor.
Especially considering the tiny temperature anomaly scales, it strikes me as extremely likely that the final adjusted graphs being produced by these environmental activists posing as scientists are showing wildly exaggerated warming.
What’s really disturbing to me is that the public only sees the “warmist” version of the (adjusted) data. And that data is presented to laymen as concrete evidence, as if the graphs themselves represented indisputable raw data.

jmarshs
Reply to  Just Some Guy
January 21, 2016 5:34 pm

It’s more like geotechnical reports that don’t “tell” you what’s going on below ground between borings. You can (must) make assumptions, but once you break ground, you have a geoengineer on site gathering additional data.

Reply to  jmarshs
January 21, 2016 6:56 pm

Jmarshs: Yes, many times I have changed designs and construction procedures on projects I worked on because of that “TWEEN” stuff. Water, rock, contaminants, grave sites, to name a few. Murphy’s Law.

Just Some Guy
Reply to  jmarshs
January 21, 2016 7:41 pm

Key difference between climate history reconstruction and geotechnical reports: With geo-reports, you can, as you correctly stated, have a geoengineer go on-site to gather additional data and therefore improve your knowledge of what is below ground. But with climate history that is not possible because there are no time-machines around to go back and gather the missing data. Aside from using “proxy data”, they are forever stuck with the limited information that was gathered at the time. That is why I will instantly not trust anyone claiming they have figured out an accurate global temperature trend from thermometers over the past 150 years to a high degree of certainty.

jmarshs
Reply to  jmarshs
January 21, 2016 9:16 pm

And I’ve hit a few basements from old demolished houses!

jmarshs
Reply to  jmarshs
January 21, 2016 9:18 pm

@Justsomeguy
See my post a couple down! We’re in complete agreement.

Michael C
Reply to  jmarshs
January 23, 2016 10:01 am

The key to geotech investigations (my field) is the consistency of the data from a grid that is established according to the economic and social status of the building project; say one was building a hospital compared to a chicken house. In the case of the hospital, should there be great variation over the standard grid, one must go back in and bore more holes, and keep drilling more holes until the entire subsurface is understood and measured for strength/stability etc., with a factor of safety (e.g. 3+) far exceeding the demands of the building. Should there be a soft spot that does not meet the minimum standard, one must map it to meter-scale accuracy. Not too many hospitals built on land tested to western standards fail. Here is the proof.
How does this compare with temperature measurement on a global scale? Climate scientists could learn a lot from engineers.

Curious George
January 21, 2016 4:00 pm

Adjustments should be seen not as a necessity, but as an opportunity.

Just Some Guy
January 21, 2016 4:25 pm

I have a question for anyone with knowledge: These “TOBS” adjustments. Are they done on a case by case basis? Or as a global change? In other words did someone actually comb through each and every record looking for time of observation changes? or did they just sort of “wing it” with a single adjustment to all the data at once?

Reply to  Just Some Guy
January 21, 2016 4:31 pm

In the US, it was supposedly combed through. TOBS is trivial compared to US UHI and microsite issues. Surface stations project. Previous guest post on same.

Reply to  Just Some Guy
January 22, 2016 9:53 am

Time of obs is agreed between the volunteers and NWS. If an observer wants to change, he asks. Those requests are recorded.

Reply to  Just Some Guy
January 22, 2016 10:09 am

“I have a question for anyone with knowledge: These “TOBS” adjustments. Are they done on a case by case basis? Or as a global change? In other words did someone actually comb through each and every record looking for time of observation changes? or did they just sort of “wing it” with a single adjustment to all the data at once?”
Which TOBS adjustment are you talking about?
A) The TOBS adjustment for the US.
B) The TOBS adjustment for satellites.
They BOTH do TOBS adjustments.
For the US, there are three separate approaches, all validating each other:
1. The case-by-case approach. This has been validated EVEN BY CLIMATE AUDIT SKEPTICS and by John Daly. Every station is treated separately.
2. NOAA’s statistical approach. Matches the case-by-case; every station is treated separately.
3. Berkeley Earth’s statistical approach. Matches the case-by-case; every station is treated separately.
For SATELLITES, the TOBS adjustment is performed using a single GCM model. Different GCM models give different answers.
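The mechanism behind the thermometer TOBS bias being discussed can be illustrated with a toy simulation. This is a minimal sketch with invented numbers, not any agency’s actual correction: a max/min thermometer reset in the late afternoon can credit one hot afternoon to two successive observational days, inflating the mean of daily maxima relative to a midnight (calendar-day) reset.

```python
import math
import random

random.seed(42)

N_DAYS = 365
DIURNAL_AMP = 8.0  # degrees C half-range of the daily cycle (invented)
weather = [random.gauss(0, 3) for _ in range(N_DAYS + 2)]  # day-to-day swings

def temp(day, hour):
    """Synthetic temperature: a diurnal cycle peaking at 3 pm plus random
    day-to-day weather. Invented for illustration only."""
    return 15 + DIURNAL_AMP * math.sin(2 * math.pi * (hour - 9) / 24) + weather[day]

def mean_daily_max(reset_hour):
    """Mean of daily maxima from a max/min thermometer read and reset once a
    day at reset_hour. Each observational 'day' spans the 24 h after the
    reset, so a late-afternoon reset can credit one hot afternoon to two
    successive days: the classic warm time-of-observation bias."""
    maxima = []
    for d in range(1, N_DAYS):
        window = [temp(d + (h + reset_hour) // 24, (h + reset_hour) % 24)
                  for h in range(24)]
        maxima.append(max(window))
    return sum(maxima) / len(maxima)

midnight = mean_daily_max(0)    # calendar-day reference
afternoon = mean_daily_max(17)  # 5 pm observer
print(f"midnight reset: {midnight:.2f} C")
print(f"5 pm reset:     {afternoon:.2f} C (warm bias {afternoon - midnight:+.2f})")
```

The mirror-image case, a morning reset double-counting cold mornings in the daily minima, produces a cool bias in the same way.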

john harmsworth
Reply to  Steven Mosher
January 22, 2016 2:11 pm

I have to question the method here. How is it possible to perform a case by case analysis on every individual site when wind speed, wind direction, moisture conditions, local activity, time of day shading and wind blockage are all changing constantly in so many ways? Seems to me these stations should either be classified as standard compliant or unreliable/useless. This would leave us with fewer readings but at least they would be believable.

Reply to  Steven Mosher
January 24, 2016 3:10 pm

Simple, John. Go read the validation papers. Years of hourly data. Out-of-sample testing. It works. Your questions are not important.

Reply to  Steven Mosher
January 24, 2016 8:58 pm

Steven:
Links to one or more validation papers would be welcome. I’m skeptical, as validation is impossible in the absence of identified sampling units, and my current understanding is that no sampling units underlie the climate models.
Terry Oldberg

Just Some Guy
Reply to  Just Some Guy
January 22, 2016 1:15 pm

Thanks ristvan, Nick and Steven for answering my question.
Steven, I was referring to the TOBS adjustment for thermometers. BTW, just to point out, it’s pretty obvious what you are trying to do by associating the satellite data with “GCM models”. An adjustment based on a calculation is just that. The accuracy of an adjustment depends on how well they can verify the accuracy of the calculation against real data. Whether you call it a “model” or something else means nothing. Nice try, though.

Reply to  Just Some Guy
January 24, 2016 3:08 pm

Read their paper. Different GCMs have different diurnal cycles. They don’t validate against the real world.

Mike the Morlock
January 21, 2016 4:35 pm

This is not science; it is political speech. Re-read Mr. Scott K. Johnson’s statements. They are attacks on Congressman L. Smith.
Note the insults to him and the misrepresentation of the facts that prompted his interest. Note the term “stump speech”.
He received whistleblowers’ statements that something was wrong. He is required by law to investigate.
Since Mr. Johnson chose to make it a campaign issue, the forums distributing his “political ads” must grant equal time for dissent.
michael.

MarkW
Reply to  Mike the Morlock
January 22, 2016 6:55 am

It’s like the days of the old “Fairness Doctrine”.
It’s only political, when it disagrees with the current govts position.
Anyone agreeing with the govt (or the Democrats, for that matter) is, by definition, not being political.

jmarshs
January 21, 2016 5:07 pm

People who work in the applied sciences often have to make assumptions and/or interpolate data when no better data is available. However, they do so on the condition that they will receive feedback in the future which will allow them to make adjustments to account for incorrect assumptions. If we have to know everything there is to know before we start a project, then no buildings will get built, no patients will get cured and no oil will be found.
But historical temperatures are just that: historical. And historical data, with known errors, should be scrapped, not tweaked. There is no way to go back in time to see if you are “right”. They cannot form the basis of geoengineering, i.e. trying to control the “temperature” of the planet.
This brings to light the two factors that make geoengineering the climate impossible: 1) insufficient power to affect the system, and 2) lack of timely feedbacks to make adjustments.
Regardless of whether CAGW is true or not, the most we can do is what humans have always done: adapt.

January 21, 2016 5:17 pm

Your figure with RSS and UAH data looks quite odd; are those their U.S. lower-48-only numbers?
UAH doesn’t seem to have U.S.-only processed data available for their new beta version; for the current operational version (5.6) they do, and they are available here: http://vortex.nsstc.uah.edu/data/msu/t2lt/uahncdc_lt_5.6.txt
If I plot UAH 5-year averages on my graph (which gives me 1983-present, since prior to 1983 there isn’t 5 years of UAH to average, as it begins in 1979), aligning all the series to the 1983 5-year average, I get this. UAH (version 5.6, at least) seems to agree much more with the adjusted data than with the raw data, and looks nothing like your graph.
http://s24.postimg.org/pb575mexx/tavg_ushcn_raw_tobs_adj_v3.png
If you know where I might find U.S. data for UAH v6, I’d be happy to plot that as well.
Also, you neglected to show the figure featuring global adjustments. Turns out you actually get less warming in the adjusted data, not more. Funny that.
http://cdn.arstechnica.net/wp-content/uploads/2016/01/noaa_world_rawadj_annual.png
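Aligning series built on different reference periods, as described in this exchange (and in the addendum’s ~0.6 °C shift of UAH), amounts to subtracting each series’ own mean over a common baseline window. A minimal sketch with made-up numbers (the function name and toy data are hypothetical, not anyone’s actual processing):

```python
def rebaseline(series, years, base_start, base_end):
    """Re-express a temperature anomaly series relative to a new reference
    period, so records built on different baselines can be overlaid.
    `series` and `years` are parallel lists."""
    base = [t for y, t in zip(years, series) if base_start <= y <= base_end]
    if not base:
        raise ValueError("reference period not covered by the series")
    offset = sum(base) / len(base)
    return [t - offset for t in series]

# Toy example: two records of the same climate, published on different baselines.
years = list(range(1979, 1991))
rec_a = [0.10, 0.05, 0.20, 0.15, 0.30, 0.25, 0.20, 0.35, 0.30, 0.45, 0.40, 0.50]
rec_b = [t - 0.60 for t in rec_a]   # same data, 0.6 C lower baseline

a = rebaseline(rec_a, years, 1983, 1987)
b = rebaseline(rec_b, years, 1983, 1987)
print(max(abs(x - y) for x, y in zip(a, b)))   # differences are ~0 once aligned
```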

Reply to  Zeke Hausfather
January 21, 2016 5:28 pm

Ahh, looks like you are actually comparing global land/ocean UAH and RSS data to land-only U.S. lower 48 temperatures. That would explain it.
Oddly enough, the U.S. is not the entire globe (we are a measly 2% of it), so comparing a global land/ocean record to a U.S. land record is not very useful or revealing. Since UAH actually produces a U.S. 48 land record (which I linked above), I’d suggest using it. It seems to agree significantly better with the adjusted data than with the raw data.

AndyG55
Reply to  Zeke Hausfather
January 21, 2016 5:34 pm

Hey Zeke, can you find and put up pictures of the GHCN station in Addis Ababa?
Thanks.

AndyG55
Reply to  Zeke Hausfather
January 21, 2016 5:36 pm

“UAH doesn’t seem to have U.S.-only processed data available for their new beta version; ”
Yes they do.

AndyG55
Reply to  AndyG55
January 21, 2016 5:38 pm

And UAH USA48 and 49 trends are almost an exact match for USCRN trend.

AndyG55
Reply to  AndyG55
January 21, 2016 5:40 pm

Here is UAH USA48 (V6) this century.
http://s19.postimg.org/bbw6r5lwz/USA48_land.jpg

Reply to  AndyG55
January 21, 2016 5:45 pm

You are right; I snooped around the FTP site a bit more and found it.
http://s12.postimg.org/y1sf7avbx/tavg_ushcn_raw_tobs_adj_v4.png
Turns out that the new version of UAH (6.0) is closer to the raw data, while the older version (5.6) is quite similar to the adjusted data.
So you could use the recently-adjusted UAH data to argue against the adjustments in U.S. temperature data. However, there is more than a little irony in that given the size of the adjustments that were made this year to UAH data. As Carl Mears has shown, you can get a wide range of trends for satellite data depending on the parameters you choose for orbital decay and diurnal cycle adjustments, much wider than the range of uncertainty in the surface record:
http://skepticalscience.com//pics/rss_ensemble_box.png

Reply to  AndyG55
January 21, 2016 5:52 pm

Also, being “an exact match for USCRN trends” doesn’t tell you much. USCRN trends are actually slightly higher than USHCN trends (though not significantly so in the mean) during the period of overlap:
http://s11.postimg.org/o4fdmuls3/CONUS_average_combined.png

AndyG55
Reply to  AndyG55
January 21, 2016 5:56 pm

Seriously, Zeke… two totally different measuring regimes, and you think the almost exact match is just a coincidence? ROFLMAO.
I thought you were a mathematician!!!
I’ve been waiting for someone to start playing that game; it was so obvious from the start.

Just some guy
Reply to  AndyG55
January 21, 2016 5:59 pm

Carl Mears graph of “uncertainty” is astonishing. Does this teeny tiny little sliver of uncertainty in his graph include uncertainty from UHI, poor station siting, and in-filling of the data for the massive parts of the globe with no record? They must have God-like powers of divination to be that certain of the global average temperature to such accuracy.

Brandon Gates
Reply to  AndyG55
January 22, 2016 12:08 pm

Just some guy,

Carl Mears graph of “uncertainty” is astonishing. Does this teeny tiny little sliver of uncertainty in his graph include uncertainty from UHI, poor station siting, and in-filling of the data for the massive parts of the globe with no record?

It’s worth noting that Carl Mears’ main contribution to that plot was for RSS, not the surface data. The calculations for that plot are Kevin Cowtan’s, and no, the HADCRUT4 portion of that plot does NOT include all the estimated uncertainty. When the MET informed him of his error, he factored the additional uncertainties into the calcs:
http://skepticalscience.com/surface_temperature_or_satellite_brightness.html#115558
The original spread in the trends was about 0.007C/decade (1σ). Combining these gives a total spread of (0.007^2 + 0.002^2 + 0.002^2)^(1/2), or about 0.0075 C/decade. That’s about a 7% increase in the ensemble spread due to the inclusion of changing coverage and uncorrelated/partially correlated uncertainties. That’s insufficient to change the conclusions.

They must have God-like powers of divination to be that certain of the global average temperature to such accuracy.

Precision, you mean. This is what the MET have to say about uncertainty in global average surface temperature anomalies:
http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/time_series/HadCRUT.4.4.0.0.monthly_ns_avg.txt
http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/time_series/HadCRUT.4.4.0.0.annual_ns_avg.txt
From 1979-2012, the mean monthly uncertainty is ±0.151 K and the mean annual uncertainty is ±0.087 K.
A 2-sigma uncertainty in linear trend of ±0.015C/decade calculated over the same interval is a different animal entirely.
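The quadrature combination quoted from Cowtan above is straightforward to reproduce. A quick check of the arithmetic, using only the figures given in the quote:

```python
import math

# Quadrature combination of independent trend-spread components
# (values as quoted from Cowtan's comment, in C/decade).
structural = 0.007      # original ensemble spread (1-sigma)
coverage = 0.002        # changing-coverage term
uncorrelated = 0.002    # uncorrelated/partially correlated term

total = math.sqrt(structural**2 + coverage**2 + uncorrelated**2)
increase = total / structural - 1
print(f"combined spread: {total:.4f} C/decade ({increase:.1%} larger)")
```

The combined spread comes out at about 0.0075 C/decade, a increase of a bit under 8%, consistent with the “about a 7%” figure in the quote.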

Just Some Guy
Reply to  AndyG55
January 22, 2016 12:58 pm

Brandon Gates said “Precision, you mean….”
Well no, Brandon, I meant accuracy. Perhaps I misinterpreted Zeke’s comment about “uncertainty range”. I assumed his little graphic by Mears was referring to the uncertainty range with respect to accuracy and not precision, because accuracy is what really matters here. If that graph is about “precision” and not “accuracy”, then it’s kind of irrelevant, in my opinion.

Brandon Gates
Reply to  AndyG55
January 22, 2016 2:19 pm

Just Some Guy,

If that graph posted is about “precision” and not “accuracy”, then it’s kind of irrelevant, in my opinion.

On review, I may have implicitly overstated my case. When dealing with temperature anomalies, we do care about accuracy when it is confirmed or suspected that the mean absolute error of an instrument has changed abruptly or is changing over time, both of which tend to introduce bias in trends. Otherwise, what we care about is precision — by how much we expect a given reading to deviate from its mean absolute error.
That all said, the graph Zeke posted is part of an article wherein Kevin Cowtan is making an explicit argument about trend precision, which I think is entirely relevant when the topic is comparing the reliability of (A)MSU-derived trend estimates vs. thermometer-derived trend estimates.

Just Some Guy
Reply to  AndyG55
January 22, 2016 3:24 pm

“We do care about accuracy when it is confirmed or suspected that the mean absolute error of an instrument has changed abruptly or is changing over time, both of which tend to introduce bias in trends. ”
AKA: fiddling with the data. Trying to outsmart data which you “suspect” is showing the wrong trend is a recipe for wrong results and user bias.

Brandon Gates
Reply to  AndyG55
January 22, 2016 4:05 pm

Just Some Guy,

AKA: Fiddling with the data. Trying to outsmart the data which you “suspect” is is showing the wrong trend is a recipe for wrong results and user bias.

Quite possible, which is why I think it is good scientific practice to detail such adjustments in peer-reviewed literature, retain the unadjusted data so that it can be compared at its most granular level to the adjusted data, and to publish the computer codes which perform the adjustments.
Conversely, naively assuming that the raw data contain little to no error might be a recipe for wrong results due to data bias. Again, I think it is good scientific practice to always suspect that such errors exist, and either attempt to rule them out, or upon finding them estimate the error they introduce and correct for that error.
Now, both RSS and UAH have applied adjustments over the years because they suspected that things like orbital decay and diurnal drift were biasing the results of their trend estimates. Since that is a “recipe for wrong results and user bias” according to you, do you therefore reject their temperature anomaly products?

Just Some Guy
Reply to  AndyG55
January 22, 2016 5:05 pm

Brandon said, “Now, both RSS and UAH have applied adjustments over the years because they suspected that things like orbital decay and diurnal drift were biasing the results of their trend estimates. Since that is a “recipe for wrong results and user bias” according to you, do you therefore reject their temperature anomaly products?”
No I do not and I will explain why.
Orbital decay and diurnal drift are not “suspected” problems and do not require any human judgement in the corrections. They are issues which are known for a fact to exist and can be corrected with mathematical formulas. The accuracy of those formulas can be verified by calibrating the data with known measurements made by the weather balloons. This is far different from the case with the incomplete and flawed ground-based thermometers. (note I am not talking about TOBS adjustment here, I’m talking about the so-called “homogenization” and problems like UHI and station-siting which are being incorrectly assumed by yourself as a non-problem.) You yourself used the phrase “suspected that the…. error (of a particular instrument) is changing over time”. You have no way to verify the accuracy of such suspicions and so must rely on human judgement and computer models. Any time human judgement gets involved with a complex analysis of data, there will inevitably be human bias in the final product.
And BTW, yes, I’ve read the studies which try to dismiss the significance of UHI in the ground-based temp record. These studies miss the concept of UHI entirely. They attempt to categorize stations as either “urban” or “rural”, as if UHI were a sort of “disease” which infects some thermometers but not others. UHI effects do not work that way.

Brandon Gates
Reply to  AndyG55
January 22, 2016 6:59 pm

Just Some Guy,

Orbital decay and diurnal drift are not “suspected” problems and do not require any human judgement in the corrections.

They weren’t always known, were not initially corrected for, and would not have been identified if they had not first been suspected issues. Since humans are applying the corrections, I cannot for the life of me understand why you’d think no human judgement is involved.

They are issues which are known for a fact to exist and can be corrected with mathematical formulas. The accuracy of those formulas can be verified by calibrating the data with known measurements made by the weather balloons. This is far different from the case with the incomplete and flawed ground-based thermometers.

Rhetorical question: why not just use radiosonde data then instead of futzing around with orbital corrections?

(note I am not talking about TOBS adjustment here, I’m talking about the so-called “homogenization” and problems like UHI and station-siting which are being incorrectly assumed by yourself as a non-problem.)

No, I don’t assume that. Read my previous statement again: Conversely, naively assuming that the raw data contain little to no error might be a recipe for wrong results due to data bias. Again, I think it is good scientific practice to always suspect that such errors exist, and either attempt to rule them out, or upon finding them estimate the error they introduce and correct for that error.
I do not consider surface-based observations exempt from those same principles.

You yourself used the phrase “suspected that the…. error (of a particular instrument) is changing over time”. You have no way to verify the accuracy of such suspicions and so must rely on human judgement and computer models.

Well heck, if we knew all there is to know, we wouldn’t need to do science at all. On that note, I don’t understand how it is you know that “I” have no way of verifying suspected issues with the surface temperature record?

Any time human judgement gets involved with a complex analysis of data, there will inevitably be human bias in the final product.

I agree with that. It’s THE reason for peer review, and even that doesn’t catch every error.

And btw, yes I’ve read the studies which try to dismiss the significance of UHI in the ground-based temp record.

Such as?

These studies miss the concept of UHI entirely. They attempt to categorize stations by either “urban” or “rural”, as if UHI were a sort of “diseases” which infects some thermometers but does not infect others. UHI effects do not work that way.

I must confess, I do have a difficult time imagining that a weather station surrounded by corn fields in the dead center of Nebraska is going to be “infected” by UHI from Los Angeles.


Just Some Guy
Reply to  AndyG55
January 23, 2016 1:32 pm

Attention Mod: Sorry about the earlier formatting errors.

I cannot for the life of me understand why you’d think no human judgement is involved.

I am surprised that you still seem to not get my point that not all adjustments are equal. All I can say at this point is please re-read my previous comments. Or if you still disagree, then we’ll just have to agree to disagree.

I don’t understand how it is you know that “I” have no way of verifying suspected issues with the surface temperature record?

I think you are missing the distinction between knowing when there is an issue, and merely suspecting one.

I agree with that. It’s THE reason for peer review, and even that doesn’t catch every error.

I’m glad we can agree on something. Unfortunately, where it involves climate science, the peer review process has been subverted into a gate-keeping function. We have fraudsters like Michael Mann to thank for that.

I must confess, I do have a difficult time imagining that a weather station surrounded by corn fields in the dead center of Nebraska is going to be “infected” by UHI from Los Angeles

No. But a station that was next to dirt road in 1972 but next to a small town shopping center in 2015 might still be counted as “rural” and yet would still show some UHI effects. Likewise a station that was installed the middle of an already urbanized downtown Denver in 1950 would be considered “urban” but might not show any UHI effects all between 1950 and 2015. UHI is caused by the growth of manmade structures over time. It’s not a virus that only affects all urban stations and none of the rural ones. A study which just compares the trends of “urban” vs “rural” is meaningless, even more so when one considers that most weather stations have rather short time periods. What’s more revealing is the fact heavy urban areas show significantly higher current temperatures than those in nearby rural areas. As far as I’ve seen, none of the warmists’ studies are able to reconcile their “results” against the proven reality of UHI effects.
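The distinction being argued here, that UHI biases trends through the growth of a station’s surroundings rather than through its static urban/rural label, can be illustrated with a toy example. All numbers below are invented for illustration only:

```python
# Illustrative sketch (synthetic numbers): UHI biases *trends* via growth
# of the surroundings, not via a station's static urban/rural label.
years = list(range(1950, 2016))
true_anomaly = [0.005 * (y - 1950) for y in years]   # invented background trend, C

# "Urban" station: already built up in 1950, constant +2 C offset -> no trend bias.
urban = [t + 2.0 for t in true_anomaly]
# "Rural"-classified station: surroundings urbanize after 1972, offset grows.
rural = [t + (0.02 * (y - 1972) if y > 1972 else 0.0)
         for t, y in zip(true_anomaly, years)]

def trend_per_decade(vals, yrs):
    """Ordinary least-squares slope, in degrees C per decade."""
    n = len(yrs)
    my, mv = sum(yrs) / n, sum(vals) / n
    slope = (sum((y - my) * (v - mv) for y, v in zip(yrs, vals))
             / sum((y - my) ** 2 for y in yrs))
    return 10 * slope

print(f"true trend: {trend_per_decade(true_anomaly, years):.3f} C/decade")
print(f"urban stn:  {trend_per_decade(urban, years):.3f} C/decade")
print(f"rural stn:  {trend_per_decade(rural, years):.3f} C/decade  <- growth bias")
```

In this sketch the permanently urban station reproduces the background trend exactly despite its large warm offset, while the nominally “rural” station’s trend is inflated, which is the point being made about urban-vs-rural comparisons.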

Just some guy
Reply to  Zeke Hausfather
January 21, 2016 5:37 pm

Question: how can any satellite record be “relative to 1900-1920”? I thought satellite records started in ’79.

Reply to  Just some guy
January 21, 2016 5:46 pm

My mistake on the label. I was updating the figure cited in the original post and forgot to change it. It should be “relative to 1979-1984”

Werner Brozek
Reply to  Zeke Hausfather
January 21, 2016 7:51 pm

If you know where I might find U.S. data for UAH v6, I’d be happy to plot that as well.

See:
http://vortex.nsstc.uah.edu/data/msu/v6.0beta/tlt/uahncdc_lt_6.0beta4.txt

AndyG55
January 21, 2016 5:53 pm

Thanks to USCRN, all US temperatures have been brought under some semblance of adjustment control.
Trouble is, that leaves the rest of the world for the alarmists to play with. And they do.

Reply to  AndyG55
January 21, 2016 6:02 pm

Oddly enough, I have an academic paper on the subject of USHCN/USCRN comparisons that just got accepted for publication. We can chat about it in detail next month :-p

AndyG55
Reply to  AndyG55
January 21, 2016 6:09 pm

Don’t forget to mention the mathematical impossibility of the match shown in that link. 😉

AndyG55
Reply to  AndyG55
January 21, 2016 6:10 pm

P.S. Any luck on that pic of the Addis Ababa weather station?

Reply to  AndyG55
January 21, 2016 5:59 pm

You do realize that the homogenization code used is public, and that it’s exactly the same in the post-CRN period as the pre-CRN period, right? This idea that NOAA suddenly did something different when the CRN came online is silly conspiracy-mongering.
ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/software/

KTM
Reply to  AndyG55
January 21, 2016 9:36 pm

[embedded image]
Pure coincidence, right Zeke?

NeedleFactory
January 21, 2016 6:02 pm

The cited article (at Ars Technica) opens with the measurement of groundwater level over time as its very first example of the scientific need for adjustments. This example contradicts the article’s argument: “Automatic measurements are frequently collected using a pressure sensor suspended below the water level. Because the sensor feels changes in atmospheric pressure as well as water level, a second device near the top of the well just measures atmospheric pressure so daily weather changes can be subtracted out.”
Thus the device collects two raw measurements, subtracting one from the other to produce a “raw” difference between the two sensors. This is the design of the apparatus. Neither sensor is adjusted.
David Middleton closes with, “I think I can see why the so-called consensus has become so obsessed recently with destroying the credibility of the satellite data.” Indeed. The so-called consensus has claimed that satellite data also “require adjustments,” ignoring the fact that — unlike ground weather stations — the “adjustments” are part of the measurement design, not after-the-fact fiddling.
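The groundwater example amounts to a simple differential-pressure calculation: the two raw readings are combined by design rather than corrected after the fact. A sketch of that arithmetic, with all readings and constants hypothetical:

```python
RHO_WATER = 1000.0   # kg/m^3, assumed fresh water
G = 9.80665          # m/s^2, standard gravity

def water_column_m(total_pressure_pa, barometric_pressure_pa):
    """Height of water above the submerged sensor, from two raw pressure readings.
    Subtracting the wellhead barometer removes daily weather changes by design."""
    return (total_pressure_pa - barometric_pressure_pa) / (RHO_WATER * G)

# Hypothetical readings: submerged sensor 111,325 Pa; wellhead barometer 101,325 Pa
depth = water_column_m(111_325, 101_325)  # roughly 1.02 m of water above the sensor
```

Neither raw reading is altered; the subtraction is simply how the instrument pair is meant to be read.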

Reply to  NeedleFactory
January 21, 2016 6:13 pm

Oh really, orbital decay adjustments and diurnal cycle change adjustments are part of the measurement design? That would be news to UAH and RSS. Especially UAH who released a new (and quite different) adjusted version of their record just 6 months ago.

NeedleFactory
Reply to  Zeke Hausfather
January 21, 2016 6:43 pm

I am happy to be corrected — but I thought even school children knew about orbital decay and am surprised the scientists did not expect it. I would have thought brightness differences during the day would also have been expected. I’d appreciate any references about this you might supply. TIA.

MarkW
Reply to  Zeke Hausfather
January 22, 2016 7:00 am

They expected it; however, the actual decay was not the same as the expected decay, which is why the adjustment was necessary.

Richard M
Reply to  Zeke Hausfather
January 22, 2016 10:19 am

Really … the lack of proper adjustments 15-20 years ago was the subject of the recent video attacking Christy and Spencer.

Reply to  Zeke Hausfather
January 24, 2016 3:02 pm

Orbital decay and diurnal adjustments are made by both RSS and UAH. Without these adjustments you get nonsense data.

Brandon Gates
Reply to  NeedleFactory
January 22, 2016 3:22 pm

NeedleFactory,

The so-called consensus has claimed that satellite also “requires adjustments,” ignoring the fact that — unlike ground weather stations — the “adjustments” are part of the measurement design, not after-the-fact fiddling.

https://courses.seas.harvard.edu/climate/eli/Courses/global-change-debates/Sources/10-Mid-tropospheric-warming/more/Christy-etal-2007.pdf
11. Caveats
[52] We point out that data sets based on satellites undergo constant examination by the developers and users. These data are observed by complicated instruments which measure the intensity of the emissions of microwaves from atmospheric oxygen, requiring physical relationships to be applied to the raw satellite data to produce a temperature value. Further, the program under which these satellites were designed and operated was intended to improve weather forecasts, not to generate precise, long-term climate records.
[53] Since 1992 the UAH LT data set has been revised seven times or about once every 2 to 3 years. There is no expectation that the current version (5.2, May 2005) will not continue to be revised similarly as better ways to account for known biases are developed and/or new biases are discovered and corrected. Thus the production of climate time series from satellites will continue to be a work-in progress.

Emphasis added. Paragraph 53 there is an example of what I consider particularly good scientific thinking, and would be an entirely appropriate statement even if the (A)MSUs and related instrumentation had been purpose-designed for generating long-term climate records — which, like most of their surface-based weather station cousins, they weren’t.

Greg Locock
January 21, 2016 6:08 pm

Thank you for a very informative article, and also for linking to the ars technica article, which as you say has a snarky tone but does a good job of broadly describing the whys and wherefores of homogenisation.

J
January 21, 2016 7:11 pm

You should read the comments forums to this article.
I comment frequently in their forums when they are on climate topics which they often are.
They are a fully committed to the cause crowd, enforcing the AGW dogma.
If one offers any dissenting ideas they are immediately attacked, sometimes correctly so. But in other cases cogent arguments are met by a coordinated squad of enforcers, eventually devolving to swearing at the poster with assorted ad hominems.
You should look at the comments to that story and see the partisan tactics.

Reply to  J
January 21, 2016 8:07 pm

I was motivated to make comments by such behaviour just before the Climategate emails came out. I made a comment that was hardly sceptic but defensive of those who wanted to question “The Science” and point out flaws. Got a bollocking for an innocent and non-oil-funded scepticism.

January 21, 2016 7:35 pm

Since when was the argument that the temperature record was good as-is? A quick look at nearby stations shows you couldn’t be sure of anything unless the change was tenfold bigger and you restricted yourself to rural sites with few changes.
The argument is that with such large adjustments you can’t have much confidence in a result that shows such a small trend globally, and one far from uniform across the globe. Even without accusations of fudging, it still isn’t good enough for basing policies on.
But the result of the adjustments is not a simple offset across the range, nor a small increase in the trend over the whole range, nor a noisy plot of differences from previous estimates. The difference is a very smooth plot of exactly what the activists wanted to see.

601nan
January 21, 2016 7:47 pm

Back in the day, we would call NOAA “Shits and Giggles.”
Ha ha
Even today they can’t tell an arithmetic mean from a geometric mean.
Don’t ask a NOAA employee to explain the difference between a geometric mean and an arithmetic mean! That, at a bar in Bethesda, would start a fight and the police would be called to close the bar.
Ha ha

January 21, 2016 10:18 pm

News Flash: Climate change caused by newly discovered super planet with an orbit of 20,000 years. The planets are aligned right now. Big changes expected. Lost Pluto but found Mickey Mouse.

January 22, 2016 12:07 am

Problem with USHCN: the logic behind the TOBS adjustment appears sound, but the documentation that observers actually did change their time of observation, as needed for the huge adjustments from 1970 to 1980 to 1990 to 2000 to 2015, appears weak. So the foundation is weak.
When, at the same time, the number of missing data points in the most recent years has exploded, it becomes increasingly hard to explain the adjustments with the usual “well, in the old days, collection of temperature data was so very bad…” style.
On top of that, it seems that Karl’s 1986 UHI paper for the USA has been forgotten and people are forced to believe this issue is tiny.

Tim Hammond
January 22, 2016 12:47 am

I understand homogenisation to get a global temperature. I understand adjusting data to get a consistent set of data for comparisons. What I don’t understand is where the “better” data comes from that is used for homogenisation and adjustment.
If we have lots of high-quality data showing this warming, let’s see it. If we don’t, then you can’t use lower-quality data to adjust higher-quality data and claim you end up with a better result.
I don’t care what field you are talking about; you just cannot.

Marcus
Reply to  Tim Hammond
January 22, 2016 2:08 am

Shit plus more shit equals really good shit ?

Reply to  Tim Hammond
January 22, 2016 8:58 pm

And anyone who has experience in Africa knows, an extra $10 will buy you any temperature you want.

January 22, 2016 2:23 am

Why is this post only discussing the effect adjustments and such have on temperatures for the United States when one gets radically different results if they look at temperatures for the entire planet? Did the author just not look at what happened with global results, or is this just massive cherry-picking? Either way, it’s pretty ridiculous. Whatever value there may be in looking at US temperatures, there would be far more value in looking at global ones.
Especially since the post compares the temperatures it examines to global satellite temperatures. Why use global temperatures in one case but US temperatures for the other? Given basically nothing the post says holds true for global temperatures, it just looks like massive and intentional cherry-picking to me.

Reply to  David Middleton
January 22, 2016 3:17 am

David Middleton, you say:

Did you read my post? How about the article?
I focused on the US because the adjustments to the US temperature measurements equals the warming anomaly.

But that does nothing to contradict the idea you cherry-picked US temperatures for your post. Your post never even acknowledges the fact the article you’re discussing talks about global temperatures. Nobody could possibly know that unless they went to the article and read it for themselves. A person who had only read your post would naturally assume it only discussed US temperatures since that’s all you ever mentioned in reference to it.
That you find the US adjustments suspicious doesn’t justify completely ignoring the global temperatures. If you had wished to give a fair discussion of the US adjustments, you should have acknowledged your post says nothing about global temperatures, which you accept don’t suffer from any of what you describe and was a key topic of the article you responded to.

Regarding plotting the global satellite data on Zeke’s US graph, I clearly stated in English, “Just for grins, I plotted the UAH and RSS satellite time series on top of the Hausfather graph…”

This does nothing to justify your actions either. Saying you did something “for grins” doesn’t change whether or not it is misleading. If you wanted to publish the figure you made fairly, you should have cautioned readers you were comparing results for very different areas, and thus your figure really has no meaning. Instead, you provided the figure and said:

I think I can see why the so-called consensus has become so obsessed recently with destroying the credibility of the satellite data.

Which could only be based on the figure you published comparing the US land record to the global satellite record, a nonsensical comparison.
This post rests entirely upon cherry-picking. Your claims now, that you just didn’t want to talk about the things you didn’t cherry-pick, does nothing to change the fact you cherry-picked US results and presented them as though they were of central importance. If anything, your response just makes it clear what I said was true.

richardscourtney
Reply to  David Middleton
January 22, 2016 3:52 am

Brandon S? (@Corpus_no_Logos):
You say to David Middleton

But that does nothing to contradict the idea you cherry-picked US temperatures for your post. Your post never even acknowledges the fact the article you’re discussing talks about global temperatures. Nobody could possibly know that unless they went to the article and read it for themselves. A person who had only read your post would naturally assume it only discussed US temperatures since that’s all you ever mentioned in reference to it.

Any reasonable person would consider that the processing of US temperature data is a sample of the processing applied to all the temperature data.
Are you claiming that US data is processed differently from elsewhere?
If it is then how is that justified?
And if it is not then what are you complaining about?
Richard

Reply to  David Middleton
January 22, 2016 1:48 pm

richardcourtney, the very article this post responds to discusses how US temperatures are different from most of the rest of the world. It isn’t a matter of how the data is processed either. It’s because the US record has certain traits not found in most of the rest of the world.
Besides which, the author of this post decried the idea he was using this post to address anything other than US temperatures when responding to me. That means he rejects the idea he was making the sort of argument you claim. That means not only have you managed to ignore what the original article said, you’ve also managed to contradict the author of this post.

Reply to  David Middleton
January 22, 2016 1:57 pm

David Middleton, you can repeat claims like:

My statement renders the moronic accusation of cherry-picking totally moot.

But they won’t become true simply because you say them a lot and make derogatory remarks about the people who disagree. When you do nothing to address anything your critics say, you’re not actually contributing anything. I explained exactly why what you did was cherry-picking, and your response is nothing more than, “Nuh-uh, that’s stupid.” That isn’t how decent people behave, and it doesn’t do anything to actually show I am wrong. It just shows you’re obnoxious and don’t want to actually have discussions with people who disagree with you.

The caption clearly states that these are two different areas and an apples & oranges comparison.

Which does nothing to address what I said, which was that you should have cautioned readers you were making a nonsensical comparison and warned them that meant it had no meaning. Ignoring half of what I said to claim I am wrong does nothing but support the idea you cherry-pick things to misrepresent them.

Regarding my closing comment, I was being somewhat snarky. However, the fact is that the global satellite temperatures track at or below the raw US temperature measurements which don’t exceed the natural variability of the early 20th century.

This isn’t actually a fact, as it depends on a variety of factors and assumptions, but even if it were, it would be completely meaningless. Temperatures for a small fraction of the globe tell us very little about temperatures for the entire planet. Comparing satellite records to surface records for the US is no more appropriate than comparing them to surface records for Australia, Russia, South America or any other area. That you can cherry-pick one comparison and get a good rhetorical effect out of it doesn’t tell us anything.

richardscourtney
Reply to  David Middleton
January 23, 2016 1:57 am

Brandon S? (@Corpus_no_Logos):
I asked you

Any reasonable person would consider that the processing of US temperature data is a sample of the processing applied to all the temperature data.
Are you claiming that US data is processed differently from elsewhere?
If it is then how is that justified?
And if it is not then what are you complaining about?

and you have replied saying in total

richardcourtney, the very article this post responds to discusses how US temperatures are different from most of the rest of the world. It isn’t a matter of how the data is processed either. It’s because the US record has certain traits not found in most of the rest of the world.
Besides which, the author of this post decried the idea he was using this post to address anything other than US temperatures when responding to me. That means he rejects the idea he was making the sort of argument you claim. That means not only have you managed to ignore what the original article said, you’ve also managed to contradict the author of this post.

Say what!?
I was “making” a “claim” about “the sort of argument” provided by Middleton?
I “managed to ignore what the original article said”?
And I “managed to contradict the author of this post”?
NO, NO and NO.
I asked you for clarification of what YOU were saying.
If you don’t have a valid answer to my requests for clarification then say you don’t. Waving ‘straw men’ about things I did not mention does not ‘cut it’.

And it is meaningless armwaving to say “the US record has certain traits not found in most of the rest of the world” when you don’t specify those “traits” or what you think are their causes.
Richard

hot air
Reply to  Brandon S? (@Corpus_no_Logos)
January 22, 2016 8:52 am

“Why is this post only discussing the effect adjustments and such have on temperatures for the United States when one gets radically different results if they look at temperatures for the entire planet? Did the author just not look at what happened with global results, or is this just massive cherry-picking? Either way, it’s pretty ridiculous. Whatever value there may be in looking at US temperatures, there would be far more value in looking at global ones.”
Global surface temps are the epitome of cherry-picking. You assert it is more accurate to use a 1200 km radius from a single temperature measurement point? That this will give an accurate representation of the temperature over the entire area? So temperatures in Portland, OR = Phoenix or Death Valley?
It is a PRACTICAL impossibility to get an accurate global surface temperature measurement when you have areas the size of the US with one measurement point.
“Well, we don’t have coverage, so we have to make do with what we have…” is not justification for doing so. The correct response is: “We don’t have the coverage or sensor integrity to draw any useful conclusions from the measurements. Until we get something that gives us global coverage (cough *satellites*), the best we can do is look at the trends from the places where we do have good coverage and see if we can independently confirm those measurements, BEFORE we conclude we see warming, cooling, or wiggling about a mean.”
Balloon data and satellite data agree very well with each other, so either they BOTH have identical systematic errors, or they are confirming each other to within their inherent measurement accuracy. Which is more likely?
If the surface temps track the satellites/balloons you can trust the measurements because you now have 3 different measurements telling you the same thing. If they don’t which one would you question? The two that agree or the one that doesn’t?
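The "1200 km radius" being criticized refers to the GISS-style practice of letting a station's anomaly influence grid cells with a weight that falls linearly to zero at 1200 km. A rough sketch of that weighting scheme, with hypothetical station data (not GISS's actual code, which also handles seasonal cycles and overlapping records):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two points on the Earth's surface."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def gridcell_anomaly(cell_lat, cell_lon, stations, radius_km=1200.0):
    """Weighted mean of station anomalies; weight falls linearly to zero at radius_km."""
    wsum, wa = 0.0, 0.0
    for lat, lon, anom in stations:
        d = haversine_km(cell_lat, cell_lon, lat, lon)
        if d < radius_km:
            w = 1.0 - d / radius_km
            wsum += w
            wa += w * anom
    return wa / wsum if wsum else None  # None: cell has no station within range

# One station at the cell (anomaly +1.0) and one ~1100 km away (anomaly +0.2)
stations = [(45.0, -120.0, 1.0), (45.0, -106.0, 0.2)]
result = gridcell_anomaly(45.0, -120.0, stations)  # weighted strongly toward the near station
```

The sketch makes the complaint concrete: a cell with a single in-range station simply inherits that station's anomaly, however unrepresentative it may be.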

3x2
January 22, 2016 3:14 am

I think I can see why the so-called consensus has become so obsessed recently with destroying the credibility of the satellite data.
It’s the only thing keeping them remotely honest. Without the sanity check from Satellites they have nothing holding them back.

Eliza
January 22, 2016 4:04 am

Again we forget to show that FOUR radiosonde datasets agree with the TWO satellite sets. There is no argument and NO warming
http://www.globalwarming.org/wp-content/uploads/2013/06/CMIP5-73-models-vs-obs-20N-20S-MT-5-yr-means1.png

sagi
Reply to  Eliza
January 22, 2016 11:33 am

Is anyone keeping this useful graphic up to date with the more recent balloon and satellite values?

January 22, 2016 7:18 am

Re: Thoroughly fabricated … data, 1/21/16:
[W]hy the surface station temperature data have to be adjusted and homogenized.
Data evolve to fit the model, to earn tenure, to make the catastrophe really scary, to loosen the money, to pay the salaries, to buy the next gen computers, to regulate out the capitalists, to elect the socialists.
It’s not wrong — it’s Post Modern Science.

Gary Pearse
January 22, 2016 8:30 am

Here is where it all began: [embedded image]
With the 1998 El Niño in, it was clear that 1934 in the US was still the record high year. Knowing that this was likely to be followed by a La Niña, it was likely going to be a long time they would have to suffer with the embarrassing fact that 1934 remained the record. The emails from an FOI request show chronologically how his assistant adjusted the figures until Hansen was happy!! I urge everyone who is wondering about this issue to read about this first egregious tampering with the official record.
Remember that in 1988 Hansen had the air conditioning turned off and the windows all closed the night before in preparation for his alarmist speech to a sweltering Congress (an obliging congressman [Wirth??], I believe, arranged this). Here, ten years later, with the 1934 warmth dogging him, he showed another example of his lack of scruples. Today, we have largely forgotten this historic tampering with official US temperatures. We agonize over the comparatively small changes made by T. Karl and whether the pause is 18 years or whatever, when the pause might in actuality be 80 years. Everyone argues that the US is only 3% of the globe, so it doesn’t mean that globally it is similar. However, the Iceland, Greenland, Canadian and northern Russian temperature records also had the 1930s-1940s as the warmest. WUWT? The Canadian all-time high was set in two places in Saskatchewan in 1937, when it was 47 °C!! And it was in the high 30s and 40s throughout much of the rest of southern Canada as well.

Curious George
Reply to  Gary Pearse
January 22, 2016 9:21 am

Could you please provide a link?

TA
Reply to  Gary Pearse
January 22, 2016 6:09 pm

Gary Pearse wrote:
“With 1998 El Nino in, it was clear that 1934 in the US was still the record high year. Knowing that this was likely to be followed by a La Nina, it was likely going to be a long time they would have to suffer with the embarrassing fact that 1934 remained the record.”
I think 1934 should be cited as the “Hottest Year Evah!” when we are talking about records, and not 1998. The Earth is currently *not* experiencing “unprecedented heat” as alarmists would have us believe. We would have to get hotter than 1934 to be experiencing unprecedented heat.
” Everyone argues that the US is only 3% of the Globe so it doesn’t mean that globally it is similar. However, the Iceland, Greenland, Canadian and Northern Russian temperature records also had 1930s-1940 as the warmest. WUWT? The Canadian all time high was in two places in Saskatchewan in 1937 when it was 47C!! and it was in the high 30s and 40s throughout much of the rest of Southern Canada as well.”
Every unmodified temperature chart or data chart I see, from around the world, shows the period around the 1930’s as being the hottest period since that time. I have a list of newspaper headlines of weather events during the decade of the 1930’s that shows there were massive heat waves all over the planet during that time period.
TA

dmacleo
January 22, 2016 9:10 am

Thanks. When I dropped that into the tip line yesterday
http://wattsupwiththat.com/tips-notes-2/#comment-2125475
I hoped someone would address it.

DC Cowboy
Editor
January 22, 2016 9:13 am

It seems curious to me that the adjustments made post-1980 are significantly larger than the adjustments made pre-1980. Did we just not know what we were doing from 1980 to the present, causing the temps to be consistently measured lower (and a lot lower at that)?

john harmsworth
January 22, 2016 2:39 pm

I live in that area of Saskatchewan that had 47 °C in 1937, although I’m not that old. The local weather regularly shows record highs for a particular date as occurring in the ’30s. As well, we had significantly hot, dry weather in the 1980s, and I certainly remember some serious heat in the ’60s when I was a kid. Being outside all the time with no A/C in most buildings might affect my memories, but I’m curious about the 16-17 year gap between major El Niños. Does this interval show up in records previous to 1998?

Gary Pearse
January 22, 2016 6:21 pm

Curious George
January 22, 2016 at 9:21 am
“Could you please provide a link?” [Query re comment above on 1934 still hotter than 1998 according to Hansen:
http://wattsupwiththat.com/2016/01/21/thorough-not-thoroughly-fabricated-the-truth-about-global-temperature-data-well-not-thoroughly-fabricated/#comment-2126590%5D
Here is the original link to an exposé from an FOI request concerning the fiddling with the 1934 record hot temperatures to reduce them below those of 1998, and the anger and excitement it garnered. Sorry I forgot to add it to my comment.
http://wattsupwiththat.com/2010/01/14/foiad-emails-from-hansen-and-giss-staffers-show-disagreement-over-1998-1934-u-s-temperature-ranking/
We have to keep hammering this stuff!!

Gary Pearse
Reply to  Gary Pearse
January 22, 2016 6:35 pm

While I’m at it, here are 295 pages of the email flurry surrounding all this, and I recommend skimming that, too.
https://wattsupwiththat.files.wordpress.com/2010/01/783_nasa_docs.pdf

Michael C
January 23, 2016 10:28 am

Gary Pearse wrote:
“With 1998 El Nino in, it was clear that 1934 in the US was still the record high year. Knowing that this was likely to be followed by a La Nina, it was likely going to be a long time they would have to suffer with the embarrassing fact that 1934 remained the record.”
Can someone please tell me why El Niño increases global mean temperature and La Niña decreases it? According to the model they do not change mean temperature; they just redistribute it around the globe.
Deal with this, guys. It is important. Why? Because should a logical explanation not be found, it proves that surface measurements are not accurate. Do the satellite measurements show the same correlation? If they don’t, then bingo! You’ve found the strongest argument yet that surface measurements are biased by environment. How the heck could 18 months of El Niño increase mean marine temperature by any measurable degree??

Michael C
Reply to  Michael C
January 23, 2016 11:32 am

Much is made of the ’98 spike. If the records are remotely accurate, then this deserves much more attention. A spike can only be the result of two factors, or a combination of both:
A genuine pulse in mean global temperature due to increased energy input or decreased output
A distinct environmental change that resulted in elevated temperature readings within the zones measured that did not reflect the global mean
Either way, it appears almost impossible for mean global temperature to increase at the rate shown for ’98. Why? The sea. It is a huge energy sink comprising millions of zones and cells measurable on a meter scale. Any swimmer knows this. Measure the mean? Who do they think they are fooling?

January 23, 2016 5:29 pm

Another option: not thoroughly enough fabricated …

January 23, 2016 8:39 pm

Dr. Roy Spencer posted radiosonde data in Figure 7 of
http://www.drroyspencer.com/2015/04/version-6-0-of-the-uah-temperature-dataset-released-new-lt-trend-0-11-cdecade
This indicates the surface-adjacent lowest troposphere warmed since 1979 by about 0.02, maybe 0.03 °C per decade more than the satellite-measured lower troposphere.
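A figure like "0.02-0.03 °C per decade" is just the slope of an ordinary least-squares fit to the anomaly series, scaled to ten years. A minimal sketch with synthetic, noise-free data (the numbers are illustrative, not Dr. Spencer's):

```python
def trend_per_decade(years, anoms):
    """Ordinary least-squares slope of anomaly vs. year, in degrees per decade."""
    n = len(years)
    mx = sum(years) / n
    my = sum(anoms) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(years, anoms))
    sxx = sum((x - mx) ** 2 for x in years)
    return 10.0 * sxy / sxx  # per-year slope scaled to a decade

# Synthetic series warming at exactly 0.01 deg/yr, i.e. 0.1 deg/decade
years = list(range(1979, 2016))
anoms = [0.01 * (y - 1979) for y in years]
decadal = trend_per_decade(years, anoms)  # ~0.1
```

Comparing two datasets' decadal trends, as the comment does, means computing this slope for each series over the same span and taking the difference.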