Guest post by David Middleton
Featured image borrowed from here.

If you can set aside the author's smug, snide remarks, this article does a fairly good job of explaining why the surface station temperature data have to be adjusted and homogenized.
There is just one huge problem…

Zeke Hausfather/Berkeley Earth… I added the natural variability box and annotation. All of the anomalous warming since 1960 is the result of the adjustments.
Without the adjustments and homogenization, the post-1960 US temperatures would be indistinguishable from the early 20th century.
I’m not saying that I know the adjustments are wrong; however, any time an anomaly is entirely due to data adjustments, it raises a red flag with me. In my line of work, oil & gas exploration, we often have to homogenize seismic surveys which were shot and processed with different parameters. This was particularly true in the “good old days” before 3D became the norm. The mistie corrections could often be substantial. However, if someone came to me with a prospect and the height of the structural closure wasn’t substantially larger than the mistie corrections used to “close the loop,” I would pass on that prospect.
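For readers who have never wrestled with misties, here is a minimal sketch, with invented numbers, of the sort of least-squares "loop closure" being described: solve for one bulk shift per line that best reconciles the misties at the intersections, then compare the size of those corrections with the height of a mapped closure. It illustrates the idea only; it is not the actual software used in interpretation shops.

```python
# Minimal "loop closure" sketch: solve for per-line bulk time shifts that
# best explain the misties observed where 2-D seismic lines cross.
# All numbers are hypothetical.
import numpy as np

lines = ["A", "B", "C"]                                   # hypothetical 2-D lines
# (line_i, line_j, mistie in ms) observed at intersections, defined as t_i - t_j
misties = [("A", "B", 12.0), ("B", "C", -8.0), ("C", "A", -1.0)]

idx = {name: k for k, name in enumerate(lines)}
G = np.zeros((len(misties) + 1, len(lines)))              # extra row pins the mean shift
d = np.zeros(len(misties) + 1)
for row, (i, j, m) in enumerate(misties):
    G[row, idx[i]], G[row, idx[j]], d[row] = 1.0, -1.0, m
G[-1, :] = 1.0                                            # constraint: shifts sum to ~0

shifts, *_ = np.linalg.lstsq(G, d, rcond=None)
print("bulk shifts (ms):", {k: round(float(v), 1) for k, v in zip(lines, shifts)})

closure_height_ms = 15.0                                  # hypothetical prospect closure
largest_correction = float(np.max(np.abs(shifts)))
print("closure height vs largest mistie correction (ms):",
      closure_height_ms, "vs", round(largest_correction, 1))
# If the closure is not substantially larger than the corrections used to
# close the loop, the rule of thumb above says: pass on the prospect.
```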
Just for grins, I plotted the UAH and RSS satellite time series on top of the Hausfather graph…

I think I can see why the so-called consensus has become so obsessed recently with destroying the credibility of the satellite data.
Addendum
In light of some of the comments, particularly those from Zeke Hausfather, I downloaded the UAH v5.6 “USA48” temperature anomaly series and plotted it on Zeke’s graph of US raw, TOBs-adjusted and fully homogenized temperatures. I shifted the UAH series up by about 0.6 °C to account for the different reference periods (datum differences)…

I used a centered 61-month average as a 5-year running average. Since there appears to be a time shift, I also shifted the UAH series ahead by a few months to match the peaks and troughs…
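For anyone who wants to reproduce the smoothing and alignment, a minimal sketch follows; the file name and column layout are assumptions, and the offset and lag are simply the approximate values described above.

```python
# Sketch of the processing described above: a centered 61-month running mean,
# a fixed offset for the different anomaly baselines, and a small lag shift
# to line up peaks and troughs. File name and columns are hypothetical.
import pandas as pd

uah = pd.read_csv("uah_usa48.csv", parse_dates=["date"]).set_index("date")

offset_c = 0.6        # approximate datum difference between reference periods
lag_months = 3        # illustrative shift to align peaks and troughs

smoothed = (uah["anomaly"]
            .rolling(window=61, center=True, min_periods=61)
            .mean())
aligned = (smoothed + offset_c).shift(lag_months)
print(aligned.dropna().tail())
```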

The UAH USA48 data do just barely exceed the pre-1960 natural variability box and track close to the TOBs-adjusted temperatures, but remain well below the fully homogenized temperatures.
Is that Dick Dastardly from Wacky Races?
I believe so… I thought about using Muttley as the featured image… But figured there might be copyright issues.
Also Muttley was the intelligent one…..
Snidely Whiplash’s evil twin?
Dudley Do Right’s evil adversary…Snidely Whiplash…I still have his poster on my wall !! Bwa ha ha !
I made a ringtone of the theme from Dudley Do-Right to announce calls from a Canadian friend.
Anthony — my opinion is that, from figure 2, the satellite data are in line with nature. The surface temperature data rarely account for the “climate system” and “general circulation pattern” impacts. The satellite data account for these. Because of this, the satellite data series are below the raw surface data.
Dr. S. Jeevananda Reddy
The satellite will always be “below” the surface data because the satellites measure an average weighted to the surface but which contains data to the tropopause at ~217K. The surface thermometers measure a nano layer at about the height of a human nose. What is important is when the trend is different, and it is.
gymnosperm — it is o.k. when we are dealing with single station data, but it is not o.k. when we are drawing a global average. The ground-based data do not cover the globe, covering all climate systems and general circulation patterns. Those are the ground realities. This is not so with the satellite data. They cover all ground realities around the globe. If there is some mix in terms of ground and upper-layer contamination in the satellite data, this can be easily solved by calibrating the satellite data against good ground-based met station data that is not contaminated by the urban effect. Calibration plays the pivotal role.
Dr. S. Jeevananda Reddy
cont— In previous posts, under the discussion section, some argued that the atmospheric temperature anomalies are necessarily different from surface anomalies. Usually, atmospheric anomalies are less than the surface maximum in hot periods and higher than the surface anomalies in cooler periods. It is like night and day conditions. We need to average them, and thus both surface and satellite measurements should present the same averages.
Dr. S. Jeevananda Reddy
@gymnosperm:
Shouldn’t the warming effects of CO2 be most apparent as anomalies in the mid-troposphere – exactly where the satellites (and balloons) measure?
“I think can see why the so-called consensus has become so obsessed recently with destroying the credibility of the satellite data.”
And now that the El Nino is starting to ease, that desperation will become MANIC !
And great fun to watch….. as the dodgy bros salesmen go to work !
..WTF ?????
Sorry Marcus, if you don’t get the link to a certain pair of the “best” salesmen.
Thanks, Andy. That gave me a chuckle. Love the ending. 😀
“Love the ending”
Soon……. soon !! 😉
“We must get rid of the Medieval Warm Period!” “We must get rid of the pause!” “We must get rid of the satellite data!” “We must discredit anyone who would question us!” “We must exaggerate the threat so that people will listen!” “We must get the people to do what we tell them to do!”
Does any of that sound scientific in the slightest? No, of course not. In order for the AGW myth to continue, science itself must be redefined or discredited, and that is exactly what is happening.
Indeed. From the Climategate emails, one Phil Jones, a formerly respected scientist:
“I can’t see either of these papers being in the next IPCC report. Kevin [Trenberth] and I will keep them out somehow — even if we have to redefine what the peer-review literature is!”
They did.
Don’t forget ” Climate deni@rs should be charged under RICO laws ” !!!!
The IPCC redefines “science” in AR4 ( https://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch1s1-2.html ) with an argument of the form of proof by assertion: the conclusion of the argument is true regardless of contradictions. The conclusion is that it is OK to replace falsifiability with peer review. Among other contradictions to this conclusion is the motto of the Royal Society: nullius in verba (take no one’s word). To sustain its bias toward curbs on greenhouse gas emissions, the IPCC is forced to try to trash falsifiability, as the projections of its climate models are not falsifiable.
The satellite data was right…until it was wrong…because they are falling out of orbit….even though they have been adjusted for that since day one…………
Well hopefully they’ll finish falling out of orbit soon so nobody will have to listen to them anymore.
What would you get for a global mean if you treated the temperature data as the concentration of mineral X, using the tools that you use? ±1 °C?
I don’t know how difficult it is to homogenise the data in oil and gas exploration, but there the data are simply the mass of mineral divided by the sample mass for many samples taken in places chosen for that specific purpose. You are then using this to estimate the mass of product in a large volume of the Earth’s crust. This is a lot different from using means of max and min temperatures that are strongly affected by very localised conditions other than the change in the amount of thermal energy in the atmosphere in the region, so that data from even nearby stations rarely look alike (if you expand the axis to the change in GMTA since 1880). On top of that, would you base your decision on a result that is merely a few % of the range of values that you get in the one spot?
The changes to the global temperature anomaly are the very suspicious ones. Just what was needed to reduce the problem of why there was a large warming trend in the early 20th C that wasn’t exceeded when emissions became significant. And this is the difference between data homogenised in 2001 and now. The problem is not (just) homogenisation but the potential to adjust the data to what you want to see.
The acquisition and processing parameters of seismic data are not analogous to mineralogy…
http://www.seg.org/resources/publications/interpretation/specialsections/2016/seismic-mistie
Should have left it as the last paragraph. Just hate how temperature is treated like a simple intensive property.
OMG!
As far as I can see they will have to do a major adjustment to the ocean temperature record as well, to get the observations in line with the predictions: Have you ever wondered if any warming is really missing?
In terms of temperature:
For 0–2000 m ocean depth, a temperature increase of 0.045 K is reported for the period from 2005–2015.
The temperature increase deduced from the theory put forward by the IPCC is:
0.064 K for the lowest limit for radiative forcing (1.2 W/m²)
0.13 K for the central estimate for radiative forcing (2.3 W/m²)
0.19 K for the highest limit for radiative forcing (3.3 W/m²)
Hence, the observed amount of warming of the oceans from 0–2000 m is also far below the lowest limit deduced from the IPCC's estimate for anthropogenic forcing.
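To make the arithmetic explicit, here is a rough sketch of the kind of back-of-the-envelope calculation behind numbers like those; the ocean mass and heat capacity are round-number assumptions, and the forcing is assumed to act over the whole Earth and be absorbed entirely by the 0–2000 m layer.

```python
# Rough sketch: how much would a constant forcing, applied over the whole Earth
# for 10 years and dumped entirely into the 0-2000 m ocean layer, warm that layer?
SECONDS_PER_YEAR = 3.156e7
EARTH_AREA_M2 = 5.1e14          # total Earth surface area
OCEAN_MASS_0_2000M_KG = 7.2e20  # assumed mass of the 0-2000 m layer
CP_SEAWATER = 4000.0            # J/(kg K), approximate specific heat

def warming_from_forcing(forcing_w_m2, years=10):
    joules = forcing_w_m2 * EARTH_AREA_M2 * years * SECONDS_PER_YEAR
    return joules / (OCEAN_MASS_0_2000M_KG * CP_SEAWATER)

for f in (1.2, 2.3, 3.3):
    print(f"{f} W/m2 -> {warming_from_forcing(f):.3f} K over 10 years")
# prints roughly 0.067, 0.13 and 0.18 K, close to the figures quoted above
```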
Can you show the balloon data along with the satellite data? It always makes for a better conversation with the AGW-religious brother-in-law.
I just used the Wood for Trees data processing module… They don’t have the balloon data.
That would be the UAH 5- yr mean on the graph.
I just gave you some incorrect info…not UAH data, sorry.
I think can see why the so-called consensus has become so obsessed recently with destroying the credibility of the satellite data….
Are you missing an ” I ” ??…..I think ” I ” can see why the so-called consensus has become so obsessed recently with destroying the credibility of the satellite data.
D’oh!
Hmmmmm, is that a Marcus D’oh or the author’s D’oh ??
Ok, you corrected it, so it wasn’t me !! LOL
All D’ohs are on me.
Thanks for the great article !!
The divergence of the raw data from the processed curves in the first graph seems to coincide with the Great Thermometer die-off. Raw data are thinned out to provide more scope for adjustments. Anyone who thinks that these wildly interpolated and thrice-processed data have any useful degree of accuracy should just hand back their science diploma.
“NOAA climate scientists”?
Excuse my skepticism, but anyone who works for NOAA and also calls himself, or herself, a climate scientist has two strikes against them as far as credibility is concerned. As for adjusting past temperature data, that should be a no-go. Too many questions can be raised. Just graph the data as it was measured. When collection instrumentation and/or methods change, draw a line, plot the new data, and note what changed and why.
As someone who started out trying to interpret sparse 1960s-vintage 6-fold 2D seismic data in East Texas, I actually have a fair bit of respect for what these folks do.
My problem is that their “prospect” could be nothing but mistie corrections.
I found a lot of oil on old six-fold data… and learned even more about data corrections. Corrections need to be done, but the potential for abuse is ever present. It all pales in comparison to the abuse one can conjure up with a modelling program. What answer do you want?
I would suggest that what needs adjustment the most is their willing suspension of critical thinking, but then I remember they’re getting paid to put this stuff out. Sad that to make the necessary adjustments to science to get it back on track, we’ll first have to make a major adjustment to the political climate.
You know it is political when Carl Mears of RSS starts to disown his own work. No matter that the satellites agree with the radiosondes (weather balloons).
The consensus will have a harder time disowning them. I imagine that attack will soon come along the lines of “mostly land.” We already know NOAA chose to fiddle SST rather than land to ‘bust’ the pause via Karl 2015. Much easier to fiddle sketchy ocean data pre-Argo. And Argo ain’t that great.
Yep.
Mears kind of reminds me of this story: Our hero of the day: A boy presenting his chopped-off hand on a plate
His quotes in this story were typical. He also cited his collaboration with a Warmist scientist to show that you need CO2 to make their models work.
This is one of the most bass ackwards zombie arguments that keeps coming back no matter what. Just because they can’t make their model work without leprechauns doesn’t mean leprechauns exist.
“There is just one huge problem…
Without the adjustments and homogenization, the post-1960 US temperatures would be indistinguishable from the early 20th century.”
It’s not a huge problem. The US is not the world. It did indeed have a very warm period in the 1930’s, whereas the ROW had a much smaller variation.
And of course, comparing US and global RSS etc is also pointless. Regional averages are usually much more variable than global.
In the US, TOBS adjustment has a particular trend effect, for well documented reasons. But globally, adjustment has very little effect.
Most of the data pre-1960 is from the U.S. and U.K.
Perhaps Nick can help out.
Can you find and put pictures up for the GHCN station in Addis Ababa?
Thanks.
Complete story and pictures here.
Love the graph on p29, shows the central urban station COOLING since 1980 despite being urban, while the airport temp continues to increase.
Ps.. no pic of the actual urban Addis Ababa station though.. part of GHCN.
ps.. I hope you don’t mind if I use it as a classic example of just how bad airport weather stations can be, compared even to urban stations. 🙂 Thanks Nick 🙂
Interesting stuff here too. And here. Relative dates a bit vague though.
And who says that the ROW had a smaller variation? I call BS on that one – the data coverage is just not there. Nobody knows.
“And who says that the ROW had a smaller variation?”
I do.
The US is not the world. It did indeed have a very warm period in the 1930’s..
===========
Nick forgot to mention that his region in Australia also experienced a very warm period at the same time.
Argus, November 29, 1937 – “Bushfires menace homes at the basin”
January 13, 1939 – Black Friday bushfires in Victoria:
* * * * * * * * *
“In terms of the total area burnt, the Black Friday fires are the second largest, burning 2 million hectares, with the Black Thursday fires of 1851 having burnt an estimated 5 million hectares.”
– wikipedia
* * * * * * * * *
Argus, February 14, 1939 “Incendiary Peril” & “Sweltering Heat: 106.8 Deg. In City”
Argus, February 14, 1939 “Hose ban soon”
Current weather + 7 day forecast for Melbourne in the middle of a super-Nino “global warming” summer 80 years later:
http://www.bom.gov.au/vic/forecasts/melbourne.shtml
Yes, January has been not too hot, though with some lapse. Pretty warm last quarter of 2015. If you are interested, the history of hot days in Melbourne is here. You can look up the summers (incl 1939) in unadjusted detail. 114.1F on Jan 13. But that was just one hot summer. They have been getting more frequent. 115.5F in 2009, Black Saturday.
2015 was 11th in the only reliable data set, ie UAH Australia.
and of course a massive warming trend
http://s19.postimg.org/539zid2yb/Australia.jpg
“Nick Stokes
January 21, 2016 at 8:53 pm”
Nothing unusual Nick, nothing unusual for summer!
There was an interesting letter in today’s Australian commenting on the suggestion that rising temperatures might jeopardise the future of the Australian Open.
‘..is not supported by data. Melbourne maximum temperature records for January, readily available from the Bureau of Meteorology website, show no long-term trend, and the warmest January in the series was 1908. Unfortunately, in an act of scientific vandalism, in January 2015 the BoM closed the Melbourne observing site that had operated for more than 120 years.
Future trends will be contentious.’
The problem for Australians is to know what the data is and the reasons for homogenization.
So what was the temperature in Melbourne after Jan 2015?
How was it calculated?
Are the only ones to know this those who hacked the BoM?
@Nick Stokes
You should be aware of how poor the former La Trobe site was,
http://climateaudit.org/2007/10/23/melbournes-historic-weather-station/
while the 19thC temperatures came from a site in the Botanic Gardens, and that the automated stations read higher. Sometimes there is over a degree difference between a short spike in temperatures and the half-hour readings. What are the chances such spikes would have been picked up when someone popped out to take a reading?
http://www.bom.gov.au/tmp/cdio/086071_36_13_2020785628048329213.png
You then cherry-pick this one station and look at extreme temperatures to argue that global warming is real, even though there are only a few degrees’ difference between the 20 hottest days recorded (pretty obvious that, if taken under the same conditions in the 19thC, the readings could be >2°C higher), and then claim that it is meaningful that 7 of the 20 are in the last 20 years.
Then you ignore that both the highest monthly means for Jan and for Feb at the site were over 100 years ago, with the Feb readings taken in the gardens.
Ah yes. In the small part of the world where there is extensive if not pristine temperature data, there is a cooling. But this cooling is overwhelmed by warming in regions of the earth where temperature data is scarce to non-existent. Nevertheless, through the prestidigitation of adjustments and homogenization, these wizards of climate science can determine the earth’s temperature anomaly to the hundredth of a degree. I stand slack-jawed in amazement.
100 years ago, data was recorded to the nearest degree.
Yet through fancy statistical manipulations, they claim to be able to know the actual temperature at the time to 0.01C. (And that’s without getting into data quality and site management issues.)
March 13th 2016. World Wide Discharge a CO2 Extinguisher Day.
I’m up for it – might even open a few 2L bottles of Lemonade simultaneously – and make some bread – and . . . .
Cannot wait to see how unprecedentedly hot April will be as a result of all that ‘harmful’ gas.
WHAT !! No beer ??
Beer, Sodium Bicarb, Limescale Remover, you name it . . . . It’s gonna be fizzing on the 13th March.
Ha, Ha, I envisioned a similar stunt at the next World Climate March, fake a Liquid CO2 truck crash (use some benign substance) where multiple ‘leaks’ have to be plugged. It would be entertaining to see the crowd go into full crisis mode.
I can see the headlines . . . . “185 Million CO2 Fire Extinguishers were discharged simultaneously around the world yesterday in a silent protest by the sceptic community and, surprisingly, the expert’s prediction of Armageddon hasn’t happened after all.”
Consider completely replacing water vapor with CO2 and temperatures do what? Now consider diluting water vapor with CO2 and temperatures do what? Consider that solar heat is constant, thus there is a fixed number of photons that can heat the atmosphere; therefore, the higher the concentration of CO2, the higher the number of photons that “heat” CO2 rather than heating water vapor. It’s a duh moment. Cheers!
Solar luminosity is approximately constant (there is low-level variability, but let’s ignore that for now).
Solar *magnetic activity* is far from constant. And solar magnetic activity affects the greatest greenhouse gas, water vapor, via mediation of cosmic ray flux. See the work by Nir Shaviv et al.
I own a couple of 20# CO2 extinguishers that have the original seals and still weigh out. Sequestered gas from 1945. Maybe that’s a good enough reason to ice down some lovely beverages with them at the solstice.
[Keep the bottles for the next hot day: Jan 22, Feb 22, March 22, April 22 …. or June 22. Might be warm again by Sept 22, since we believe in equal opportunity hemispheres. .mod]
Ouch- “sequestered liquid carbon pollution” -I should have said.
And while I’m being dyslexic, I meant the equinox. Bed time I guess.
Better eyeball that lemonade closely! Wouldn’t want to be paying for 500mL when it’s .06 short!
One key point they didn’t discuss was that the high-quality Class 1/2 stations have a smaller warming trend than the low-quality stations. They say all these corrections are eliminating the biases, yet the biases clearly remain.
And then they set their biased data on a pedestal to undermine all other datasets. In the case of the Pause-buster data, it sticks out like a sore thumb against many other curated datasets put out by establishment groups, yet they insist it’s the new standard.
For the two highest-quality datasets, USCRN and ARGO, they have now adjusted BOTH of them to match low-quality data. They set the initial conditions for USCRN at an anomaly of almost +1 degree, based on historic USHCN data. And they adjusted the ARGO data to match the ship intake data, rather than doing the opposite.
It’s as if they don’t WANT high quality data as a solid reference point.
No “as if” about it.
Low-resolution information is their ally. That allows them to infer that large areas surrounding a favorable warm reading can then be said to match that warm reading. I think that I see that same method at play at NCEP with their ENSO region data as compared to the other data sets for the ENSO regions. They are then able to legitimately state that their “picture” of the regions is correct according to their rules. To see what I am referring to, click on the current NCEP SSTA graph, and then compare that to what Weather Zone or Tropical Tidbits show.
No. The 110 USCRN stations in the USA match the other thousands of “bad” stations. Which means that the science of “rating” sites isn’t settled.
Global temp: it’s like a boring two horse race you can’t see properly, if you can see it at all.
“I think I can see why the so-called consensus has become so obsessed recently with destroying the credibility of the satellite data.”
You should have plotted the RAW RSS data.
Nobody is obsessed with destroying the credibility of the modelled data produced from satellites.
People are interested in what the actual adjustments are and the uncertainties.
But look, Dave..
You like RSS. RSS has to perform a TOBS correction.
Do you trust the physics of that TOBS correction?
If yes… then you just trusted a GCM, because RSS corrects its data with the aid of a GCM.
Are you really that stupid or are you just practicing to be a liberal politician ??
And UAH uses radiosondes. So attack UAH first. Got it. Thanks for the little additional insight.
Good points, Steve.
The raw satellite data aren’t even temperatures.
It’s not about trusting data sets or models (very useful tools). My problem is with anomalies that are less than or equal to the mistie corrections.
By that logic the surface measurements aren’t temperatures. They’re analogs based on the expansion of liquids in a tube or the change in electrical current in a circuit.
RSS produces temperatures. UAH does not.
Steven, please link to how a TOBS correction comes from a GCM.
Read the RSS ATBD. I have linked to it.
Dear Steven, I just spent 15 minutes trying to find your link. Did you link to it here, or elsewhere? In 2014, 2015, or 2016? What the hell is RSS ATBD? Googling it yields “Did you mean: rss tabs or rs qbd or airs atbd or rss tcd”.
Thank you very much for wasting my time. Are you unhelpful on purpose? What GCM did you mean?
https://www.ncdc.noaa.gov/cdr
https://www.ncdc.noaa.gov/cdr/fundamental/mean-layer-temperature-rss
http://www1.ncdc.noaa.gov/pub/data/sds/cdr/CDRs/Mean_Layer_Temperatures_RSS/AlgorithmDescription.pdf
section 3.4.1.2 Local Measurement Time (Diurnal) Adjustments
Next
http://images.remss.com/papers/rsspubs/Mears_JGR_2011_MSU_AMSU_Uncertainty.pdf
page 7
Hey Mosh, can you find and put pictures up for the GHCN station in Addis Ababa?
Thanks.
I think it is at Bole airport, but I can’t find any pictures. I could of course ask one of my ex-wife’s family or friends there to take a picture. Either way, the air quality is fairly bad given most people use open charcoal fires for cooking and making coffee, and Addis is at about 2500 m above sea level. There has been a massive build-up in the city of high-rise buildings, other dwellings and roads, so the UHIE would be significant, I would say.
The point is, Patrick, that they are using the data.
They SHOULD know exactly where it is coming from.
As far as I can determine it might be at the PO right in the middle of the city, with massive UHI effects…
and it would be one of 5 or 6 stations smeared across an area the size of the USA.
But I bet THEY DON’T KNOW !!! and certainly won’t account for it.
ps. If you go to http://www.ncdc.noaa.gov/cdo-web/datatools/findstation
and type in Addis Ababa; with a bit of zooming you should be able to see the location on a street named Cunningham St.
Then go to Google Earth and have a look at its situation !
Our esteemed host goes up to class 5 in his surface station set.. I think this would be one of those.
forgot.. you need to pick a daily or monthly dataset
pps.. there may also be another one at the airport.. always a really good place for a weather station… NOT !
And here in Sydney, Australia, whenever the station at the airport reads higher than AVERAGE (FFS), it’s trotted out as proof of global warming leading to climate change. Ethiopia is a great place to visit BTW.
If it is in Cunningham St, then it is right in the middle of the mad-cap city. There are two small parks just north of Cunningham St, so one presumes that the met station is in one of those parks. But the Google image is not very clear for that region.
http://s8.postimg.org/yswry8nat/addis_ababa.jpg
As far as I can tell, the GHCN map puts it at the post office, bottom left.
Again, the point is.. THEY SHOULD KNOW,
but the likes of Mosh and Zeke, who work as salesmen for BEST, have not responded.
Read the RSS ATBD. I have linked to it. It’s a global climate model.
Sorry, you lose.
I fail to see the point in debating the various temperature effects (cart) of climate change before the CO2 cause (horse) has been solidly demonstrated. Anthro CO2 is trivial, CO2’s RF is trivial, GCMs don’t work. Trust the force, Luke!
Prior to MLO, the atmospheric CO2 concentrations, both paleo ice cores and inconsistent contemporary grab samples, were massive WAGs. Instrumental data at some of NOAA’s tall towers passed through 400 ppm years before MLO reached that level. IPCC AR5 TS.6.2 cites uncertainty in CO2 concentrations over land. Preliminary data from OCO-2 suggest that CO2 is not as well mixed as assumed. Per IPCC AR5 WG1 Chapter 6, mankind’s share of the atmosphere’s natural CO2 is basically unknown; it could be anywhere from 4% to 96%. (IPCC AR5 Ch 6, Figure 6.1, Table 6.1)
The major global C reservoirs (not CO2 per se, C is a precursor proxy for CO2), i.e. oceans, atmosphere, vegetation & soil, contain over 45,000 Pg (Gt) of C. Over 90% of this C reserve is in the oceans. Between these reservoirs ebb and flow hundreds of Pg C per year, the great fluxes. For instance, vegetation absorbs C for photosynthesis producing plants and O2. When the plants die and decay they release C. A divinely maintained balance of perfection for thousands of years, now unbalanced by mankind’s evil use of fossil fuels.
So just how much net C does mankind’s evil fossil fuel consumption (67%) & land use changes (33%) add to this perfectly balanced 45,000 Gt cauldron of churning, boiling, fluxing C? 4 GtC. That’s correct, 4. Not 4,000, not 400, 4! How are we supposed to take this seriously? (Anyway 4 is totally assumed/fabricated to make the numbers work.)
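Expressed as a fraction, and taking the figures quoted above at face value (not independently checked), the perturbation being argued over is roughly:

```python
# Trivial ratio of the quoted numbers: net annual human addition of carbon
# relative to the total carbon held in the major reservoirs.
total_reservoir_gtc = 45_000   # oceans + atmosphere + vegetation + soil, as quoted
net_human_addition_gtc = 4     # per year, as asserted above

fraction = net_human_addition_gtc / total_reservoir_gtc
print(f"{fraction:.5%} of the combined reservoirs per year")  # ~0.00889%
```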
Is that “raw” data the data from all surface stations including those that are not properly sited and those that are in recognized Urban Heat Islands BEFORE any adjustment(s)?
Logically, you would think that in making “adjustments” to the data to compensate for the “Urban Heat Islands”, the temps would be adjusted down… but every adjustment ALWAYS goes up !! Weird, eh!
Well that and the fact that virtually all the stations not sited properly also need their temps adjusted down.
But, they also adjust the past down, so that works out.
This is Berkeley Earth’s take on Adelaide
http://berkeleyearth.lbl.gov/stations/151931
Adelaide West Terrace is the most reliable data for the State from the 19th C to 1979.
Notice the cooling in the data from 1940 to the mid 1960s in both the max and min.
http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=023000&p_nccObsCode=36&p_month=13
http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=023000&p_nccObsCode=38&p_month=13
The minimum shows a large rise after 1960 because West Terrace was changed during the sixties from a mainly residential street to a thoroughfare with car dealerships lining the streets.
The Airport shows a steady increase from 1950 in the minimum temperatures.
http://www.bom.gov.au/jsp/ncc/cdio/weatherData/av?p_display_type=dataGraph&p_stn_num=023034&p_nccObsCode=38&p_month=13
West and NW of the airport (north of the current station) was swamp. The local river emptied into this and slowly drained south and north through swamps with overflow through Breakout Creek which was merely a drain. That area that was swamp was partially drained for market gardens. Breakout Creek was then turned into a diversion of the river to flow directly to the sea in the late 60s and the area was then gradually built up into suburbs.
Yep, their “regional expectation” (lol) gives them carte blanche to hack and adjust at will.
They have managed to CREATE a nice warming trend since about 1960, where little exists in the real data.
No, half of our adjustments go down.
Back to my original question:
“Is that “raw” data the data from all surface stations including those that are not properly sited and those that are in recognized Urban Heat Islands BEFORE any adjustment(s)?”
This data “adjustment” stuff reminds me of contract claims disputes in the construction industry. You have raw data, which typically everyone can agree on. Then you have methods and assumptions for calculating the claim amount, which are disputed almost 100% of the time. I’ve seen cases with highly credentialed experts on both sides of a contract claim coming up with widely differing cost analyses, depending on whether they represent the owner or the contractor.
Especially considering the tiny temperature anomaly scales, it strikes me as extremely likely that the final adjusted graphs being produced by these environmental activists posing as scientists are showing wildly exaggerated warming.
What’s really disturbing to me is that the public only sees the “warmist” version of the (adjusted) data. And that data is presented to laymen as concrete evidence, as if the graphs themselves represent indisputable raw data.
It’s more like geotechnical reports that don’t “tell” you what’s going on below ground between borings. You can (must) make assumptions, but once you break ground, you have a geoengineer on site gathering additional data.
Jmarshs: Yes, many times I have changed designs and construction procedures on projects I worked on because of that “TWEEN” stuff. Water, rock, contaminants, grave sites, to name a few. Murphy’s Law.
Key difference between climate history reconstruction and geotechnical reports: With geo-reports, you can, as you correctly stated, have a geoengineer go on-site to gather additional data and therefore improve your knowledge of what is below ground. But with climate history that is not possible because there are no time-machines around to go back and gather the missing data. Aside from using “proxy data”, they are forever stuck with the limited information that was gathered at the time. That is why I will instantly not trust anyone claiming they have figured out an accurate global temperature trend from thermometers over the past 150 years to a high degree of certainty.
And I’ve hit a few basements from old demolished houses!
@Justsomeguy
See my post a couple down! We’re in complete agreement.
The key to geotech investigations (my field) is the consistency of the data from a grid that is established according to the economic and social status of the building project. Say one was building a hospital compared to a chicken house. In the case of the hospital, should there be a great variation over the standard grid, one must go back in and bore more holes, and keep drilling more holes, until the entire subsurface is understood and measured for strength/stability etc. with a factor of safety (e.g. 3+) far exceeding the demands of the building. Should there be a soft spot that does not meet the minimum standard, one must map it to metre-scale accuracy. Not too many hospitals built on land tested to western standards fail. Here is the proof.
How does this compare with temperature measurement on a global scale? Climate scientists could learn a lot from engineers.
Adjustments should be seen not as a necessity, but as an opportunity.
I have a question for anyone with knowledge: These “TOBS” adjustments. Are they done on a case by case basis? Or as a global change? In other words did someone actually comb through each and every record looking for time of observation changes? or did they just sort of “wing it” with a single adjustment to all the data at once?
In the US, it was supposedly combed through. TOBS is trivial compared to US UHI and microsite issues. Surface stations project. Previous guest post on same.
Time of obs is agreed between the volunteers and NWS. If an observer wants to change, he asks. Those requests are recorded.
“I have a question for anyone with knowledge: These “TOBS” adjustments. Are they done on a case by case basis? Or as a global change? In other words did someone actually comb through each and every record looking for time of observation changes? or did they just sort of “wing it” with a single adjustment to all the data at once?”
Which TOBS adjustment are you talking about?
A) The TOBS adjustment for the US
B) The TOBS adjustment for satellites.
They BOTH do TOBS adjustments.
For the US.
There are three separate approaches, all validating each other.
1. The case-by-case approach. This has been validated EVEN BY CLIMATE AUDIT SKEPTICS and by John Daly. Every station is treated separately.
2. The NOAA statistical approach. Matches the case-by-case. Every station is treated separately.
3. The Berkeley Earth statistical approach. Matches the case-by-case. Every station is treated separately.
For SATELLITES.
The TOBS adjustment is performed using a single GCM.
Different GCMs give different answers.
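A minimal sketch of why the time of observation matters, and of how hourly data can be used to check an adjustment for it, is below. The synthetic weather is invented purely for illustration; this is not NOAA’s or Berkeley Earth’s actual code.

```python
# With a max/min thermometer, each "day" is the 24 hours ending at the reset.
# An afternoon reset lets one hot spell set two consecutive daily maxima,
# which is the warm bias a TOBS adjustment removes. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
n_days = 3650

# Hourly temperatures: diurnal cycle peaking mid-afternoon, a random day-to-day
# "weather" offset (hot and cold spells), and a little hourly noise.
day_offset = np.repeat(rng.normal(0, 4, n_days), 24)
hour_of_day = np.tile(np.arange(24), n_days)
diurnal = 6 * np.sin(2 * np.pi * (hour_of_day - 9) / 24)      # peak near 3 pm
temps = 15 + day_offset + diurnal + rng.normal(0, 1, n_days * 24)

def mean_daily_max(reset_hour):
    """Mean of 'daily' maxima when the max thermometer is read and reset at reset_hour."""
    maxima, start = [], reset_hour
    while start + 24 <= temps.size:
        maxima.append(temps[start:start + 24].max())
        start += 24
    return float(np.mean(maxima))

print(f"5 pm reset:     {mean_daily_max(17):.2f} C")
print(f"midnight reset: {mean_daily_max(0):.2f} C")
# The afternoon reset runs warm because a hot afternoon just before the reset
# is still warm just after it, so it can dominate two observation "days".
```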
I have to question the method here. How is it possible to perform a case by case analysis on every individual site when wind speed, wind direction, moisture conditions, local activity, time of day shading and wind blockage are all changing constantly in so many ways? Seems to me these stations should either be classified as standard compliant or unreliable/useless. This would leave us with fewer readings but at least they would be believable.
Simple, John. Go read the validation papers. Years of hourly data. Out-of-sample testing. It works. Your questions are not important.
Steven:
Links to one or more validation papers would be welcome. I’m skeptical, as validation is impossible in the absence of identification of the sampling units, but my current understanding is that no sampling units underlie the climate models.
Terry Oldberg
Thanks ristvan, Nick and Steven for answering my question.
Stephen, I was referring to the TOBS adjustment for thermometers. BTW, just to point out, it’s pretty obvious what you are trying to do by associating the satellite data with “GCM models”. An adjustment based on a calculation is just that. The accuracy of an adjustment depends on how well they can verify the accuracy of the calculation against real data. Whether you call it a “model” or something else means nothing. Nice try though.
Read their paper. Different GCMs have different diurnal cycles. They don’t validate against the real world.
This is not science, it is political speech. Re-read Mr Scott K Johnson’s statements. They are attacks on Congressman L. Smith.
Note the insults to him, and the misrepresentation of the facts that prompted his interest. Note the term “stump speech”.
He received whistleblower statements that something was wrong. He is required by law to investigate.
Since Mr Johnson chose to make it a campaign issue, the forums distributing his “political ads” must grant equal time for dissent.
michael.
It’s like the days of the old “Fairness Doctrine”.
It’s only political when it disagrees with the current govt’s position.
Anyone agreeing with the govt (or the Democrats, for that matter) is, by definition, not being political.
People who work in the applied sciences often have to make assumptions and/or interpolate data when no better data is available. However, they do so on the condition that they will receive feedback in the future which will allow them to make adjustments to account for incorrect assumptions. If we have to know everything there is to know before we start a project, then no buildings will get built, no patients will get cured and no oil will be found.
But historical temperatures are just that – historical. And historical data, with known errors, should be scrapped, not tweaked. There is no way to go back in time to see if you are “right”. They cannot form the basis of geoengineering i.e. trying to control the “temperature” of the planet.
This brings to light the two factors that make geoengineering the climate impossible. 1) Insufficient power to affect the system and 2) Lack of timely feedbacks to make adjustments.
Regardless of whether CAGW is true or not, the most we can do is what humans have always done: adapt.