Guest essay by Brendan Godwin
Background
I worked for Australia’s Bureau of Meteorology – BOM for 2 years from 1973 to 1975. I was trained in weather observation and general meteorology. I spent 1 year observing Australia’s weather and 1 year observing the weather at Australia’s Antarctic station at Mawson.
As part of its Antarctic program, Australia drills ice cores at Law Dome near its Casey station. On our return journey in 1975 we repatriated a large number of ice cores for scientific analysis. These ice cores hold a record of the globe’s weather and climate going back approximately one million years.
Australia’s Antarctic program went by the name of the Australian National Antarctic Research Expedition, or ANARE for short. This is now known as the Australian Antarctic Division or AAD. Returned expeditioners formed a club called the ANARE Club, of which I have been a member since 1975. Members have many functions and reunions, and they have a reunion dinner every year. At this dinner there have always been guest speakers from Australia’s Antarctic Division. These guest speakers are usually someone of the caliber of the Division’s Chief Scientist or the Operations Manager, and the talks are designed to keep members updated on the Antarctic scientific program.
The annual dinner is also a place where members keep in touch with each other and network and this communication continues throughout the year via email.
The Intergovernmental Panel on Climate Change – IPCC
The IPCC was created by and is a joint 50/50 partnership between the World Meteorological Organisation – WMO and the United Nations Environment Programme (UNEP). It has extremely narrow terms of reference in that its role is to determine that humans are causing global warming. In that regard it is only looking at human induced forcings over the past 150 years, just to make sure it reaches that result. That makes it a political body with a political agenda.
World Meteorological Organisation – WMO
The WMO has structurally changed since 1974. Today it is headquartered in Geneva, Switzerland. When I went through training with the BOM, the WMO had a shared global headquarters between Melbourne, New York, Moscow and London. I don’t know when this structure changed. Australia had a leading role in the WMO and was a dissemination point for weather data.
Australia’s Bureau of Meteorology – BOM
BOM’s headquarters are in Melbourne. Australia has claim to 5.9 million square kilometres, about 42% of Antarctica. That claim is on hold while the Antarctic Treaty is in place. On the Antarctic continent Australia has 3 full time stations, Mawson, Davis and Casey, as well as a 4th, Macquarie Is., in the Southern Ocean. BOM has a full time presence on all these stations. Weather data is collected throughout the day and night at all these stations. At Mawson in 1974, we collected not only our own data but all the weather data from Davis, the Japanese station at Syowa and the Russian station at Molodezhnaya. Mawson sent all this data to the Overseas Telecommunications Commission – OTC in Sydney, where it was forwarded on to BOM in Melbourne. Data from a second Russian station, Mirny, was collected by Casey and forwarded on to BOM Melbourne via OTC.
BOM used this data, in conjunction with all the observational data obtained from all the weather stations and observational points throughout Australia, as part of Australia’s weather maps and forecasting. Additionally, Melbourne was the WMO distribution point for all weather data in our region. BOM Melbourne collected and collated all this data and forwarded it on to the WMO.
Temperature Data and IPCC’s Climate Change
In 2013 I attended an ANARE Midwinter Dinner – MWD. Australian Antarctic Division – AAD’s Acting Chief Scientist Dr Martin Riddle was our guest speaker at this function. I met with him over canapes before the dinner and spoke with him for about 20 minutes. I tried to get a sneak preview of what his talk was going to be about. He said he was Australia’s lead scientist on the IPCC and, aside from giving us an update on the scientific program in the Antarctic, he was going to talk about climate and global warming. I asked him, were we not in an interglacial warm period in the 100,000 year Milankovitch Cycle, and wasn’t all this current warming natural? His jaw dropped and he was aghast. Our discussion ended there and he raced off not looking too happy. I couldn’t help getting the feeling that I wasn’t supposed to know anything about the Milankovitch Cycles. It seemed like no one was supposed to know this.
It seems apparent that we all are just supposed to listen to what the IPCC are telling us and don’t ask questions. So what are the IPCC telling us?
The IPCC have produced 102 climate models to predict our future climate. The world’s meteorological organizations use weather models to forecast and predict weather and have been for many years. They have proved to be very accurate over 4 days and reasonably accurate over a week. The IPCC’s climate models are notoriously inaccurate. We’ve had these models now for some 30 years and we now have 30 years of data to compare them against. They are not even close to accurate.
Dr Roy Spencer is a meteorologist, Principal Research Scientist at the University of Alabama in Huntsville, and the U.S. Science Team leader for the Advanced Microwave Scanning Radiometer on NASA’s Aqua satellite. In a presentation at an International Conference on Climate Change he said the following, referring to the IPCC models:
Climate models are not even forecasting. Those curves on the chart are hindcasts.¹ They already knew what the answer was but still can’t get them right.
In spite of this, the IPCC seem adamant that there is nothing wrong with their models and it must be the data that is not right. Roy Spencer said: There’s no comparison. The IPCC are now hinting, maybe we shouldn’t trust the observations, let’s just trust the models.
Temperature Adjustments – Homogenization
One has to be excused for being skeptical here, but it does look prima facie as though the IPCC has asked their 50% partner, the WMO, to give them some temperature data that more closely matches their models. At least 3 of the WMO’s senior partners, BOM (Australia), the National Oceanic and Atmospheric Administration – NOAA (America) and the Met Office (UK), are adjusting their temperature data to something that bears a much closer resemblance to the IPCC’s models. There is no evidence that Russia’s Hydrometeorological Centre (Roshydromet) is involved with these adjustments.
They are taking this:
And turning it into this:
An Australian scientist, Jennifer Marohasy, has been taking a close interest in the adjustments BOM are making.² She produces weather forecasting models and has grave concerns about these adjustments. One of the data inputs to weather forecasting models is temperature, and it appears that the temperature data is no longer correct or accurate. None of these organizations will say or explain what they are doing, or they are vague when asked. Raw data is being removed from public scrutiny and no one knows if it is actually being destroyed. Officially they provide no scientific basis for making these adjustments. The adjustments they are making are complex. The 1940/41 and 1998 El Ninos have been wiped from these records.
But they haven’t just lowered and raised the temperatures in one hit, they’ve introduced the adjustments incrementally so that it all looks natural. If they’d lowered and raised them in one hit you’d have a chart that looks like this.
At one of our recent MWD reunions I caught up with and spoke to a colleague who spent many years working at the BOM as a weather observer and forecaster, both in Antarctica and Australia. This person is outside the realm of politics and wishes to remain anonymous. The person’s last job was working on these temperature adjustments. The job of this person’s team was to adjust the temperatures upwards, and the team has been working on adjustments from 1990 until the present.
I asked why BOM was making these adjustments, and it was explained to me this way.
Where there are temperature observation points located in the CBD areas of large cities with tall buildings, it has been well known to BOM, and generally, that these temperatures would be half to one and a half degrees C cooler if the tall buildings and the city weren’t there. It is a phenomenon known as “the island effect”. It is the same as when, on a cold day, the hairs on your arm stand up and insulate a warm layer of air close to the skin. Tall buildings do the same thing. Additionally, these tall buildings are heated and air conditioned, and every time people walk in and out of a building, hot or cold air blows out, altering the ambient street temperatures.
But the anomaly in what this person is saying is that this person’s team is adjusting country temperatures upwards by half to one and a half degrees C so that they match the city temperatures. That creates about a degree C of warming, whereas if they had adjusted the city temperatures down by half to one and a half degrees C, they would have created approximately half a degree of cooling.
Jennifer Marohasy’s charts for Rutherglen in country Victoria show this quite clearly. Note these are truncated to 1910.
Conclusion
The Australian Climate Observations Reference Network–Surface Air Temperature (ACORN-SAT) Technical Advisory Forum released a report in 2015 confirming that the surface air temperatures were being adjusted, confirming that the process is called homogenisation, confirming that other weather monitoring institutions around the world are making these same adjustments, and purporting to justify why the adjustments are being made: observing practices change, thermometers change, stations move from one location to another and new weather stations are installed. They refused to release the complex mathematical formula used to make the adjustments. They claim that homogenisation is essential in eliminating artificial non-climate systematic errors in temperature observations. Non-climate related factors include:
- the replacement of thermometers;
- changes in observing practices;
- expansion of the network into remote locations;
- changes in infrastructure surrounding a weather station;
- relocation of weather stations.
The only reason on that list that really makes any sense is changes in infrastructure surrounding a weather station. You can’t calibrate a thermometer used 100 years ago against one used today. Reviewing Jennifer Marohasy’s paper on Rutherglen, just as one example, none of the above apply, yet Rutherglen’s temperatures were still adjusted. In her report Jennifer wrote:
In a special advisory issued by the Bureau in September 2014, it is claimed that the adjustments – which create the artificial warming trend in the homogenised temperature minima – were necessary to make the Rutherglen series consistent with the trends measured at neighbouring weather stations. However, it is apparent that in this advisory, annual raw minima values from Rutherglen are compared with data from neighbouring sites that have already been homogenised. This approach, which may once have been considered fraudulent, is now consistent with the postmodernist epistemology that underpins homogenisation as practiced by the Bureau . . .
Jennifer has asked BOM why Rutherglen was adjusted when none of BOM’s homogenisation criteria applied, and has received no response.
My observation of all of this is that these so-called reasons for making these adjustments are not reasons but excuses. If any adjustments are to be made, city/urban temperatures should be adjusted down to match what the temperature would be without the tall buildings. Adjusting country/regional temperatures upwards to match the city is a fabrication to suit a hypothesis or agenda, and the reasons are just an excuse. If there were any real reason for an adjustment, aside from the island effect in cities, it would be for where there is a Stevenson Screen out in the middle of an asphalt car park. That temperature should be adjusted down. Yet all these adjustments go both up and down, depending on the time period, with the end result a temperature chart that resembles catastrophic warming. And that is coincidentally exactly what the IPCC are looking for.
That makes these adjustments political not scientific.
- It is reasonable to make certain adjustments that are intended to improve accuracy.
- Adjustments should be rare.
- Adjustments should not be made to suit a political purpose, and there should be no mechanism that even makes this possible.
- As a scientific practice, the reporting agencies should ALWAYS maintain and report the original raw data. It should be publicly available for download. It should be easy to find and not buried under numerous web pages making it impossible to find.
Once you start introducing reasons to make adjustments then it becomes too easy to use them as an excuse to adjust everything to suit a purpose. It becomes easy to allow for political interference. Political interference should be impossible.
1 https://www.youtube.com/watch?v=ExgKJpJyDXQ
2 Temperature Change at Rutherglen in South-East Australia
Brendan Godwin was a Radio Technical Officer and Weather Observer with the Bureau of Meteorology at Mawson Station in the Antarctic in 1974.
“The IPCC was created by and is a joint 50/50 partnership between the World Meteorological Organisation – WMO and United Nations Environment Programme (UNEP)”
is this really true?
i had thought that the WMO’s role was only to provide office space to the ipcc
It is true
A question for Nick: I believe that you know how the BOM homogenise Aust. temp. figures, so can you please advise whether Brendan Godwin is correct in his assertion that country temps are adjusted upwards to match CBD temps. A simple yes/no would suffice.
Well Yes!
How is the homogenisation at BOM done?
What are the algorithms?
Like the data which is being painstakingly mined from the searchable old newspaper records in Australia, where is the place where the public may find the method of homogenisation and, as in science, replicate it?
Then run QA.
As I noted here, you can get the complete code and run it yourself, if you want.
Your link ‘here’ just puts me back to your previous comment on this thread, where ‘the page cannot be found’.
“country temps are adjusted upwards to match CBD temps”
Very unlikely, IMO. It’s based on ignorance of how homogenisation works. It doesn’t change overall temperatures to match others. It changes (generally removes) jumps that are outside some range and are not matched by nearby observations. When that happens, the revised jump is determined by the neighbors, and possibly for some period, if the station data continues to appear unreliable (or has missing obs). I don’t think there is anything that gives preference to CBD.
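For readers wondering what that looks like in practice, here is a minimal sketch of the kind of neighbour-based breakpoint correction Nick describes. It is illustrative only: the function, the threshold and the variable names are assumptions, not the Bureau’s or NOAA’s actual algorithm.

```python
import numpy as np

def adjust_breakpoint(candidate, neighbours, break_idx, threshold=0.5):
    """Illustrative neighbour-based breakpoint adjustment (not BOM's code).

    candidate  : 1-D array of annual anomalies for the station being tested
    neighbours : 2-D array (stations x years) of neighbouring anomalies
    break_idx  : index of the suspected discontinuity
    threshold  : size (deg C) of an unmatched jump treated as non-climatic
    """
    # Size of the jump at the candidate station (mean after minus mean before)
    cand_jump = candidate[break_idx:].mean() - candidate[:break_idx].mean()

    # The same quantity averaged over the neighbours (the "regional" jump)
    neigh_jump = (neighbours[:, break_idx:].mean(axis=1)
                  - neighbours[:, :break_idx].mean(axis=1)).mean()

    # Only adjust if the candidate's jump departs from the neighbours' by more
    # than the threshold; the size of the correction is set by the neighbours.
    excess = cand_jump - neigh_jump
    adjusted = candidate.copy()
    if abs(excess) > threshold:
        adjusted[:break_idx] += excess  # shift the earlier segment so the remaining jump matches the neighbours
    return adjusted
```

In real homogenisation the breakpoints themselves are detected statistically and corrections can vary by season, but the principle sketched here is the one Nick describes: the neighbours, not the city, set the size of the correction.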
Nick,
You should watch the presentation by Dr. Jennifer Marohasy I posted above and you may change your mind. Or watch the presentation by Dr. Roy Spencer. If you seriously consider these, I don’t think you will be so hasty to make the comment you made.
“You should watch the presentation by Dr. Jennifer Marohasy”
I don’t much listen to videos. They take far too much time, and are not usually informative. If you can point to something in writing, I’ll read it.
But I don’t think Jennifer knows much about homogenisation.
Nick,
The paper listed with this post is by Dr Marohasy and is a detailed discussion of the problems with homogenization. I thought you may not want to read it. In the video she discusses a twitter conversation with Gavin Schmidt (at 6:50). Referring to the robust data set from Amberley, GISS changed a 1-degree cooling to a 2-degree warming. Gavin stated that there was a discontinuity in the data in about 1980 and it was homogenized with nearby stations. In response to her inquiry regarding their locations, he provided a list of 310 stations from a radius of 600mi and in totally different climatic zones – across the Coral Sea and over the Great Divide. The un-homogenized data from the nearest station (Brisbane Arrow?) has the same cooling trend, and the local network of 6 stations in the immediate area do not have the warming trend of the homogenized data. It is quite understandable that homogenization with data from such a wide area WILL create an artificial trend. Algorithms used for homogenization can either inadvertently or purposefully create an artificial trend. While the concept of homogenization may have some merit in correcting non-weather effects in the data, the use of computer algorithms without ground truthing is a grave mis-application and in my opinion has been used for political purposes.
Nick, the reason Jennifer first came across the BOM scam was that she was trying to work out whether increased irrigation in the area was having an effect on temperatures; it was during her research that she discovered the “homogenisation” and couldn’t get an answer from BOM about why that stacked up.
You have to realise there were many older retired scientists and workers who denied the site moves ever happened.
Jennifer’s research was actual climate science, as I mentioned above, not any of your CAGW-type crap science where the end is worked out before the start.
Personally, her theory about increased irrigation and irrigation channels having an effect on local climate is certainly a worthwhile effort and to me makes more sense than studying oceans becoming more acidic.
The Rutherglen story is getting twisted out of shape. It is the Bureau that claimed in the ACORN catalogue that the site was always in the same place (and Robert, it’s really better to have studied what is known about a site before chasing rabbits down a hole).
Reality is the site had moved. Early data are compiled from the Rutherglen Post Office and a previous site referred to as “viticulture” and also possibly Corowa. The site moved then, and again in 1965, and if you took the time, you could show that by studying aerial photographs.
Here from Simon Torok’s thesis (Uni Melb):
Rutherglen 82039 and 82038
01/1914 Composite site move
09/1924 First correspondence
05/1939 Screen opens to west
12/1949 Long grass around site
05/1975 Good site, no changes.
Here is what the ACORN catalogue said:
History
There have been no documented site moves during the site’s history. The
automatic weather station began operations on 29 January 1998.
(Can you spot anything that seems to contradict?)
If you take the data and toss on an Excel trend line, whatever the trend is, it is an artefact.
The Bureau’s homogenisation of the data also creates an artefact trend. Rutherglen’s artefacts are then shared around by the use of its demonstrably artificial data to homogenise other sites.
Using one myth to dispel another is hardly arguing from a position of strength.
Cheers,
Dr Bill
DCS,
As it happens, I did my own analysis of Amberley here. It’s an absolutely straightforward case, and they have no choice but to adjust. Mosh and I say over and over – GISS does not do these adjustments. NOAA did, and they would have no reason to cast a wide net; there are enough stations within 100 km, and they all say the same thing. It isn’t an issue with trends; there is just a discontinuity at 1980, and comparison with neighbors tells you how much. Make that single adjustment and Amberley falls into line.
How do you know the jumps are unbiased? How on Earth do you balance out the probable imbalance that comes from land use changes (etc.) that are gradual, like growing trees, and spikes, like felling them? These changes match, but the speeds are different in different directions.
I have really much trouble finding anyone who can explain how homogenisation works and how we know it works right. Not that many sciences would be much different though.
Amberley is also a problem. The site itself is sprayed out; it is bare black soil much of the time, with some annual weedy re-growth at other times.
Using tests designed to detect changes in the mean (or variance), not process control or Shewhart charts: whatever happened in 1980 caused a highly significant down-step in average Tmin (-0.8 degC; P = 4.85E-05). Nick claims to have analysed the data carefully. It is an abrupt step-down, not a trend, and it only affects Tmin at that time.
Mean Tmax abruptly steps up 0.70 degC in 1991 (P = 2.15E-05). Spraying the site out would restrict transpiration sufficiently to do that. An automatic weather station was installed in 1997 in a small Stevenson screen (previously the site had a large screen). There was also previously a Fielden analogue AWS for temperature and RH, but there is no record of when that was used/disused.
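For context, the sort of step-change test being referred to can be sketched as a simple two-sample comparison of the means before and after a candidate year. This is a minimal illustration using Welch’s t-test on synthetic data; it is an assumption on my part, not necessarily the specific test used for the figures quoted above.

```python
import numpy as np
from scipy import stats

def step_change(series, years, break_year):
    """Estimate and test a step change in the mean at break_year (illustrative only)."""
    before = series[years < break_year]
    after = series[years >= break_year]
    step = after.mean() - before.mean()                       # size of the step (deg C)
    t, p = stats.ttest_ind(after, before, equal_var=False)    # Welch's two-sample t-test
    return step, p

# Synthetic annual Tmin anomalies containing a -0.8 degC step at 1980 (placeholder data)
rng = np.random.default_rng(0)
years = np.arange(1950, 2000)
tmin = rng.normal(0.0, 0.3, years.size) - 0.8 * (years >= 1980)
print(step_change(tmin, years, 1980))
```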
Cheers,
Dr. Bill
Yup, three climate optimums separated by cooling in the last 4,000 years, the Little Ice Age ending in 1880, and the whole community is focusing on the warming from a [expletive] old period. What gives with all of you! It had better warm up!
Merry Christmas!
Test Post
My reply to Nick Stokes just doesn’t want to appear
Hit the refresh button. (Save your reply’s text first!)
“You could take a trip to the Internet Archives WayBack Machine and duplicate it for yourself, it’s not that difficult.”
Nick Stokes December 21, 2016 at 3:15 pm:
“Nonsense. The graph is actually made with currently available data on adjustments. Well, currently at the time; the graph is itself a few years old and refers to a long ago version of USHCN. What you need to calculate it is Goddard’s unique (and totally wrong) method of comparing averages. That is in code.”
Here’s the page from the WayBack Machine with the old data:
GHCN 1701-12/2000 (meteorological stations only)
Here’s the Link to Tony Heller’s Blog Post
Gavin says his data is fake
You can go dig up the current data and do the comparison to Tony’s chart yourself. I did a 5 year average just like he did and here’s my quick and dirty display I put up there a few days ago:
http://realclimatescience.com/wp-content/uploads/2016/12/32096-1.gif
You can call it nonsense all you want, but, just as with the tail on Abe Lincoln’s dog, calling it a leg doesn’t make it one.
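For anyone wanting to reproduce that kind of chart, here is a minimal sketch of a 5-year centred moving average applied to an adjustment series (final minus raw). The adjustment values shown are placeholders, not real USHCN numbers, and the function name is mine.

```python
import numpy as np

def moving_average(x, window=5):
    """Centred moving average; endpoints shrink to the available window."""
    x = np.asarray(x, dtype=float)
    half = window // 2
    return np.array([x[max(0, i - half): i + half + 1].mean()
                     for i in range(x.size)])

# Hypothetical example: yearly mean adjustment = final minus raw (deg C)
years = np.arange(1900, 2016)
adjustment = np.linspace(-0.4, 0.1, years.size)   # placeholder values only
smoothed = moving_average(adjustment, window=5)
```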
Wasn’t this the fifth instance of this message already?
Like HERE
“…it would be for where there is a Stevenson Screen out in the middle of an asphalt car park…”
I don’t think I would even accept an adjustment in this case. The data source should be completely excluded as not being of sufficient quality.
One of the main issues is whether the globe is any warmer today than it was in the 1940s. Unfortunately, the Southern Hemisphere is sparsely sampled, so we really do not know what the position is with respect to the Southern Hemisphere. We have a reasonable handle only on the Northern Hemisphere.
In the early 1970s the accepted climate data was as set out below:
http://realclimatescience.com/wp-content/uploads/2016/12/Screen-Shot-2016-12-21-at-5.27.38-AM-down.gif
This is from Science News Volume 107.
This plot was made before Hansen and Schmidt and company got their hands on the data, and cooled the past and warmed the present.
It will be noted that the 1940s anomaly was about 0.7 to 0.8 deg warmer than the 1970 anomaly. Let us assume that since the early 1970s there has been about 0.6 to 0.8 deg C warming, although the satellite data would suggest only about half that amount.
IF there has been warming of about 0.6 to 0.8 degC since the early 1970s, it means that we are today at about the same temperature as was seen in the late 1930s/early 1940s.
This is significant since during this period some 95% of all manmade CO2 emissions have taken place and yet there may have been no measurable change in temperature. If that is the case then it would appear that Climate Sensitivity to CO2 is zero or close thereto.
“In the early 1970s the accepted climate data was as set out below:”
What is set out is, as it clearly says, NH ocean temperature.
Nick, thanks for your comment, but I think that you are mistaken.
What appears to be set out is the Northern Hemisphere temperature.
As I noted, we have no proper handle on Southern Hemisphere temperatures since it is so sparsely sampled. Thus, for example, in the 1880s there were only about a dozen stations reporting temperature data. And as you are aware, BOM does not wish to use the data from the 1880s, which appears to be one of the warmest periods in Australia, because it takes issue with the quality of screens etc. This excuse may or may not have some merit, or it may be that the data is inconvenient. I know too little about it to stray into making a comment upon it, other than to note that BOM does not use 19th century data. You will know much more about that than I do.
If the plot is Northern Hemisphere ocean temperature data then that is a very good metric for Northern Hemisphere land temperature since it is the oceans that drive the atmosphere and thus it is the oceans that largely dictate land temperature.
I do not consider that the plot is ocean temperature data, since the variation is way too large. As you are aware, due to the vast heat capacity of the oceans, ocean temperature variation is muted.
Where do you see the word “ocean” in this ?
In fact, considering that back in those times ocean data was close to nil, I guess the word “land” was implicit.
In any case, NH temp or NH land temp.
@paqyfelyc
I agree.
The plot does not describe itself as ocean temperature, and there is no way that Northern Hemisphere ocean temperatures cooled by about 0.8 degrees. I do not know why Nick considers that it pertains to ocean temperature. I consider that he is mistaken.
This plot is pre ARGO, so there was as you suggest little in the way of ocean sampling.
Further, this plot may cover the change from bucket to engine room measurements. The warmists claim that the change from bucket measurements to engine room measurements warmed the data, and there have been recent adjustments cooling the engine room data. If that adjustment is valid, it would suggest that the 1970 data is running too warm (since it would be engine room data) and needs to be further cooled below the minus 0.2 anomaly!!
There are a number of lines of evidence that suggest that temperatures today, at any rate in the Northern Hemisphere, are not materially different to the temperatures measured in the late 1930s/early 1940s. It may be that there has been no measurable warming since the 1940s.
It may be that all that has happened is that there has been some post 1940s cooling, and then some mid 1970s onwards warming such that today it is approximately the same temperature as was observed back in the 1940s.
Richard,
My apologies, you are right. My reading eyesight is not wonderful, and I read mean as ocean.
Exactly Richard.
I see little/no evidence that the northern hemisphere is any warmer now than back in the 1930s and 1940s. The Arctic has warmed more than other places on the planet, but the difference is still relative and has formed similar trends. There is no evidence that the planetary trend would differ from the Arctic one, and the latter shows no difference between the 1930s/1940s and recent temperatures.
http://i772.photobucket.com/albums/yy8/SciMattG/ArcticTempsSurface1936_zpspod7pd2i.png
GISS and HADLEY adjusted warmer temperatures recently in (increasingly false) data sets that can’t have been due to just a bit more Arctic inclusion. It doesn’t make any sense when satellites have been covering far more of the Arctic in area and have not warmed like them.
These adjusted temperatures for GISS and HADLEY do not apply to warmer periods like the 1930s and 1940s, or the cooling period after. Therefore the warming periods in the Arctic, when the mid-latitudes are more favourably cold, are adding extra warmth that was missed previously, giving confirmation bias. Over the decades I have been increasingly appalled by the continuing adjustments with little/no scientific reasoning involved.
I couldn’t believe the HADLEY grid compared with land temperatures during 2010 for the UK. It was also appallingly biased and the grid looked nothing like the actual observed instrumental station temperatures. The SST’s were really cold around the UK too at the time.
Fair enough, include more Arctic coverage, but as soon as this happens it becomes incompatible with observations prior to it. This is also impossible to fix when there are no observations in place before and we are trying to measure such a small change in temperature.
I can’t believe they never mentioned in the propaganda that colder mid-latitude temperatures were actually caused by warmer air moving into the polar regions, although as this was somehow new just recently, it could be blamed on global warming.
“They refused to release their complex mathematical formula used to make the adjustments.”
Again, that just isn’t true. The Forum (a review body) being cited, said:
“The Forum notes and commends the transparency offered by access to computer code, which is available (in the language Python) from the Bureau on request. This fact is advised on the ACORN- SAT pages on the Bureau of Meteorology website at http://www.bom.gov.au/climate/change/acorn- sat/#tabs=Methods&-network= , referencing the e-mail address Helpdesk.Climate@bom.gov.au.
The Python code was developed for broader use outside the Bureau, as the original development of the code for internal use within the Bureau was in Fortran. The Forum recommends that the Python code be made available as a downloadable link rather than by request.”
The BOM does not need a mathematical formula to make adjustments. 2013 was notionally the hottest year on record in Australia. It was also a year of record solar radiation; by 2 Mjoule per day across the continent. Early in 2014, the BOM comprehensively falsified the data to remove the record radiation.
Weird conspiracy stuff, with no backing, and incomprehensible.
A bald accusation with zero offering of any proof.
Would you care to?
“Toneb December 22, 2016 at 1:06 am
A bald accusation with zero offering of any proof.”
Like AG?
“Like AG?”
Whataboutism. If you object to certain behavior, then why aren’t you leading by example in your opposition to it?
To Nick, thanks for your scepticism. As a passionate Australian, I used to think that our scientific organizations were beyond politics. My disappointment is profound. https://www.skepticalscience.com/australias-hottest-year-humans-caused-it.html The attached link is a discussion from the Skeptical Science web site, where I engaged in a discussion about the temperatures in Australia in 2013. The relevant comments (13 – 35) cover a debate about the BOM published data for solar radiation. The bottom line was that “Barry” accepted my assertion that the BOM published chart for solar radiation bore no resemblance to the actual data. Some time between Jan 14, when “Barry” undertook to contact the BOM about the inaccuracy of their chart, and May 2014, the Bureau comprehensively falsified the data to match the chart. I have downloaded data for Sydney, and a spreadsheet that shows pre- and post-revision values for Sydney, Canberra, Melbourne, Adelaide, Alice Springs, Perth, Derby, Darwin, Cairns, Brisbane and Hobart. As you can see from the list, the data covers every capital city plus a few regional centres that cover the entire continental area.
2013 in Australia was remarkable because of the continental scale blocking “Highs” that pushed the normal sequence of cold fronts over Tasmania. This (in my opinion) was due to a combination of a few dominant climate influences: a positive Indian Ocean Dipole, an abortive El Niño and a positive Southern Annular Mode. The weather experienced is fundamentally in keeping with the major climate influences in play at the time. The high level of recorded solar radiation is also in keeping with the very low levels of humidity associated with the blocking highs.
The negative feedback associated with water vapour (not clouds) seems to be universally ignored in the climate debate. It is no coincidence that the hottest regions on earth are not in the tropics, and it has little to do with albedo. The climatic influences in play in Australia in 2013 offer a very plausible alternative explanation for the record temperatures. If the science of AGW was settled, there would be no requirement for the BOM to falsify the data.
Just remember that “settled” has several meanings, and “colonized” is one of them.
So “science of AGW is settled”, indeed. Too bad the settlers are not the kind of neighbor you’d want.
“Like AG?”
Assuming you mean AGW – the proof stretches back ~150 years, my friend, and has been observed to comply with the basic physics of Tyndall, Arrhenius, Fourier and other pioneers at the turn of the last century. Nothing has contradicted that in the satellite age. We have discovered the complexity of the way climate stores and redistributes heat (vis PDO/ENSO) and that heat is being stored away in the oceans prodigiously despite a sun that’s been slowly declining in strength for ~50 years.
The world’s experts are of course incompetent and or in a scam to impose a world-wide socialist gubbermint. And all the “experts” on here know better than them. I mean it’s just common sense innit? (Sarc).
The hand-waving assertions of denizens, such as the one I asked proof of, do nothing but raise cheers from the fan-boys here.
It’s not science, and neither is skewing the axes of graphs of CO2 vs GMST as happened in a recent thread.
Then we have the hypocrisy of double standards whereby UAH and RSS are just dandy (but chiefly now UAH) because they are the coolest, despite monumental and repeated adjustments – but the likes of GISS is fraudulent because of obvious and exhaustively explained homogenisation of small parts of the world … even with the major change being to sea temps where buoy data has reduced the rate of GW.
Proof is supplied and you could always climb out of this rabbit-hole and look for yourself.
Or not.
Either way.
Don’t be a hypocrite.
Bruiser,
I read the SkS stuff – it still seems, well, paranoid. You found something on the BoM website which seemed to indicate big increases in solar radiation; BoM was contacted, and amended the website to show more normal values. To most people, that sounds like a mistake on the website was fixed. But you insist that they “comprehensively falsified the data”. But why? Why would scientists rush to cover up an increase in solar radiation (which would be very interesting, if true) just to make sure no-one thought it might be responsible for a warm summer?
@toneb
http://eaps4.mit.edu/research/Lorenz/Chaos_spontaneous_greenhouse_1991.pdf
executive summary :
it’s just impossible to attribute GW (or cooling, or whatever) to any external cause, manmade or otherwise (sun or whatever). The simplest hypothesis (Occam’s razor) is that it is just a product of the very nature of climate, which is to produce this kind of record, because this is Chaos.
But I guess to you Lorenz is just another “denizen”, “”experts” on here that know better than [real experts that toneb is fit and able to distinguish]” (sarc)
What do you know of science, toneb? Do you know how to prove things in science? Then go get the Nobel Prize you deserve. Because, so far, science has never been able to prove a single hypothesis. Never. A Single. One.
Science works the reverse way: it destroys a previous theory with new evidence.
And here you come, and pretend that the GHG theory, the “A” from AGW, has proof? Nonsense.
To prove the “A” from AGW would require disproving the “non-A”, which has never been done. And, alas for you, cannot be done: just read Lorenz.
Now, if you just stated: “I think AGW is a better hypothesis than (nonA)GW because …” you could be heard. Wrong, but arguable. But stating that “AGW is proven” is just proving you don’t know Sh!t.
“And here you come, and pretend that the GHG theory, the “A” from AGW, has proof? Nonsense.”
If you say so my friend.
QED.
BTW:
Do you even get why I say QED?
FI do you read the *science* any place else than a *sceptic* blog?
Hint:
If so that is ergo why you think the *proof* is “nonsense”.
And so QED
Toneb,
Please state what you imagine to be conclusive evidence that human activities are the primary cause of alleged global warming since, when? You pick a date.
AD 1700? 1750? 1800? 1850? 1900? 1950? 1977? 1988?
There could be a Nobel Prize in it for you, since no one else has yet been able to show such a causative relationship with any high degree of confidence, if at all. The evidence just isn’t there. Indeed all the evidence in the world shows the conjecture false. But, please, have at it. Some of us are all ears, ready to receive the benefit of your mastery of all the relevant scientific disciplines and data.
Chimp:
“There could be a Nobel Prize in it for you, since no one else has yet been able to show such a causative relationship with any high degree of confidence, if at all. The evidence just isn’t there. Indeed all the evidence in the world shows the conjecture false. But, please, have at it. Some of us are all ears, ready to receive the benefit of your mastery of all the relevant scientific disciplines and data..
Now, let’s be honest here.
There is no evidence (it exists in abundance) that would in any way sway you.
Same for 90-odd percent of denizens. It is simply a self-fulfilling prophecy borne from the function of this Blog.
I fully realise that and merely post here to combat ignorance, as regularly reviewed and corrected by Leif (on a current thread FI) and Nick (on many threads). A few others do so as well as me.
None of us do it with hopes of changing minds, I’m sure, but simply to give the alternative, and often empirical science, that is expressed via thousands of scientists in the IPCC AR’s.
If you wish to have an entire echo-chamber then fine. Never confront what real science says rather than the often mythic and biased stuff that “citizen scientists” here post, and who many come along to clap and cheer uncritically.
What would you say if the peer-review process was like that?
And please don’t give the usual “pal-review” response.
If the science is wrong it will be found out later; if not it will hold, just as QT and SR/GR have up to now.
And anyone espousing the ludicrous scam/frauds argument immediately loses any credibility in the real world, though not here it seems.
The world works via cock-up my friend and not conspiracy.
So we are left with these alternatives….
A) The world’s Earth sciences experts are incompetent.
B) The world’s Earth sciences experts are frauds.
C) They know more than you.
If you come up with anything other than C) you are well buried in this Lewis Carroll land.
Aside from the evidence that is there in abundance if you look at the links posted by Leif, Nick, Griff, DWR54 and a few others who can be bothered with the comeback of the ignorants on here, then just exactly what IS causing the warming?
Let’s hunt for the squirrel/s eh?
Let’s see…
It’s not the Sun *stupid* – ask Leif.
It’s not volcanoes.
It’s not CR’s – again ask Leif.
It’s not ENs (please don’t burrow into the centre of the Earth to finger that).
(Clue: it’s a cycle and the oceans are storing heat copiously. If it were a net source of climate heat it should be cooling, given energy balance at TOA.)
What’s causing the oceans to heat?
No, not sea-floor volcanoes. Just work out the ZJs required to raise ocean waters by 0.1C, never mind to melt polar ice.
It’s GHG’s.
No models required.
Look at the match of 5.35ln(400/280) – Empirically found via experiment – here….
(Modded to fit with intercepts and volcanic forcing but the curve modelled by the above equ.)
http://berkeleyearth.org/wp-content/uploads/2015/02/annual-with-forcing-small.png
Observed to be exerting an increasing forcing….
http://phys.org/news/2015-02-carbon-dioxide-greenhouse-effect.html
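For readers who want to check the arithmetic, the expression quoted above is the standard simplified CO2 forcing formula, dF = 5.35 ln(C/C0) in W/m². A quick sketch of the numbers (rounded):

```python
import math

# Simplified CO2 radiative forcing, dF = 5.35 * ln(C / C0) in W/m^2
dF_now = 5.35 * math.log(400 / 280)   # ~1.9 W/m^2 for 280 -> 400 ppm
dF_2x  = 5.35 * math.log(2)           # ~3.7 W/m^2 for a doubling of CO2
print(round(dF_now, 2), round(dF_2x, 2))
```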
Proof is only available in mathematics and logic. CAGW is neither. You display your ignorance; I suggest you fix that.
@toneb
Well, let us explain what QED means by example.
Suppose you cope with a fellow who you suspect doesn’t know sh!t about science. You present him basic science facts, and of course he cannot cope with it: so he changes the subject, says “blablabla I don’t hear you” and tries to take the high ground by, say, asking if you know what QED means.
Which is proof that, indeed, he doesn’t know sh!t about what he talks about:
QED
[ Seriously … who do you think you are, to dare ask if I know what QED means? If I didn’t, I would just have had to ask the internet anyway. ]
BTW:
Do you even get why I said “blablabla I don’t hear you”?
No hinting.
That’s because I just gave you a reference, peer-reviewed, never falsified, from a respected founding father of climatology. An article that says “be careful guys, we deal with chaos here, strange things happen without any external cause, without any change in the system. It’s just impossible to be sure. Caution, false dilemma ahead”. And then you blablabla: let’s prove this must be anthropogenic just by excluding a few other possibilities.
facepalm
A)
“The world’s Earth sciences experts” (WESE) keep saying “we need to study this thing a lot more”, “we need much more data”. Which translates to “sorry, we are doing our best, but our current best is not enough; still, we are incompetent”.
So, by their own account, they are incompetent.
B)
Con-men always pretend to know and surely don’t let doubt slip out of their mouths, as this would let their victims escape. Most WESE don’t do that, so if they are frauds, they are very inefficient frauds; I reject this hypothesis, which is not needed anyway (see D). Some do: these are frauds. You know them, it’s easy: those you see in the media saying “we know for sure” [be happy: this includes those saying “I can prove for sure it’s XXX” where XXX is anything, sun or whatever]. And, happily, some WESE are just saying what I say.
C) They know more than you.
They do. But this doesn’t help. They have to eat (see D).
I have to eat, too, but I don’t depend on a side, whether Koch bro’s or Soros’. I am free to keep to simple scientific facts
D)
Other WESE are just in a loophole: either let their work be used in fraud, or don’t work. I guess I would do as they do: pack the thing in enough fuzz-words, pretend this is enough to keep clean, and hope for the best.
@toneb. As someone who was interested in the weather in Australia, I actively monitored the 2013 BOM data as the year unfolded. In April, the solar radiation data hit record levels across the continent. The trend continued to the end of the year. I do have proof that the BOM falsified the data. In the case of the Sydney data, the alterations go back to 1 Jul 11. Judging from your later posts, your assumed intellectual and moral superiority is totally unjustified.
Bruiser
December 23, 2016 at 11:36 pm
@toneb. . . . In April, the solar radiation data hit record levels across the continent. The trend continued to the end of the year. I do have proof that the BOM falsified the data.
Can you please post your proofs? I would be extremely interested to read them.
Global temps peaked 8,000 years ago and have been going up and down in a downward trend channel since, in line with grand solar cycles. Global temps bottomed out in 1650 to 1700 during that grand solar minimum, when there were no sunspots recorded for 50 years and the Thames froze over. We’ve just passed our grand solar maximum.
Recent research by Professor Valentina Zharkova (Northumbria University) and colleagues, written up by the Global Warming Policy Forum and published on the Principia Scientific International web site at http://principia-scientific.org/new-solar-research-raises-climate-questions-triggers-attacks/?utm_source=feedburner&utm_medium=email&utm_campaign=Feed%3A+psintl+%28Principia+Scientific+Intl+-+Current+News%29 , shows that solar cycles can now be forecast and that we are heading to another grand solar minimum between 2020 and 2053.
The other thing no one talks about is the precession cycle of the earth’s axis. Currently the southern hemisphere summer coincides with perihelion (Earth closest to the sun).
To Phillip and Tony, there is NO proof of AGW (Yes AGW is what I meant).
Nick, that link doesn’t work.
I’m quoting from the pdf report of the Review Committee in 2015. I presume the link worked then. In any case, that rather high-level committee assures us that the code is available.
Nick there appears to be a superfluous space in the link, I’ve corrected it.
Thanks, Phil.
The link does say: “Full details on how the Bureau has prepared ACORN-SAT are available from the technical report” (link given there)
and
“Python computer source code implementing the percentile-matching algorithm is available by request to: helpdesk.climate@bom.gov.au“
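For readers unfamiliar with the term, a percentile-matching (quantile-matching) adjustment shifts values by the difference between two distributions at the same percentile. The sketch below is a generic illustration under that reading; the function and parameters are my assumptions, not the Bureau’s actual ACORN-SAT Python code.

```python
import numpy as np

def percentile_match(pre, post, n_quantiles=20):
    """Illustrative quantile-matching adjustment (not BOM's ACORN-SAT code).

    Shifts each pre-break value by the difference between the post-break and
    pre-break distributions at the same percentile, so the adjusted pre-break
    data has a distribution consistent with the post-break data.
    """
    pre = np.asarray(pre, dtype=float)
    post = np.asarray(post, dtype=float)
    qs = np.linspace(0.0, 1.0, n_quantiles + 1)
    pre_q = np.quantile(pre, qs)
    post_q = np.quantile(post, qs)
    # Empirical percentile of each pre-break value within the pre-break data
    ranks = np.searchsorted(np.sort(pre), pre, side="right") / pre.size
    # Correction at that percentile, interpolated between the quantile points
    correction = np.interp(ranks, qs, post_q - pre_q)
    return pre + correction
```

The real ACORN-SAT method compares the candidate station with neighbouring stations rather than with itself, but the percentile-dependent shift is the idea the name refers to.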
This fact is advised on the ACORN- SAT pages on the Bureau of Meteorology website at http://www.bom.gov.au/climate/change/acorn- sat/#tabs=Methods&-network= ,
Nick, this link doesn’t work.
Try
http://www.bom.gov.au/climate/change/acorn-sat/#tabs=Methods&-network=
This fact is advised on the ACORN- SAT pages on the Bureau of Meteorology website at http://www.bom.gov.au/climate/change/acorn- sat/#tabs=Methods&-network= ,
Nick, that link doesn’t work.
http://www.bom.gov.au/climate/change/acorn-
This page cannot be found.
It’s worth noting perhaps the most significant artificial influence in Australia’s temperature record that the BoM chose NOT to homogenise … the 1972 Celsius metrication of all weather stations.
A majority of Australian weather stations had a 1972 step change up in recorded temperatures, with charts for many sites showing among the biggest jumps in their long-term records that year, particularly minima.
Averaged among all the 112 ACORN stations across Australia, their original raw recordings increased 0.24C in max and 0.34C in min from 1957-71 to 1973-87.
Rutherglen raw is missing 1963 and 1964 annual temps but a comparison of 1965-71 and 1973-79 shows max up 0.27C and min up 0.32C, although the breakpoint is difficult to spot in charts because the site has had an ongoing propensity to cool (until 1998, coinciding with an equipment change).
In its ACORN technical report, the BoM wrote:
“All three comparisons showed mean Australian temperatures in the 1973-77 period were from 0.07 to 0.13°C warmer, relative to the reference series, than those in 1967-71. However, interpretation of these results is complicated by the fact that the temperature relationships involved (especially those between land and sea surface temperatures) are influenced by the El Niño-Southern Oscillation (ENSO), and the 1973-77 period was one of highly anomalous ENSO behaviour, with major La Niña events in 1973-74 and 1975-76. It was also the wettest five-year period on record for Australia, and 1973, 1974 and 1975 were the three cloudiest years on record for Australia between 1957 and 2008 (Jovanovic et al., 2011). The broad conclusion is that a breakpoint in the order of 0.1°C in Australian mean temperatures appears to exist in 1972, but that it cannot be determined with any certainty the extent to which this is attributable to metrication, as opposed to broader anomalies in the climate system in the years following the change. As a result no adjustment was carried out for this change.”
So 1972 metrication didn’t need adjusting by 0.1C because there was record rainfall in following years and, as we all know, whenever there’s heavy rainfall (involving a 10.67% increase in daytime cloud cover from 1966-71 to 1973-78), it causes both min and max to increase (/sarc, just in case).
A possible explanation is that more than 50% of all Australian temperatures from 1957 to 1971 were recorded with a rounded .0 in Fahrenheit, according to the BoM. Another possible explanation is that metrication involved plenty of changes at many Australian sites, including thermometers and in some cases screens as well as a few probable small shifts when the new metric equipment was being installed.
The mean 1972 breakpoint increase of 0.1C acknowledged by the BoM (more likely between 0.2C-0.3C) was probably caused by the combination of temp rounding and equipment changes. It’s possibly the most obvious, universal and understandable artificial influence on trends since the ACORN start year of 1910, and probably accounts for about 20% of the claimed 1C increase in Australian temps over the past hundred years, but too much rainfall means it can be ignored.
It’s part of the Oz ACORN joke, although many countries converted from F to C in the 1970s, which is the decade that climate warming supposedly began to influence trends globally.
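For anyone wanting to reproduce the before/after comparison described in the comment above, here is a minimal sketch of the calculation: the mean of the raw recordings for 1973-87 minus the mean for 1957-71, skipping the transition year 1972. The function and data handling are illustrative assumptions; a real analysis would handle missing years and work station by station.

```python
import numpy as np

def metrication_step(years, temps):
    """Mean raw temperature difference across the 1972 metrication break.

    Compares 1973-1987 with 1957-1971, skipping the transition year 1972.
    Illustrative only: assumes one value per year for a single station.
    """
    years = np.asarray(years)
    temps = np.asarray(temps, dtype=float)
    before = temps[(years >= 1957) & (years <= 1971)]
    after = temps[(years >= 1973) & (years <= 1987)]
    return after.mean() - before.mean()
```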
The IPCC’s acronym would be more accurate if it were IGPOCC. I encourage greater use of this silly-sounding version, because (for one thing) it would avoid the common misinterpretation of its “I” as meaning “International.”
What everyone’s missed to date is that the BOM have NEVER adjusted temperatures for station moves, etc. If they had, the data would be adjusted forward from the date of the change. Instead they adjust backward from that date, changing the past. For Rutherglen, homogenised temperatures should be tracking HIGHER than raw at present, but they’re tracking together, and the past has been cooled in two steps.
The anomalies they claimed to have been adjusting for are still there, in the record, but hidden by altering historical data. They’re not in the business of correction, but obfuscation and ultimately creating warming where none exists. The CIA is more open and up-front than the BOM. They say they use “statistical techniques” but won’t say what they are. GHCN/GISS has data that has somehow “disappeared” from the BOM database. If the BOM doesn’t like some data, they find an excuse to delete it, except that they never bother to tell anyone what they’ve done.
The contrast between the BOM in Melbourne and the Tidal Unit in Adelaide (formerly the National Tidal Unit, and before that a department of Flinders University) is stark. They preserve everything, and provide details of technology, methodology, equipment, dates and maintenance logs. Nowhere will you find “Global Warming” or “Climate Change” mentioned. Assessments of historical sea-level change are factual, balanced, logical and easy to read. They are meticulous in presentation and ease of access, and in recent months have produced data updates just a month in arrears. November data will be available in the next week or two. The only adjustments made, for a subset of stations, are for barometric pressure, and if you’re so inclined, you can find full details of the changes and the reasons for making them. Raw data is available for those stations also.
The main BOM is secretive, self-serving, defensive and hostile to enquiry, and are clearly working to an agenda. In Australia, they’re a national disgrace – funded by the taxpayer, the BOM just serves itself, and provides disinformation for the dollars they spend.
Utter nonsense. Confirmation bias at its most pathological. Go and speak to them, Tony; they are actually very friendly and helpful unless you open up with this sort of misapprehended spew.
“The IPCC have produced 102 climate models to predict our future climate.”
I think you mean “presented” not “produced” and “project” not “predict”. Though I do agree that too many scientists and others claim (falsely) that the models are intended to predict.
That the models cannot predict is obvious from the fact that the projected temperatures have not converged in decades. If I understand correctly, AR5 reported an increase in the spread of projections.
The models have not converged because the modelers cannot agree on the impact of water vapor and the role of clouds. So much for consensus.
They may have produced them to present; however, the reality is that the models are being used in ways that supersede reality. Whatever the word usage is, it is pretty much nitpicking and overlooks what alarmists are actually doing with those “presentations”. They are attempting to use those presentations to affect laws, and are doing so by executive fiat, at least here in the US. Also, the presentations do not alter the fact that data is being compromised in a desperate attempt to agree with the models. Temperature trends are not the only thing they are changing; CO2 records have been changed as well.
Quite a good effort despite a few *it’s* when *its* was required, and the somewhat more egregious *much more closer resemblance*. I too was a Bureau Observer although a little later than the writer, and share his concern about “homogenisation” of data.
It is easy to lose sight of the notion that what is important is the consequence of a change.
Humans and other organisms adapt. If one were to study the growth bands of various crops over, say, a hundred years, one would see a series of evolutionary changes as the local climate changed. Study of the tree line in mountain regions shows the same evolution over centuries. Cities evolve as well; overlay most any city’s current aerial with one from a century ago, and the changes will be evident. Any sea level rise, or fall, and temperature rise, or fall, will have to be and will be adapted to. Looking at, say, Britain, it appears that the onset of climate changes is considerably less sudden than the onset of unpayable energy bills. Dear warmists, I don’t give a darn about your hundred year arguments when I can’t pay my bill next month.
The stress on the grid in many parts of the world is a real problem today. The poverty of billions with no proven solution that doesn’t involve economical and dispatchable energy is also a real problem.
The systematic distortion of records to justify policy, as accused, if true, is a real problem, but the lack of transparency is a crime against humanity: some of what a government keeps secret is trivial, but some is not, and the tendency towards, and permitting of, secrecy is typically fatal.
The crap has exceeded reasonable tolerance. We have folks here claiming and defending transparency, but Dr. Mann (as a proxy) still defends his data, and the behaviour of the agencies is at best obfuscation and confusion and at worst a cover-up and outright lie.
Let’s throw the whole mess out, get on with our lives, and start over. The satellite record would be a good start.
Solar and wind will find a market without force feeding. The technology will evolve. There will be many failures, and tiny improvements, and fortunes gained and lost, and when needed, if needed, it will likely be ready. Presumptive implementation of today’s technology on a large scale where a market is typically locked in to investments for 30-50 years makes little sense.
From Australia to Britain to the bird killers in the southwest US, we are proving this the hard way. Let’s turn off the grant money and mandates, and see what survives as Science when resources are limited.
https://wattsupwiththat.com/2016/12/21/homogenization-of-temperature-data-by-the-bureau-of-meteorology/#comment-2380318
Nick, I don’t understand your graphs. You show Amberley both hotter and colder pivoting around 1980. You say you’ve made an adjustment beginning that year – which does, as you say, bring it into line with other stations – but your first and last charts show changes to the total record for Amberley; the graphs do not match. I’m sure I’ve probably missed something but it is still not clear what you have done. Those places are all noticeably different in terms of temperature and I know this from first hand experience. I lived in Brisbane and commuted to work at Mt Glorious, passing through Stamford twice a day. Mountains to Mangroves is the name of the nature corridor that covers the region, and the range of climates across that area is varied to say the least. Isn’t the doctrine of anomalies being consistent across zones a James Hansen invention/theory(?). I have a lot of trouble accepting some of these implicit arguments/theories you are accepting as revealed truth without an explanation. To be clear, there is no good theory as to why any of these stations should follow each other. If it is truly so, then why have stations when one will do just as well! I’m asking a genuine question and am not being critical for the sake of it. I’m finger typing on a mobile device – which I hate to do – please excuse my typos!
Samford grrrrr!!
Scott,
The sentence just above the first graph says:
“I then subtracted the monthly means for that decade, to remove seasonality. So here is the graph:”
“monthly means” is important; it takes out the seasonal variation. It is just anomaly, which wasn’t invented by Hansen or climate science. Subtracting the mean is a standard statistical device.
The second graph, as noted, is a difference graph. The others differ only in the shifts of the red Amberley curve.
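For readers who want to see the arithmetic, here is a minimal Python sketch of the anomaly step described above: for each calendar month, subtract that month’s mean over a chosen base decade, then difference two stations. The names and the base period are illustrative only and are not taken from BOM’s or Nick’s actual code.

```python
import pandas as pd

def monthly_anomalies(temps: pd.Series, base_start: str, base_end: str) -> pd.Series:
    """Remove seasonality: subtract, for each calendar month, that month's
    mean over the chosen base period (e.g. a decade).

    temps: monthly mean temperatures indexed by a pandas DatetimeIndex.
    """
    base = temps.loc[base_start:base_end]
    clim = base.groupby(base.index.month).mean()          # one mean per calendar month
    return temps - clim.reindex(temps.index.month).to_numpy()

# Hypothetical use: anomalies for two stations, then the difference series
# amberley_anom  = monthly_anomalies(amberley_raw,  "1970-01", "1979-12")
# neighbour_anom = monthly_anomalies(neighbour_raw, "1970-01", "1979-12")
# diff = amberley_anom - neighbour_anom   # the "difference graph" described above
```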
The adjustment of real measurement data turns real data into fictitious data. It is in keeping with the AGW conjecture, which is really a piece of fiction and not science.
Across the ditch in NZ we had the same. An ex-CRU student (Salinger) corrupted the fine “NZ 7-Station Record”. This was taken to Court, admitted, and is now being repaired as far as possible. There are extensive backup data records, from many Stations, I believe. The upshot so far is NO SIGNIFICANT CHANGE over the last century or so.
http://nzclimatescience.net/index.php?option=com_content&task=blogsection&id=4&Itemid=32
http://www.climateconversation.wordshine.co.nz/
While Prime Minister, Tony Abbott called for an investigation into the BOM. After Malcolm Turnbull replaced him (‘replace’ being a euphemism for ‘stabbed him in the back’) the investigation was called off. The reason given? “It might make people lose faith in the BOM.”
Yep.
“the investigation was called off”
None of this is true. Abbott’s minister for the environment did institute an investigation into BoM and ACORN. Its first report was quoted in the head post. It is here. That first report was submitted while Abbott was still PM.
Whenever there is homogenization it is no longer observation data and should be headed for the trash. It is a cheap way of deliberately adjusting data to warm it up more than it actually has. The method used is trash and relies on an extreme degree of judgement and assumptions that take it out of the subject called science.
Hence, homogenization is NOT science. It is basically blending components together in ways that are down to user discretion, not an investigation using the scientific method to confirm validity.
Good point Matt.
So does BOM admit it is not science?
Anyone? Nick?
One sometimes wonders at the qualifications of the loudmouths who are so happy to tell scientists that what they are doing is not science. Only Matt G knows what science really is, I guess.
In fact, science is the process of forming the best estimate of reality. All observations are imperfect, and need to be analysed. In fact, homogenisation is the process of trying to work out what happened to temperature in a region. Things can happen in the environment of a thermometer that are unrepresentative of the region (moves etc). The overall temperature record is a guide – then you have to work it out. This is actually a prelude to working out a regional or global average, which requires putting all those bits together.
Nick,
You cannot work out what observations could have shown when they were never there (this is always a guess and prone to confirmation bias). Likewise it is impossible to work out what micro-climate surfaces are like with changes in areas much smaller than the grids used.
One main reason is that weather can be very different just a few miles away, and inversions can occur generally all over the planet, especially in polar regions.
Matt,
“Likewise it is impossible to work out what micro-climate surfaces are like with changes in areas much smaller than the grids used.”
A very large part of practical science is based on sampling. You establish a mine based on sampling (exploratory drilling). You estimate ore amounts based on sampling (chemical analysis). If you’re building a bridge, you estimate the quality of your materials by sampling (chemical and stress testing). In every case, you infer the continuum properties in between what you sample. You can never measure everything.
There are established ways of testing for sampling consistency. That’s why scientists use anomalies for temperature. They don’t have to worry about micro-climate issues, as long as they don’t vary. The anomalies are coherent over the sampling range. It’s when the environs of the thermometer change in a way that is unrepresentative of the region that you have to adjust.
Nick,
This should only apply if you can get good continuity across station moves. I was mainly referring to stations with no observations that are then guessed, or rural stations warmed up to roughly match urban ones. The error should be taken away, not added to further. Any missing data should simply be left out, taken into account, and adjusted for accordingly. Only observations should be used; I am completely against guessing missing data and interpolation. I do agree that the more samples you have, the less likely the error, but stations have decreased dramatically since the early 1990s, so the error has increased with it. This also means, for example, that adding 400+ new stations has a much bigger influence recently than it would have had in the early 1990s.
Due to micro-climate issues this is why changing stations makes a huge difference because it would take about a million of them to nearly resolve it.
> You cannot work out what observations could have shown when they were never there.
Science is not the business of being omniscient, Matt G.
> One main reason is that weather can be very different just a few miles away, and inversions can occur generally all over the planet, especially in polar regions.
Try asking yourself how it is you think you know these things.
Why pick on Matt G? Maybe he has read Alan Chalmers’ excellent book What Is This Thing Called Science? Alan’s qualifications are a BSc and a master’s degree in physics. He also happens to be one of the most respected historians and philosophers of science.
Define “reality”.
This remains something of a mystery to me. What we should be doing is measuring enthalpy rather than temperature. It’s energy that enters and leaves the Earth’s atmosphere, not temperature.
“Define “reality”.”
One def I remember from University days is
“Reality is an optical illusion caused by alcohol deficiency.”
But a more functional definition here might be that reality is what scientists estimate. The test of course is whether you find it useful in interpreting your experience of the world. Science has been helping do that for centuries. The folk here who pronounce on science, haven’t.
You can’t measure enthalpy directly. It’s rare that you can measure extensive quantities rather than deducing them. Mass via gravity, maybe a few others. Otherwise you have to deduce them from sampling some intensive quantity like temperature and humidity.
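As a concrete illustration of deducing rather than measuring: a common textbook approximation writes the specific enthalpy of moist air as roughly cp·T + Lv·q, reconstructed from the intensive quantities temperature and specific humidity. Here is a minimal sketch using standard textbook constants; nothing in it comes from BOM or from any commenter’s data.

```python
# Approximate specific enthalpy of moist air (J/kg), deduced from the
# intensive quantities temperature and specific humidity rather than
# measured directly. Constants are standard textbook values.
CP_DRY_AIR = 1005.0    # J/(kg.K), specific heat of dry air at constant pressure
LV_WATER   = 2.501e6   # J/kg, latent heat of vaporisation near 0 C

def moist_enthalpy(temp_c: float, specific_humidity: float) -> float:
    """temp_c in degrees C; specific_humidity in kg water vapour per kg moist air."""
    return CP_DRY_AIR * temp_c + LV_WATER * specific_humidity

# e.g. 25 C air at q = 0.010 kg/kg carries roughly the same enthalpy as
# 50 C air at q = 0 kg/kg -- which is why temperature alone can mislead.
print(moist_enthalpy(25.0, 0.010), moist_enthalpy(50.0, 0.0))
```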
Touché 🙂 I’m sure Paul Feyerabend would heartily approve. I certainly do! I’m not sure how useful folks insisting that “the climate has warmed” helps me when my crude estimates from watching the plants in my garden indicate the opposite. Today (Christmas Eve) is usually walnut pickling day. This year, the walnuts are the size of my thumbnail; far too small for pickling.
I am quite sure that when Robert G Brown pronounces on science that his pronouncements are quite sound. YMMV. Ferd and EM Smith are also worth paying attention to, as are some others.
Not my area, but surely it’s not beyond human ingenuity to measure incoming and outgoing energy at TOA. I’m not sure to what extent the variation in humidity follows theory. The pan evaporation paradox wouldn’t be a paradox if it did.
Matt G,
It amused The Git on the first warm day of summer to observe the temperature at Franklin in the Huon Valley where both The Git and Martin Riddle live. According to my smartphone, at mid-day it was 18.7 C, 20 C and 24 C. Franklin had a population of 326 at the last Census.
To Those Who Care About Evidence and Logic,
I’ve been asked by several people to comment at this thread. Mostly to set Nick Stokes straight. But as far as I can tell he prefers to ‘judge’ rather than to ‘think’. Of course it is harder work: ‘thinking’.
Sometimes when we start thinking about something — like why the Bureau would change the temperatures as recorded at the Rutherglen agricultural research stations — it can make us feel very uneasy. Indeed, we risk losing faith in key institutions.
But then again, these institutions have done science a terrible wrong in deciding that observational data can be changed/remodelled until it fits the failed paradigm of ‘anthropogenic global warming’.
The cooling trend in the temperature minima at Rutherglen is real: the data from all the weather stations with long and continuous records in this agricultural region show cooling. This is not surprising, considering how much greener this landscape has become thanks to irrigation.
I would like to thank Brendan Godwin for kindly providing everyone with a link to my 63-page analysis of temperature trends at Rutherglen (and surrounding stations); and the document is also here http://climatelab.com.au/wp-content/uploads/NW2016.001.PP_.Marohasy.pdf
Should Stokes and others prefer something shorter about Rutherglen, there is my submission to the Auditor General… http://jennifermarohasy.com/wp-content/uploads/2016/06/Request-Audit-BOM-Marohasy-Ver2.pdf .
The Auditor General refused my request, indicating that there was already a ‘Forum’ looking into the issue of homogenisation. So far, the ‘Forum’ has refused to work through a single example of homogenisation. It certainly has not considered the travesty that is Rutherglen. And it’s not really a ‘Forum’; at least, it has not opened this issue of homogenisation up for discussion – as might be expected of a ‘Forum’.
Several of us have made submissions to the ‘Forum’. You can find links to most of these here: http://jennifermarohasy.com/temperatures/submissions-to-the-panel/ . These submissions were made when I understood it was to be a ‘Panel’ of experts.
None of the issues that I raised in my submission have ever been addressed. My submission is here http://jennifermarohasy.com/wp-content/uploads/2014/03/Let-Bob-Baldwin-Re-New-Panel-18-January-1015-F.pdf . In this submission I focus on the dusty, hot town of Wilcannia.
It is a travesty: the notion that ‘scientists’ can remodel observational data until it fits a theory. One day the few responsible will be named and shamed, and the mainstream media will lament how this could ever have happened. In the meantime the media turn away. It really is embarrassing.
Cheers, and Merry Christmas.
Dr Jennifer Marohasy
Well, that submission to the Auditor is interesting – I had never really figured out what the fuss was about with Rutherglen Research. It had seemed to be a case of a few small adjustments, and it happened that the three adjustments to the minimum increased the trend, causing it to change sign. Well, that can happen. Three in the same direction has about a 1/8 chance, so if you look at enough of them …
But the submission confirms my suspicion that Jennifer doesn’t know about homogenisation. She put the annual min data through a control chart. This makes no sense. Control charting the difference (also done) might be a bit better. But it tests for an out of range jump in a single year. That is a poor test for the change we’re looking for. For one thing, it is sensitive to the time of year. A change that happened mid-year would show the change split over two years, halving the value. More importantly, you need to look at a range of neighboring times, to filter out noise. Jennifer’s change chart can’t do this, because it uses absolute values, so the sign information of the change is gone. It’s just inappropriate testing, and is not what is done.
The second failure of understanding is the fussing about Beechworth being used to compare, when B also has jumps. People often misunderstand how these comparison stations are used. They don’t replace with chunks of data from those stations, unless the subject station has gone haywire, in which case it wouldn’t be in ACORN. What is done is that, when RR has a jump that seems suspicious, neighboring stations are checked to see if they have a similar jump. If not, the jump will be considered spurious and a counter adjustment will be made. But they only look at a few adjacent years of the comparators. So it doesn’t matter if B has artefact jumps, as long as they aren’t at the same time. Often the neighbor records will be shorter overall; all that matters is the comparison period.
I’ll illustrate. Here is the plot of Rutherglen annual minimum. I’ve marked the years of change in green, according to the chart in Jennifer’s report. And I’ve put a five year running mean in red. You can see that each corresponds to a steep down dip. That is what the algorithm picks up, and asks, is this real? So it checks the neighbor stations to see if they did that too. If not, it says it wasn’t, and moves the earlier part up or down to minimise the apparent discontinuity.
http://www.moyhu.org.s3.amazonaws.com/2016/06/ruth.png
The algorithm doesn’t always put the change exactly where you might expect. The exact location does not have much effect on anything that matters. The size is more important. I would have expected the 1928 change in about 1923. But it wouldn’t make much difference.
You might see other jumps (up or down) and wonder, why not those? Remember, they take a weighted average on each side; your eye might not be getting it right (and they are likely using monthly, not annual data). But also, it might have checked some of those jumps and decided the neighbors were similar enough to dispel suspicion.
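For anyone who wants the mechanics spelled out, here is a deliberately crude Python sketch of the kind of neighbour check described above: measure the step in the target series around a flagged year, compare it with the same step in neighbouring stations, and if the neighbours do not share it, shift the earlier part of the record to minimise the apparent discontinuity. The function names, window and threshold are invented for illustration; this is not BoM’s ACORN algorithm.

```python
import numpy as np
import pandas as pd

def running_mean(series: pd.Series, window: int = 5) -> pd.Series:
    """Centred running mean, like the smoothed curve described above."""
    return series.rolling(window, center=True).mean()

def check_and_adjust(target: pd.Series, neighbours: list, year: int,
                     span: int = 5, threshold: float = 0.3) -> pd.Series:
    """Toy neighbour check: is the step at `year` in `target` also present
    in the neighbours?  If not, treat it as an artefact and shift the
    earlier part of the record to remove the excess step.
    Series are annual values indexed by year; all settings are invented."""
    def step(s: pd.Series) -> float:
        before = s.loc[year - span:year - 1].mean()   # mean of the years before
        after = s.loc[year:year + span - 1].mean()    # mean of the years after
        return after - before

    excess = step(target) - np.median([step(n) for n in neighbours])
    if abs(excess) > threshold:                       # neighbours don't share the jump
        adjusted = target.copy()
        adjusted.loc[:year - 1] += excess             # move the earlier part up or down
        return adjusted
    return target                                     # jump looks real; leave it alone
```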
I see from Jennifer’s references that BoM has a specific doc on Rutherglen adjustments. And now there is no adjustment of minimum at 1928, only 1966 and 1974.
Nick,
The problem I have with your explanation of the homogenization process is that there (obviously) must be a decision made on the magnitude of the jumps to examine, and the magnitude of the difference between the location in question and the comparison locations, and how many comparison locations to use, and other criteria associated with the comparison locations. All of those decisions will have some degree of subjectivity, and be vulnerable to bias. It seems to me, all things being equal, if you don’t have a very specific reason to suspect a particular measurement (not simply based on it being unusually high or low), then it’s probably more prudent to not make an adjustment, particularly because if a seemingly spurious jump occurs for no specific reason, that should be just as likely to occur in both directions (up and down). And, of course in any case, the original (raw) data set should be preserved and available.
I took a look at your discussion on the Amberley data, and you note that Amberley is above the comparison sites pre-1980, and below post-1980, so it is suspect (although Samford shows just the opposite behavior). Actually, I believe you list the more pertinent reason next; i.e., the steeply negative slope of the Amberley data, compared to the neighbor locations (all slightly positive). I have no problem with recognizing that the slope in the Amberley data appears quite striking relative to the nearby sites, but that in and of itself does not mean it is “wrong.” I think that the scientific approach in that case is to hypothesize (then test) various reasons why it is different, not to simply assume it is “wrong” and fix it. While it seems intuitively unlikely that Amberley is “right” and the other three are “wrong,” that is insufficient reason to assert that is so, and “fix” the errant data, not to mention the fact that such reasoning and assumptions have led many astray in the past.
Sincerely,
Barbara
Barbara,
” All of those decisions will have some degree of subjectivity, and be vulnerable to bias.”
First I must say that I somewhat oversimplified the explanation. It may have once been done as I described, but there are refinements. The first was to create a reference series from the neighbors, subtract that, and then test for changes. That helps focus on changes that are in contrast to the neighbors. But it is complicated by inhomogeneities in the neighbors, which leaves short fragments of the reference series. So the modern way is the Menne/Williams style of pairwise comparisons. Each difference is noisy, but there are a lot of them, and more information overall, if you can resolve it. Victor Venema explains this with far more authority than I can.
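As a rough sketch of the older ‘reference series’ idea mentioned above (not the Menne/Williams pairwise method itself), the arithmetic is simply: average the neighbours’ anomaly series into a reference, subtract it from the candidate station, and then look for breakpoints in the difference, where the shared regional signal has largely cancelled. The names below are illustrative only.

```python
import pandas as pd

def difference_from_reference(target: pd.Series, neighbours: list) -> pd.Series:
    """Average the neighbouring stations' anomaly series into a reference
    and subtract it from the target station.  Breakpoints are then sought
    in this difference series, where the shared regional climate signal
    has largely cancelled out.  A sketch of the 'reference series' idea,
    not the pairwise Menne/Williams algorithm."""
    reference = pd.concat(neighbours, axis=1).mean(axis=1)
    return target - reference
```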
But on the subjectivity, the process is usually done by algorithm. I have often tried to explain why it is unwise to try to make exceptions for particular cases that seem to offend skeptics, like Rutherglen. I hope BoM has succeeded. If it is done by algorithm, you can then test bias, by working on synthetic data. They put a lot of effort into trying to introduce very little, while reducing the bias already in the inhomogeneities.
And that is the point of the trade-off. One point Jennifer made in her submission was that it increases noise. That is true, but the context is a massive averaging process. This greatly attenuates noise, but not bias. So, while homogenisation will no doubt make many errors, if it can diminish bias that is a very good trade-off.
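That trade-off can be made concrete with a toy simulation (all numbers invented for illustration): averaging hundreds of stations shrinks independent station noise by roughly the square root of the station count, but a bias shared across stations passes through the average untouched.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stations, n_years = 500, 100

noise = rng.normal(0.0, 0.5, (n_stations, n_years))   # independent station noise
bias = 0.3 * (np.arange(n_years) >= 50)               # an artificial step shared by every station

average = (noise + bias).mean(axis=0)                  # the "massive averaging process"

# Independent noise shrinks roughly as 0.5 / sqrt(500), i.e. to about 0.02 ...
print(average[:50].std())
# ... but the shared 0.3-degree bias survives the averaging untouched.
print(average[50:].mean() - average[:50].mean())       # ~0.3
```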
Just noting that I wrote a response just now, but it went into moderation. Wishing all a good Christmas, especially the hard-working mods.