Since Graham Lloyd’s article ‘Mercury Rising in BOM probe row’ was published on the front page of The Weekend Australian earlier this month, there has been some confusion regarding the availability of the parallel temperature data.
These are the temperatures handwritten into the Field Books of Meteorological Observations, both the temperatures as recorded by a mercury thermometer, and from the platinum resistance probes, at the same place and on the same day.
I spent the first several minutes of a prerecorded interview with Michael Condon from ABC NSW Country Hour earlier this month arguing with him about this. He was repeating incorrect information from the Bureau of Meteorology’s Chief Customer Officer, Peter Stone.
Specifically, Condon incorrectly claimed that the bureau makes all its temperature data publicly available on its website, including the parallel data. This claim, which is apparently being repeated across university campuses, flatly contradicts the opening paragraphs of Lloyd’s article. Lloyd correctly explained that it was only after a Freedom of Information request, three years of arguing with the Bureau (including over the very existence of these Field Books/A8 reports and whether their release was in the public interest), and the case eventually going to the Administrative Appeals Tribunal on February 3, that some of the parallel data for Brisbane Airport was released.
So begins an article on page 13 of today’s The Australian, with the same article available online, click here ($ subscription required).
As Lloyd reported early this month, only three of the 15 years of data for Brisbane Airport were released to John Abbot on the Thursday before Easter, and this is just a fraction of the 760 years of parallel data the bureau holds for 38 different locations spread across the landmass of Australia.
I’ve had several academics phone and email me over the last few days asking for assistance in locating the parallel data for Brisbane Airport online. I have explained that this was never provided to me in an electronic form, but as over a thousand handwritten pages. I manually transcribed the handwritten entries over Easter and undertook preliminary analysis of this data.
My analysis of the three years of parallel temperature data from Brisbane Airport shows that 41% of the time the probe is recording hotter than the mercury, and 26% of the time cooler. The difference is statistically significant (paired t-test, n = 1094, p < 0.05). The differences are not randomly distributed, and there is a distinct discontinuity after December 2019.
I initially thought that this step-change from an average monthly difference of minus 0.28C in December 2019 to plus 0.11C in January 2020 (a difference of 0.39C) represented recalibration of the probe.
The bureau has denied this, explaining there was a fault in the automatic weather station that was immediately fixed and operating within specifications from January 2020 onwards. Yet even after January 2020, the probe was recording up to 0.7C warmer than the mercury thermometer at Brisbane Airport.
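For anyone wanting to repeat this kind of comparison once the transcribed data is in a CSV file, a paired test is only a few lines. This is a minimal sketch, with placeholder file and column names rather than the Bureau’s actual files:

```python
# Minimal sketch: paired comparison of probe vs mercury daily maxima.
# Assumes a hypothetical CSV with columns 'date', 'probe_max', 'mercury_max'.
import pandas as pd
from scipy import stats

df = pd.read_csv("brisbane_parallel.csv")       # placeholder filename
diff = df["probe_max"] - df["mercury_max"]      # positive => probe reads warmer

print("n =", diff.notna().sum())
print("probe warmer on", round((diff > 0).mean() * 100, 1), "% of days")
print("probe cooler on", round((diff < 0).mean() * 100, 1), "% of days")
print("mean difference (C):", round(diff.mean(), 3))

# Paired t-test: are the paired daily readings systematically different?
t_stat, p_value = stats.ttest_rel(df["probe_max"], df["mercury_max"], nan_policy="omit")
print("paired t = %.2f, p = %.4f" % (t_stat, p_value))
```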

The bureau does not dispute my findings. It has not provided its own analysis of the data beyond claiming that its own assessment of the full 2019-2022 period finds ‘no significant difference between the probe and mercury thermometer’.
I understood this to mean that the Bureau finds no statistically significant difference, but two academics, without seeing or analysing this data, have separately indicated to me that the Bureau is just correctly saying that the differences are small, and is making no statistical claims. These same academics, having read Lloyd’s article in The Weekend Australian, know that the difference between measurements from the probe and mercury can be up to 1.9C – this is how much the mercury recorded warmer than the probe on one occasion. More usually the probe recorded warmer than the mercury. Yet they insist, and I quote: ‘The Bureau’s rebuttal makes no claim of statistical testing. They are just saying that differences are within tolerance.’
It seems that some Australian academics are, on the one hand, quite prepared to claim that we should be fearful of temperatures exceeding a 1.5C tipping point, yet at the same time unconcerned about the accuracy or otherwise of measurements from official bureau weather stations.
The question for me continues to be whether the probes that have replaced mercury thermometers at most of the Bureau’s 700 official weather stations are recording the same temperatures that would have been recorded using a mercury thermometer.
In his letter to the editor of The Australian on April 19, the bureau’s chief executive, Andrew Johnson, explains that they follow all the World Meteorological Organisation rules when it comes to measuring temperatures. This is as absurd as the bureau’s Peter Stone claiming that all the parallel data is online.
The Bureau is unique in the world in taking instantaneous readings from the probes and using the highest in any 24-hour period as the maximum temperature for that day.
In the US, one-second samples are numerically averaged over five minutes, with the highest average over a five-minute period recorded as the daily maximum temperature.
This is to achieve some equivalence with traditional mercury thermometers that have a slower response time, more inertia.
In contrast, here in Australia the bureau uses the highest instantaneous spot reading as the maximum temperature for that day. So, depending on how the probe is calibrated, it can generate new record hot days for the same weather.
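To illustrate how the two conventions can diverge, here is a sketch contrasting the highest one-second reading with the highest five-minute average on a purely synthetic temperature series (the numbers are made up, not Bureau data):

```python
# Sketch: daily maximum from instantaneous 1-s readings vs from 5-minute averages.
# The temperature series below is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
seconds_per_day = 24 * 60 * 60
base = 25 + 5 * np.sin(np.linspace(0, np.pi, seconds_per_day))   # smooth daily cycle
noise = rng.normal(0, 0.3, seconds_per_day)                      # short-lived fluctuations
temps = base + noise                                             # 1-second "readings"

# Convention A (spot readings): highest single 1-second value in the day.
tmax_spot = temps.max()

# Convention B (US-style): average each 5-minute block, take the highest block average.
block = 5 * 60
tmax_5min = temps.reshape(-1, block).mean(axis=1).max()

print(f"max of spot readings     : {tmax_spot:.2f} C")
print(f"max of 5-minute averages : {tmax_5min:.2f} C")   # typically a little lower
```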



I am very grateful that this article is published in The Australian.
I am also grateful that Ross Cameron from TNTRadio has indicated he will follow this saga, and has interviewed me the last two Sundays, with the show available as a podcast, click here.
You can listen to a fair edit (about 10 minutes of the 50 minute conversation) of my prerecorded interview with Michael Condon on the ABC Country Hour by clicking here. I begin at about 11 minutes.



“Yet even after January 2020, the probe was recording up to 0.7C warmer than the mercury thermometer at Brisbane Airport.”
It also records up to 0.7°C cooler than the mercury. This is the problem with your scattershot reporting of the data. You have 1000 days; with any sort of random scatter you’ll get outliers. That doesn’t tell you anything. The first and basic thing is to report the mean difference. Not just a significance test, but the actual number. You did report a mean for part of the time, but it was clearly wrong, as you acknowledged. But no correction has appeared.
That is why it is important that, if you won’t post a proper summary of the data, at least post the data so someone can figure it out.
“Why should I give you my data, when you’ll only try to find something wrong with it?”
Guess who set this precedent, Nick.
Is she following it for the same reason?
Dunno, but wouldn’t it be great if ALL data and scientific processes were always provided from the get-go without other interested parties having to weasel it out of the publishers?
(Kinda like the Fauci / FDA / CDC / Pfizer / Moderna attempt to lock away the clinical test results for their mRNA “vaccines” for 70 years. Why should outside interested parties have to resort to court action to access such publicly-funded info?)
“great if ALL data and scientific processes were always provided from the get-go”
Really? Do you as a taxpayer want the BoM to post second by second output from all its AWS? Even if it costs a fortune?
It wouldn’t cost a fortune Nick. They should publish what they record before they “modify” it. If they use it for anything then they already have the data stored. Providing a simple API access to the data is both trivial and inexpensive.
A single spinning disk drive costs less than $200 a year in electricity and can store more than 10 TB of data. A year of 1-second data for a single station is ~128 MB uncompressed, so you could track 8 measurements in about a GB per year. That means 100 years of data for 100 sites, for 8 different measurements, uncompressed, for a single ~$300 capex and ~$200 a year opex – for data you should already be storing.
If you are dumb enough to use 1s data for “scientific” temperature recording, you’d better be storing it, and if you are using that data it should be trivial to make it publicly available.
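As an illustration only, a read-only API over an already-stored table really is a handful of lines. The sketch below assumes a hypothetical SQLite table readings(station, ts, temp_c) and uses Flask; none of this reflects the BoM’s actual systems:

```python
# Sketch of a trivial read-only API over already-stored readings.
# The table layout readings(station TEXT, ts TEXT, temp_c REAL) is a made-up example.
import sqlite3
from flask import Flask, jsonify

app = Flask(__name__)
DB = "observations.db"   # placeholder path

@app.route("/station/<station_id>/<date>")
def station_day(station_id, date):
    con = sqlite3.connect(DB)
    rows = con.execute(
        "SELECT ts, temp_c FROM readings WHERE station = ? AND ts LIKE ?",
        (station_id, date + "%"),
    ).fetchall()
    con.close()
    return jsonify([{"ts": ts, "temp_c": t} for ts, t in rows])

if __name__ == "__main__":
    app.run()   # serves on localhost:5000 by default
```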
YES !!
That’s why UK parliament has ‘Hansard’; so there is a complete record of everything said (raw data) before lying ba$$terds apply political spin.
All very well 1saveenergy, but who is able to do anything with humongous data files?
As an example my Cooktown tide gauge study (https://www.bomwatch.com.au/bureau-of-meteorology/trends-in-sea-level-at-townsville-great-barrier-reef/) used 10-minute data from 1996 to 2019. The number of cases was 1,242,345. Excel can handle 1,048,576 cases. It becomes a specialist task downloading then ingesting large datasets, and for the Bureau’s high frequency (1-minute) data it would stretch the capacity of most desk-top PC’s.
It is not about storing such data (they do store it); it is about the BoM having it there knowing that hardly anyone would have the capacity to deal with it. How would you deal with 1,242,345 10-minute tide-gauge datapoints? Then multiply that by 10 if it were 1-minute data.
In your case, what would you do with it if you had access to it? What would Jennifer do with it? High frequency data is invariably noisy and is not data that you can ‘cruise around’ looking for something you may stumble-over.
Having it there because everyone thinks they should serves no purpose unless those wanting the data have the desire and the skills to do something with it.
(Don’t forget you can purchase 1-minute data if you want it, but you probably need to have a project in-mind and a method of handling it.)
All the best,
Dr Bill Johnston
“it would stretch the capacity of most desk-top PC’s.”
OMG! Climate science is *really* living in the 20th century. GPU units today are rated in Teraflops!
The ability to handle that kind of data is software limited, not hardware limited. If a PC spreadsheet can’t handle the data then put it in a database and use something as simple as Perl or Python to analyze it!
You don’t have to do the whole thing over every time unless the past data is being “adjusted”! If your lag time is months, who cares!
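To make that concrete, here is a minimal sketch (the file and column names are placeholders, not the actual Cooktown tide-gauge file) of reducing a multi-million-row high-frequency CSV to monthly means in chunks, well within a desktop’s memory:

```python
# Sketch: reduce a large high-frequency CSV to monthly means without loading it all at once.
# 'tide_10min.csv' and its columns 'timestamp', 'level_m' are placeholder names.
import pandas as pd

chunks = pd.read_csv(
    "tide_10min.csv",
    usecols=["timestamp", "level_m"],
    parse_dates=["timestamp"],
    chunksize=200_000,            # ~1.2 million rows -> about 6 chunks
)

monthly_parts = []
for chunk in chunks:
    # Partial monthly sums and counts from this chunk only.
    part = chunk.set_index("timestamp")["level_m"].resample("MS").agg(["sum", "count"])
    monthly_parts.append(part)

# Combine the partial sums/counts across chunks, then finish the mean.
combined = pd.concat(monthly_parts).groupby(level=0).sum()
monthly_mean = combined["sum"] / combined["count"]
print(monthly_mean.head())
```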
Dear Tim,
I’m not talking about well-funded climate scientists, many of whom have access to super-computers; I’m talking about the many arm-chair warriors who think it would be very handy for them to be able to gaze at large datasets. Updating to a top-of-the-range PC just to look at 1-minute data from the BoM is probably outside what most bloggers would be interested in doing.
And thanks for the advice, but I personally use R for my work (I had to teach myself and don’t profess to be an expert); stepping outside Excel would be a challenge for most.
Realistically speaking, most people wanting “all the data” would be unable to use such data, and as I said, if you want it, many high-frequency datasets are available for purchase anyway.
All the best,
Bill
Bill,
Anybody doing data analysis with Excel (or similar) shouldn’t be.
It’s not what spreadsheets were designed for, and it’s remarkably difficult to produce good readable, testable, documented, validated code with them.
They’re good for noodling around to get a feel for things, but there are much, much better ways to do the job properly.
Storage and bandwidth are cheap, so I have no objection to providing summarised data as well as data at the full available resolution.
It’s better to have the detailed data available and not need it than to need it and not have it available.
Using a proper statistical package is actually far less resource-intensive than doing a dodgy analysis with a spreadsheet, especially working with a proper operating system.
Anybody who is even slightly serious about analysing data.
Do you really think that either of those is a large data set?
That wasn’t even big in the 1990s when I was using daily data sets that size for performance analysis and capacity planning.
Guys, CPU, storage and network bandwidth are 2 or 3 orders of magnitude greater now than 20 years ago for the same price.
old cocky,
Few people with opinions about access to large data sets would actually be interested in seriously analysing them. The limitations of Excel are well known; however, going beyond doodling, data-diving is a different skill-set.
Aggregating to understandable/useful units, asking the right questions, using the right tests and considering their advantages and limitations are not skills that are intuitive to most people (ask around at your next night-out with friends and watch the shutters come down).
Most people are information consumers not information creators, and for creators, trust in their methods and what they are claiming, is arguably their most important asset.
Putting data, methods and results in the public domain, like I do on http://www.bomwatch.com.au, and being open to defend one’s work and if necessary amend one’s work, is central to the notion of trust.
Yours sincerely,
Bill Johnston
Bill,
You have two separate points above:
That is quite so, but the full data should be readily available to those who do have the requisite inclination and background to perform the analysis. Ideally, subsets or aggregated data would also be made readily available.
The resources to do both are not expensive, despite what people outside the IT field may think.
That seems to be the general consensus of comments here.
I quite agree with Steven Mosher about making data sets and preferably code available with papers, so it would be wonderful if more people followed your excellent example.
Yes. I’ve paid for it, I want access to it, whether I do the sums or others do. Data is data, and unfettered access should not be a matter for dispute.
An FOI request is not something that should be ignored, challenged or fought in the courts. Just release the data. It’s already been paid for.
It wouldn’t be expensive: scan the paper into PDF or JPG and make it freely available online as a download. One condition for using any part of it: make any electronic transcription available as CSV or Excel. No restrictions on sharing anywhere in the world.
Or with a bit of effort and investment make it so the public can do the conversion back to a BOM database. Once a sheet has been loaded twice and checked for consistency make it available to the world. Again the BoM could ask/demand copies of source code and adjusted data to argue about errors and assumptions.
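The “loaded twice and checked for consistency” step is itself only a few lines. A sketch, assuming two hypothetical CSV transcriptions of the same sheets keyed by date:

```python
# Sketch: flag disagreements between two independent transcriptions of the same sheets.
# 'transcription_a.csv' / 'transcription_b.csv' with columns 'date', 'tmax_probe',
# 'tmax_mercury' are placeholder names.
import pandas as pd

a = pd.read_csv("transcription_a.csv").set_index("date")
b = pd.read_csv("transcription_b.csv").set_index("date")

# (assumes both files cover exactly the same dates and columns)
mismatches = a.compare(b)   # only the cells where the two keyings disagree
print(f"{len(mismatches)} dates need re-checking against the scanned sheet")
print(mismatches.head())
```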
Knowing how good predictions are is vital to human survival according to people like you, restricting access should be the last thing you do
I agree in principle, but in practice you just gave the guy with the pen millions of bosses. Wrong generation.
Yes!
Who says it would cost a fortune? If you think it would then show your work, don’t just make an unsupported claim!
*MY* weather station setup is capable of recording 1-minute data using nothing more than a Raspberry Pi computer. I only collect 5-minute data, but the 1-minute data is available for collection if I so wished. I have collected this data since 2002 and not even touched the memory restrictions on the Raspberry Pi.
Admittedly the RPI has become rather expensive over the past four years but it would still cost less than $200 per station. Add another $100 for an RF “hat” for the RPI and an antenna and you could collect all kinds of data on the cheap! $300,000 for 1000 stations.
The data collected could include temperature, humidity, pressure, wind, rain, insolation, etc. Everything you would need to actually calculate enthalpy and get a *real* handle on the heat instead of using temperature as a poor, poor proxy! The ability to calculate integrative degree-day values would be there for the taking.
If I, an elderly gentleman on a fixed income, can do this why can’t NOAA, NWS, etc. do so?
When is climate science going to join the 21st century?
They already phone home, so it’s really only a matter of additional storage and a server in a DMZ.
I forgot pressure, and didn’t even think of insolation, so that’s another 16 bytes/record. Worst case, it doubles my estimate, but it’s still trivial by current standards.
Dear Tim,
They are talking about error-prone, manually observed data that must be manually transcribed, then checked for consistency.
Even if it is scanned, different handwriting styles mean it still has to be error-checked one value at a time. It could be, for instance, that some of Marohasy’s manually transcribed data have been mis-transcribed.
Cheers,
Bill Johnston
Unless we build a time machine and convince the hiring manager to select based on penmanship. OCR.
So what? When is climate science going to join the 21st century?
Start with the data that is recorded by the automated stations and go forward from there.
All I ever hear is “TRADITION”! We’ve always done it this way and that’s how we are going to continue doing it!
Keep doing it the old way but, by PETE, start doing it the modern way!
Your way is better. The guy at the airport with the pen does not know what a raspberry Pi is. You would not trade standard of living with him.
Since when do leftists care what things cost the taxpayer? Oh wait, whenever it exposes their Central Authoritarian bent to rule over the great unwashed.
I wonder who the “several academics” are who phoned and emailed over the last few days asking for assistance in locating the parallel data for Brisbane Airport online.
Another reason surely, for making the data available.
All the best,
Bill Johnston
But in order to back up her claims, which are dubious at best, she should put her data in the public domain, as I routinely do on http://www.bomwatch.com.au.
Cheers,
Bill
At some future moment, the junior rep who now asks “why isn’t the data online” will be the senior rep. Then you will get your way. By that time all our comments will be simulated by a computer in San Jose and published concurrently with the blog post.
From the article:
“These same academics, having read Lloyd’s article in the Weekend Australian, know that the difference between measurements from the probe and mercury can be up to 1.9C – this is how much the mercury recorded warmer than the probe on one occasion.”
Well if you just skip the parts where he accurately reports the problems with the data in the other direction, I guess you could call that scattershot reporting.
From your comment:
“That is why it is important that, if you won’t post a proper summary of the data, at least post the data so someone can figure it out.”
Maybe the people being paid with forcibly extracted tax dollars to do their jobs could at least post a proper summary of the data!
Nick,
That 1000 days is about 3 years worth of data. Do you think the outliers are removed from the actual calculations made in determining the “official” record?
Just the fact that the records are not being released should cause you heartburn. This is public data and paid for by the public. Unless it can be classified as government “top secret”, it should be released in some form or fashion. Anything less is not being responsive.
I also notice you have given no factual data to support your conclusions. Why haven’t you obtained the data to verify its accuracy?
Nick
It has to be in the BOM’s interest to get their data accepted, their methods for adjustments accepted, and their conclusions accepted, especially by those who think they disagree. If you can convince your sceptics, you have won the argument.
Best M.O.: allow the data to be accessed . . . . let everyone play and check . . . deal with the comments.
If the comments from interested parties are negative you probably messed up somewhere in how you collected/adjusted or presented your data, if the comments are positive you say thank you and carry on with the good work with a greater foundation to work from.
Simples
“If the comments from interested parties are negative you probably messed up somewhere”
Consensus science?
Dear Nick and Jennifer,
Jennifer said “In contrast here in Australia, the bureau, use the highest instantaneous spot readings as the maximum temperature for that day. So, depending on how the probe is calibrated, it can generate new record hot days for the same weather.”
As we have discussed endlessly in various other of her misleading posts on the matter, the Bureau’s platinum resistance probes are designed to report attenuated temperature measurements at the end of each sampling cycle, in a similar way that maximum and minimum thermometers provide attenuated (not instantaneous) values. The problem of spikes is an entirely different problem and unlike you and Lance, I worked with a manufacturer of automatic weather stations on that problem in the 1980’s.
While you keep peddling the same misconceptions, your claims regarding PRT probes are wrong and misleading. If only you had undertaken regular weather observations and filled in a few A8 forms like I have, you might know something about the data.
I alerted you previously to a paper by Dr Greg Ayers and Anne Warne who probably know much more than you about metrology and the Bureau’s measurement protocols (https://doi.org/10.1071/ES19032). You and Lance have always had the opportunity to contest Ayers’s findings, but except for hand-waving, you never have.
In their summary of the paper, Ayers and Warne said: “The Bureau explains that this is appropriate because the inherent measurement system time constant means the 1-s data are not instantaneous, but are an average smoothed over the previous 40–80 s”, and your and Lance’s long-running commentary about this issue is entirely misleading. You either can’t read, don’t read or won’t read anything that disrupts your narrative.
While Ayers’ earlier paper (Ayers, G. P. (2019). A comment on temperature measurement at automatic weather stations in Australia. J. South. Hemisph. Earth Syst. Sci. 69, 172–182. doi:10.1071/ES19010) was less comprehensive and more difficult to understand, it was nonetheless a published work that you and Lance never rebutted. In that paper he found that “A follow-up test for September 2017 at Mildura when a new Tmax record was set found the difference immaterial, with Tmax the same for the averaged or 1-s values. Thus, while the two versions of 1-min air-temperature data showed fluctuating small differences, largest at midday in summer, for the 3 months studied at both sites, fluctuations were too small to cause bias in climatological air-temperature records.”
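To illustrate what a 40–80 s time constant does to a short-lived spike, here is a sketch of simple first-order smoothing applied to a synthetic one-second series; the 60 s constant and the spike are illustrative assumptions only, not a model of the Bureau’s actual electronics:

```python
# Sketch: first-order response with a 60 s time constant applied to a synthetic 1-s series.
# Shows how a brief spike is attenuated relative to the raw signal.
import numpy as np

tau = 60.0                      # assumed time constant, seconds
dt = 1.0                        # sample interval, seconds
alpha = dt / (tau + dt)         # discrete first-order filter coefficient

raw = np.full(600, 25.0)        # 10 minutes of steady 25 C air
raw[300:310] += 2.0             # a 10-second, 2 C gust/spike (made up)

smoothed = np.empty_like(raw)
smoothed[0] = raw[0]
for i in range(1, len(raw)):
    smoothed[i] = smoothed[i - 1] + alpha * (raw[i] - smoothed[i - 1])

print(f"raw maximum      : {raw.max():.2f} C")       # 27.00
print(f"smoothed maximum : {smoothed.max():.2f} C")  # well below 27: the spike is attenuated
```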
At least do some reading and become knowledgeable in the areas that you claim expertise in.
Yours sincerely,
Dr Bill Johnston
http://www.bomwatch.com.au
Dear Nick and Jennifer, in referring to “you” I was specifically directing the comment to Jennifer.
If ever the penny drops that temperatures measured by the Bureau’s PRT probes are attenuated to be equivalent to liquid-in-glass thermometers, hopefully Jennifer will become less emotional about things that really don’t matter, and more objective about things that do.
By the way, attenuated means “having been reduced in force, effect, or value”.
All the best,
Bill
Or what they should read.
What He measured
Interviewed was meteorologist Klaus Hager. He was active in meteorology for 44 years and now has been a lecturer at the University of Augsburg almost 10 years. He is considered an expert in weather instrumentation and measurement. One reason for the perceived warming, Hager says, is traced back to a change in measurement instrumentation. He says glass thermometers were replaced by much more sensitive electronic instruments in 1995. Hager tells the SZ ” For eight years I conducted parallel measurements at Lechfeld. The result was that compared to the glass thermometers, the electronic thermometers showed on average a temperature that was 0.9°C warmer. Thus we are comparing – even though we are measuring the temperature here – apples and oranges. No one is told that.” Hager confirms to the AZ that the higher temperatures are indeed an artifact of the new instruments.
http://notrickszone.com/2015/01/12/university-of-augsburg-44-year-veteran-meteorologist-calls-climate-protection-ridiculous-a-deception/
Another reason for stopping records and starting new ones. That doesn’t end up hiding or smearing what different devices actually show while they are operating.
“Specifically, Condon incorrectly claimed that the bureau makes all its temperature data publicly available on its website, including the parallel data.”
It puts all the data it considers informative. And there is a great deal of it, but you don’t seem interested in that. Instead you pester them endlessly for numbers that are uninformative, and hard to get. And when you do get them, you uninform us about the results.
How do we know what’s “informative” unless someone other than the BoM looks at it?
Quite an incredible concept isn’t it:
Information that is not = Information
Only Climate Science could plumb such depths of garbage, fraud and mendacity
Sorry, wrong. I lied.
All Governments do that a lot of the time – but it’s only Climate Science that does it all the time and makes such a hash of hiding it.
But how could they hide it? There is nothing else of ‘substance’ in the whole charade.
And it’s very simple why:
The entire GHGE is based upon ‘trapped heat’ where heat means/implies ‘Energy’
But then, Climate Science goes chasing after, and getting its knickers twisted about, ‘Temperature’
Temperature is not Energy so the only thing that could possibly happen, as we see happening, is a train-wreck poorly hidden/disguised in a blizzard of lies
They definitely do not make all their data available online. I had to pay to get a sample of the one minute summary data a few years ago. I also had to pay to get their forecast records to prove that they over-forecast extremes
You do not seem to understand the concept of available. All the goods in your supermarket are available.
“all the data” – no
“publicly available” – no
“on the website” – no
I specifically wanted the forecast data as I had always felt that there was a human bias towards exaggeration of extrema. After discussion with the BoM I was assured this was not the case, but the data proves otherwise
The goods in your supermarket are not paid for using tax dollars. The goods in the supermarket are not publicly owned.
Do you always use such poor analogies?
Isn’t that precious.
They are deciding for you, what data is important and what isn’t.
How dare they ask the government to provide information that the government doesn’t want to give.
They must be closet anarchists.
“They are deciding for you, what data is important and what isn’t.”
Of course they are, because they are spending taxpayers’ money in making it available. And taxpayers do not want to pay for rubbish.
They’ve likely spent several orders of magnitude more money on lawyers trying to avoid releasing the data than they would have if they just routinely made all their data publicly available.
…or just print-and-send something inconvenient on request.
If it’s being retained read-only in online or near-online storage for internal use, it shouldn’t be a big deal to provide an alternative front-end to it.
One would hope it’s in a PostgreSQL database rather than flat files or paying Oracle way too much money.
Even having dedicated systems for publicly accessible copies is comparatively cheap. Storage costs have come down massively over the last few decades – consumer-grade NAS is under $100/TB retail.
Assuming they’re recording
That’s 36 bytes per second per station, 3 MB/station/day in round figures, or a bit over 1 GB/year. For 1,000 weather stations, it’s a whopping 1 TB/year.
That would have been serious storage when they first started using the digital equipment in automated weather stations, but it’s not even the amount of data most families download in a year now.
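Making the arithmetic explicit, using the 36 bytes/record and 1,000 stations assumed above:

```python
# Back-of-envelope storage arithmetic for 1-second records (assumptions as above).
bytes_per_record = 36
records_per_day = 24 * 60 * 60          # one record per second
stations = 1000

per_station_day = bytes_per_record * records_per_day   # ~3.1 MB
per_station_year = per_station_day * 365                # ~1.1 GB
all_stations_year = per_station_year * stations         # ~1.1 TB

print(f"per station per day : {per_station_day / 1e6:.1f} MB")
print(f"per station per year: {per_station_year / 1e9:.2f} GB")
print(f"1,000 stations/year : {all_stations_year / 1e12:.2f} TB")
```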
How about just saving and posting the 5 minute averages, like the rest of the world?
Imagine that, 300x less data to post. Or roughly 3MB/year per station. That’s less than a small JPG from your old smart phone.
I don’t think BoM data is recorded at 1 second intervals, but in modern terms it would still be trivial.
For serious data storage, have a look at the storage requirements for CERN.
No. All the raw data needs to be posted and made available. Then, the officials can publish the “value added” versions, and explain to the public why each number was changed.
The problem is that making the raw data available is cheap and easy. Explaining the “value added ” is time consuming, labor intensive, and potentially embarrassing.
“All the raw data needs to be posted and made available. “
People thump the table about this, but don’t seem to have the slightest interest in the huge amount of data that is available. Every half hour the BoM posts temperatures and much else at the 700+ AWS around the country, within six minutes after recording. They stay up for three days. The summary data is collected in months, and stays up for three months. In CDO, you can get daily raw data for every station, not just AWS, for thousands of stations over their full history.
“They stay up for three days”
Why don’t they stay up? The amount of computer memory needed is miniscule today – my raspberry pi could handle it!
OK, then download it to your raspberry pi.
I have downloaded 5 minute data for my weather station continuously since 2002. It hasn’t hit the limit on storage yet! it all goes into a sqlite database which isn’t even a large database software program!
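For anyone curious, the logging side of such a setup is tiny. This is only a sketch of the general approach – the table layout and read_sensor() are placeholders, not my actual station code:

```python
# Sketch: append a 5-minute reading to a small SQLite database on a Raspberry Pi.
# read_sensor() is a placeholder for whatever driver the station hardware uses.
import sqlite3
import time

def read_sensor():
    # Placeholder: return (temp_c, humidity, pressure_hpa) from the attached hardware.
    return 21.4, 55.0, 1013.2

con = sqlite3.connect("weather.db")
con.execute(
    "CREATE TABLE IF NOT EXISTS obs (ts TEXT, temp_c REAL, rh REAL, pressure_hpa REAL)"
)

while True:
    temp_c, rh, pressure = read_sensor()
    con.execute(
        "INSERT INTO obs VALUES (?, ?, ?, ?)",
        (time.strftime("%Y-%m-%d %H:%M:%S"), temp_c, rh, pressure),
    )
    con.commit()
    time.sleep(5 * 60)   # one row every five minutes
```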
Huge? Seriously? 700*24*number_of_data_sources*3.
That’s in the order of a couple of megabytes.
Hard drives for all data for 100 years would cost less than the guy with the pen costs in 1 year.
The issue is actually bandwidth for downloading. If all the enthusiasts here actually decided to download it, there would be a problem. And there are a lot more people in the world.
You can throttle the transfer rate by various means.
In any case, bandwidth is cheap.
It’s quite impressive how the available storage and networking capabilities have progressed since our 512 MB SCSI disks and 10 Mbps 10Base2 coax Ethernet days.
Move the goalposts much Nick?
BTW: Are you trying to have the government save money for poor people? I wouldn’t think so, since you H@te the poor so much that you want to force a non-functioning electrical generation system on society, which will disproportionately harm the poor.
That’s just dumb. If the data were publicly available, geo-distributed copies would reduce the underlying bandwidth requirements. Compressed monthly/daily deltas would allow people to update without taking a total dump of the data, and since the older records are not changing, distributed copies would be used. Crypto-level hashes on the contents would guarantee that copies matched the original. All trivial.
Bandwidth limiting the site would encourage sharing and would fix the OPEX cost. Not supplying the data is inexcusable.
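On the crypto-level hashes point, verifying a mirrored file against the publisher’s digest is a few lines; a sketch, with a placeholder filename and a truncated example digest:

```python
# Sketch: verify that a downloaded/mirrored data file matches the publisher's hash.
# The file name and expected digest below are placeholders.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "d2a8..."                    # digest published alongside the original file
actual = sha256_of("station_data.csv.gz")
print("copy matches original" if actual == expected else "copy is corrupt or altered")
```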
Are you suggesting having various sites around the world which mirror data?
What a novel idea. We could call them “mirror sites” 🙂
Do you remember the good old days of 2k4 modems, SLIP, uucp, gopher, archie and veronica?
How about RFC 2549?
I remember using a tape recorder to load programs into my Sinclair Z80. Jumped up and down when my homebrew serial port worked with a Radio Shack 300 baud modem!
I started with a TRS-80, but that was before becoming serious about it.
It’s still amazing that people actually paid me for decades to play with computers.
Hmm, maybe they paid me to attend boring meetings with clueless managers, and allowed me to play with computers as an incentive.
It’s all just an indication of how far behind the modern world the climate science of today is. They won’t even consider moving to integrative degree-days instead of hokey “mid-range” daily values!
Actually as long as data is available by station there is no need to take massive data dumps of all stations. I suspect that when my analysis is done, 10 to 12 stations will adequately cover the U.S. for analyzing temperature trends.
Climate science should be doing this currently. As nations run into electricity supply problems from unreliables and as costs continue to escalate, the “global” concern is going to wane. Regions are going to want to know what is occurring locally. If temps are not increasing at regional levels, immediate spending will become less necessary.
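To show what an integrative degree-day value looks like next to the usual mid-range number, here is a sketch using a made-up hourly series and an assumed base temperature:

```python
# Sketch: degree-days from sub-daily data (integration above a base temperature)
# versus the traditional mid-range (Tmax + Tmin) / 2 approach. Data below are synthetic.
import numpy as np

base = 15.0                                             # assumed base temperature, C
hours = np.arange(24)
temps = 18 + 8 * np.sin((hours - 6) / 24 * 2 * np.pi)   # made-up hourly temperatures

# Integrative: average hourly excess above the base over the day == degree-days.
above_base = np.clip(temps - base, 0, None)
dd_integrated = above_base.mean()

# Traditional mid-range approximation from the daily extremes only.
dd_midrange = max((temps.max() + temps.min()) / 2 - base, 0)

# The two differ once the base temperature cuts into part of the daily cycle.
print(f"integrated degree-days : {dd_integrated:.2f}")
print(f"mid-range degree-days  : {dd_midrange:.2f}")
```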
Odds seem low.
Why would there be a problem with bandwidth? You are still living in the 20th century! I regularly get 10Gb/s download speeds over the internet.
You start on a station-by-station basis and calculate the degree-day values for each station and then combine the stations. Getting data on a station by station basis shouldn’t stress anything anywhere.
That’s your bandwidth. But BoM servers are exposed to the world. What if 1000 Jennifers got to work?
The short answer is “That’s why they pay Akamai”
The long answer is that the data downloads don’t have to be from an open HTTP[S] port. The usual approach is anonymous FTP[S] with a connection limit.
In any case, Tim’s 10 Gbps is consumer-grade, not serious data centre or cloud aggregated bandwidth.
There will always be resource constraints, there are ways to handle those limitations, and, so far, technology improvements keep pushing those boundaries further out.
Good report. Thanks for helping to expose biased data.
This is unrelated to Australia, but if there is a smoking gun that this whole thing is a fad, it’s this: https://notrickszone.com/2010/09/21/a-light-in-siberia/. It was written in 2010, but I think it should be given way more attention. The rapid warming seen in the Arctic is the result of heat bias and data infilling. Relatively isolated stations show very insignificant, AMO-controlled warming. A rural station in the Arctic can just be a station located far away from Fairbanks or Yellowknife even if it has heat sources nearby.
Those academics questioning this work are probably worried that it will stop the gravy train and expose their deficiencies.
We all have bills to pay. They chose to be climatologists when they were too young to know. Standup Comedian Mulaney has a funny bit about his expensive Ivy League literature degree. Worth a Youtube search for anyone.
Now alarmist bloggers? I have less sympathy for alarmists who daily demonstrate marketable modern skills to try again.
A few days ago I posted this on WUWT regarding Roy Spencer & his UHI studies.
Some of the content is relevant to this post. Geoff S
………………………
Australia provides a good test bed because of long duration data (some back to 1860s) and low population density. For many months now I have been trying to prepare a set of 50 Aussie “pristine” stations with Tmax and Tmin daily BOM temperatures and other observations like rainfall. The purpose is to create a pattern of baseline, weather/climate alone variations over 120 years in this part of the world.
I have selected the 50 stations and assembled adequate demonstration of lack of population effects. But there, it gets hard. This is mainly because of signal:noise limits in the data. Strange correlations appear, some being: long term trends by customary linear least squares fit are all over the place, from about -1 to +4 degC per Century for Tmax, (Tmin not all done yet); longer data sets (more years) show lower trends;
Trends correlate with distance from the nearest ocean; trends show some relation to station altitude; trends correlate with World Meteorological Organisation number.
There are known influences on trends that I am trying to remove to see residuals. These include metrication from F to C in 1975; change from liquid-in-glass to platinum resistance in mostly 1996; change in screen size and shape now and then; station relocations; periods of recording to the nearest whole number C; data errors, such as cut and paste of blocks of data up to a month at a time; unusual data with strings of the same consecutive numbers up to 7 days in a row; and more effects.
These raw pristine temperatures need correction for these effects. Some corrections are shown by colleague Dr Bill Johnston on the Bomwatch blog, using corrections for rainfall that typically amount to tens of percent of the total variation. If these effects were small enough to ignore, the 50 trends would match each other in magnitude and pattern. They do not.
In short, I am on the verge of dumping all this exploratory work on the unsuspecting scientific public, with a conclusion that the numbers, originally observed for purposes of early-1900s curiosity, are not fit for the purpose of quantification of zero UHI.
Sadly, I suspect that your work will also find that, but at least we can say that we know why it failed, so that wheel needs no more reinvention.
Interestingly, the study I have done so far is entirely consistent with zero global warming at these 50 stations over the last Century.
Geoff
I bet that does not include the Greenland plateau in winter. That is where warming is really happening. Up almost 10C from MINUS 30C to MINUS 20C in 70 years. It has gone from bone chilling deathly cold to just really cold in 70 years but that 10C rise is significant on a global scale.
MINUS 20C _is_ bone chilling deathly cold. Holy frozen haddock, if you forgot a hat you’re doomed.
“This is mainly because of signal:noise limits in the data. Strange correlations appear, some being: long term trends by customary linear least squares fit are all over the place, from about -1 to +4 degC per Century for Tmax, (Tmin not all done yet); longer data sets (more years) show lower trends;”
Not surprised. All fields are plagued by crappy data, and getting more data does not always solve the crappiness. So we (they?) get left trying to justify a thesis using data we (they?) know is crappy because it’s that or serving hamburgers and fries.
The raw data isn’t crappy. It’s just being forced to produce answers it isn’t fit to produce. Data with 0.5C or even 1.0C uncertainty isn’t fit to distinguish temperature changes in the hundredths digit. And, no, averaging does *NOT* increase resolution or accuracy!
So what do you do when your imagined future depends on drawing a conclusion better than “I don’t know?”.
Half the world seems to have punted on the most central of questions but will fight like mad to defend the facade they’ve built over them- I’m intentionally avoiding more specific language. In general “why post, why here, why now, for whom?”
Just for an experiment: Does anyone actually have a ‘proper’ Stevenson Screen/Box?
If so, even cobble one together if not, what about putting a little solar-power meter in there for a few days.
I’d assert that it’s going to see quite a lot of solar power.
Problem for modern sensors and the change from Mercury is that:
…..thus what the electronics are recording inside those white louvred boxes is more closely related to the local strength of the sun and passing clouds than anything to do with Global Climate.
It’s not just the siting of the weather stations that’s wrong in this modern age, it’s the things themselves.
Meanwhile = a new month and when I go see what my babies have been up to over the preceding month, we are all of course Deeply Concerned about The Temperature of the Surface of The Earth.
On that track: What you see is a composite image of what ( 2 of) my babies saw = two temperature traces recorded in my Norfolk garden over the course of this last month (April 2023)
Top trace: The top (red) one is from a datalogger in contact with the ground/soil/dirt/EarthSurface.
The logger was on a porous plastic ‘weed control’ sheet, had some bubble wrap put over it then a light-grey concrete paving-slab put on top of all that. In the shadiest part of the garden I could find, no chance of direct sun.
Lower trace: The lower (black) one is from a datalogger fixed in a triple layer opaque, light-grey plastic ‘thing’ of my own design. Punched full of holes to get maximum airflow and suspended 5 feet above the grassy lawn about 5 metres away from the ground-based logger and with a perfect East, South & West aspect in a field growing 200,000 baby rose bushes.
Simple question: Which one is a recording of the “Temperature of The Surface of The Earth”
Nice.
I also did something similar, but 8 feet down in a freshly dug horizontal tunnel in clay, South West England, with air movement essentially zero but some change of air due to thermal cycling at the vertical covered entrance.
Temp range: Tmin 9C, Tmax 11C, with no obvious diurnal cycle.
as you rightly ask . . .
what is the right temp? (and don’t trust anyone above ground level)
Hi Peta
Some time ago I read a report on Stevenson screens, and one of the conclusions was that temperatures were up to 0.5C higher if there was nil wind. If this is true then modern temperatures in urban areas with more shelter would give higher readings than the ones taken in the past.
If I can can find the report I will post it. Bob Evans.
Well said, don’t let nitpicking feel negative.
Nitpick: bullet points should be independent.
This is the BOM(B) – Published on the website —
‘State of the Climate’
———————————————————————
Observations, reconstructions of past climate and climate modelling continue to provide a consistent picture of ongoing, long-term climate change interacting with underlying natural variability.
Associated changes in weather and climate extremes—such as extreme heat, heavy rainfall and coastal inundation, fire weather and drought—have a large impact on the health and wellbeing of our communities and ecosystems. These changes are happening at an increased pace—the past decade has seen record-breaking extremes leading to natural disasters that are exacerbated by anthropogenic (human-caused) climate change.
These changes have a growing impact on the lives and livelihoods of all Australians. Australia needs to plan for, and adapt to, the changing nature of climate risk now and in the decades ahead.
The severity of impacts on Australians and our environment will depend on the speed at which global greenhouse gas emissions can be reduced.
Re severity.
The faster you reduce greenhouse gas emissions the greater will be the negative impact on the world’s inhabitants.
Chart 2 needs x-axis labels.
Mature working adults will notice whoever’s managing the guy who copies a thermometer number to a paper log at the airport is protecting that guy’s job from automation. A modern teen or twenty might ask “what, that’s a job?” As a gen x-er, I once wired a lab full of chillers and ovens with electronic thermocouple data loggers. Probably today it could be done with a phone app.
The airport guy’s job could be done orders of magnitude more cheaply and more repeatably, but then what would that guy do? It wouldn’t take Chatbot long to notice all of us biologicals sitting around drinking coffee and posting selfies.
Anyway. We’re picking at a pet issue: “publicly funded data should be publicly accessible” as if we only care about the teen or twenty’s world. The old guy with the pen might need to milk his mercury readings for a few more years to pay teen or twenty’s tuition bills.