Jennifer Marohasy
I watched the coronation of King Charles III last night with my 92-year-old mother. She is from the same generation as the late Queen Elizabeth II, both of them having lived through World War II in London. The air raids, the bombings, the not knowing if it was ever going to end. After days of rain, my mother still remembers waking up to clear skies on 6 June 1944 and to the drone of aircraft overhead.
My mother used to think the war would end with, ‘Hitler gassing us all in the bomb shelter at school.’ But as she remembers it:
The whole blue sky was covered in thousands of aircraft each one pulling a glider. Then came the broadcast on the wireless by Mr Churchill that the Allied troops had landed in Normandy. It was D-Day [Deliverance Day], and Europe was about to be liberated.
The D-Day landing had been postponed by two days because British Meteorologist James Stagg knew how to forecast the weather, enabling the Allies to take the Germans by surprise and stage the landing during a lull in the stormy weather. Otherwise, it may never have happened, or it could have been a failure, a casualty of bad weather. My mother could have been right after all, and the war may indeed have ended quite differently.
Meteorologist James Stagg understood the key variables affecting the weather (including lunar cycles) and he relied on a network of weather stations (mostly at post offices, including at Blacksod Point in the far west of Ireland) to provide information on temperatures and barometric pressure in order to provide accurate weather forecasting.
While the late Queen will always be remembered for never leaving London, even during the ‘Blitz’ – from the German term Blitzkrieg, this was the sustained campaign of aerial bombing attacks on British towns and cities from September 1940 to May 1941 – and being a part of the war effort more generally, the back story that King Charles has fostered is quite different. The new King’s story is all about him being one of the first to realise what he still believes is the real threat posed by human-caused climate change.
I read former Prince Harry, the Duke of Sussex’s memoir Spare when it first came out, and at the same time The King by Christopher Andersen. In both books it becomes evident that despite tragedy after tragedy befalling his immediate family, and Harry perhaps needing some of his father’s time, if not access to therapy, Charles distracts himself by attempting to save the world from climate change.
When Camilla was ‘utterly devastated’ following the death of her brother in New York City, Charles apparently ‘continued to sound the alarm about global warming’. While on the one hand crediting his grandchildren for bringing the issue into sharper focus, Prince Charles missed his first grandson’s birthday instead opting to be part of a ‘saving squirrels from climate change’ campaign in Scotland.
Harry is not critical of any of this in his book, rather appearing in awe of his father’s ability to spend evening after evening reading IPCC white papers on climate change. Camilla, and William, with perhaps less of the warrior spirit so evident in Harry, are less enamoured with King Charles’s obsessions with environmental campaigning if you read between the lines in Christopher Andersen’s book.
So, it is perhaps in deference to the new King’s long-standing penchant for all things catastrophic climate-change, including a stoush with a denier, that the British tabloid newspaper The Guardian has included something on this topic among all the Coronation stories today. Of course, their climate-change article is big on politics while being misleading when it comes to the actual science.
It paints me as the leading protagonist and villain. While never articulating the essence of ‘the campaign’, the article is titled ‘Climate scientists first laughed at a bizarre campaign against the BoM – then came the harassment.’
The title reminded me of the famous quote from Mahatma Gandhi: ‘First they ignore you, then they laugh at you, then they fight you, then you win.’
Apparently, Gandhi was borrowing from Nicholas Klein, a trade union activist who in a 1918 speech, said: ‘First, they ignore you. Then they ridicule you. And then they attack you and want to burn you. And then they build monuments to you.’
John Abbot is identified as my ‘side-kick’ – ‘her IPA colleague’. Certainly, the work that Abbot and I have pioneered over the last ten years should be of immense public interest as it shows a revolutionary and better way to forecast weather and climate. Further, for a period from 2012 until 2017, we had a series of key papers published in the mainstream international climate science journals.
It is disappointing that, while peer review is repeatedly cited by the Bureau of Meteorology as a prerequisite for open discussion (and our peer-reviewed articles have, in fact, been published in reputable climate science journals), the Bureau refused me entry to their headquarters. This was after Professor Rita Karnawati, the head of the Indonesian Bureau of Meteorology, nominated me to lead a 25-strong delegation from Jakarta via Brisbane to Melbourne to discuss, among other things, the advantages of using statistical models driven by Artificial Intelligence (AI) as a replacement for General Circulation Modelling for better medium-term climate forecasting.
These are the circumstances: the 25 Indonesian meteorologists were welcome inside the headquarters, but not me as the Australian leading the group. Before I boarded the plane in Brisbane, the project’s co-ordinator was told that if I landed in Melbourne and proceeded to the Bureau, it would result in a diplomatic incident. So much for The Guardian’s claim that it was me who harassed the Bureau.

The Bureau followed this up by ensuring that the contract we had with the Queensland University of Technology through my company ClimateLab Pty Ltd was terminated. That contract was set up by a Stanford University MBA graduate who told me at our first meeting a year earlier that he was not concerned about the politics of climate change because he was a skilled negotiator. He said he’d even been successful in Kosovo where he had been part of a team sent by then US president Barack Obama to negotiate a peace deal.
But you see the Bureau don’t want peace, at least not if it means they must hand over the most basic of comparative temperature data.
In 1996, the Bureau began transitioning from measuring temperatures using mercury thermometers to using platinum resistance probes connected to data loggers. At a small number of weather stations some comparative data were recorded into meteorological field books, as handwritten entries. It has never been made public. As I see it, they don’t want to transcribe the manually recorded measurements, lest they betray a high level of incompetence, if not malfeasance.
Access to that data would enable a critical assessment of whether the temperature measurements from different types of equipment are comparable enough for the construction of reliable continuous temperature records extending back to at least 1910.
This is the essence of my current ‘campaign against the Bureau’, as Graham Readfearn, writing in today’s The Guardian, characterises it; the article is here.
John Abbot and I want the 10 to 20 years of data for each of the 38 weather stations where the Bureau has been recording temperatures from both mercury thermometers and resistance probes at the same place and time to be made public. We are happy to transcribe the manual recordings from the A8 reports, which the Bureau has variously argued is too onerous a task for its own staff.
It is my contention that the Bureau is only transcribing a tiny fraction of this data, and analysing a still smaller component, because proper scrutiny of its entirety will show that the resistance probes connected to the data loggers are producing readings that are not fit-for-purpose. Certainly, there is no equivalence between the readings from probes and mercury considering the limited amount of data that I have wrangled from the Bureau to date.



The Bureau claims otherwise, specifically that the readings from the probe are within tolerance, without defining what that means, all the while carefully avoiding any mention of statistical significance – the usual measure of whether two means (the average temperature from the probe compared with the average temperature from the mercury, for example) are equivalent or not.
According to the article in today’s The Guardian, the Bureau claims that the mercury at Brisbane Airport was on average within 0.02 C of the automatic probe for a period of three years. And I get the same result across the three years. But this overall mean difference is small because there is an abrupt change/discontinuity in the direction of the difference after January 2020.
The readings before January 2020 show the mercury thermometer recording warmer, and the readings after January 2020 show that it is the probe that is recording warmer. This difference balances out mathematically; however, the discontinuity creates havoc if one is interested in climatology and the integrity of long continuous temperature measurements to feed into a statistical model underpinned by AI.
The overall difference (across the three years for Brisbane Airport) is statistically significantly different from zero (n = 1094, p = 0.003, standard deviation = 0.18). The readings from the probe relative to the mercury are all over the shop: that is, they show significant scatter and it is not random.
The three years of parallel temperature data from Brisbane Airport shows that 41% of the time the probe is recording hotter than the mercury, and 26% of the time cooler. To reiterate, the difference is statistically significant (paired t Test, n = 1094, p < 0.05). The differences are not randomly distributed, and there is a distinct discontinuity after December 2019.
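To make the arithmetic concrete, here is a small R sketch on simulated daily differences with the same summary statistics as quoted above. It is illustrative only; these are not the Bureau’s Brisbane Airport values, and the rounding step is simply an assumption that readings are reported to 0.1 C.

set.seed(42)
d <- rnorm(1094, mean = 0.02, sd = 0.18)  # simulated daily differences, probe minus mercury
t.test(d, mu = 0)                         # a paired t-test is a one-sample test on the differences
mean(round(d, 1) > 0)                     # share of days the simulated probe reads hotter ...
mean(round(d, 1) < 0)                     # ... and cooler, once values are rounded to 0.1 C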
I initially thought that this step-change from an average monthly difference of −0.28 C in December 2019 to +0.11 in January 2020 (a difference of 0.39C) represented recalibration of the probe.
The bureau has denied this, explaining there was a fault in the automatic weather station that was immediately fixed and operating within specifications from January 2020 onwards. Yet even after January 2020, the probe was recording up to 0.7C warmer than the mercury thermometer at Brisbane Airport.
A 0.7C difference is enough to generate more record hot days for the same weather, supporting the narrative of the new King – that the planet risks overheating.
****
I will continue this saga in Part 3 soon. In it, I will explain how Anthony Rea, the World Meteorological Organisation (WMO) spokesperson whom Readfearn quotes passing judgement on the Bureau’s methods, was until recently an Australian Bureau of Meteorology employee.
Indeed, back in 2017, in order to manage a directive from the then Minister for the Environment Josh Frydenberg to provide me with the parallel measurements for Mildura, Rea was hastily moved into the newly created position of the Bureau’s Chief Data Officer.
You can read Part 1 of this saga here, https://jennifermarohasy.com/2023/05/the-guardian-temperatures-misinformation-part-1/. It has been republished by Anthony Watts here, https://wattsupwiththat.com/2023/05/05/the-guardian-temperatures-misinformation-part-1/



As an old man, like the new king, I recall Charles spouting nonsense and garbage some years ago. Had he spoken then with wisdom and insight, I may well have watched his coronation but his own words and actions have given me no incentive to waste any time on him.
Jug ears means well, but he was never the brightest bulb in the box
Unfortunately, Prince William has the same climate alarmist attitude as his father.
AN OPEN LETTER TO KING PRINCE CHARLES ON CORONATION DAY
Your false friends have filled your head with lies that have done enormous harm to humanity and to the Crown.
Prince Charles, you will never be my King. You have embraced and actively promoted the two great frauds of our age, the Climate Fraud that has already squandered the lives of hundreds of millions of people in the developing world, and the Covid-19 Fraud that has killed tens of millions with the toxic “vaccines”, and vaxx-injured perhaps a billion more.
Christopher Hitchens called Charles “a bat-eared loon.” Hitchens was also outspokenly critical of Charles’ greater affinity for Islam than for the Anglican religion, of which he is now head.
Re Islam – Charles just wanted more than one wife.
Inbreeding does create problems.
“The Royal Families swim in the shallow end of the gene pool.”
I admit I may have missed something after retirement, but can someone point me to anyone who advocates that data should be hidden for reasons of scientific transparency and honesty?
LW, you are clearly NOT a climate scientist. The UEA ‘climate scientists’ were shown to be advocating exactly that since the Climategate e-mails release back in 2009. SOP in ‘climate science’ for now a very long time.
In fact it was a David Jones from the BoM who bragged in a Climategate email to Phil Jones, head of CRU in the UK, that the BoM’s tactic in responding to requests for records from interested parties was to “snow them” (his words) with so much uncategorised raw data that it was useless for any purpose.
The BoM playbook has not changed it seems.
“Why should I show you the data, you just want to find something wrong with it” Yes?,
I remember the quote, yet I can’t remember the players.
A quibble with the article though. “Camilla, and William, with perhaps less of the warrior spirit so evident in Harry,”
Warrior spirit, sadly sorry, closer to emotionally disturbed woke whipped, but the warrior is MIA.
Agreed – Harry does not display many warrior-like attributes.
Harry needs some therapy. Along with his wife.
I suspect it might work better as far away from his wife as possible ……
Phil Jones to Warwick Hughes, way back in 2005.
One side of that specific episode can be read on the following webpage.
http://www.warwickhughes.com/blog/?p=4203
Yes, Phil Jones was a principal in creating the instrument-era Hockey Stick chart that has fooled so many people into believing the Earth is overheating.
Michael Mann knocked out the warm periods of the past, such as the Roman Warm Period, with his reconstructions of past data, and then he tacked on Phil Jones’ bogus, bastardized instrument-era Hockey Stick chart, which knocked out the Early Twentieth Century Warming, which proves it is no warmer today than it was back then. Climate alarmists can’t have that notion out there because it means CO2 is not a significant factor, so they changed the temperature profile to a “hotter and hotter” profile even though the actual data shows a completely different temperature profile. A benign profile. Climate alarmists can’t have that! They can’t scare anybody with that! So they created a fictional temperature record to suit their purposes.
The Temperature Data Mannipulators demoted any and all warming periods prior to the satellite era.
And Phil Jones had a lot of help from other data bastardizers from around the world who were all working on creating the “hotter and hotter and hotter” meme, like the fellow at the BoM. And there were others. It seems all those in charge of official records were involved in this conspiracy. This is a small group of people.
Read the Climategate emails. It shows the collusion. The effort to make things appear hotter than they really are. “What about the blip”, they said? They were referring to the warm temperatures that they needed to get rid of, that were contrary to their climate alarmist meme. And that’s what they did, they cooled the past to make the present appear hotter than normal. It’s all a Big Lie!
So Phil Jones is in the same category as Michael Mann. Both have lied about the Earth’s climate, and their lies have caused great harm to the people of the world, with the unnecessary, hugely expensive effort to reduce CO2.
And the scientific societies remained silent.
Without the active collusion and support by organized science, the Jones/Mann cabal could never have succeeded.
That was Phil Jones of UEA to Warwick Hughes an Australian scientist who is very critical of the air temperature record.
That is the position of the EPA and their Democratic Party supporters on much of the data that the EPA claims supports their draconian regime (without reference to the Athenian Draco, as reported by Aristotle)
Exactly, the need to regain public trust in these corrupted scientific institutions has never been greater, yet here they are obstructing legitimate inquiry thus creating more unwanted suspicion…. why all the secrecy?
Jennifer- just a pedantic point, the ‘D’ in ‘D-Day’ doesn’t stand for deliverance, it’s just military code for an unspecified day in the future when an operation will begin.
Good article, and as the saying goes… if you’re taking flak, you’re probably over the target.
Good on you Jen. That Guardian article was indeed an attack piece. Stick to the facts- the probes read differently often enough to lead to bias, which we see in the temperature record. (I didn’t realise I was part of a conspiracy when I asked the Bureau to explain some of their assertions in past years! )
Best wishes
Ken (of kenskingdom)
Thank you Ken. Over the years you have been an absolute stalwart, a gem, and always curious, and practical.
You were the one who purchased that one second data from Bundaberg and showed how different the last second, highest and lowest second can be within any one minute period.
I next need to write something explaining how the Greg Ayers paper (cited by Readfearn in the Guardian article) uses the last second in each minute as its point of comparison with the mercury, not the highest and lowest in a minute. The Bureau uses the highest and lowest second in a minute to generate the daily Tmax and Tmin, and therefore Ayers’ work is useless if the objective is comparison.
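For anyone wanting to see that distinction concretely, here is a minimal R sketch that reduces simulated one-second samples to the last, highest and lowest second of each minute. The numbers are made up; they are not the Bundaberg one-second data.

set.seed(1)
secs <- data.frame(minute = rep(1:5, each = 60),
                   temp   = 25 + cumsum(rnorm(300, sd = 0.05)))  # a slowly wandering signal
per_minute <- aggregate(temp ~ minute, data = secs,
                        FUN = function(x) c(last = tail(x, 1), highest = max(x), lowest = min(x)))
do.call(data.frame, per_minute)  # last second versus highest and lowest second, per minute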
And I will always remember, when all we had by way of ACORN-SAT values were the daily values that could be downloaded with many missing in each month. And how back in 2014, when Graham Lloyd first became interested in writing a story about homogenisation he wanted to see the comparisons/the difference for an endless number of sites. And how you did the calculations in parallel with me over a period of some weeks, so we could compare results, before I sent my charts showing the annual and/or monthly differences to Graham Lloyd and then he duly sent them to the Bureau.
Thank you.
“the probes read differently often enough to lead to bias”
The probes will always read differently, even if perfectly accurate. Even in the same enclosure, they are measuring different air.
That doesn’t mean there is bias. The measure of bias is the mean, which Jennifer was so reluctant to tell us. Now the Guardian has told us; it was 0.02°C. Negligible. In Mildura, it went the other way. But Jennifer could still pick out outliers.
Nick
In Mildura it went three ways:
I don’t ever include true outliers in any of my analysis or charting; I remove them as part of the usual data hygiene, the cleaning up of raw data.
Very wide swings in differences between instruments could average out to 0.02, or some other small quantity, but that average might be meaningless as to the question of whether or not the measurements are the same.
So Nick, what is the “science” explanation for banning Jennifer from entry into the BOM while a part of the delegation from Indonesia?
I can think of lots of non-science explanations ..
“while a part of the delegation from Indonesia”
There is a credibility problem in claiming that Jennifer is a leading Indonesian meteorologist.
You didn’t read the article. She was the delegation LEADER!
The BoM invited a group of Indonesian meteorologists. They had trouble believing that Jennifer was one such. So do I.
You have no basis for making any kind of a judgement such as that. Your bias is showing!
Do you think Jennifer is an Indonesian meteorologist?
And yet here she is in a photo with the group and being paid to consult with them on precisely the area of modelling that they are interested in.
Should a tax payer funded QANGO that claims to be doing “science” be able to ban access to Australian citizens, let alone those who are contracted by a visiting international meteorology organisation?
Neither you nor the BOM is on firm ground arguing from the point of “credibility”.
I asked about a science perspective, you argue on the basis of prejudice and your ill-conceived perceptions of character.
Each day you plumb new depths of stupid; give it up. Every post establishes you as an unscientific zealot determined to crush any dissent, beclowning yourself in the process.
The measure of bias is not just the mean. Without knowing the variances, there is no way to validate the absolute differences in the distributions. The Sahara Desert and Dubuque, Iowa can have EXACTLY the same mean temperature but no one would call them equivalent in temperature or climate!
Your bias is toward simple means and worse, the means of means of means with no regard as to what marrying different distributions does to the final calculation. You portray yourself as a mathematician but never, ever, address other statistical parameters of any of the distributions you combine through averaging.
Remember, when you are computing anomalies, you are subtracting the means of two random variables. The variances add when you do that. Do you ever examine the variance of the resulting anomaly?
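A quick numerical check of that point in R, with independent simulated series (purely for the arithmetic):

set.seed(7)
x <- rnorm(1e5, sd = 0.3)
y <- rnorm(1e5, sd = 0.4)
var(x - y)       # close to 0.09 + 0.16 = 0.25
var(x) + var(y)  # the variances add for independent series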
Carolus III is well known for talking to his plants in the conservatory. But I think he does it wrong. He addresses them in English when he should use German. Like when you talk to your dog.
My cat responds best to Norwegian. If that helps at all.
Men plantene mine svarer aldri. (But my plants never answer.) Maybe it’s my accent.
Good luck to old jug ears. May he get some sense before he shuffles off the mortal coil. I pity my ancient homeland.
A comment about King Charles, as Jen has the BOM methods part well in hand despite Readfearn’s aspersions.
It is apparently now long-standing custom that the UK monarch does NOT comment on political matters, lest those comments influence Parliament and the PM. As Prince of Wales, Charles had no such constraint and blathered on about climate change, showing only that he is not the sharpest tool in the tool shed. As King Charles III, now he does. Coronation was a small step in the right direction.
I have noticed over many years that a young fool may learn wisdom as he ages because of his life experiences. An old fool is a different story and so too is a well educated fool. However, many people are simply naive and have never learnt to reason carefully and cogently especially when it comes to the subject of climate change. That is why I hold Clintel and their signatories in high regard because they can spell out their position clearly and with no cussing.
“ Coronation was a small step in the right direction.”
Here is Queen Elizabeth (crowned) talking about Climate Change at Glasgow
“GLASGOW, Nov 1 (Reuters) – Britain’s Queen Elizabeth told the United Nations climate change summit on Monday that “the time for words has now moved to the time for action”, as she urged world leaders to think of future generations when negotiating a deal to limit global warming.”
There were a number of comments at the time that this was a very rare departure from her usual high standard of discretion.
Blame the woke speech writer. She would usually just leave such a comment out, but the press probably got a preprint.
All the Nickwit needs is one rare departure to Nickpick and distract from the point.
The queen was poorly advised.
Good job Jennifer, I don’t know how you do it, dealing with liars and cheats. They make me sick.
“Yet even after January 2020, the probe was recording up to 0.7C warmer than the mercury thermometer at Brisbane Airport.”
Again the same useless picking out of outliers. There were days when it was 0.7°C cooler too. That is what you see when you have random variation, and you look at 1000 days.
Here is a totally random simulation I did, with mean = 0.05°C. Yes, you get outliers, up to 0.6 either way with 1000 samples. But the key is the mean.
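Something along these lines reproduces that kind of simulation. The standard deviation of 0.18 C is an assumption borrowed from the Brisbane figures quoted above; the settings of the original simulation are not given.

set.seed(123)
sim <- rnorm(1000, mean = 0.05, sd = 0.18)  # 1000 random 'daily differences'
range(sim)  # extremes of roughly -0.6 to +0.6 turn up purely by chance
mean(sim)   # the realised mean sits near, but not exactly at, 0.05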
Hey Nick
I used to think you actually cared about the truth. But despite your considerable knowledge, including of statistics, you choose to obfuscate and mislead.
So, and yet, I continue to engage, because many people are not as clever as you.
Which mean, the daily, the monthly or the annual?
And so much for caring about statistical significance.
The Brisbane data is NOT randomly distributed as we see when we convert the daily means to monthly means, attached blue line.
Like the Bureau you choose confusion and fear over truth. And that is sad.
Jennifer,
“Which mean, the daily, the monthly or the annual?“
There isn’t much point in doing tests of statistical significance if you can’t get a simple arithmetic understanding of the mean. Those are all virtually the same. There can be a very slight difference in the mean of monthly averages, due to months not always being the same length. But if you would just release the data, we could work that out.
In my simulation, 0.05 was the mean of the distribution used. The mean of the realisation (as plotted) is never quite the same; here it is 0.0533. That would be the daily mean.
A test saying that the observed mean (0.02) is statistically different from zero is pointless. No-one thought it should be zero. The thing is, they measured it, and it is very small.
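On the month-length point, a tiny R check on simulated daily differences (illustrative only):

set.seed(9)
dates <- seq(as.Date("2019-01-01"), as.Date("2021-12-31"), by = "day")
d     <- rnorm(length(dates), mean = 0.02, sd = 0.18)
mean(d)                                        # mean of the daily differences
mean(tapply(d, format(dates, "%Y-%m"), mean))  # mean of the monthly means: close, but not identical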
Nick,
A paired t-test will tell you the difference between the daily means is statistically significant. I think that provides a fairly good ‘arithmetic’ understanding. :-).
Tell us the variances of the different distributions Nick. Other statistical parameters would be welcome also! Why do you have a preoccupation with only the means?
Because the mean is the measure of the difference. It is what you are testing. Jennifer kept telling us that she can prove it isn’t zero. But she won’t tell us what it is – what they measured. We had to wait for the Guardian to tell us that.
The BoM playbook remains in force.
Jennifer, look on the bright side. I don’t think you would be a very happy bunny if Redfearn and the Grauniad were, uniquely, being honest about your sterling efforts. Enemies of truth and always will be.
I remember in late 2009, some honest person posted over 1,000 emails, attachments, files and code on the internet. The content of the posting clearly showed the dishonesty and malice of the CRU and the “Climate Scientists” who assisted the IPCC with their political chicanery.
The contents of the leak, universally now called Climategate, initially (and briefly) disgusted even George ‘Moonbat’ Monbiot. Even our old chum Steven Mosher was (briefly) indignant enough to collaborate with Thomas W. Fuller and rush to print “Climategate: The Crutape Letters” in early 2010.
The reaction of Prince Charles, now His Majesty King Charles III?
Why, he cancelled all appointments, jumped in his shiny Aston Martin, and drove hotfoot to CRU outside Norwich, so he could wipe away the little tears of the Director, Phil Jones.
That is Charles in a nutshell.
I don’t think I need say more. Although I certainly could.
I wonder how anybody ever finds WUWT these days. Before all the search engine censorship, WUWT was the world’s most widely read website on climate change because it included all sides in highly scientific, yet readable form. Today, try any search engine on “climate change” or “global warming.” If the engine claims to have more than 5 pages of results on ANYTHING, even pancakes or some other noncontroversial subject, you will see that the results pages are all the same 1 or 2 pages after the first few pages. And WUWT and other quality skeptic sites never appear.
Some, such as Brave Browser’s search engine, claim to be unbiased, but results are similar: no WUWT, no positive info on Trump except his own website.
Apparently, Brave uses its own database, but it would seem that that database gets its results from where Brave users go, and they can’t go to sites they’ve never heard of.
Anybody have any ideas for fixing this deadly serious search engine skewing?
Regrettably, what you say is true. Even DuckDuckGo fails to return WUWT in the first 12 pages for the search: climate change website.
The search climate change blog has more success, bringing up Wikipedia on the third page of search results: Watts Up With That? (WUWT) is a blog promoting climate change denial that was created by Anthony Watts in 2006. However, WUWT itself does not appear in the first 9 pages of search results.
In both searches, I ran up against bot protection, and in spite of successfully identifying myself as not a bot I couldn’t get any further. Like all search facilities on the web, I think DuckDuckGo uses Google’s search engine, and even though they can turn off tracking and modify the search algorithm a bit, they clearly can’t modify it enough.
I have concluded that DuckDuckGo is using Bing now. I’ve seen identical results from both.
Startpage.com is similar. All the sites that appear in the first 6 pages are alarmist.
I got the same results. WUWT, Climate Etc., and Climate Depot are all missing. Something fishy going on!
Just a point of order: Comments should discuss matters of science. Let’s leave out comments about people’s physical characteristics, even if we disagree with them.
Over the years I’ve blocked Bill Johnston’s email addresses. This morning, with yet a new address, he emails me.
The email is unusually polite, he is attaching a draft of something he is hoping Anthony will post here.
In it he begins by redefining the nature of the statistical test that I applied to the Brisbane data. He falsely suggests that a paired t-test is necessarily a test of whether the mean of the differences between the instruments is zero. That is one option, but not the option that I choose.
I am looking right now at the printout from my Minitab paired t-test of whether the means of the probe and the mercury are significantly different or not. The comments displayed with the results include:
“You can conclude that the means differ at the 0.05 level of significance. The mean of the paired difference is greater than zero.”
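The equivalent one-sided paired test in R looks like the sketch below, on simulated values standing in for the probe and mercury series (not the actual Brisbane data). The alternative = "greater" option corresponds to the Minitab wording that the mean of the paired difference is greater than zero.

set.seed(2)
mercury <- rnorm(1094, mean = 20, sd = 5)
probe   <- mercury + rnorm(1094, mean = 0.02, sd = 0.18)
t.test(probe, mercury, paired = TRUE, alternative = "greater")  # one-sided paired comparison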
There is a problem with anarchy, because when people like Nick and Bill, who actually know and understand a lot, are intent on a particular outcome, many well-meaning people can be misled.
Statistics is important to understand a lot in science.
Scientists are often looking for statistical significance as a test of their hypothesis.
I get confused as to whether Nick and Bill genuinely have no idea what I do, and how I apply the statistics, and whether they really don’t understand the first thing about the options for applying such a test. Perhaps they are just looking for an excuse to write something that appears clever, and that they understand is potentially going to cause others to doubt me. That is gaslighting.
And this morning I did receive some really nasty emails, probably from Guardian readers, describing me as a c*nt. I guess there is more than The King emotionally attached to the idea of a climate catastrophe.
I share this information, and the information in the post about being barred from visiting the Bureau in Melbourne with the Indonesian meteorologists, because the more I read about trauma, the more I know that we are less likely to hold it, and have it eat away at us, when we share information. When we share honestly and with good intent amongst friends.
“Scientists are often looking for statistical significance as a test of their hypothesis.”
So what is your hypothesis?
My hypothesis is that Al Gore pays you to put your crap up on here.
Next question?
[ Bangs head on desk … ]
Please re-read the ATL article.
Then re-read all of the other articles Jennifer has written on the difficulty with extracting “raw” comparison data from the BoM.
Her “hypothesis” is :
“There is a statistically significant difference between measurements made by a mercury thermometer and measurements made by a platinum resistance probe, even when housed in the same Stevenson screen.”
Nick Stokes + Bill Johnston : How do you propose testing that hypothesis ?
Jennifer Marohasy : By asking the BoM for copies of the overlapping datasets.
NS + BJ : That data is already available to the public on the BoM website.
JM : No it isn’t. Here are 30 (?) concrete examples of such data that the BoM is now known to possess that is most definitely not available on their website.
NS + BJ : Asking to make that data available constitutes “harassment” of the BoM.
JM (my paraphrasing) : WTF ?!?
“WTF ?!?”
Really, the conversation goes:
BoM: We have measured the difference and it is small – 0.02°C.
JM: But with paired t tests I can reject the null hypothesis that it is zero
BoM: Yes, it is 0.02°C
JM: But I can prove it isn’t zero
BoM: Yes, it is 0.02°C
JM: OK, see you at the tribunal.
Dear Jennifer,
I am always polite, easy to get along with and I like fishing too!
What I said was “Although I intended to place the note on BomWatch and write a small piece for WUWT that referenced the note, at this stage, once complete I will simply just place it on BomWatch (with the data) and leave it there”, which is actually quite different to what you said I said.
I also said in the note that “The unpaired or 2-sample t-test calculates the probability (P) that the mean for each instrument or screen is the same, while the paired or repeated-measures t-test is a one-sample test of whether the mean of the difference between instruments is zero”.
Any reputable stats book will confirm that (i), the paired t-test is a repeated measures test, and (ii), that it is a one-sample test of whether the mean of the difference is zero.
You say that I “falsely” said “that a paired t-test is necessarily a test of whether the mean of the differences between the instruments is zero”. Then you said the printout said “… The mean of the paired difference is greater than zero”. What don’t you get? The printout confirms what I said.
The one sample in this case is the differences (a - b = delta); delta is one sample for time(1), and the mean of the string of repeated deltas from time(1) to time(n) is compared to zero.
That is the basis of the paired t-test, and as you are using the paired-test (wrongly as it turns out), it is reasonable to expect that you know something about it!
Yours sincerely,
Bill Johnston
http://www.bomwatch.com.au
This is taking me back…
If the hypothesis is that the liquid in glass and the Pt resistor thermometers are from different populations, wouldn’t the two-tailed independent t-test be appropriate?
Assuming, of course, that the other criteria for using t-tests are met.
Well yes, which was the purpose of the note I sent to JM. BTW my intention was to inform, not criticise.
I’m unconvinced that the paired test (which controls for within-subject variation) is appropriate. The paired test is also many times more sensitive (more likely to detect significance between data pairs) than the un-paired test.
A Monte Carlo sampling experiment of raw data pairs (measured in separate screens) at Townsville found the paired test needed fewer than 24 randomly chosen data pairs to declare significance, while the unpaired test needed 560 pairs. It is clear too that the air sampled by instruments within a screen is not spatially or timewise homogeneous, which causes me to regard observations as random response variables.
While I could find no strict definition, I said in the draft report “A vexing question at the outset is whether data collected each day by separate instruments in same-sized screens or from different positions in the same screen (Figure 1) represent two independent subject groups (observations with no connection between them), or if data are collected as true, homogeneous data-pairs, longitudinally, from the same subjects, items or things, as required for a valid, repeated measures (paired) t-test.”
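A rough R sketch of that kind of Monte Carlo comparison follows. The offset, spread and correlation structure are assumptions chosen for illustration, not the Townsville values, and the function name power_at_n is made up here.

set.seed(11)
power_at_n <- function(n, reps = 500, offset = 0.2, sd_air = 1, sd_inst = 0.1) {
  hits <- replicate(reps, {
    air <- rnorm(n, mean = 25, sd = sd_air)        # the shared, varying air temperature
    a   <- air + rnorm(n, sd = sd_inst)            # instrument (or screen) one
    b   <- air + rnorm(n, sd = sd_inst) + offset   # instrument two, with a small offset
    c(paired   = t.test(a, b, paired = TRUE)$p.value < 0.05,
      unpaired = t.test(a, b)$p.value < 0.05)
  })
  rowMeans(hits)  # proportion of runs declaring significance
}
sapply(c(10, 25, 100, 560), power_at_n)  # the paired test detects the offset long before the unpaired test does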
People reading this can add comment or subtract as they see fit.
Cheers,
Bill
“If the hypothesis is that the liquid in glass and the Pt resistor thermometers are from different populations, wouldn’t the two-tailed independent t-test be appropriate?”
Well, at least someone has stated a hypothesis to test. But why? No-one doubts that they are from different populations. The issue is, how different?
You know better than that. If their behaviour isn’t significantly different from each other, you can’t rule out the null hypothesis that they are from the same population.
How does it help to rule out a null hypothesis that no-one advanced or believes? They are different instruments. They are not from the same population.
It’s a behavioural population.
You could say that a 0-1″ 0.0001″ Mitutoyo external micrometer and a Brown and Sharpe 0-1″ 0.0001″ micrometer are from different populations because they come from different manufacturers, but they bloody well should read the same on the same gage block.
That was the whole point of the response time customisation, and the reason for Jennifer’s crusade.
“bloody well should read the same on the same gage block”
Well, the thermometers here are measuring different air, which is the main source of variation. But in any case, if you get down to the limit of their accuracy, and take 1000 samples, you should find a difference. With enough repetitions, you surely will.
Jennifer seems to think that if you show a difference is significant, it must be big. But not so, if you have many instances. That is why the simple mean difference is so basic, and how useless it is instead spouting significance tests.
There are ways of testing that “different air packets” hypothesis as well, with suitable experimental design and instrumentation.
Given the volume of Stevenson screens, and the resolution and response times of liquid in glass thermometers, the differences most likely are normally distributed with a mean of zero.
That seems to be the colloquial meaning of “significant”, but I don’t think Jennifer is using it.
If 2 sample data sets are significantly different using the appropriate tests (and Bill gave a good summary), you can reject the (default) null hypothesis that they came from the same population. Getting down to very small variances is a double edged sword.
I don’t quite follow you there. To quote Pauline, “Please explain”.
“you can reject the (default) null hypothesis”
It isn’t the default. They are different instruments measuring different air. A null hypothesis should be plausible. No-one believes your null here, and there is nothing gained by laboriously rejecting it.
“you can reject the (default) null hypothesis”
Jim has a nice exposition on this in a reply to Bill a bit lower down. Basically, the default null for t-tests is that the means are equal (so the samples are from the same population).
If you’re going to labour the “different air” point, you are effectively rejecting the usefulness of any atmospheric measurements.
“Basically, the default null for t-tests is that the means are equal “
So why use a t-test? The BoM isn’t saying the means are equal. It is saying that the difference is 0.02°C. That is the default, if you want to test something.
As I paraphrased above:
BoM: We have measured the difference and it is small – 0.02°C.
JM: But with paired t tests I can reject the null hypothesis that it is zero
BoM: Yes, it is 0.02°C
JM: But I can prove it isn’t zero
BoM: Yes, it is 0.02°C
You’re arguing for the sake of arguing. Is 0.02 degrees C statistically different from zero at a chosen significance level?
Jennifer says “yes” with a paired t-test. Bill says “no” or at least “perhaps not” with a two-tailed independent t-test.
And that’s just on the means of the daily means, and assuming the criteria for t-tests have been met.
The same should really be done for the means of the daily maxima and minima, along with all the other pre-checks which Bill set out below.
Those pre-checks will show things such as the step change in differences, which really means there are potentially 3 populations, not 2, so there are 3 comparisons needed rather than just 1 (LiG vs early Pt, LiG vs late Pt, early Pt vs late Pt). LiG vs early or late Pt may well not reject the null, but there’s a fair chance that early Pt vs late Pt will.
There are lots of things which can show up in field tests which don’t crop up in the lab tests.
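A minimal R sketch of that three-way split, using a simulated layout with a step from about -0.28 to +0.11 at January 2020 as described in the head post; the data frame brisbane and its columns are stand-ins, not the Bureau’s files.

set.seed(3)
dates    <- seq(as.Date("2019-01-01"), as.Date("2021-12-31"), by = "day")
mercury  <- rnorm(length(dates), mean = 25, sd = 5)
offset   <- ifelse(dates < as.Date("2020-01-01"), -0.28, 0.11)  # the step change described above
brisbane <- data.frame(date = dates, mercury = mercury,
                       probe = mercury + offset + rnorm(length(dates), sd = 0.15))

early <- subset(brisbane, date <  as.Date("2020-01-01"))
late  <- subset(brisbane, date >= as.Date("2020-01-01"))
t.test(early$probe, early$mercury, paired = TRUE)               # LiG vs early Pt period
t.test(late$probe,  late$mercury,  paired = TRUE)               # LiG vs late Pt period
t.test(early$probe - early$mercury, late$probe - late$mercury)  # early vs late probe behaviour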
“You’re arguing for the sake of arguing. Is 0.02 degrees C statistically different from zero at a chosen significance level?”
No, it is a very basic point. Why does it matter? BoM says the difference is 0.02°C. That is unaffected by any testing of whether it is statistically different from zero. All statistical significance shows is that the CI of the 0.02 must be less than 0.02, which can’t be a bad thing.
Agreed, it is a fundamental point, and it applies to all statistical measures, all sites and across humidity ranges.
The BoM went to some effort to replace LiG thermometers with Pt instruments with as near identical behaviour as was practical, including adding thermal mass to replicate response times.
Do the different types of instrument actually behave the same (the null)?
The difference in mean for the entire period is 0.02K, but from the head article: “step-change from an average monthly difference of −0.28 C in December 2019 to +0.11 in January 2020″
Granted, 1-month figures may not be representative, but the 0.02 comes from averaging across that discontinuity, with 2 separate Pt instruments. These really should be separate comparisons as I noted, and, on preliminary visual appraisal (ie a quick squiz), will give results quite different from that 0.02
Applying a constant offset of 0.02 (either adding to the LiG readings or subtracting from Pt) increases the Dec difference to -0.30 and decreases the Jan difference to 0.09. Neither is entirely satisfactory.
Back to the t-test. Do the Pt instruments have identical behaviour to LiG? That’s what the t-tests establish, and it needs to be for max and min, not just mean.
If the null is rejected, they then have to come up with some method of compensating for those differences at all points of the range. It’s a damn sight easier to not have to do that.
“That’s what the t-tests establish”
t-tests never establish that. The best they can do is fail to reject the null hypothesis, which could just mean you don’t have enough data.
“If the null is rejected, they then have to come up with “
They work on the measured mean, whatever it is. 0.02, though SS (maybe), is too small to require compensation. Their decision is in no way affected by rejecting the null.
Those hairs you’re splitting are getting into the picometre range by now 🙂
The usual intent of devising tests is to be able to reject the null, so failing to reject the null isn’t the best. As with Jennifer’s Brisbane data example, the null hypothesis that the mean of the daily means of the combined Pt instruments is the same as the LiG probably wouldn’t be rejected. A null that the means of the maxima (or minima) of the 2 Pt instruments is the same may well be rejected.
Yes, more data should allow discrimination between smaller differences.
You have just informally failed to reject the null.
All testing is really doing here is setting a more formal level on when it’s worth compensating.
Means of means of one site don’t <ahem> mean much, especially somewhere like Brisbane. More variable inland locations will have greater diurnal and seasonal ranges, greater humidity ranges, and are more susceptible to phenomena such as willy willies. Those are more likely to show greater behavioural response differences in the instruments.
Now all you need to do is to show that the best scientific way forward to check that the difference isn’t big is to hide the data behind a multiyear obfuscation and FOI resistance exercise
Clown.
Nick,
The very first daily average taken obscures any statistically significant difference in devices. The second average for a month obscures even more differences behind a veil that can not be penetrated.
You are doing your best to destroy this work in your zeal to protect the ability to “correct” data such that long records can be preserved.
Scientific integrity is important. These tests need to be proved incorrect through the scientific method, not by simply declaring it wrong!
You are doing your best to describe precision of a measuring device.
“Jennifer seems to think that if you show a difference is significant, it must be big. But not so, if you have many instances. That is why the simple mean difference is so basic, and how useless it is instead spouting significance tests.”
Precision is also called repeatability of measurement. The number of instances can refine the range of precision but it will not reduce it. If two devices continually show a difference, small or large, there is a difference in precision.
You can’t wash away the implications that differences in two devices have. I know you are afraid that anomalies in the thousandths digit are in danger with some of this research. That train has already left the station!
You would do better to begin your own analyses showing that these differences are not significant.
Nick doesn’t have a clue about physical measurements! You might want to tell him what an external micrometer and a gauge block is, lol!
Getting way off topic here, but is thermal expansion taken into account in their design or calibration?
As far is I know, they are supposed to settle to 20 degrees C for calibration.
Nick, you really are ignorant when discussing physical measurements, their accuracy, or their uncertainty.
They are two different devices measuring the same thing, temperature! They are each taking SAMPLES of exactly the same population. The difference in their readings is important.
If the means have statistically significant differences, then that is ANOTHER reason for stopping the prior record and starting a brand new one. At that point, you simply can not claim that only minor adjustment is needed.
“At that point, you simply can not claim that only minor adjustment is needed.”
The mean difference at Brisbane was 0.02°C. That is minor.
If they are measuring the same thing, i.e., the temperature, then they are sampling the same population. However, if they are highly correlated, and they should be, they are not “random samples” of the same population.
For the paired test, the difference should be normal. I’m not sure about this.
For the two-tailed test, it requires both distributions be normal. I can pretty much guarantee that this isn’t true.
If you think about what is being tested, if there isn’t normality, the comparison of means probably doesn’t have any meaning either.
The populations here are the instrumental measurements, not what they are measuring.
If they measure the same thing differently, they are different.
It’s a field test version of calibration tests in a controlled liquid bath
Bill’s test procedure a bit lower down does pre-checks of the distributions.
Here is a link discussing a paired t-test.
——————-
https://www.statology.org/paired-samples-t-test/
Assumptions for this test:
For the results of a paired samples t-test to be valid, the following assumptions should be met:
• The participants should be selected randomly from the population.
• The differences between the pairs should be approximately normally distributed.
• There should be no extreme outliers in the differences.
If the p-value that corresponds to the test statistic t with (n-1) degrees of freedom is less than your chosen significance level (common choices are 0.10, 0.05, and 0.01) then you can reject the null hypothesis.
A paired samples t-test always uses the following null hypothesis:
H0: μ1 = μ2 (the two population means are equal)
—————–
Here is a link for a two-sample t-test.
https://www.statology.org/two-sample-t-test/
Two Sample t-test: Assumptions
For the results of a two sample t-test to be valid, the following assumptions should be met:
• The observations in one sample should be independent of the observations in the other sample.
• The data should be approximately normally distributed.
•The two samples should have approximately the same variance. If this assumption is not met, you should instead perform Welch’s t-test.
• The data in both samples was obtained using a random sampling method.
If the p-value that corresponds to the test statistic t with (n1+n2-2) degrees of freedom is less than your chosen significance level (common choices are 0.10, 0.05, and 0.01) then you can reject the null hypothesis.
A two-sample t-test always uses the following null hypothesis:
H0: μ1 = μ2 (the two population means are equal)
—————
You’ll notice there are two assumptions for the two-sample t-test that have a questionable outcome.
• Independence — I suspect that since the measurements from both devices are ostensibly measuring the same thing, they will be highly correlated, i.e., not independent.
• Normal distribution of the data — Knowing the general distribution of monthly data, I doubt this is true.
It appears to me that the paired test is an appropriate test for determining if the means are equal. They both can determine if the null hypothesis should be rejected.
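For what it is worth, here is a minimal R sketch running both tests with the assumption checks listed above, on simulated paired readings rather than real station data:

set.seed(4)
mercury <- rnorm(365, mean = 22, sd = 4)
probe   <- mercury + rnorm(365, mean = 0.02, sd = 0.18)
d       <- probe - mercury

shapiro.test(d)                            # are the differences approximately normal?
t.test(probe, mercury, paired = TRUE)      # paired test: a one-sample test on the differences
t.test(probe, mercury, var.equal = FALSE)  # Welch's two-sample test, no equal-variance assumption
cor(probe, mercury)                        # high correlation: the two samples are not independent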
This is all getting muddled and I’m getting muddled-up between the Gormans!
To be clear, aside from the cost (which is considerable), I don’t care if the Bureau or Jennifer spends years digitising past records. What I care about is the integrity of the statistical tests being used to compare instruments. What I mean is:
The word independent has two meanings. Firstly, whether test subjects/data (between pairs) are independent (unrelated); and secondly, that datapoints at time(x) are not related to previous data, i.e. the data are serially independent.
Something has to be homogeneous – in this case, as the air is the ‘control’ (presumed to be exactly the same for both instruments) the air being measured has to be spatially and temporally homogeneous within the screen (or if instruments are housed in different screens, between them).
Because the paired t-test is intended to control for within-instrument variation, strictly speaking, variation in “the air” cannot contribute to “instrument variation” (i.e. in an oil-bath or ice-bucket sense).
However, air is turbulent within a screen and instruments are not physically located in the same place within screens, and obviously between screens. The probes are also located nearer the back-wall of the box, than thermometers, which are on a frame closer to and facing the door. The rear wall also faces north (towards the sun in the S-hemisphere).
So while many readings will be the same, many will not (look across the distribution in Jennifer’s graph). The other problem is that differences, when they occur, are likely to be small, and not necessarily attributable to either instrument.
Taking all this into account and considering the scale of likely differences (mostly less than the uncertainty of an observation), and that as numbers of observations increase, the chance of detecting small differences increase also, the paired t-test (which tests that the mean difference between consecutive observations is zero) is biased relative to independent or un-paired tests.
(The null H0: μ1 = μ2 (the two population means are equal) tests that the difference μ1 - μ2 = 0.) (See: https://statisticsbyjim.com/hypothesis-testing/paired-t-test/)
All this aside, the overarching problem for either of the tests is that daily data are autocorrelated across all lags.
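As a minimal illustration of that autocorrelation point, an R sketch on simulated daily differences with some persistence built in (the AR(1) coefficient of 0.5 is an assumption for illustration):

set.seed(5)
d <- 0.02 + as.numeric(arima.sim(model = list(ar = 0.5), n = 1094, sd = 0.15))  # persistent differences
acf(d, lag.max = 30)                       # bars outside the bands indicate autocorrelation
Box.test(d, lag = 10, type = "Ljung-Box")  # a formal test of serial independence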
Quite independently of the focus of the post, I have found the commentary to be helpful.
Yours sincerely,
Bill Johnston
Dear Jennifer and friends,
You say “I get confused as to whether Nick and Bill genuinely have no idea what I do, and how I apply the statistics, and whether they really don’t understand the first thing about the options for applying such a test.”
I agree it is confusing and I’m sure Nick finds it so too. However, I’m here to help (reach out, as they say!).
While my note was intended to un-confuse, the part I had not written was the step-by-step protocol.
So here is a summary (presuming you already have your data in a stats package). I used PAST from the University of Oslo, https://www.nhm.uio.no/english/research/resources/past/, to generate the data in my plots. (PAST is free and easy to use; it is a desktop application, not a programming language, and its outputs are the same as R.)
Firstly, calculate summary statistics and check the treatment (instrument) means and moments (1st Quartile (25th pc), median and 75th Q); also, standard error, SD and variance (which are all related).
Calculate the difference between the means as a ratio of the standard deviation (a-b)/SD, this gives Cohen’s d which roughly determines if the difference (the effect size) is likely to be negligible (less than 0.2 SD units), small (>0.2), medium (>0.5) or large (>0.8).
Secondly, plot your data in time order and look for cycles and trends.
Thirdly, make a histogram (graph), overlay the normal distribution line and look to see if your data is normally distributed, bimodal, long or short tailed etc.
Fourth, plot a Q-Q (normal distribution) plot and decide if data need to be adjusted or transformed.
Fifth, plot an ACF (autocorrelation) plot to see what signals are in the data and how they are related (i.e., check the data are independent, or whether the degree of autocorrelation is a concern). [This is under Timeseries in PAST]
Sixth, now the tricky bit, which I do when I summarise the dataset using R. You need an index column on the left of the data (say Column A) that specifies day of the year for all pairs of observations (1 to ~ 366). This can be done in Excel using the date (search “extract day of year from date in Excel”).
Sticking with Excel, do a pivot table based on Day of Year, and find averages for each day. Copy and paste that as numbers a few columns out beside your table of data, say in Cols J and K. That list forms a lookup table for finding the average for each of day of year in your data.
Assuming your data are in Columns E and F, starting in Row3; your day numbers are in Column A and the lookup table is Columns J [day number] and K [day-of-year average], the lookup formula is:
G3 = E3 - VLOOKUP(A3, [highlight all rows of Cols J and K, then hit F4 to fix the range], 2)
This Excel formula asks VLOOKUP to look up the reference row number in Col A in the first column of the array table (Cols J and K) and return the value in the second column (specified by the “,2” before the closing bracket). The overall formula will then deduct the lookup value from the data value, to give the anomaly value for the data in Column E for that day. Copy that formula to the bottom of Column G. Also copy the formula into H3. Highlight Cell H3, move the array index back one place to Column J, leave the 2 as it was, and copy the revised formula to the bottom of the data table. [If I can do it, it must be easy!]
Now you have four columns, two of raw data and two of day-of-year anomalies for each day, plus the lookup table. (Best to copy and paste the dataset onto a fresh page as numbers, and save your work.)
Now repeat the first four steps using anomaly data –> summary, timewise graph, histogram, Q-Q plot and ACF plot and check how things changed.
You can either then continue with PAST and do the Univariate two-sample tests, or use Minitab, or whatever (because I have written some script that does additional things, I use R). If you have the option (which you don’t in PAST (yet)), ask for Cohen’s d with 95% confidence intervals.
As anomalies have been de-cycled and are more likely to be normally distributed, you especially need to check if tests on raw data are still significant when conducted on anomalies.
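For those who prefer R to Excel, here is a rough sketch of the de-cycling idea in steps one to six. The simulated seasonal cycle, the choice of a single day-of-year climatology built from the first instrument, and the object names are all assumptions for illustration, not a prescription.

set.seed(6)
dates <- seq(as.Date("2019-01-01"), as.Date("2021-12-31"), by = "day")
doy   <- as.integer(format(dates, "%j"))                           # day of year, 1 to 366
inst1 <- 20 + 8 * sin(2 * pi * doy / 366) + rnorm(length(dates))   # simulated instrument one
inst2 <- inst1 + rnorm(length(dates), mean = 0.02, sd = 0.18)      # simulated instrument two

clim  <- tapply(inst1, doy, mean)                # day-of-year averages: the 'lookup table'
anom1 <- inst1 - clim[as.character(doy)]         # de-cycled anomalies, instrument one
anom2 <- inst2 - clim[as.character(doy)]         # de-cycled anomalies, instrument two

(mean(anom2) - mean(anom1)) / sd(anom2 - anom1)  # Cohen's d, one convention for paired data
hist(anom1, breaks = 40)                         # step three: histogram of the anomalies
qqnorm(anom2); qqline(anom2)                     # step four: Q-Q plot
acf(anom2 - anom1, lag.max = 30)                 # step five: autocorrelation of the differences
t.test(anom2, anom1, paired = TRUE)              # is the test still significant on anomalies?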
Any probs, you can email me (you are NOT blocked!)
Kind regards,
The ever helpful Dr Bill
http://www.bomwatch.com.au
I think it would be useful to number the sub-steps of Step 6.
It looks like Step 6 gives a column of daily averages of each instrument, then 2 mirror image columns of the deltas from those daily averages. What is the benefit of that over Jennifer’s approach of using one of the columns of data as the reference?
Thanks Old Cocky,
Step 6 as described is a bit cumbersome; the easiest thing is to provide a workbook example. Starting in Row 3, the date is in Column D and the raw data are in Columns E and F – thus: date, Instrument1, Instrument2. Column B is spare (a useful progressive index for sorting). Calculate day of year in Column A.
Step 6 finds a mean for each day of the year (which is the pivot table).
The means and their day-of-year index are manually copied and pasted as numbers to the worksheet in Columns J and K (there will be 366 rows). That series is the lookup table containing day-of-year averages. (The pivot can be looked up directly, but it is more difficult to explain how to do it.)
Using Column A as the lookup reference and Column J as the lookup index, the rest of the calculations deduct the second column of the lookup table (Column K, referred to as “,2”) from the data column (Column E).
So described it seems complex, but as I do it all the time it is actually only four or five steps. J3:L?? is the entire array, which is fixed by F4, while col_number refers to the number of the column within the array that is “looked up” and differenced by the formula (E3 (or F3) minus the looked-up value).
You have convinced me of a need for a worked example.
Nite nite,
Bill
If I read your steps correctly you missed one important step. It appears you are starting with daily averages. You need to back up one average and examine Tmax and Tmin.
Tmax and Tmin are not independent. I can assure you that the two temperatures are highly correlated.
That contaminates all subsequent averages as the base data is not independent. Simply averaging them is not an accepted transform to remove that correlation. A simple average only hides the difference.
Tmax and Tmin are also samples from two very different distributions. Daytime temps follow a sine wave with some variance due to weather, while Tmin follows an exponential/high-order polynomial distribution. Again, averaging Tmax and Tmin hides this variance without dealing with it in a direct manner.
Lastly, neither Tmax nor Tmin has a normal distribution around a mean over a month. Seasons guarantee that this will be the case in most months: in most months you are leaving one season and going into the next, with ensuing varying temps.
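Both claims are easy to check for any station. A minimal base-R sketch, assuming a hypothetical CSV with daily Tmax and Tmin columns, might look like this:

tt <- na.omit(read.csv("station_daily.csv")[, c("Tmax", "Tmin")])   # hypothetical file

cor(tt$Tmax, tt$Tmin)                          # how correlated are Tmax and Tmin?
plot(tt$Tmax, tt$Tmin, pch = ".", xlab = "Tmax (C)", ylab = "Tmin (C)")

# Do the two look like draws from the same distribution?
hist(tt$Tmax, col = rgb(1, 0, 0, 0.4), main = "Tmax vs Tmin", xlab = "Degrees C")
hist(tt$Tmin, col = rgb(0, 0, 1, 0.4), add = TRUE)

# Normality of either variable within a single month can be checked the same
# way on a monthly subset, e.g. with qqnorm() and shapiro.test().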
Of course Michael Mann was, and remains, a great example of the GangGreen Climate Scientists’ use and understanding of statistics.
Not.
I did notice that in Readfearn’s article, he wrote that BoM takes the LAST of the second-by-second temp readings as the entry of record.
Not the HOTTEST of the second-by-second readings?
What’s Up With That?
It is easily disproven, but everything takes time. And in the interim it just adds to the noise and anarchy.
If only we could move forward on real issues.
Is this function an automated process (i.e., hard-wired into the equipment), or a programmable function that the equipment installers/operators can customise?
“explaining there was a fault in the automatic weather station that was immediately fixed”.
In situations like this, isn’t the log of readings annotated in some way to indicate when the station was fixed, how long it was broken, that sort of thing?
If not, why not?
It appears that Jennifer was (eventually) given some log values, but was left to guess as to why this step change existed, which hardly seems ideal.
We’re using these readings as evidence that the world is in the middle of a climate crisis (!!!!), so I would have thought that a little bit of care and attention around probe maintenance would be in order. I don’t want to live like a stone-age African unless I really need to.
I will refer to my comment yesterday about probe design. I have no objection to the use of any suitable temperature-reading method or device – platinum resistance, mercury or whatever. I do object to the overall design of the probe, as any change of instrument should match the original as closely as possible. I do not understand why the Pt probe is not enclosed in the same mass of glass (including the equivalent of the mass of the mercury) as the original, to ensure the same thermal time constant. This is much the most likely cause of any differences in readings, but it seems to be ignored. ‘Why?’ is the most important question here. The overall problem is then that it is impossible to correct the readings in any useful way so that the responses are exactly the same. No statistic will remove the problem, and it will cause endless disagreement – which may be the idea behind the scenes! The t-test also requires that the data have a normal distribution, which is unlikely if the Pt response is virtually instantaneous. More problems!
It is why the past record should be closed and a new one started. Changing data is not scientific. If the record is not useful, it should be declared unfit and we should move on!
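The time-constant point can be illustrated with a toy simulation: feed the same one-second temperature trace through two first-order lags, one fast (a bare platinum element) and one slow (a sensor buried in a mercury-in-glass-sized thermal mass), and compare the peaks each would record. The time constants below are purely illustrative and are not BoM specifications.

set.seed(1)
t_air <- 20 + c(rep(0, 60), 1.5 * exp(-((1:240) - 40)^2 / 200), rep(0, 60)) +
         rnorm(360, sd = 0.05)                 # a brief warm eddy plus noise, 1-second steps

lag1 <- function(x, tau) {                     # first-order lag with time constant tau (seconds)
  y <- numeric(length(x)); y[1] <- x[1]
  for (i in 2:length(x)) y[i] <- y[i - 1] + (x[i] - y[i - 1]) / tau
  y
}

fast <- lag1(t_air, tau = 5)                   # hypothetical quick-response probe
slow <- lag1(t_air, tau = 60)                  # hypothetical glass/mercury-like response

c(max_fast = max(fast), max_slow = max(slow))  # the quicker probe records the higher peak

Whether the real probes differ by that much is exactly the question being raised; the sketch only shows that a faster sensor will record higher maxima from the same air.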
jus’ wonderin’…..
Wind the clock back 30,000 years and more.
Picture an Australia that was a tropical and sub-tropical forest/jungle/rainforest
(tiny remnants still remain)
Question: Was there ENSO then?
I would assert: Not
Because: Australia at that time would have been making its own ‘water based’ weather.
There would have been near daily rain and thunderstorms all across the continent.
Exactly as happens in what’s left of any rainforests still on Earth – clouds & rain follow the wake of El Sol as it glides overhead.
Not least also over the oceans themselves.
And in doing that, Australia would have worked as a ‘water bridge’ linking the South Pacific and Indian Oceans.
The great thing about water, which of course Climate Science is in epic denial about, is that water can move/store/dump simply humongous amounts of (heat) energy around with only very modest physical effort on its part.
So, in a similar way to how lightning conductors properly work (gently draining and defusing energy from up in the sky), all the water/storms/convection over Australia would have constantly and gently drained the heat energy that presently accumulates in The Warm Pool off Eastern Australia – it being what triggers an El Nino when that pool gets too big and collapses.
Ordinary rotation of Earth would mean that that pool was constantly trying to build anyway, there’s no stopping that.
But when Australia was burned and became desert: the water, the T-storms and the convection stopped – the warm pool built itself to a massive extent and ENSO started.
…..any better explanation…..
Thus: Let the Murray Darling Basin (re)fill with water – even if you just put a modest little dam across its exit and pump it full of seawater.
Wouldn’t that Fix The Climate, given that the only significant source of rising temps is when an El Nino happens?
Flooding the Murray Darling would switch off Nino and (global??) temps would stop rising.
I’m sure Nino and Nina can find ways to amuse themselves away from wasting the weather.
;-D
Meanwhile, in that (general) corner of the world:
This is hideous and from the BBC, where else.
Headline: “Climate change: Vietnam records highest-ever temperature of 44.1C”
https://www.bbc.co.uk/news/world-asia-65518528
See the screenshot/grab
Two things:
1/ Almost all of Thanh Hoa province is = A City.
It is one great big Urban Island
2/ Just look at the stuff coming out of that river.
There is Soil Erosion
There is your heatwave, your coldwave, your drought, your floodings, your CO2 and:
Yes it is all real – the city and the erosion surrounding it caused it all
The Dancing Angels are innocent
The Phlogiston is circumstantial
The Emperor is getting cold – as we all will be soon
From the article: “As I see it, they don’t want to transcribe the manually recorded measurements, lest they betray a high level of incompetence, if not malfeasance.”
That’s the way I see it, too. Why else would they fight so hard and so dirty to hide the data?
Banning you from entering their building? They *really* don’t want you seeing the data. Personal slights like this are another good sign you are over the target, and an indication of how petty the BOM really is. Or is that fear? It might be fear that makes them swing wildly like this.
Temperature data is public property. It should be made public. The BOM has no good reason not to make it public.
A big danger in delaying Operation Overlord was that at some point the Germans would find out it was Normandy and not the Pas de Calais and concentrate their forces there. What is often missed in the history is that when the next ideal conditions for the invasion came, there was a storm that wrecked the US Mulberry harbour. The bad weather in early June convinced many senior German commanders that no invasion would be coming, so they attended a war-game exercise or, as in Rommel’s case, went on leave. By such fine margins is history made.