Fussing Over One Degree of Simulation

Reposted from Jennifer Marohasy’s Blog

August 12, 2021 By jennifer

I was at the Australian National University in October 2018, when the largest supercomputer in the Southern Hemisphere began running the simulations that have now fed into the IPCC’s Assessment Report No. 6 (AR6). It’s being touted as the most comprehensive climate change report ever. It is certainly based on a very complex suite of simulation models (CMIP6).

Many are frightened by the official analysis of the model’s results, which claims global warming is unprecedented in more than 2000 years. Yet the same modelling claims the Earth has warmed by only about one degree Celsius! Specifically, the claim is that we humans have caused 1.06 °C of the claimed 1.07 °C rise in temperatures since 1850, which is not very much. The real-world temperature trends that I have observed at Australian locations with long temperature records suggest a much greater rate of temperature rise since 1960, and cooling before that.

A little historical perspective shows that the IPCC is wrong to label the recent temperature changes ‘unprecedented’. They are not unusual in magnitude, direction or rate of change, which should diminish fears that recent climate change is somehow catastrophic.

To understand how climate has varied over much longer periods, over hundreds and thousands of years, various types of proxy records can be assembled, derived from the annual growth rings of long-lived tree species, from corals and from stalagmites. These records provide evidence that there were periods during the past several thousand years (the late Holocene) that were either colder than the present, for example the Little Ice Age (1309 to 1814), or of similar temperature, such as the Medieval Warm Period (985 to 1200). They also show that global temperatures have cycled within a range of up to 1.8 °C over the last thousand years.

Indeed, the empirical evidence, as published in the best peer-reviewed journals, suggests that there is no reason to be concerned by a 1.5 °C rise in global temperatures over a period of one hundred years – that this is neither unusual in rate nor in magnitude. That the latest IPCC report, Assessment Report 6, suggests catastrophe if we cannot contain warming to 1.5 °C is not in accordance with this empirical evidence; rather, it is a conclusion based entirely on simulation modelling that falsely assumes these models can accurately simulate ocean and atmospheric weather systems. There are better tools for generating weather and climate forecasts, specifically artificial neural networks (ANNs), a form of artificial intelligence.

Of course, there is nowhere on Earth where the average global temperature can be measured; it is very cold at the poles and rather warmer in the tropics. So, the average global temperature for each year since 1850 could never be a direct ‘observation’, but rather, at best, a statistic calculated from measurements taken at thousands of weather stations across the world. And can it really be accurately calculated to some fractions of a degree Celsius?
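
To see why it is a statistic rather than an observation, here is a minimal sketch (a simplified illustration only, not the actual HadCRUT procedure) of how such a number might be assembled: station values are binned into grid cells and the cells are weighted by the area they represent.

import numpy as np

# Hypothetical station data: (latitude, longitude, annual-mean temperature anomaly in °C)
stations = [(-12.4, 130.9, 0.8), (51.5, -0.1, 1.1), (64.8, -147.7, 1.6), (-33.9, 151.2, 0.7)]

# Bin stations into 5-degree grid cells and average within each cell
cells = {}
for lat, lon, anom in stations:
    key = (int(lat // 5), int(lon // 5))
    cells.setdefault(key, []).append(anom)

# Weight each occupied cell by the cosine of its central latitude, because
# cells near the poles cover much less area than cells near the equator
weights, values = [], []
for (lat_idx, _), anoms in cells.items():
    lat_centre = lat_idx * 5 + 2.5
    weights.append(np.cos(np.radians(lat_centre)))
    values.append(np.mean(anoms))

global_mean = np.average(values, weights=weights)
print(f"area-weighted 'global' anomaly: {global_mean:.2f} °C")

The point of the sketch is simply that every choice along the way (cell size, weighting, how empty cells are handled) is a statistical decision, not a measurement.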

AR6, which runs to over 4,000 pages, claims to have accurately quantified everything, including confidence ranges for the ‘observation’ of 1.07 °C. Yet I know from scrutinising the datasets used by the IPCC that the single temperature series input for individual locations incorporate rather large ‘adjustments’ by national meteorological services. To be clear, even before the maximum and minimum temperature values from individual weather stations are incorporated into HadCRUT5 they are adjusted. A key supporting technical paper (e.g. Brohan et al. 2006, Journal of Geophysical Research) clearly states that: ‘HadCRUT only archives single temperature series for particular location and any adjustments made by national meteorological services are unknown.’ So, the idea that the simulations are based on ‘observation’ with real, meaningful ‘uncertainty limits’ is just not true.

According to the Australian Bureau of Meteorology (BOM), which is one of the national meteorological services providing data for HadCRUT, the official remodelled temperatures are an improvement on the actual measurements. Perhaps they are an improvement in the sense that they better accord with IPCC policy; the result is a revisionist approach to our climate history. In general, the remodelling strips the natural cycles from the datasets of actual observations, replacing them with linear trends that accord with IPCC policy.

The BOM’s Blair Trewin, one of the 85 ‘drafting authors’ of the Summary for Policy Makers, in 2018 remodelled and published new values for each of the 112 weather stations used to calculate an Australian average over the period 1910 to 2016, so that the overall rate of warming increased by 23%. Specifically, the linear trend for Australian temperatures had been 1 °C per century, as published in 2012 in the Australian Climate Observations Reference Network – Surface Air Temperature (ACORN-SAT) database version 1. Then, just in time for inclusion in this new IPCC report released on Tuesday, all the daily values from each of the 112 weather stations were remodelled and the rate of warming increased to 1.23 °C per century in ACORN-SAT version 2, published in 2018. This broadly accords with the increase in warming between the 2014 IPCC report (Assessment Report No. 5), which gave 0.85 °C of warming since 1850, and this new report, which gives 1.07 °C.
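
The trend figures quoted here are at least easy to reproduce in principle. A minimal sketch, using made-up annual means rather than the ACORN-SAT data, of fitting a linear trend and expressing it per century:

import numpy as np

# Hypothetical annual mean temperatures (°C) for 1910-2016; substitute real values
years = np.arange(1910, 2017)
rng = np.random.default_rng(0)
temps = 21.0 + 0.01 * (years - 1910) + rng.normal(0, 0.3, years.size)

# Ordinary least-squares linear fit; the slope is in °C per year
slope_per_year, intercept = np.polyfit(years, temps, 1)
print(f"trend: {slope_per_year * 100:.2f} °C per century")

Run this once on version 1 of a dataset and once on version 2, and the difference between the two slopes is the effect of the remodelling.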

Remodelling of the datasets by the national meteorological services generally involves cooling the past, by dropping down the values in the first part of the twentieth century. This is easy enough to check for the Australian data, because it is possible to download the maximum and minimum values as recorded at the 112 Australian weather stations for each day from the BOM website, and then compare these with the values listed in ACORN-SAT version 1 (which I archived some years ago) and ACORN-SAT version 2 (which is available at the BOM website). For example, the maximum temperature recorded at the Darwin weather station was 34.2 °C on 1 January 1910 (this is the very first value listed). This value was changed by Blair Trewin to 33.8 °C in the creation of ACORN-SAT version 1. He ‘cooled’ this historical observation by a further 1.4 °C in the creation of ACORN-SAT version 2, just in time for inclusion in the values used to calculate a global average temperature for AR6. When an historical value is cooled relative to present temperatures, an artificial warming trend is created.
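
Anyone can repeat this kind of check. A sketch along the following lines would do it, assuming the raw BOM daily maxima and the corresponding ACORN-SAT series have first been saved locally as CSV files with ‘date’ and ‘tmax’ columns (the filenames and column names here are placeholders):

import pandas as pd

# Placeholder filenames: export the raw daily maxima and the ACORN-SAT series
# for the same station (e.g. Darwin) to CSV before running this.
raw = pd.read_csv("darwin_raw_daily_max.csv", parse_dates=["date"])
acorn_v2 = pd.read_csv("darwin_acorn_v2_daily_max.csv", parse_dates=["date"])

merged = raw.merge(acorn_v2, on="date", suffixes=("_raw", "_v2"))
merged["adjustment"] = merged["tmax_v2"] - merged["tmax_raw"]

# How much, on average, have the early observations been moved?
early = merged[merged["date"] < "1950-01-01"]
print(early["adjustment"].describe())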

I am from northern Australia; I was born in Darwin, so I take a particular interest in its temperature series. I was born there on 26th August 1963. A maximum temperature of 29.6 °C was recorded at the Darwin airport on that day from a mercury thermometer in a Stevenson screen – an official recording station using standard equipment. This is also the temperature value shown in ACORN-SAT version 1. This value was dropped down/cooled by 0.8 °C by Blair Trewin in 2018, in the creation of ACORN-SAT version 2. So, the temperature series incorporated into HadCRUT5, one of the global temperature datasets used in all the IPCC reports, shows the contrived value of 28.8 °C for 26th August 1963, yet on the day I was born a value of 29.6 °C was entered into the meteorological observations book for Darwin. In my view, changing the numbers in this way is plain wrong, and certainly not scientific.

The BOM justifies remodelling because of changes to the equipment used to record temperatures and because of the relocation of the weather stations, except that they change the values even when there have been no changes to the equipment or locations. In the case of Darwin, the weather station has been at the airport since February 1941, and an automatic weather station replaced the mercury thermometer on 1 October 1990. For the IPCC report (AR5) published in 2014, the BOM submitted the actual value of 29.6 °C as the maximum temperature for Darwin on 26th August 1963. Yet in November 2018, when the temperatures were submitted for inclusion in the modelling for this latest report (AR6), the contrived value of 28.8 °C was submitted.

The temperature series that are actual observations from weather stations at locations across Australia tend to show cooling to about 1960 and warming since then. This is particularly the case for inland locations in southeast Australia. For example, the actual observations from the weather stations with the longest records in New South Wales were plotted for the period to 1960 and then from 1960 to 2013, for a presentation that I gave to the Sydney Institute in 2014. I calculated an average cooling from the late 1800s to 1960 of 1.95 °C, and an average warming of 2.48 °C from the 1960s to the present, as shown in Table 1. Yet this new United Nations IPCC report claims inevitable catastrophe should warming exceed 1.5 °C – something that can be shown to have already occurred at many Australian locations.

This is consistent with the findings in my technical report published in the international climate science journal Atmospheric Research (volume 166, pages 141-149) in 2015, which shows significant cooling in the maximum temperatures at the Cape Otway and Wilsons Promontory lighthouses, in southeast Australia, from 1921 to 1950. The cooling is more pronounced in temperature records from the farmlands of the Riverina, including at Rutherglen and Deniliquin. To repeat: while temperatures at the lighthouses show cooling from about 1880 to about 1950, they then show quite dramatic warming from at least 1960 to the present. In the Riverina, however, minimum temperatures continued to fall through the 1970s and 1980s because of the expansion of the irrigation schemes. Indeed, the largest dip in the minimum temperature record for Deniliquin occurs just after the Snowy Hydroelectric Scheme came online. This is masked by the remodelling, which drops down/cools all the minimum temperature observations at Deniliquin before 1971 by 1.5 °C.

In my correspondence with the Bureau about these adjustments, it was explained that irrigation is not natural and that there is therefore a need to correct the record, by remodelling the series from these irrigation areas until they show warming consistent with theory. But global warming itself is not natural, if it is essentially driven by human influence, which is a key assumption of current policy. Indeed, there should be something right up front in the latest assessment of climate change by the IPCC (AR6) explaining that the individual temperature series have been remodelled before inclusion in the global datasets, to ensure a significant human influence on climate in accordance with IPCC policy. These remodelled temperature series are then incorporated into CMIP6, which is so complex it can only be run on a supercomputer, and which generates scenarios for a diversity of climate parameters, from sea level to rainfall.

In October 2018, I visited the Australian National University (ANU) to watch CMIP6 at work on the largest supercomputer in the Southern Hemisphere. It was consuming obscene amounts of electricity to run the simulations for this latest IPCC report, and it is also used to generate medium to long-range rainfall forecasts for the BOM. The rainfall forecasts from these simulation models, even just three months in advance, are notoriously unreliable. Yet we are expected to believe forecasts from simulations that make projections 100 years in advance, as detailed in AR6.

There are alternative tools for generating temperature and rainfall forecasts. In a series of research papers and book chapters with John Abbot, I have documented how artificial neural networks (ANNs) can be used to mine historical datasets for patterns and, from these, generate more accurate medium and long-range rainfall and temperature forecasts. Our forecasts don’t suggest an impending climate catastrophe, but rather that climate change is cyclical, not linear. Indeed, temperatures change on a daily cycle as the Earth spins on its axis, temperatures change with the seasons because of the tilt of the Earth relative to its orbit around the Sun, and there are ice ages because of changes in the orbital path of the Earth around the Sun, and so on.

Taking this longer perspective, considering the Sun rather than carbon dioxide as a driver of climate change, and inputting real observations rather than remodelled/adjusted temperature values, we find recurrent temperature cycles of more than 1.07 °C during the last 2000 years. Our research paper entitled ‘The application of machine learning for evaluating anthropogenic versus natural climate change’, published in GeoResJ in 2017 (volume 14, pages 36-46), shows a series of temperature reconstructions from six geographically distinct regions and gives some graphic illustration of the rate and magnitude of the temperature fluctuations.
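
For illustration only (this is not the method used in the paper), a generic way to look for recurrent cycles in a long temperature reconstruction is a simple periodogram of the detrended series:

import numpy as np

# Hypothetical 2000-year annual proxy-temperature series with a ~1000-year cycle
years = np.arange(2000)
rng = np.random.default_rng(1)
proxy = 0.6 * np.sin(2 * np.pi * years / 1000) + rng.normal(0, 0.2, years.size)

# Remove the linear trend, then look at the power at each frequency
detrended = proxy - np.polyval(np.polyfit(years, proxy, 1), years)
spectrum = np.abs(np.fft.rfft(detrended)) ** 2
freqs = np.fft.rfftfreq(years.size, d=1.0)      # cycles per year

dominant = np.argmax(spectrum[1:]) + 1          # skip the zero-frequency term
print(f"dominant period: about {1 / freqs[dominant]:.0f} years")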

ANNs are at the cutting edge of AI technology, with new network configurations and learning algorithms continually being developed. In 2012, when John Abbot and I began using ANNs for rainfall forecasting, we chose a time delay neural network (TDNN), which was considered state of the art at that time. The TDNN used a network of perceptrons whose connection weights were trained with backpropagation. More recently we have been using general regression neural networks (GRNNs), which have no backpropagation component.
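
A GRNN is essentially a kernel-weighted average of the training targets, so its core can be written in a few lines. The following is an illustrative toy sketch, not the configuration used in our published forecasts:

import numpy as np

def grnn_predict(X_train, y_train, x_new, sigma=1.0):
    """General regression neural network: a Gaussian-kernel weighted
    average of the training targets; no backpropagation is involved."""
    d2 = np.sum((X_train - x_new) ** 2, axis=1)   # squared distances to each training pattern
    w = np.exp(-d2 / (2.0 * sigma ** 2))          # pattern-layer activations
    return np.sum(w * y_train) / np.sum(w)        # summation and output layers

# Toy example: predict monthly rainfall from two lagged climate indices (made-up data)
rng = np.random.default_rng(2)
X_train = rng.normal(size=(200, 2))
y_train = 50 + 20 * X_train[:, 0] - 10 * X_train[:, 1] + rng.normal(0, 5, 200)

print(grnn_predict(X_train, y_train, np.array([0.5, -0.3]), sigma=0.5))

The single smoothing parameter sigma is the only thing to tune, which is one reason the GRNN is attractive for this kind of work.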

A reasonable test of the value of any scientific theory is its utility – its ability to solve some particular problem. There has been an extraordinary investment in climate change research over the last three decades, yet it is unclear whether there has been any significant improvement in the skill of weather and climate forecasting. Mainstream climate scientists and meteorological agencies continue to rely on simulation modelling for their forecasts, such as the CMIP6 models used in this latest IPCC report. There could be a better way, and we may not be facing a climate catastrophe.

Further Reading/Other Information

The practical application of ANNs for forecasting temperatures and rainfall is detailed in a series of papers by John Abbot and me that are listed here: https://climatelab.com.au/publications/

Chapter 16 of the book ‘Climate Change: The Facts 2020’ provides more detail on how the Australian Bureau of Meteorology takes a revisionist approach to Darwin’s history.
https://climatechangethefacts.org.au/wp-content/uploads/2021/08/MAROHASY-2020-Rewriting-Australias-Temperature-History-CCTF2020_16.pdf

There is an interactive table based on the maximum and minimum values as originally recorded for each of the 112 Australian weather stations used to calculate the official temperature values as listed in ACORN-SAT version 1 and version 2 at my website, click here:
https://jennifermarohasy.com/acorn-sat-v1-vs-v2/

The feature image, at the top of this blog post, shows Jennifer Marohasy in front of the supercomputer at the Australian National University in October 2018, which was running simulations for the latest IPCC report.

221 Comments
Thomas Gasloli
August 13, 2021 6:18 pm

I have never understood the insane idea of “adjusting” data. If you “adjust” the data you are committing FRAUD. Why is this allowed in meteorology?😡

n.n
Reply to  Thomas Gasloli
August 13, 2021 7:37 pm

With domain knowledge, they feel justified in compensating, through normalization, for the irregularities that arise from instruments, measurement, nonuniformity and sparseness, despite evidence of incomplete, and in fact insufficient, characterization, and of problems that remain intractable even with supercomputers.

bill Johnston
Reply to  n.n
August 14, 2021 8:31 am

In other words, they refuse to admit they don’t know!

Mr.
Reply to  Thomas Gasloli
August 13, 2021 7:59 pm

Recorded measurements are data.
“Adjusted data” are constructs, based upon assumptions.
So why bother with actual data in the first place – just run with the numbers someone made up because they fit the objective?

MarkW
Reply to  Mr.
August 13, 2021 9:35 pm

That’s what models are.

Pamela Matlack-Klein
Reply to  MarkW
August 14, 2021 5:24 am

The problem arises when people believe that computer models can generate data. And entirely too many people believe this.

MarkW2
Reply to  Mr.
August 14, 2021 8:36 am

None of this necessarily matters IF you provide confidence intervals that truly reflect what’s being modelled. Given the complexity of the model, the number of variables and the predictive timescales we’re talking about for global climate the confidence intervals would, of course, be pretty poor; and that is why they are never, ever shown.

Quite how this can pass as “good science” is beyond me. Regardless of whether any scientist actually agrees with the claims or not, this simple but fundamental fact reveals just how flaky the whole subject is.

TonyG
Reply to  Mr.
August 14, 2021 1:40 pm

Mr.
“I can make it up – so I did!”

BrianB
Reply to  Thomas Gasloli
August 13, 2021 8:20 pm

Some data could legitimately use a little adjusting to correct for various factors; some couldn’t.
The difference amounts to the difference between science and whatever it is the IPCC and the other climate catastrophists are doing.

MarkW
Reply to  BrianB
August 13, 2021 9:48 pm

When they adjust satellite data for orbit drift, they have a very good understanding of how the drift affects the measurements, and they have precise measurements of how much drift has occurred. Because of this, these adjustments are quite justifiable and add very little to the cumulative error.

As a side note, all adjustments increase overall error, because there is no way to be 100% certain that you are adding exactly the right amount of correction – neither too much nor too little.

Many of the adjustments being made to the ground based data fail on both counts. When they assume that they can smear two stations over all the area between those two stations, that is an assumption that has never been verified scientifically. And even if you could verify that it was a valid assumption in two spots, that proof would not hold for any other two spots.

The attempts to correct for things like station moves and equipment changes are especially egregious, since most of the time there is no data whatsoever to support the chosen adjustments.

Shanghai Dan
Reply to  BrianB
August 14, 2021 1:34 pm

Necessarily, any adjustment made to the original data increases the tolerance; for example, if I adjust from 30 +/- 1 deg C to 29 +/- 1 deg C because #reasons, my tolerance window now must actually be 29 +/- 2 deg C.
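
A small sketch of the two common conventions for combining the uncertainties, assuming the adjustment itself is only known to within ±1 °C:

import math

u_measurement = 1.0   # ±1 °C on the original reading
u_adjustment = 1.0    # assumed ±1 °C uncertainty in the size of the adjustment

worst_case = u_measurement + u_adjustment                  # simple interval addition: ±2 °C
root_sum_square = math.hypot(u_measurement, u_adjustment)  # combination for independent errors: about ±1.41 °C

print(f"adjusted value 29 °C, worst case ±{worst_case:.1f} °C, RSS ±{root_sum_square:.2f} °C")

Either way, the adjusted value carries more uncertainty than the original reading, never less.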

Dennis
Reply to  Thomas Gasloli
August 13, 2021 8:44 pm

Adjusting data – like BoM Australia ignoring historic weather record data prior to 1910 to avoid using the very warm periods recorded earlier, which would lower their modelled warming trend.

Tom Abbott
Reply to  Dennis
August 15, 2021 4:50 am

Good point.

What about the data before 1910? It was too hot back then, so they don’t want to include it. Including it would make their scary climate change stories no longer scary.

bdgwx
Reply to  Thomas Gasloli
August 13, 2021 8:57 pm

Let’s say I have a time series of temperatures in a food manufacturing, servicing, or storage environment. At some point I discover that the thermometer is biased high/low by x%. I adjust the data down/up by x% and discover that the food could be compromised. I take corrective actions to prevent human consumption. By your definition my actions are fraudulent because I adjusted the time series data to correct for a known bias. Are you sure you want to stick with the blanket assertion that adjusting data is committing fraud? In fact, wouldn’t it be the opposite such that if I didn’t adjust the data then I would be committing fraud?

AndyHce
Reply to  bdgwx
August 13, 2021 9:22 pm

From many reports, no supporting evidence exists for many, probably most adjustments except “expert judgment”.

MarkW
Reply to  AndyHce
August 13, 2021 10:00 pm

There are many cases where a station was moved. Sometimes only a few dozen feet, sometimes several miles.
The exact day of the move was not recorded, or that record has been lost.
Even worse, many of these moves also included a change of equipment. There was almost never an attempt to run the two stations, or pieces of equipment, side by side for a time in order to cross-calibrate them. Any attempt to retroactively cross-calibrate is nothing more than a best guess.

Almost all temperature records have drop-outs; instead of just admitting that the data doesn’t exist and writing your software to deal with it, they attempt to “infill” from surrounding stations or from different days on the same sensor.
Another word for “infilling” is making it up.

bdgwx
Reply to  MarkW
August 14, 2021 6:58 am

Even if all non-climatic effects were documented you still wouldn’t necessarily know the magnitude of the change point. That’s why the pairwise homogenization algorithm (PHA) is so useful. Although documented change points are used as an input, PHA does not require them.

Jim Gorman
Reply to  bdgwx
August 14, 2021 7:47 am

You still don’t know the magnitude of the change. Did you look at the image of temperature ranges I posted below? You are talking multiple-degree differences over very little distance. To say that you can adjust temps based on that is simply ignoring real, measured variations between locations. The variance is so large that there is simply no way to make these changes in a scientific manner. Using an algorithm simply puts it in the hands of a programmer who probably has no idea about physical measurements and their treatment.

TonyG
Reply to  Jim Gorman
August 14, 2021 1:46 pm

My outdoor stations show sometimes a 2-3 F difference between sensors less than 100 yards apart.

bdgwx
Reply to  TonyG
August 14, 2021 4:37 pm

Exactly. Even a small change like a move of 100 yards can result in a significant discontinuity in the time series.

TonyG
Reply to  bdgwx
August 14, 2021 5:50 pm

Yes, at which point it is measuring temperature at a different location and thus is not part of the same data set.

bdgwx
Reply to  AndyHce
August 14, 2021 6:56 am

NOAA uses pairwise homogenization for GHCN-M to correct for known biases like station moves, time of observation changes, instrument changes, and other non-climatic effects. No one is involved in the process or providing “expert judgement”.
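
The idea can be illustrated with a difference series against a neighbouring station. A deliberately simplified sketch (not the actual PHA code) that flags where the difference series shifts:

import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1950, 2021)

# Hypothetical neighbouring stations sharing the same regional climate signal
regional = np.cumsum(rng.normal(0, 0.1, years.size))
neighbour = regional + rng.normal(0, 0.2, years.size)
target = regional + rng.normal(0, 0.2, years.size)
target[years >= 1990] += 0.8            # simulated station move introduces a step

diff = target - neighbour               # the shared climate signal cancels; the step remains

# Crude change-point search: the split that maximises the jump in the mean
jumps = [abs(diff[:i].mean() - diff[i:].mean()) for i in range(5, years.size - 5)]
breakpoint_year = years[5 + int(np.argmax(jumps))]
print(f"largest shift in the difference series around {breakpoint_year}")

The real algorithm is far more elaborate (many neighbours, significance testing, attribution of the break to one station), but the difference-series idea is the heart of it.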

Rory Forbes
Reply to  bdgwx
August 13, 2021 9:38 pm

Stick with the food business and leave climate to those who understand something about it.

Mr.
Reply to  bdgwx
August 13, 2021 9:38 pm

So you had data, but then you used those to construct some other numbers that you based a value judgement upon to pursue a course of action.
You used constructs, not data.

bdgwx
Reply to  Mr.
August 14, 2021 6:59 am

No value judgements are made in my scenario.

BCBill
Reply to  bdgwx
August 13, 2021 9:43 pm

It would clearly be fraud if you changed the data and then misrepresented the modified data as the record. To unbiasedly alter a data record you would have the same problems that all data corrupters have: how do you know the historical data is wrong, by how much, from what point in time, does the discrepancy change over time, and how do you know that? If you can answer these and other questions then you almost certainly have another, more reliable data source. The same is true for weather data; some station data is better than others. So if you and the warmistas have better data, why not use that instead of corrupting the suspect data? For warmistas the answer is simple: when they use the best data the warming decreases, so they use the corrupted data to get the results they want.

bdgwx
Reply to  BCBill
August 14, 2021 5:55 am

So you think people should continue to make decisions from the unadjusted data only even knowing that it was biased high/low by x%?

Mr.
Reply to  bdgwx
August 14, 2021 8:57 am

People should make decisions & projections based upon their constructs derived from their treatments of recorded data.

But they then should not be claiming that their projections are “what the data says”.

A bit pedantic, you may think, but aren’t scientific pronouncements supposed to be precise?

(Which is why published papers that are replete with “might, could, perhaps, can, should, etc” and other such vagaries should be rejected.)

TonyG
Reply to  bdgwx
August 14, 2021 1:48 pm

How is it determined that the “adjusted” data is the correct value? What is that derived value checked against to ensure the adjustment was done correctly? And if you have that other information, why not use that instead?

bdgwx
Reply to  TonyG
August 14, 2021 4:36 pm

Calibration.

TonyG
Reply to  bdgwx
August 14, 2021 5:49 pm

“Calibration” – against what?

bdgwx
Reply to  TonyG
August 14, 2021 8:32 pm

A set of known values. You can calibrate thermocouples and RTDs in a “hot-box”. It is standard practice to test the instrument at more than one known value. Fluke is a popular brand, but there are many players in the space.
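
Deriving the correction from such reference points is then a simple fit. A sketch with made-up reference values:

import numpy as np

# Hypothetical calibration run: sensor readings taken at known reference temperatures
reference = np.array([0.0, 25.0, 50.0, 75.0, 100.0])   # °C, from the calibration bath / "hot-box"
sensor = np.array([0.6, 25.4, 50.3, 75.1, 99.8])       # what the instrument reported

# Linear correction curve: true is approximately a * reading + b
a, b = np.polyfit(sensor, reference, 1)

def correct(reading):
    return a * reading + b

print(f"raw 30.0 °C corrects to {correct(30.0):.2f} °C")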

MarkW
Reply to  bdgwx
August 13, 2021 9:54 pm

In your example, you have no idea when the sensor started going bad, or by how much at any given time. Any attempt to “correct” past data is no more than your best guess as to what the temperature should have been at that time.

In your example nobody is trying to reconstruct past temperatures from faulty data. They just note that past temperatures aren’t reliable and that there is a chance that the food quality has been compromised, therefore the food must be tossed.

There is nothing fraudulent because you aren’t telling people that you know precisely what the temperature was at any point in time. You are just telling people that it is impossible to guarantee that the food is safe.

Jim Gorman
Reply to  MarkW
August 14, 2021 5:26 am

In this example, the fraud is also claiming that “safe” food has been used in the past even while using a temperature sensor that has been neglected by not having a routine calibration process. A scheduled calibration check would have revealed a problem early and prevented a “guess” as to how long it had been occurring.

bdgwx
Reply to  Jim Gorman
August 14, 2021 6:00 am

Yeah, the fraud here would be claiming that the food is “safe” based on unadjusted data knowing that the unadjusted data was biased high/low by x%. By not adjusting the data, people’s lives are put at risk. By not adjusting the data and not taking corrective actions in this case, you are committing fraud.

mkelly
Reply to  bdgwx
August 14, 2021 7:05 am

In your example, thinking the food might be compromised, wouldn’t you have gotten a complaint from a customer if the food was bad, rather than discovering it via a bad thermometer? Also, if it was biased low and stayed cooler, wouldn’t the food stay good and no harm be caused?

bdgwx
Reply to  mkelly
August 14, 2021 7:40 am

A low bias on a refrigeration temperature would be problematic.

A high bias on a cook temperature would be problematic.

Yeah, if the adjustment is down for refrigeration or up for cooking then no harm done.

On the flip side, if the adjustment is up for refrigeration or down for cooking then the food is compromised and corrective actions should be deployed.

bdgwx
Reply to  MarkW
August 14, 2021 5:57 am

So you think people should continue to make decisions from the unadjusted data only even knowing that it was biased high/low by x%?

Jim Gorman
Reply to  bdgwx
August 14, 2021 8:13 am

That is obviously not the issue. Portraying MODIFIED data as actual measurements, and worse yet, with more accuracy and precision than originally recorded is fraud. At the very least, it should be prominently noted that any use in projections or predictions uses MODIFIED data and what/why each modification was made. Simply stating station movement or disagreement with another station is not adequate.

Why do you think every other scientific or engineering endeavor uses parallel measurements for a period of time? It is to determine actual data for use in making a decision in either modifying data or simply ending that data stream and starting a new one.

You continually betray your lack of metrology training and engineering ethics. Do you think a certified Professional Engineer would allow you to reduce past readings of a strain gauge on one side of a road based upon the readings from the other side (different direction of traffic)? You would never make it as a professional surveyor. This is essentially what UNCERTAINTY is supposed to address in the field of measurements. Trying to reduce or eliminate uncertainty by any means possible robs everyone of the ability to accurately judge the appropriateness of the final result.

Mr.
Reply to  Jim Gorman
August 14, 2021 9:00 am

👍👍👍👍👍👍👍

bdgwx
Reply to  Jim Gorman
August 14, 2021 11:49 am

If that strain gauge sensor was later determined to have a bias then that engineer better take that bias into account when analyzing past data. If he doesn’t then he is negligent.

Likewise, if he doesn’t mandate that the sensor be calibrated and have the appropriate adjustment factored in (whether it be in the instruments internal configuration or via post processing) then he is negligent.

I can’t tell if you answered yes or no to the question.

Shanghai Dan
Reply to  bdgwx
August 14, 2021 7:27 pm

As an engineer, I tell you I wouldn’t use that strain gauge OR the data collected – I’d go repeat the measurement. Trying to “guess” when it went bad is the surest way to get it wrong.

Either accept the data as-is and over-design, or toss the data and re-collect.

bdgwx
Reply to  Shanghai Dan
August 14, 2021 8:28 pm

How do you repeat a measurement in the past?

Tom Abbott
Reply to  bdgwx
August 15, 2021 5:02 am

How do you determine a measurement in the past is wrong? Were the Data Manipulators there at the time?

When they update the modified record, doesn’t this cause all values for all dates to change a little bit? Are changes continually going on at all weather stations that require such widespread changes?

Do Data Manipulators visit every weather station they adjust every time they make a new adjustment?

LdB
Reply to  bdgwx
August 15, 2021 5:30 am

If you can’t redo the experiment then you can’t, and your data is useless … so be it, and who cares. You are basically trying to make your data a matter of life and death; in the rare situation where that might occur, a judgement call would be made by someone competent to do so, but it would obviously be challenged by some … you actually see that daily with Covid health professionals.

LdB
Reply to  Shanghai Dan
August 15, 2021 5:30 am

Correct, it is what is demanded.

bdgwx
Reply to  LdB
August 15, 2021 5:34 am

If you don’t use that data at your disposal how do you assess risk?

Graemethecat
Reply to  bdgwx
August 14, 2021 12:05 am

There is a huge difference between your scenario and the data tampering of the BOM: in the former the final product is the food, and in the latter, the temperatures.

tmatsi
Reply to  bdgwx
August 14, 2021 12:12 am

No you don’t adjust the data. You recalibrate the thermometer so that it reads the correct temperature. THEN YOU ADJUST THE REFRIGERATOR SO THAT THE FOOD IS NOT COMPROMISED.
In fact you should be regularly calibrating the thermometer and adjusting it so that you are always determining the correct temperature and setting your refrigerator accordingly.
Regular calibration is normal practice within any testing laboratory and seems to have escaped the academics who work in climate change. In fact, the philosophical position taken towards measurement should be that the measurement is the measurement, and it should remain the same even if you suspect it is wrong. If you do suspect this, then the only reliable way to check is to repeat the measurement.
Furthermore, it is not possible to correct a measurement after the fact, and adjustment of temperature records is just plain wrong. Temperature adjustment also implies that our ancestors did not know what they were doing. If you believe this, then I remind you that it was the generation born in the late 19th century that invented radio and combustion engines, built the atomic bomb and developed nuclear power, amongst other things.

bdgwx
Reply to  tmatsi
August 14, 2021 6:03 am

Calibration involves providing an offsetting adjustment to the data stream to bring the measurements closer to true. If you don’t like adjustments then you won’t like calibration.

czechlist
Reply to  bdgwx
August 14, 2021 4:51 pm

Calibration only assures the instrument was accurate at the time of the verification. The instrument may lose its accuracy immediately, and that will not be known until the next accuracy verification. If at the next calibration the instrument is not within accuracy tolerances, any measurements made between verifications are suspect. Knowing of the out-of-tolerance condition after use is academic, as one does not know whether the instrument drifted over time or the error occurred recently. If the induced error is recent, it would be a mistake to adjust earlier measurements. Consumer risk v producer risk applies, as well as calibration interval adjustment.

Jim Gorman
Reply to  bdgwx
August 14, 2021 5:21 am

Many of the adjustments are done with “homogenizing” algorithms. In other words, creating out of whole cloth an average temperature between diverse locations and in most instances adding unwarranted precision when it is done. The fraud occurs when the homogenized temperature is portrayed as the ACTUAL temperature at a location to an unprecedented precision.

As an example, look at the image of local temps in northeast Kansas. Imagine these are temps that are recorded in 1900 in integers. Please explain how they can be used to construct an accurate and precise to the 1/100th place average temperature and then homogenized even further to a wider area with very little uncertainty in the final value.

[Attached image: Kansas local temps.jpg]
Newminster
Reply to  bdgwx
August 14, 2021 6:28 am

How do you know 50 years later that your thermometer was biased?
If you are claiming that modern systems are more reliable than what was being used then you still don’t know whether or to what extent the original measurement was incorrect.
And even if no fraud is committed in your adjustment you have certainly opened the door to fraud for those less honest than yourself.
And maybe you can explain why it is that overwhelmingly these crude measuring implements seem to have been over-estimating rather than under-estimating. Convenient, no?
If you can’t trust the accuracy of the instrument then you have only unreliable data, which is to say no data fit for use. If you can’t trust it, discard it; don’t torture it until it gives the answer you want!

bdgwx
Reply to  Newminster
August 14, 2021 7:12 am

We know temperature readings are biased because station moves, time of observation changes, instrument changes, and various other non-climatic effects cause change points shifting the temperature higher or lower than if no change had occurred.

It is the opposite. The net effect of adjustments on a global scale actually increases the temperature in the past to correct for an under-estimation. See Dr. Hausfather’s post about this here.

Jim Gorman
Reply to  bdgwx
August 14, 2021 8:21 am

You are making the argument that “anomalies” are not worth the money they cost to make them. If baselines are what cause this, i.e. baselines that cover a change, then that algorithm needs to change. If I move a thermometer from one side of town to the other why would the anomaly change? Why would using a different thermometer cause anomaly values to change? It is because of using long baselines that must incorporate changes to stations. It is a flaw in the baseline process that invalidates the whole procedure to be honest.

bdgwx
Reply to  Jim Gorman
August 14, 2021 11:25 am

I’m not arguing that temperature anomalies are not useful. In fact, I’ll argue the opposite. They are more useful in the context of global mean temperature trends than doing the analysis with absolute temperatures. But that is a topic for a different discussion.

What I am arguing is that we know that certain kinds of changes related to surface station temperature measurements cause change points in the time series. Ignoring these biases is unethical at best.

LdB
Reply to  Jim Gorman
August 15, 2021 5:46 am

Moving the thermometer that sort of distance creates a totally new baseline with a different anomaly. Pick any city you like and look at suburb-by-suburb changes in temperature; there are localized effects that create that.

I am currently in Perth here is the Metro data
http://www.bom.gov.au/wa/observations/perth.shtml
Excluding Rottnest Island, the current temp is 16.2 at Hillarys and 11.5 at Perth Airport. That is 4.7 degrees, based on where you baseline your readings, and they move differently each day.

You still want to claim moving the site doesn’t matter?

Richard Page
Reply to  bdgwx
August 14, 2021 6:57 am

Bdgwx – that’s what the error range is for. If in your scenario, you saw there was an instrument error range of +/- x, then you would take into account the possible error and no problem would ever occur. By giving an absolute number and not advising of an error range you might be committing fraud and putting people at risk.

bdgwx
Reply to  Richard Page
August 14, 2021 7:49 am

The published uncertainty on the instrument will not cover the scenario when the 4-20 mA signal is scaled incorrectly or if the instrument was placed into service with a bad configuration, orientation, or in a manner not approved by the vendor.

Richard Page
Reply to  bdgwx
August 14, 2021 8:14 am

If you are using equipment that you do not know how to set up or use properly and you put members of the public at risk then it is no longer fraud but a matter of criminal prosecution. Are you saying that these people are criminals or indulging in criminal activities?

bdgwx
Reply to  Richard Page
August 14, 2021 11:19 am

The reason why the temperature reading is biased is irrelevant. Any rational and ethical person will take the known bias into account and do any analysis off the adjusted data. It doesn’t matter if the bias was intentional (sabotage) or accidental. It’s a bias all the same and so should be dealt with all the same.

If you’re asking whether someone should be prosecuted if they knowingly place an instrument into service that is biased and do not implement the appropriate adjustment, then yes, I do think that person should be prosecuted if their actions affect the well-being of others. If the situation were intentional then it would likely have to be dealt with on a case-by-case basis.

Jim Gorman
Reply to  bdgwx
August 14, 2021 11:37 am

Those are all systematic errors for that instrument only. You can only correct systematic errors through using a calibrated instrument to gather data concerning one instrument. That allows you to develop a correction curve to obtain a genuine measurement. However, it is only done by showing the incorrect reading as the actual data and then showing the correction applied from the curve for that device.

It is never allowed to judge one device against another device that is not a calibration source. Why do you think certified labs have calibration sources to check individual devices against? Even the certified sources must be periodically checked to ensure they are within tolerance of the accepted standard.

Put down the numbers, averages, and statistics for 6 months to a year. Study 6 semester hours of metrology/uncertainty. Read the GUM and understand it. Take a semester course on engineering ethics. A surveying course at a local tech school wouldn’t be bad; you might learn how to prevent losing your license over property line disputes. Follow a tool and die fabricator for a period of time and see how they handle uncertainty when meeting extremely precise measurement requirements from customers. You’ll see they don’t grab 3 or 4 micrometers and average the readings. They ensure the single device is adjusted against a gently treated calibration source. That source is worth more than diamonds to the items they produce. Please note, I’m not talking about an old run-of-the-mill pipe fitter. I’m talking about someone who fabricates precise, work-the-first-time-every-time devices for satellites, etc.

bdgwx
Reply to  Jim Gorman
August 14, 2021 4:44 pm

That’s right. Calibration allows you to develop a correction curve. Some instruments allow you to enter the curve coefficients in their transmitter directly. Others allow you to do it in their proprietary post processing software. Others force you to scale their 4-20 mA signals in the PLC. And others force you to do it in the general purpose analysis software. Either way you are applying adjustments somewhere. Adjustments are a good thing. If you aren’t adjusting the data then you’re probably doing something wrong.
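
Scaling a 4-20 mA signal is itself just a linear map from loop current to engineering units. A minimal sketch, assuming a transmitter ranged 0–100 °C:

def scale_4_20ma(current_ma, low=0.0, high=100.0):
    """Convert a 4-20 mA loop current to engineering units (here °C),
    assuming the transmitter is ranged so that 4 mA = low and 20 mA = high."""
    return low + (current_ma - 4.0) * (high - low) / 16.0

print(scale_4_20ma(12.0))   # mid-scale: 50.0 °C
print(scale_4_20ma(4.0))    # bottom of range: 0.0 °C

Get the ranging wrong and every value in the series is biased, which is exactly the kind of error calibration against known references is meant to catch.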

Jim Gorman
Reply to  bdgwx
August 14, 2021 5:17 pm

Quit blathering. How many max/min thermometers from 1920 had a PLC, dude, let alone a calibration curve? You probably don’t even know what the NWS standard was for uncertainty on those early thermometers that were manually read. Googling stuff about current measurement devices is not going to tell you anything about 70 – 80% of the “measured” temperatures around the world since 1750, and that’s being generous.

You can’t and won’t discuss the accuracy, precision, or uncertainty of early thermometers. Do the research and tell the group what the NWS minimum uncertainty of temp measurements in 1920 was.

bdgwx
Reply to  Jim Gorman
August 14, 2021 5:58 pm

I’m more than willing to discuss accuracy, precision, and uncertainty of temperature measurements, as I’ve done several times on WUWT already. I’ve even pointed out that the uncertainty on individual measurements is higher than what some on here think. And that’s saying a lot, because most on WUWT don’t think it’s good to begin with, which I agree with.

But that’s not what this subthread is about. In this subthread I’m challenging the comment “If you “adjust” the data you are committing FRAUD.”

Jim Gorman
Reply to  bdgwx
August 15, 2021 6:01 am

If you adjust temperatures and then portray them as actual measured data, you are committing fraud. There is no other scientific endeavor or engineering discipline where modified physical measurements are allowed to be portrayed as the actual physical measurements. Any product using modified information MUST be noted in a prominent fashion and with explanations for each and every change, even those using correction charts. Simply hand waving these changes away by claiming they meet statistical requirements is not sufficient.

Your arguments are illustrative of your lack of training in the physical sciences and engineering ethics. Your excuses and theories would not pass muster at any certified laboratory. Nor would all the failed climate projections from models pass muster if a true audit was made of the data used. I am speaking of an audit such as when the Challenger exploded or what will occur with the building collapse in Florida. One of the first questions asked would be what data was collected to justify the engineering decisions made and were any unqualified changes made to the actual physical measurements. Handwaving about statistical analysis justifying all the changes simply would not be accepted.

You need to answer these questions. How many lay people (especially politicians) and non-climate scientists believe the modified data sets are actually the true measured and recorded physical temperatures between 1750 and the 1980s? Could any of these people tell you why and how the changes were made?

Without this being general knowledge, people are being misled into thinking these data sets are the gospel as measured and recorded by meteorologists in the past. You are basically telling these people that these data sets are real data. That is no different from a snake oil salesman telling prospects that the elixir they are purchasing is scientifically proven to help their ailments.

Tom Abbott
Reply to  Jim Gorman
August 15, 2021 5:48 am

“Quit blathering. How many max/min thermometers from 1920 had a PLC, dude, let alone a calibration curve? You probably don’t even know what the NWS standard was for uncertainty on those early thermometers that were manually read. Googling stuff about current measurement devices is not going to tell you anything about 70 – 80% of the “measured” temperatures around the world since 1750, and that’s being generous.”

Excellent comment.

Yes, manipulating the temperatures from 1750 to the satellite era (1979) is based solely on the opinion of the Data Manipulators as to whether the original reading should be changed.

The Data Manipulators were not present back in time when the temperature reading was taken, so they have no basis in fact for making changes to those temperature readings.

That’s not to say that some sites can be found that made mistakes, but those are few and far between, yet the Data Manipulators assume all the station sites need adjustment, based on nothing.

The only temperature record we can count on to be honest is the written temperature record. We can’t believe any modified temperature record we see that was created after James Hansen got the Human-caused Climate Change scam going in 1988.

Tom Abbott
Reply to  bdgwx
August 15, 2021 5:35 am

None of that applies to temperature readings in the past. You can make that argument for currently recorded temperatures, but as for the temperatures recorded in the past, Data Manipulators are just guessing as to whether the actual temperature recorded was correct or not.

Modifications of the historic temperature record are just guesses, and are biased by the biases of the Data Manipulators.

The only historic temperature record fit to use is the actual temperature readings as written down by human beings. Anything else, is an opinion, sincere or sinister, of the Data Manipulator.

Data Manipulation is the only thing keeping the Human-caused Climate Change scam afloat. Thus, the importance of debunking the bastardized global surface “temperature” record should be obvious as the main goal of skeptics.

The unmodified temperature record shows the Earth has nothing to worry about from CO2 because there is no unprecedented warming.

The modified temperature record tells us we are in big trouble because we are experiencing unpredented warming. But it’s all a Big Lie made up in the minds of Data Manipulators who are trying to sell a Human-caused Climate Change narrative.

Raise your hand if you think last month was the hottest July you have ever experienced. NOAA says last month was the hottest July in human history. Somebody is living in La La Land.

Brooks H Hurd
Reply to  bdgwx
August 14, 2021 6:58 am

The flaw in your reasoning is your assumption that climate scientists have been adjusting temperatures up or down to correct for station moves, equipment changes, time of day measurements or other quantifiable and data supported alterations. If this were the case, then we would all accept that certain adjustments are reasonable.

That is not what has been happening. Temperature adjustments by climate scientists have cooled the past and warmed recent temperatures. Thus these adjustments have either created warming trends where the raw data had no trend, or they have increased a trend to show more warming than is supported by the raw data. The article was clear in explaining that adjustments have been made to show an increase in warming, so that the data conforms to the hypothesis that increases in CO2 concentrations cause global warming.

bdgwx
Reply to  Brooks H Hurd
August 14, 2021 7:51 am

Let me see if I have this right…your argument is that if the adjustment warms the past it is justified, but if it cools the past it is unjustified?

Tom Abbott
Reply to  bdgwx
August 15, 2021 5:52 am

I’m not Brooks, but I think he means the past temperatures were already as warm as today before adjustment, and what is unjustified is for the Data Manipulators to change that and cool the past for no good scientific reason.

LdB
Reply to  bdgwx
August 14, 2021 8:17 am

Your answer doesn’t fly; you would be in breach of food production regulations in most developed countries. Instrument calibrations are certified by an authority, and if a device is out of calibration it must be discarded or sent to the authority for recalibration, and all food produced using that instrument must be scrapped and/or recalled.

There is no such thing as adjusting an instrument unless you are the authority … end of story.

What they are doing would be ILLEGAL in many production settings.

bdgwx
Reply to  LdB
August 14, 2021 11:14 am

I would be shocked if any developed country has a regulation mandating that you use unadjusted data especially knowing it had a bias.

Calibration is the process used to figure out what the bias adjustment should be for an instrument. If you don’t agree with adjusting data then you won’t agree with calibration either.

LdB
Reply to  bdgwx
August 14, 2021 5:32 pm

You need to be clear: are you talking about playing around with data for fun and giggles, or using it for legal quality and quantity control purposes? If the latter, you simply can’t adjust things. LOOK IT UP; in your country there is usually an authority.

Here is Australia’s
https://www.industry.gov.au/regulations-and-standards/australias-measurement-system

They are the only body in Australia that can calibrate an instrument or approve a design for measurement anywhere in the country.

So in Australia like most developed nations … no you can’t just adjust an instrument reading.

bdgwx
Reply to  LdB
August 14, 2021 8:43 pm

I think my example was pretty clear. You discover that a measurement timeseries had a quantified bias. The unadjusted data says everything is fine. But the data with the bias adjustment applied says you may have cause for concern. What do you do?

LdB
Reply to  bdgwx
August 15, 2021 5:13 am

Throw the data and repeat … you have no clear idea when, where or how the bias came about. You have no idea what is causing the bias, or whether the instrument is still accurate across its range and operating conditions. Thus you have no option but to throw out the data, and in legal situations they demand you do that, as the instrument fails calibration.

You can make up answers you think are acceptable but they won’t fly in legal or science settings. In those settings the failure would have to be clearly definable and consistent and a competent body able to qualify the new adjusted calibration.

If you want a recent science example: Gravity Probe B took 6 years to re-quantify its data after a failure, the team were put through hoops by the science community, and the data has a much larger error range than the instrument was capable of.

bdgwx
Reply to  LdB
August 15, 2021 5:33 am

How do you repeat measurements in the past?

Jim Gorman
Reply to  bdgwx
August 15, 2021 11:20 am

“You discover that a measurement timeseries had a quantified bias. ”

Just exactly how do you “find” this bias? Especially when the instrument is no longer available, or no longer in the location where it was used to make the measurements.

Stop making excuses and straw men.
Justify what is actually being done, right now, to recorded temps from the distant past. As a self-proclaimed expert you should be able to provide the scientific reason for each individual change to a recorded measurement. You need to justify why other devices, which may or may not be accurate themselves, are reliable enough to base those changes on.

Jim Gorman
Reply to  bdgwx
August 15, 2021 11:03 am

Stop the strawman arguments dude. No one is saying that instruments that have a systemic error should be used. Of course instruments should be calibrated, no one refutes that.

What is being addressed is changing past recorded measurements when you don’t have any knowledge of how well calibration was done, any systemic error, the location parameters, or how the measurements were taken.

In other words, being given a temperature record for a refrigerator used in the past and then saying the new fridge shows different temps with the newer sensor so we need to correct the old measurements to make them agree.

bdgwx
Reply to  Jim Gorman
August 15, 2021 1:24 pm

Thomas Gasloli refutes it when he says, “If you ‘adjust’ the data you are committing FRAUD.” And based on the comments here, there seems to be no shortage of WUWT participants who agree with him.

BobM
Reply to  bdgwx
August 14, 2021 8:32 am

So you found one thermometer was incorrect, “biased high/low by x%.” Did you order EVERY other thermometer reading in the company to be adjusted the same way, whether it was the exact same type or age or installation, and guessing where no thermometer actually exists to measure?

Taking corrective action on one direct measurement is far different from adjusting thousands of direct measurements over a hundred years due to allegedly incorrect or inconsistent readings done today.

Tom Abbott
Reply to  BobM
August 15, 2021 5:54 am

“So you found one thermometer was incorrect, “biased high/low by x%.” Did you order EVERY other thermometer reading in the company to be adjusted the same way, whether it was the exact same type or age or installation, and guessing where no thermometer actually exists to measure?”

That’s exactly what the temperature Data Manipulators do with the temperatures.

bdgwx
Reply to  Tom Abbott
August 15, 2021 2:44 pm

Can you show me which method in the source code does what you are claiming here?

Steve Richards
Reply to  bdgwx
August 14, 2021 9:14 am

@bd “Let’s say I have a time series of temperatures in a food manufacturing, servicing, or storage environment. At some point I discover that the thermometer is biased high/low by x%. I adjust the data down/up by x%”

The problem is we do not know what x% is. You did; you probably used a second, calibrated sensor. That’s how you found the problem. In the Australian BOM, they think of a value and that’s it. The value must be right because they thought of it.

bdgwx
Reply to  Steve Richards
August 14, 2021 11:06 am

I just used as x as a generic placeholder. Assume it is known to be 5% or whatever.

WXcycles
Reply to  Thomas Gasloli
August 13, 2021 11:21 pm

Don’t confuse ‘data’ with history+numbers. Histories must be adjusted for political reasons too obvious to state. Definitive observational truths can not exist in indefinite quantum mechanical worlds. Hence BOM necessarily continuously Makes-Stuff-Up™ (MSU), and the MSU models are particularly useful at this. Think of the process as akin to quantum thermometers guiding randomized models, running on quantum computers, using 3rd gen MSU optimised iSnakeoil™ quantum-processors, developed by CSIRO. Thus you shouldn’t be surprised stuff doesn’t add-up, and no longer conforms to quaint concepts like the Arrow-of-Time. Variability of data is an artifact of static facts becoming self-correcting dynamic quantum facts. We have entered a wondrous age of quantum-logic and post-fact Science.

Michael in Dublin
Reply to  Thomas Gasloli
August 14, 2021 3:21 am

Some graphs have margin of error bars. These allow one to see the actual readings but show that the possible errors go both ways. They also give us a useful comparative tool when looking at different charts with different error bars for the same area. When alarmists publish data without error bars I know that they are possibly concealing something that would contradict their narrative.

Rick C
Reply to  Thomas Gasloli
August 14, 2021 5:30 pm

Thomas G. > Exactly right. I worked in independent test laboratories my whole career. Everyone was trained in the proper recording of measurement data and the penalty for violations – immediate termination at a minimum. I know of several cases where lab techs or supervisors were caught changing data without clear and well documented justification. Some involved bribery. People were fired, fined and even sent to prison for doing this. Basic scientific ethics seems to be lacking in many of the organizations involved in “climate science”.

Tom Abbott
Reply to  Rick C
August 15, 2021 6:01 am

“Basic scientific ethics seems to be lacking in many of the organizations involved in “climate science”.”

Yes, it does. No doubt there are a lot of true believers in the Human-caused Climate Change scam, but there are also a lot of them who know better.

Tom Halla
August 13, 2021 6:21 pm

Combining data sets that have been “adjusted”, and were only accurate to the nearest whole degree in the first place, somehow gives the ability to make claims measured to the hundredth of a degree?
That sort of claim makes meta analysis almost look respectable.

Jim Gorman
Reply to  Tom Halla
August 14, 2021 5:29 am

Statistics gone wild!

AGW is Not Science
August 13, 2021 6:25 pm

A nice summary of the climate “data” fiasco might be: “Not only is the ‘data’, scientifically speaking, ‘crap’, but it’s not even ‘data’ when they finish meddling with it, and the meddling is polluted with confirmation bias.”

And while ANNs may seem to hold some promise, at the end of the day the “algorithms” are still subject to the inevitable “massaging” that will ensure they produce the “right” (i.e., wrong, but politically expedient) answers.

noaaprogrammer
Reply to  AGW is Not Science
August 13, 2021 9:31 pm

Liberals also adjust voting results!

MarkW
Reply to  noaaprogrammer
August 13, 2021 10:02 pm

There is no objective reality, just the very word that flows from the mouthpiece of the party.

Robert Hanson
Reply to  MarkW
August 14, 2021 5:19 pm

Hopefully, you forgot to add the /sarc

Rory Forbes
Reply to  AGW is Not Science
August 13, 2021 9:39 pm

Wrong!

Richard Page
Reply to  AGW is Not Science
August 14, 2021 7:02 am

Most people, when talking about UHI, give the recorded temperature and then say that it may include a UHI bias of, say, 1.5C – 2.0C. I don’t know of anybody reputable that makes adjustments for UHI like you say – it’s just too variable.

meab
Reply to  AGW is Not Science
August 14, 2021 4:11 pm

Ingrown, You were obviously lying when you claimed to have been banned. How do we know that you were lying? Anyone that truly believed that they were banned wouldn’t waste the effort to write that they have been banned.

Then you claimed to have been “partially banned” but when others have had a post banned, the editor often shows that a post was made, then snipped.

Laughable.

Pauleta
August 13, 2021 6:53 pm

NOAA just said July was the hottest ever month since the records began (probably around 1998 or so). I am already panicking. If that Chinese restaurant close to the parking lot where the weather station is located, downtown Phoenix, AZ decides to boil oil outside in the winter, we are screwed.

Dennis
Reply to  Pauleta
August 13, 2021 8:46 pm

Scientist Dr Jennifer Marohasy and colleagues in Australia checked many BoM weather station locations and discovered that many are in heat sinks alongside highways, airport runways and the like, in places where city and town development has since been built up around them.

Rory Forbes
Reply to  Dennis
August 13, 2021 9:41 pm

There have been numerous examples of the BoM fiddling with historical measured data all over Australia. Cooling the past is now so prevalent, no one even comments any more.

Richard Page
Reply to  Dennis
August 14, 2021 7:07 am

Even the way BOM takes temperature readings is dodgy. Most meteorological stations around the world will take 2-3 readings over a 5 or 10 minute period to eliminate the possibility of short duration heat spikes. BOM uses one reading.

Tom Abbott
Reply to  Pauleta
August 15, 2021 6:12 am

“NOAA just said July was the hottest ever month since the records began (probably around 1998 or so).”

Actually, NOAA said last month was the hottest July in “human history”! These guys never give up!

I guess NOAA doesn’t know that human remains and vegetation have been found buried beneath glaciers that still cover the ground today. That should tell a logical person that it must have been warmer at some point in human history than it is now, because there were no glaciers there at the time those humans were walking the ground.

Sea level in Roman times was much higher than today. One of Rome’s major shipping ports in ancient times is now landlocked. That should tell a logical person that it was warmer in human history in the past, than it is today.

But NOAA has a Human-caused Climate Change agenda to sell, so they ignore all that and try to scare people with their obvious lies about current temperatures.

Last edited 1 month ago by Tom Abbott
Clyde Spencer
August 13, 2021 7:38 pm

They are not unusual in magnitude, direction or rate of change, which should diminish fears that recent climate change is somehow catastrophic.

The farther one goes back in time, the less certain one can be about the measured value, because of things like diffusion or contamination; and, even with a fixed percentage error in the dating, the absolute error in time increases in proportion to how far back one goes. Therefore, time acts like a low-pass filter, suppressing spikes with high rates of change. It is impossible to state with certainty that current rates of change haven’t been exceeded in the past.

And can it really be accurately calculated to some fractions of a degree Celsius?

One can improve the precision of stationary data, or data that doesn’t change over time, by taking many measurements of the same property with the same instrument, and thus reduce random variation.

However, for a temperature that is increasing, the mean and standard deviation (SD) are going to increase with time, and you can never measure the same temperature even twice. Even the ‘magic’ anomalies* can’t avoid that. Thus, you can’t logically expect to improve the estimate of the mean (and SD) by taking more measurements. The mean will increase in proportion to the rate of change and the length of the time interval over which temperatures are taken. The precision can be no better than the precision of the instrument used to measure the temperature. Therefore, to answer her question, we can’t justifiably claim the precision for temperature data that most climatologists do because environmental temperature data do not meet the criteria for reducing random variation.

*Typically, subtracting a baseline value from recent measurements results in the loss of at least one or two significant figures, further reducing the precision and exacerbating the problem!
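To put rough numbers on that distinction, here is a minimal sketch (Python, with invented values; nothing here comes from any real station):

import numpy as np

rng = np.random.default_rng(0)
n = 1000
noise = rng.normal(0.0, 0.5, n)        # hypothetical instrument noise, sigma = 0.5 C

stationary = 15.0 + noise                           # same true value measured n times
trending = 15.0 + 0.002 * np.arange(n) + noise      # true value drifting upward

# For stationary data the standard error of the mean shrinks as 1/sqrt(n):
print(stationary.std(ddof=1) / np.sqrt(n))          # ~0.016 C, far below the 0.5 C noise

# For a trending series the sample mean is not an estimate of any single true
# temperature; it depends on which window you average over:
print(trending[:n // 2].mean(), trending[n // 2:].mean())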

MarkW
Reply to  Clyde Spencer
August 13, 2021 10:13 pm

There are two sources of uncertainty regarding temperature measurements.
The first, as you mention, is uncertainty in the instrument itself.
The second has to do with your ability to be certain that a limited number of readings distributed in space have accurately portrayed the temperature of the entire planet.

Let's suppose you have a room. At one end of the room there is a window through which strong sunlight is streaming. At the other end of the room you have a large radiator, through which near-freezing water is being circulated.
Now, take a single thermometer and place it randomly in the room.
How confident can you be that the single thermometer is giving you an accurate reading of the average temperature of that room?
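A toy simulation of that thought experiment (all numbers invented) makes the point:

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical room: 30 C at the sunny window (x = 0) falling to 5 C at the
# radiator end (x = 1); the true room average is then 17.5 C.
def room_temp(x):
    return 30.0 - 25.0 * x

true_mean = 17.5

# Drop a single thermometer at a random spot, many times over, and see how far
# one reading typically lands from the room average.
positions = rng.uniform(0.0, 1.0, 10_000)
readings = room_temp(positions)
print(np.mean(np.abs(readings - true_mean)))   # roughly 6 C of error from one sensor

With a gradient that large, a single randomly placed sensor is typically wrong by several degrees no matter how precise the instrument is.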

Jim Gorman
Reply to  MarkW
August 14, 2021 5:47 am

There are also all kinds of statistical errors being made in claims based on “sampling” theory. Look at election results to see how inaccurate sampling can be. It’s like sampling the growth rates of Clydesdale horses and Shetland ponies: you can average them and calculate the error of the sample means to 1/1000th of an inch, but when you claim that the average applies to both, you are committing fraud.

Clyde Spencer
Reply to  MarkW
August 14, 2021 8:09 am

Yes, sampling protocol is something that rarely gets mentioned. That is because the data being used were not intended for climatology, but for meteorology and aeronautics. Except perhaps for the USCRN, we only have data that climatologists have appropriated for purposes other than those for which they were intended.

Admin
August 13, 2021 7:42 pm

This site currently gets about 1.5 million views/month. Your Internet commerce skills are as keen as your debating skills.

MarkW
Reply to  Charles Rotter
August 13, 2021 10:05 pm

I’m confident that his blog never gets much above 50 views/month. (His mother is pretty busy taking care of him and doesn’t have that much spare time.) As a result he actually thought that 25,000 views/month was a huge number.

Rich Davis
Reply to  Charles Rotter
August 14, 2021 6:19 am

Yes, my wife thinks 25,000 is about the page count just for me. In any case it’s got to be about the number of posts by the sockpuppet troll that goes by Mark Ingraham.

n.n
August 13, 2021 7:45 pm

I wonder how many blocking events (i.e. greenhouse effect) and other short-lived phenomena are missed in the historically low sampling rate given the available proxies and sparse observations today.

MarkW
Reply to  n.n
August 13, 2021 10:15 pm

One of the tenets of uniformitarianism is that the present reflects the past.
Since there are rapid temperature excursions during the record we do have, it can be assumed that the number and frequency of such excursions in the past was similar.

Dennis
August 13, 2021 8:43 pm

The creation of the climate hoax:

*Around the time of the UN IPCC Copenhagen Conference, hackers released two large batches of emails exchanged between the creators of the climate change scare, containing many admissions that the objective was to frighten and influence people.
*Mathematician Christopher Monckton audited the UN IPCC modelling and revealed the many errors and omissions it contained. He was banned from speaking at IPCC events.

Mike
August 13, 2021 8:58 pm

“Specifically, the claim is that we humans have caused 1.06 °C of the claimed 1.07 °C”

God spare me!

MarkW
Reply to  Mike
August 13, 2021 10:16 pm

Would this include all the warming that started long before CO2 started to rise?

Chris Hanley
Reply to  MarkW
August 13, 2021 11:07 pm

Or before human emissions took off.

[chart image]

Clyde Spencer
Reply to  MarkW
August 14, 2021 8:12 am

When humans clocked in, Mother Nature called it a day and went home, leaving it all up to humans to take care of things.

Tom Abbott
Reply to  Clyde Spencer
August 15, 2021 6:47 am

That’s more or less what the Climate Alarmists are saying.

It’s ridiculous.

What’s even more ridiculous is them ascribing 0.01C to Mother Nature.

Last edited 1 month ago by Tom Abbott
Ozonebust
August 13, 2021 9:04 pm

The fourth paragraph of the head post.
“To understand how climate has varied over much longer periods, over hundreds and thousands of years, various types of proxy records can be assembled derived from the annual rings of long-lived tree species, corals and stalagmites. These types of records provide evidence for periods of time over the past several thousand years (the late Holocene) that were either colder, or experienced similar temperatures, to the present, for example the Little Ice Age (1309 to 1814) and the Medieval Warm Period (985 to 1200), respectively. These records show global temperatures have cycled within a range of up to 1.8 °C over the last thousand years”.

The basis for this analysis is on proxy records located well outside the Arctic region.

Consider that, in the modern satellite era, a large percentage of the “warming” has been in the Arctic region (Arctic Amplification, AA). There is currently no accepted understanding of the causes of Arctic Amplification. My own analysis of the mechanism of AA, particularly over the past 25 years, indicates that:
1 – The 1.8 °C in the paragraph above should have an Arctic Amplification factor added to it, at a minimum of 20%, to include AA for that period as a realistic comparison. This gives a value of 2.16 °C.
2 – the IPCC, or any other political/scientific body, cannot assign any of the current warming since 1850 to CO2 without understanding what causes Arctic Amplification.
3 – when looking at all previous warm periods before the satellite era, such as the 1930s, an Arctic Amplification multiplier must be added.

Amplification is a process of amplifying a base value, and that is exactly what is occurring in the Arctic. Atmospheric heat from warmer latitudes, when transported into a significantly colder location such as the Arctic region, which has no meaningful blocking mechanism, has a significant amplification effect. It is well known and understood that low-latitude heat moves to the polar regions.

a happy little debunker
August 13, 2021 9:19 pm

BOM’s data is not fit for purpose in the great Climate Change scandal, as it currently does not meet the WMO standards for such measurements.

Johne Morton
August 13, 2021 9:20 pm

I find it a bit strange that we’re supposed to be as warm as the Eemian now, yet the sea level is still significantly lower, the tundra/forest boundary is still much further south, the forest/grassland boundary in the Great Plains hasn’t moved westward, and glaciers that formed in the Rocky Mountains as recently as the LIA are still around… by now enough time has passed that something should have changed. But we are now living in the Adjustocene, where it’s always “worse than we thought”…

Chaswarnertoo
Reply to  Johne Morton
August 14, 2021 3:07 pm

Call me when there are hippos in the Thames, again.

bdgwx
Reply to  Johne Morton
August 14, 2021 9:12 pm

I don’t think the data supports the hypothesis that it is warmer today than during the peak of the Eemian. If you know of a few temperature reconstructions showing that it is likely warmer today than during the Eemian then you might be able to convince me. I have looked at many reconstructions already. I remain unconvinced that it is warmer today. But maybe you can present some reconstructions I’ve not considered yet.

Paul Johnson
August 13, 2021 9:29 pm

If a site location or equipment are changed, shouldn’t there be several months, or preferably a year, of overlapping observations to validate any adjustment of historic data?

MarkW
Reply to  Paul Johnson
August 13, 2021 10:19 pm

Most emphatically yes.

Clyde Spencer
Reply to  MarkW
August 14, 2021 8:17 am

When Los Angeles moved its official weather station from the downtown area to a UCLA campus, there was an immediate 4 deg drop from what was expected for the season.

stewartpid
Reply to  Clyde Spencer
August 14, 2021 9:18 am

Obviously a 6 – 10 degree adjustment is needed to torture the data until it yields the correct result.

Rory Forbes
August 13, 2021 9:43 pm

I’ll pay you three dollars to go away and start your own blog … and that’s twice what you’re worth.

P Carson
August 13, 2021 9:56 pm

The IPCC’s latest (6th) Report has been released. Its “models” once again proclaim that Earth’s temperature is going to rise to unbearable levels unless CO2 levels are drastically reduced. That’s bunkum.

Few have noticed that the IPCC’s “models” are simply extrapolations of the correlation:
[Earth’s average Temperature vs CO2 levels]
and so assume that CO2 is the cause … a priori! They offer NO experimental data to support their guess that increasing CO2 causes Earth’s atmospheric temperature to rise. Yet they have the nerve to claim “The science says …!”

The best the IPCC offers is to reckon that CO2 is a “Greenhouse Gas” because it absorbs infrared radiation (IR) – for example, that emanating from Earth – whereas the major gases representing 99% of dry air – nitrogen, oxygen and argon – do not.
Further, they and others now re-define a Greenhouse Gas as one that absorbs IR, whereas originally and correctly it was one that absorbs (Earth’s) heat, warming its surface more than it would be without an atmosphere.

What they fail to disclose is that, while not all gases absorb IR, ALL gases absorb HEAT – simply put a clear plastic bag containing whatever gas into the Sun, then feel it getting warmer. As ALL gases are greenhouse gases in that sense, the atmosphere’s major gases are by far its biggest heat absorbers, i.e. the major greenhouse gases, largely in proportion to their amount, and CO2’s input to global temperatures is negligible.

Jim Gorman
Reply to  P Carson
August 14, 2021 6:03 am

All of this should be relatively easy to prove or disprove through experiments. Yet there are none that show how these gas mixtures (or the gases alone) respond to the heat provided to the Earth by the Sun. Makes you wonder why.

Doonman
August 13, 2021 9:58 pm

But global warming itself is not natural, if it is essentially driven by human influence, which is a key assumption of current policy.

In order to reach this conclusion, it is necessary to define human existence as non-natural. There is no way around that.

That puts a big dent in the theory of evolution. If the actions of humans are non-natural, then there had to be an intelligent intervention at some point that altered nature.

Not sure the “key assumption of current policy” has been well thought out.

climanrecon
August 14, 2021 12:00 am

Here is my estimation of how maximum temperatures in northern Australia have varied since the late 19th century. These are local-in-time averages; weather fluctuations have been removed:

[chart image]

The warming in the north has been less than the average over the whole country.

In my view the focus should be on what happens in the next decade, hopefully the recent “pause” (or slowdown) in temperatures will continue, but would it be too late to save some proper power stations from closure?

LdB
Reply to  climanrecon
August 14, 2021 8:36 am

The Port Hedland data is junk: the station only started construction in 1942, when the whole town had only 120 people and the station ran on a volunteer basis. The station moved in 1948 due to cyclone damage, and again in 1966 and 1981 due to town site changes. The warnings from the Western Australia BOM are crystal clear:

“Historical metadata for this site has not been quality controlled for accuracy and completeness. Data other than current station information, particularly earlier than 1998, should be considered accordingly. Information may not be complete, as backfilling of historical data is incomplete.”

There simply are no sites in north-western Australia that are useful or reliable earlier than the 1980s.

Last edited 1 month ago by LdB
climanrecon
Reply to  LdB
August 15, 2021 6:19 am

The data shown are regional averages, not single stations, the “Port Hedland” region is:

Port Hedland (Onslow to Broome, inland to Newman, Nullagine, Wittenoom)

griff
August 14, 2021 1:23 am

Never mind models and simulation – look at the observations from the real world…

an extensive set of heatwaves with new record temperatures and exceptional rain events, hottest July on record.

You can’t keep ignoring what is happening in real time…

Joao Martins
Reply to  griff
August 14, 2021 3:18 am

Define “exceptional”, please.

Bruce Cobb
Reply to  griff
August 14, 2021 3:58 am

Your ignorance is certainly exceptional, Griffie-poo.

Peter W
Reply to  griff
August 14, 2021 4:33 am

You are forgetting (or ignoring) something. Disaster sells! If you want to sell newspapers, news reports, etc., then be sure to over-report any changes as leading to disaster.

Climate believer
Reply to  Peter W
August 14, 2021 10:50 am

Apocalypse Porn….

Climate believer
Reply to  griff
August 14, 2021 5:55 am

“You can’t keep ignoring what is happening in real time…”

In the Antarctic, sea ice extent increased faster than average during July, particularly in the latter half of the month. By the end of the month, extent was above the ninetieth percentile and was eighth highest in the satellite record.

hottest July on record.” …… oh you mean the very convenient +0.01°C more than 5 years ago…yeah terrible.

Latest global average tropospheric temperatures for July @ +0.2°C above 1990-2000 average. Last month it was below that average, funnily enough -0.01°C.

[chart image]
Jim Gorman
Reply to  griff
August 14, 2021 6:14 am

You are assuming that proxies allow you to recognize heat and rain events. They do not. The time resolution simply isn’t good enough for that kind of recognition. Worldwide contemporaneous records of weather events prior to the 1900s are simply too sparse to support any kind of conclusion about weather events today. The only comparison you can make is against officially recorded data, which only goes back to fairly recent times.

Rich Davis
Reply to  griff
August 14, 2021 6:27 am

YAWN!

In which time period would you prefer to live your life?
[__] Benign low CO2 1675-1750
[__] “Dangerous” CO2 1950-2025

Krishna Gans
Reply to  griff
August 14, 2021 7:33 am

At the same time record coldwaves in the southern half of the world.

And look at the unusual cooling of the North Pacific :

[chart image]

Last edited 1 month ago by Krishna Gans
Tom Abbott
Reply to  Krishna Gans
August 15, 2021 7:00 am

It looks a little chilly in the eastern Atlantic, too.

Clyde Spencer
Reply to  griff
August 14, 2021 8:22 am

One has to demonstrate that “what is happening in real time” is truly representative of weather.

Get back to us in 30 years to see if this is the start of a trend or just natural variation in weather.

Tom Abbott
Reply to  Clyde Spencer
August 15, 2021 7:03 am

Griff should get back to us when he finds an unprecedented weather event.

Unprecedented means it has never happened before, Griff.

Richard Page
Reply to  griff
August 14, 2021 8:24 am

We don’t ignore what is happening in real time – I think you’ll find most people on this site have a much better understanding of the real world than you do. What we do not do is conflate weather events with climate trends, nor do we take a localised regional weather event and extrapolate it across the entire world. Get a grip, Griffy – and do adjust yourself, dear; your delusions are showing.

LdB
Reply to  griff
August 14, 2021 8:43 am

Must be a Europe thing that hasn’t made it as news in Australia. All I saw lately was a couple places in Europe with floods which had good footage.

Gregory Woods
August 14, 2021 2:43 am

And can it really be accurately calculated to some fractions of a degree Celsius?

Global temperature is an artificial number – ie, FAKE!

2hotel9
August 14, 2021 3:33 am

That is an awful lot of verbiage for something so easily stated. Climate changes, constantly. It always has and always will. Humans are not causing it and can not stop it. See? Easy peazy.

Bruce Cobb
August 14, 2021 3:37 am

It is quite an amazing coincidence that all adjustments to data favor the Alarmists.

John Phillips
Reply to  Bruce Cobb
August 14, 2021 4:23 am

[chart image]
Rich Davis
Reply to  John Phillips
August 14, 2021 6:36 am

Of course that distribution is totally consistent with the case where there is a systematic cooling of the past and warming of the recent measurements. As long as you jigger it so that each recent warming fraud is offset by a cooling fraud earlier in the series, you can construct a beautifully gaussian distribution.
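A quick sketch of the point (all numbers invented): a set of adjustments whose histogram is centred on zero can still steepen the fitted trend if the negative ones land early in the series and the positive ones land late.

import numpy as np

years = np.arange(1900, 2021)
raw = 0.005 * (years - 1900)                    # hypothetical raw series, 0.5 C/century

magnitudes = np.abs(np.random.default_rng(2).normal(0.0, 0.1, years.size))
signs = np.where(years < 1960, -1.0, 1.0)       # cool the past, warm the present
adjusted = raw + signs * magnitudes

print(np.mean(signs * magnitudes))              # near zero; histogram looks centred
print(np.polyfit(years, raw, 1)[0] * 100)       # ~0.5 C/century
print(np.polyfit(years, adjusted, 1)[0] * 100)  # noticeably steeper

The histogram of the adjustments is centred near zero, yet the fitted trend of the adjusted series is clearly steeper than the raw one.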

bdgwx
Reply to  Rich Davis
August 14, 2021 7:29 am

The net effect of adjustments actually raises past temperatures and reduces the overall warming trend compared to the unadjusted data.

Rich Davis
Reply to  bdgwx
August 14, 2021 11:11 am

What it does, if we accept your data at face value, is to warm the past in a period of inconvenient warming from 1900 to 1940, so that modern warming seems unprecedented. At the same time allowing urban heat island effects to exaggerate the present.

But it’s also plain to see that your chart is totally incompatible with the distribution that John Phillips posted. Your data on a chart like his would be skewed entirely toward warming. How do you reconcile that?

John Phillips
Reply to  Rich Davis
August 14, 2021 12:19 pm

As you can see, most of the adjustments occur before about 1950, and they are nearly all to do with ocean temperatures. Prior to around 1950, shipboard readings were taken by pulling a bucket of seawater on board and measuring its temperature. The buckets were mostly poorly insulated, meaning the water could warm or cool before the measurement was made. During the 1930s and 1940s the method changed to taking the temperature of seawater as it was pumped in to cool the engine. Since around 1990 most sea temperature readings have been taken by ocean buoys in direct contact with the water.

So, compared to the buoy temperatures, the early bucket temperatures had a cool bias and the mid-century engine-intake temperatures a small warm bias (engine rooms tend to be warm places). If you want an accurate, consistent, comparable record of how temperatures have changed, of course you need to adjust for these biases… this is not ‘fraud’.
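In outline, the kind of correction being described looks like the sketch below (the offsets are invented for illustration, not the published estimates):

import numpy as np

years = np.arange(1900, 2021)
measured = np.zeros(years.size)        # stand-in for SST anomalies as recorded

# Hypothetical method biases relative to buoys (illustrative only):
bucket_bias = -0.3                     # poorly insulated buckets read cool
intake_bias = +0.1                     # engine-room intakes read slightly warm
bias = np.where(years < 1940, bucket_bias,
       np.where(years < 1990, intake_bias, 0.0))   # buoys taken as the reference

# Putting every era on the buoy reference means removing the estimated bias:
homogenised = measured - bias
print(homogenised[0], homogenised[60], homogenised[-1])   # +0.3, -0.1, 0.0

The offsets above only illustrate the shape of the arithmetic; they are not the published values.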

Rich Davis
Reply to  John Phillips
August 14, 2021 3:16 pm

Sorry, no sale. Admittedly much of my skepticism is at this point a gut feel that this does not add up, combined with a total lack of faith in the bona fides of those perpetrating the “adjustments”.

I’d like to hear the case argued by a skeptical expert familiar with the data, before I could accept your claims as true.

How do you explain the disconnect between bdgwx adjustments chart virtually all being warming adjustments in the early 20th century, and the distribution you posted showing it centered on zero? That’s my first reason for calling BS on this. If you are only showing recent adjustments then clearly adjustments centered on zero is inappropriate in light of UHI requiring cooling adjustments in recent data. If you are covering the same period, then your chart, bdgwx’s chart or both are wrong. His data in your chart should skew entirely above zero.

Next question would be isn’t it convenient that the period of natural warming that in raw data terms looks almost identical to the recent period, has been significantly reduced in warming rate by claiming that the LIA was not as cold as previously thought?

Last edited 1 month ago by Rich Davis
Clyde Spencer
Reply to  John Phillips
August 16, 2021 7:57 pm

… the midrange engine intake temperatures a small warm bias (engine rooms tend to be warm places).

However, when Karl wrote his infamous paper, just before retiring, he adjusted the buoy temperatures to align with the warm engine intake temperatures. I’d call that fraud!

John Phillips
Reply to  Bruce Cobb
August 14, 2021 4:24 am

As Zeke Hausfather demonstrates here, global adjustments actually reduce the trend slightly.

Bruce Cobb
Reply to  John Phillips
August 14, 2021 4:56 am

[chart image]

John Phillips
Reply to  Bruce Cobb
August 14, 2021 6:23 am

Always a bad idea to trust Tony Heller. There is no such thing as a GISS Land Only dataset; what they have is a Met Stations Only dataset (Ts), and they publish a combined Land and Ocean dataset (LOTI) from which one can separate out the Land element. The two datasets use different methodologies and are not comparable.

This did not stop Heller, naturally, and he shows Ts for 2017 and 2000 but Land for 2019. Nick Stokes exactly reproduces Heller’s flawed plot here.

Here’s the apples-to-apples 2017 and 2019 version of Ts. No sign of any tampering.

Always a bad idea to trust Tony Heller.

[chart image]
Bruce Cobb
Reply to  John Phillips
August 14, 2021 8:36 am

Always a bad idea to trust Nick Stokes.

John Phillips
Reply to  Bruce Cobb
August 14, 2021 10:45 am

Ah, but you don’t need to trust Nick. You can check Heller’s data for yourself.
 
Did he plot two completely different datasets under the same legend? Did he use this nonsense to accuse NASA of data tampering? Guilty on both counts!

bdgwx
Reply to  Bruce Cobb
August 14, 2021 9:01 pm

You don’t need to trust Nick Stokes or anyone. Just download the NASA GISTEMP source code and run it on your machine. You can download it, kick it off, and watch it complete in under 30 minutes. It generates csv and txt files with the results for easy analysis. It even downloads the most recent GHCN-M and ERSST input files for you. The only adjustment GISTEMP makes is in step 2, where it applies the UHI logic. And you can disable that step if you like, as many of us have done, and see what effect it has. Hint… not much. You can also pick through the code with a fine-toothed comb looking for anything nefarious. Hint… you won’t find anything. Nobody ever has. But don’t take Nick’s, John’s, or my word for it. Do it yourself.
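For anyone who does run it, here is a minimal sketch of the kind of after-the-fact check you can do on the output. The file name “result.csv” is only a placeholder for whichever annual-means CSV your run produces; it is assumed here to have year and anomaly columns.

import csv
import numpy as np

years, anoms = [], []
with open("result.csv") as f:              # placeholder name; point it at your run's output
    for row in csv.reader(f):
        try:
            years.append(float(row[0]))    # year
            anoms.append(float(row[1]))    # annual anomaly, deg C
        except (ValueError, IndexError):
            continue                       # skip header/footer rows

slope = np.polyfit(years, anoms, 1)[0]     # deg C per year
print(f"trend: {slope * 10:+.2f} C/decade over {years[0]:.0f}-{years[-1]:.0f}")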

Last edited 1 month ago by bdgwx
Anthony Banton
Reply to  Bruce Cobb
August 14, 2021 6:28 am

Mr Heller’s graph is deceptive ….

The following are 2 posts by Nick Stokes regarding Heller’s “confused” interpretation of the GISS adjustments.
Both of which received NO REBUTTAL from him…..

https://realclimatescience.com/2019/06/tampering-past-the-tipping-point/

Nick Stokes says:
June 28, 2019 at 5:53 pm
Paul,
You are missing Genava’s point. Look at the last image he posted. There are four columns. Column 2, labelled 2019 version, is indeed the current version of the Land average, as listed here. Column 4, also labelled 2019 version, is the 2019 version of the met Stations only global index, Ts, listed here. Column 3 is indeed, as Genava says, the 2017 version of Ts, as listed on the History page. And column 1, labelled 2017 version, is identical to 3.

Tony has plotted column 1 against column 2, saying the difference represents data tampering. But they are two different things, “2017” being the Met Stations index for the whole globe, and “2019” being a Land average. They come from 2 different GISS sources, and represent averages over different regions. Columns 3 and 4, which show much less difference, are the proper comparison of like with like. You can also, if you go to the wayback machine, get a 2017 version of the land average. They are different to the numbers in column 1.

Nick Stokes says:
July 3, 2019 at 1:10 am
Mike,
I’ve written a blog post on the matter here. It describes and plots the relevant datasets. At the bottom I have linked all the various sources. I have also appended a .csv file here which has the numbers. The data are
1. GISS Land average (2019 and 2017 versions)
2. GISS Ts (Met Stations only) index (2017 and 2019 versions).

Nick’s Blog post:

https://moyhu.blogspot.com/2019/07/fake-charge-of-tampering-in-giss.html

“As mentioned, I originally set this out in comments at Clive Best’s site, where Paul Matthews first raised the Tony Heller post. I then noted that at that (Heller’s) site, a commenter Genava had observed that the 2019 data plotted was different from the 2019 Ts data, which was the index of the 2001 and 2017 versions. That was on June 27. It got no response until Paul, probably prompted by my mention, said that the 2019 data was current Land data. I don’t think he appreciated the difference between Land and Ts, so I commented June 28 to try to explain, as above. Apart from a bit of routine abuse, that is where it stands. No-one seems to want to figure out what is really plotted, and comments have dried up. Meanwhile the Twitter thread castigating “tampering” just continues to grow.”

Anthony Banton
Reply to  Anthony Banton
August 14, 2021 6:37 am

[chart image]

Anthony Banton
Reply to  Anthony Banton
August 14, 2021 6:39 am

NOTE: the above graph is Land+Ocean. Heller’s is Land only.

Chris Hanley
Reply to  Anthony Banton
August 14, 2021 2:34 pm

Maybe they adjusted the pre-1945 up when they realized that the trend 1910-1945 was similar to 1980-2015 (approx) but could not be due to human emissions:
[chart image]

Tom Abbott
Reply to  Chris Hanley
August 15, 2021 7:33 am

Phil Jones supports your case that past warming periods are similar to current warming trends.

http://news.bbc.co.uk/2/hi/8511670.stm

Q&A: Professor Phil Jones
Phil Jones is director of the Climatic Research Unit (CRU) at the University of East Anglia (UEA), which has been at the centre of the row over hacked e-mails.

The BBC’s environment analyst Roger Harrabin put questions to Professor Jones, including several gathered from climate sceptics. The questions were put to Professor Jones with the co-operation of UEA’s press office.

A – Do you agree that according to the global temperature record used by the IPCC, the rates of global warming from 1860-1880, 1910-1940 and 1975-1998 were identical?

An initial point to make is that in the responses to these questions I’ve assumed that when you talk about the global temperature record, you mean the record that combines the estimates from land regions with those from the marine regions of the world. CRU produces the land component, with the Met Office Hadley Centre producing the marine component.

Temperature data for the period 1860-1880 are more uncertain, because of sparser coverage, than for later periods in the 20th Century. The 1860-1880 period is also only 21 years in length. As for the two periods 1910-40 and 1975-1998 the warming rates are not statistically significantly different (see numbers below).

I have also included the trend over the period 1975 to 2009, which has a very similar trend to the period 1975-1998.
So, in answer to the question, the warming rates for all 4 periods are similar and not statistically significantly different from each other”

end excerpt

John Phillips
Reply to  Tom Abbott
August 15, 2021 12:30 pm

 Do you agree that according to the global temperature record used by the IPCC, the rates of global warming from 1860-1880, 1910-1940 and 1975-1998 were identical?
 
The difference being that the earlier 20 and 30-year trends stopped and went into reverse. The modern warming trend is now 46 years in length and shows no sign of letting up.

[chart image]
Tom Abbott
Reply to  Anthony Banton
August 15, 2021 7:23 am

Hansen 1999.

[chart image]

I thought with all these Hockey Stick charts being posted, a little perspective on temperature profiles was in order.

Notice the prominent warming in the 1930’s on the U.S. chart. It was warmer then than it is today.

The Hockey Sticks don’t show this profile, yet all the unmodified surface temperature charts show the same temperature profile as the U.S. regional chart, where it was just as warm in the Early Twentieth Century as it is today.

The computer-generated Hockey Stick charts are the only charts in the world that show a “hotter and hotter” temperature profile. Imo, they have been manipulated for political/selfish purposes.

The regional charts don’t show this “hotter and hotter” profile because the people who recorded the temperatures back then did not have a bias one way or another about Human-caused Climate Change. They just wrote down the temperatures they saw.

We can trust the past temperature recorders much more than we can trust our current-day Data Manipulators. When in doubt, go with the recorded data and eliminate the Human-caused Climate Change bias of today’s alarmist climate scientists. They have an agenda.

John Phillips
Reply to  Tom Abbott
August 15, 2021 12:55 pm

Notice the prominent warming in the 1930’s on the U.S. chart. It was warmer then than it is today.

Sorry, but that’s not correct. Most of the years after that chart ends were warmer than the thirties. The warmest year in the thirties in the current NASA dataset was 1934 at 1.16. That’s more than half a degree cooler than 2012, and only eighth-warmest overall.

2012 1.82
2016 1.65
2017 1.43
2015 1.35
2020 1.34
1998 1.26
2006 1.25
1934 1.16

[chart image]
Last edited 1 month ago by John Phillips
Derg
Reply to  Anthony Banton
August 14, 2021 7:29 am

I wonder if Nick is still shivering or is warming enough for him.

Editor
Reply to  Anthony Banton
August 14, 2021 11:38 am

Paul Matthews says:
June 28, 2019 at 9:09 am
Just in case anyone thinks that Tony is making this up (sorry, but Nick Stokes tried to claim this on another blog), here is Sato’s own web page.
http://www.columbia.edu/%7Emhs119/Temperature/GHCN_V4vsV3/
In the second batch of graphs there is one labelled ‘Land surface temperature difference v4 – v3’ which swings upward during the inconvenient pause era, 2000-2010

=====

Hmmm… did you miss this, which was right below Stokes’ posts, where he had made a strawman claim?

John Phillips
Reply to  Sunsettommy
August 14, 2021 2:15 pm

What is the relevance of a post documenting release notes for GHCN versions to Heller’s trickery? Looks like a lame attempt at misdirection to me (not just me – read the next comment).

Just to recap, Heller used one dataset (ts) for 2000 and 2017 and then switched to another (LOTI) for 2019, then represented the inevitable differences as ‘data tampering’ by NASA – when the only tampering was his chicanery.

Last edited 1 month ago by John Phillips
Graemethecat
Reply to  John Phillips
August 14, 2021 11:20 pm

Why did you assert that GISS Land Only does not exist when it clearly does?

John Phillips
Reply to  Graemethecat
August 15, 2021 11:52 am

Bit of a pedantic nitpick. There is no ‘Land Only’ data product, although you can go and get the Land part of the Land + Ocean product. The main GISS product used to be GISS Ts, which uses met stations only:

“until about 1995, there didn’t exist a dataset of sea temperatures of anything like the duration of the land record. So when Hansen and Lebedeff in 1987 published the ancestor of the GISS index, they used whatever station data they could get to estimate surface temperature over the oceans as well as land. Islands had a big role there. This index, called Ts, or GLB.Ts, was their main product until the mid ’90’s, when it was gradually supplanted by LOTI, using ocean sea surface temperatures (SST) as needed, as they became available backward in time.”

The Land/Ocean index (LOTI) gradually became the main GISS product, and Ts was discontinued in the move from v3 to v4. As the name suggests this is derived from Land and Ocean data, and with a bit of work you can retrieve just the Land portion, but it has major methodological differences compared to Ts…

“this is something different to GISS Ts. It also uses station data, but to estimate the average for land only. All such averages are area-weighted, but here it is just by land area. So from being very heavily weighted, island stations virtually disappear, since they represent little land. And the weighting of coastal stations is much diminished, since they too in Ts were weighted to represent big areas of sea.”

So comparing ‘Land’ and Ts, labelling them both as ‘The GISS Land Anomaly’ and saying the differences are a result of ‘data tampering’ is incompetence at best, blatant dishonesty at worst.

(Quotations from the Nick Stokes post linked above.)

Last edited 1 month ago by John Phillips
Rich Davis
Reply to  Bruce Cobb
August 14, 2021 6:38 am

Nothing to see here folks, move along

Geoff Sherrington
August 14, 2021 3:47 am

This is all rather academic, given the errors that go with routine daily temperature measurements.
Or, maybe it would be clearer if the BOM actually answered the question.
How strange it is for officials to give a number and then to say it is not the one you want!!!
……………..

Dear Mr. Sherrington,
You have asked: “If a person seeks to know the separation of two daily temperatures in degrees C that allows a confident claim that the two temperatures are different statistically, by how much would the two values be separated?”
The internationally accepted standard for determining if two measurements are statistically different is ISO/IEC 17043. The latter covers the calculation of a normalized score (known as the EN score), which is a standard method for this type of question.
As previously communicated, the most relevant figure that we can supply to meet your request for a “T +/- X degrees C” is our specified inspection threshold (conservatively within +/- 0.3 °C), but this is not an estimate of the uncertainty of the ACORN-SAT network’s temperature measurements in the field.
From Dr. Boris Kelly-Gerreyn
BOM Manager, Data Quality and Requirements.
Letter dated 7th June, 2019
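For readers unfamiliar with it, the normalized error (EN) score the letter mentions is conventionally computed from the difference between two results and their expanded uncertainties. A minimal sketch, treating the quoted ±0.3 °C inspection threshold as if it were an expanded uncertainty (which the letter is careful to say it is not):

import math

def en_score(x_lab, x_ref, U_lab, U_ref):
    # Normalised error; |En| <= 1 is conventionally read as "consistent".
    return (x_lab - x_ref) / math.sqrt(U_lab**2 + U_ref**2)

print(en_score(25.5, 25.0, 0.3, 0.3))   # ~1.18: a 0.5 C separation would count as different
print(en_score(25.3, 25.0, 0.3, 0.3))   # ~0.71: a 0.3 C separation would not

On those illustrative numbers, two daily values would need to be a little more than 0.4 °C apart before the EN score exceeded 1.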

John Phillips
August 14, 2021 4:22 am

Darwin again, really?

Last edited 1 month ago by John Phillips
Anthony Banton
Reply to  John Phillips
August 14, 2021 5:35 am

Nick Stokes dealt with the Darwin “issue” on his Blog.

https://moyhu.blogspot.com/2009/12/darwin-and-ghcn-adjustments-willis.html#more

“GHCN homogenization adjustments are often misunderstood. They are trying to detect and adjust for discrete changes. A station gets moved – a screen replaced – a nearby tree removed. Plus the more widespread changes, such as the 1990’s move toward thermistor MMTS. They don’t correct for UHI.

They do have a reason. The reassuring consequence of GG’s analysis is that, while they can be large (as with Darwin), they do not, as often alleged, dominate the warming signal in their nett effect. There is no reason to believe that their scatter is designed by someone to advance an AGW agenda.”

Richard Page
Reply to  Anthony Banton
August 14, 2021 7:17 am

I dispute whether Nick Stokes actually did anything of the sort. He posted a wide range of possible abstract reasons as to why someone might, conceivably, change a temperature reading. What he did not do, at any point, is state specifically what the exact circumstances and amounts were in each case – this is not ‘dealing with it’, it’s offering excuses.

Jim Gorman
Reply to  Richard Page
August 14, 2021 12:01 pm

Can you imagine an engineer designing something involving human safety using data that has been manipulated, without being given each and every change and the exact reason for it, along with the justification for the size of the adjustment?

One thing Nick never deals with is how some stations get multiple changes because homogenisation alters a neighbouring station, which then causes further changes to a station that has already been changed. Measurements are not sacrosanct to these folks. They are simply numbers to be dealt with, and the end justifies the means.

LdB
Reply to  Jim Gorman
August 14, 2021 5:57 pm

It has been explained to Nick a number of times that what he is doing is junk. The simple answer is that you can’t blend station data: it is highly localized, which means it is very non-stochastic. Two sites only a few kilometres apart may differ by many degrees and in highly non-linear ways; most people will have experienced that.

Then you almost always have historic site changes, land-use changes around the area, and calibration change factors as sites moved from volunteer manual reading to automatic instrumentation.

The net result is that your historic record sequence is flawed and you can’t fix it. In Australia, for example, you really can’t use anything much earlier than the 1980s because there will be a 1-2 degree error at any remote site.

Last edited 1 month ago by LdB
Clyde Spencer
Reply to  LdB
August 16, 2021 8:14 pm

Two sites only a few kilometers apart may differ by many degrees and in highly non linear ways, …

The assumption for infilling missing temperature data is that the temperatures change smoothly. However, back when I lived in California, I would frequently drive from the Bay Area to the Sierra Nevada in the Summer. Near Davis, there were many acres of well-watered alfalfa. As I would drive by, with my left arm on the door edge (no A/C in those days) it would suddenly get noticeably cooler for a few minutes, and then rise just as quickly as I passed the fields. The assumption is obviously wrong in at least some instances.

Last edited 1 month ago by Clyde Spencer
Tom Abbott
Reply to  Jim Gorman
August 15, 2021 7:46 am

“One thing Nick never deals with is how some stations have multiple changes due to the homogenizing changing neighboring station which then causes changes to a station that has already been changed. Measurements are not sacrosanct to these folks. They are simply numbers to be dealt with and the end justifies the means.”

Good point. They make changes to one site and then this causes changes to other sites, and the actual recorded temperature makes no difference.

We need to bypass this nonsense and only consider the written temperature record. Changing the recorded temperature profile in a computer is what has gotten us into this Human-caused Climate Change scam. The recorded temperatures don’t support the scam.

Clyde Spencer
Reply to  Jim Gorman
August 16, 2021 8:08 pm

Yes, when discarding a data point that appears to be an outlier, it will affect the mean and standard deviation of all the other points, and one or two new ‘outliers’ may well pop up. Thus, the expert recommendation is to discard only one outlier, and not prune the whole tree!
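A tiny numerical illustration (invented values, using a simple 2-sigma rule) of how pruning one outlier can spawn another:

import numpy as np

def outliers(x, k=2.0):
    # Points more than k sample standard deviations from the sample mean.
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std(ddof=1)
    return x[np.abs(z) > k]

data = [1, 1, 1, 1, 1, 1, 1, 1, 4, 20]          # invented values
print(outliers(data))                           # [20.] is flagged
print(outliers([v for v in data if v != 20]))   # remove it and [4.] is flagged next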

LdB
Reply to  Anthony Banton
August 14, 2021 8:54 am

Sorry, that answers nothing, and the comments about Willis are really funny given that Nick Stokes isn’t a “computer modeler”, an “engineer” or a “scientist” either. Nick has a background in mathematics and statistics and a really terrible understanding of physics, which he has laid out there in stupid comments over the years.

Did provide a good laugh however … 10 points

Last edited 1 month ago by LdB
george1st:)
August 14, 2021 5:27 am

GIGO: adjust the garbage in and you can adjust the garbage out.

August 14, 2021 5:56 am

Instead of fiddling with and cooling past data, present data from a large number of sites should be cooled because of the UHI factor. For large cities, e.g. Melbourne, Victoria, Australia, a traverse of the city has shown differences of up to 6 °C between the CBD and rural locations on the outskirts. There has been a swing to locating official weather stations at airports, and every airport has a UHI effect. It is unfortunate that the UAH measurements were standardised against ground measurements of the adjusted UEA HadCRUT series. But even then, UAH reads lower at present.

Tony
August 14, 2021 6:36 am

Its value is much higher than that, but I detect jealousy coming from you. How much traffic do you get on your website?

Bruce Cobb
August 14, 2021 7:54 am

The bigger the computer, the more garbage it can hold.

Andy Pattullo
August 14, 2021 10:04 am

A political report based on simulations is an admission of a lack of evidence.

August 14, 2021 10:17 am

Adjusting the past to remove human influence in the present is illogical.

There was no human influence in the past temperatures to remove.

Only recent temperatures have any human influence to be removed.

This entire process of adjusting the past to remove a human influence that was not present in the past is perhaps the weakest link in climate science.

You cannot correct an error that doesn’t exist. Instead all you can do is introduce error.

This problem of adjusting the past really needs to be exposed in paper after paper because it is mathematical nonsense.

Ferdberple
August 14, 2021 10:32 am

The problem in modelling the future is that the errors grow exponentially with each iteration.

Even if your forecast error is 0.0001%, it is like compound interest: the interest grows and grows until it exceeds the principal.

This problem is well known in the field of Applied Mathematics. Even simple linear programming models used to introduce matrices in high school quickly develop loss of precision errors when run on computers.

It is inconceivable that climate models would not suffer the same fate. For example, total energy.

Climate models will not conserve energy. It is impossible due to LOP errors. Instead they will have to smear this error back into the model each iteration. Already your train has left the tracks.
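To make the compound-interest picture concrete (the per-step error is invented purely for illustration):

# If each model step multiplies the state by (1 + r), where r is a tiny
# fractional error, the accumulated factor after n steps is (1 + r)**n.
r = 1e-6                                   # hypothetical per-step relative error
for n in (1_000, 1_000_000, 10_000_000):
    print(n, (1.0 + r) ** n)
# ~1.001 after a thousand steps, ~2.7 after a million, ~22026 after ten million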

Graemethecat
Reply to  Ferdberple
August 14, 2021 12:23 pm

Pat Frank has written extensively on the tendency of iterative methods to blow up due to error propagation.

bdgwx
Reply to  Graemethecat
August 14, 2021 8:49 pm

The thing is CMIP6 predicts a warming rate of +0.07 C/decade from 1880-2020. The observation over this period is +0.07 C/decade. According to Pat Frank the probability that CMIP6 would essentially nail the warming rate over a 140 year period is vanishingly small yet it still happened. Don’t hear what I didn’t say. I didn’t say CMIP6 is perfect or that all of its predictions end up as good as the global mean warming trend. I’m just saying that Pat Frank’s hypothesis is that CMIP6 shouldn’t even have gotten in the ballpark on any of its predictions and yet those predictions somehow turned out to be reasonable and generally better than many other models and certainly far better than contrarian models that can’t even get the direction of the temperature change correct.

Last edited 1 month ago by bdgwx
Graemethecat
Reply to  bdgwx
August 14, 2021 11:07 pm

You genuinely believe we can know what global temperatures were in the 1880s to 2 decimal places? That’s assuming a global temperature is even meaningful.

bdgwx
Reply to  Graemethecat
August 15, 2021 6:32 pm

I didn’t say that.

Graemethecat
Reply to  bdgwx
August 16, 2021 12:51 am

In that case why are you even talking about global temperature?

bdgwx
Reply to  Graemethecat
August 16, 2021 6:53 am

I’m responding to your post regarding Pat Frank’s analysis of the uncertainty of the CMIP suite of models. His post on WUWT showed that the uncertainty on the global mean temperature is ±11 C over a 90yr period.

Last edited 1 month ago by bdgwx
Clyde Spencer
Reply to  bdgwx
August 16, 2021 8:27 pm

He is showing that the uncertainty range grows to unrealistic values, even if the ‘best guess’ plods along with what seems to be reasonable values.

Why is it that alarmists have such difficulty understanding the concepts of uncertainty? Maybe that is why they are alarmists.

bdgwx
Reply to  Clyde Spencer
August 17, 2021 4:38 pm

Are you saying that Frank’s analysis of CMIP6 uncertainty on the global mean temperature after a 90yr prediction is not ±11 C?

Clyde Spencer
Reply to  bdgwx
August 17, 2021 9:20 pm

No, I’m not saying that at all. What I’m saying is that you don’t understand what Frank is saying, or the implications for an uncertainty that grows with time.

bdgwx
Reply to  Clyde Spencer
August 18, 2021 7:25 am

My understanding of what Frank says is that the uncertainty on CMIP6’s prediction of the global mean temperature grows to ±11 C after 90 years.

Jim Gorman
Reply to  bdgwx
August 15, 2021 10:42 am

How many times are you going to show that you know nothing about metrology? Dr. Frank’s analysis does not provide “error bars” against which you can measure accuracy. Dr. Frank assessed the uncertainty of the current models. An uncertainty interval describes the region within which you cannot know what the correct answer actually is. Worse, you can never know what the real answer is when your calculation falls within the uncertainty interval.

I’m sorry you do not have a sufficient education in science, and specifically in metrology, to understand uncertainty. As I have already said to you, some metrology courses, and perhaps some training in professions where uncertainty is paramount, would give you the knowledge to learn and accept what uncertainty is.

bdgwx
Reply to  Jim Gorman
August 15, 2021 1:39 pm

And what were the assessed uncertainties on the annual global mean temperature predictions from CMIP5, CMIP6, and GISS Model II?

Jim Gorman
Reply to  bdgwx
August 15, 2021 3:58 pm

I suggest you read Dr. Frank’s posts, here and elsewhere on the internet, about the uncertainty of the projections made by models.

Suffice it to say that none of the projections lie outside the uncertainty intervals for 2100. That means a calculation that ends up inside the interval carries no proof that it will occur. The uncertainty interval must be narrowed before anyone can claim certain foreknowledge of what may happen.
Like it or not, uncertainty means you don’t know, and you can never know, what the real truth is.

bdgwx
Reply to  Jim Gorman
August 15, 2021 5:53 pm

I’ll answer the question. He shows an uncertainty for a 90-year prediction of the global mean temperature anomaly from CMIP6 of ±11 C (1σ). That is equivalent to a trend uncertainty of ±1.2 C/decade (1σ). How do you think we can test whether this is plausible?

Jim Gorman
Reply to  bdgwx
August 16, 2021 11:17 am

You betray your lack of engineering and metrology knowledge every time you respond. Why? You cannot statistically reduce uncertainty. The GUM allows you to express uncertainty at 1 or 2 sigma when dealing with a single measurand. For iterative measurements, uncertainty is additive. The accepted method is to treat iterative measurements as orthogonal and combine them by Root Sum Square (RSS, not RMS).
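For what it’s worth, the root-sum-square combination referred to here looks like this (component uncertainties invented for illustration):

import math

def rss(*components):
    # Combine independent (orthogonal) uncertainty components in quadrature.
    return math.sqrt(sum(u * u for u in components))

print(rss(0.3, 0.2, 0.1))   # ~0.37: larger than any single component,
                            # but smaller than the straight sum of 0.6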

bdgwx
Reply to  Jim Gorman
August 16, 2021 5:08 pm

Maybe you can check my work. The probability of an error greater than 0.01 C/decade, based on an uncertainty of ±1.2 C/decade (1σ), comes out to be greater than 99%. Is that what you get?
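One way to check that arithmetic, treating the quoted ±1.2 C/decade as the 1-sigma of a normal distribution:

from scipy.stats import norm

sigma = 1.2        # C/decade, the 1-sigma figure quoted above
threshold = 0.01   # C/decade

# Probability that a normally distributed error exceeds the threshold in magnitude:
p = 2.0 * norm.sf(threshold, loc=0.0, scale=sigma)
print(p)           # ~0.993, i.e. greater than 99%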

Last edited 1 month ago by bdgwx
Clyde Spencer
Reply to  bdgwx
August 16, 2021 8:24 pm

The thing is CMIP6 predicts a warming rate of +0.07 C/decade from 1880-2020.

Aren’t the models tuned to the history?

You are overlooking the fact that the mean is the ‘best guess,’ but doesn’t guarantee that it is correct. There is a 95% chance that the true value could be as much as +/- 2-sigma.

bdgwx
Reply to  Clyde Spencer
August 17, 2021 6:47 am

That’s my point. According to Pat Frank there is less than a 1% chance that CMIP6 would predict a warming trend of +0.07 C/decade over a 140-year period. It was either a lucky shot by CMIP6 or Frank’s analysis of the uncertainty was faulty.

meab
August 14, 2021 4:24 pm

Uhh, Ingrown, the reason CO2 emissions dropped in 2020 was the Covid lockdown. Prior to that, CO2 emissions were steadily climbing. CO2 emissions are already on their way back up and will most likely exceed 2019 emissions soon. Let’s review your assertion in a few months and determine if you were lying (once again).

[chart image]

Tom Abbott
August 15, 2021 4:06 am

From the article: “Specifically, the claim is that we humans have caused 1.06 °C of the claimed 1.07 °C rise in temperatures since 1850, which is not very much.”

The highest temperature the Earth has reached in the 21st century occurred during the year 2016. That is the point where the 1.07C was reached.

We are not currently sitting at 1.07C above the baseline as this article and many other articles imply. We are actually sitting at about 0.5C above the baseline since temperatures have cooled considerably since 2016.

But nobody seems to take notice. They talk like we are at the hottest point in human history and are going higher. But the truth is we are currently going lower.

We need a little better accuracy when reporting our current temperature situation. We are no longer in 2016, and the temperatures are no longer 1.07C above the baseline.

Here’s the satellite record. It shows the real temperature profile of the Earth:

[chart image]

bdgwx
Reply to  Tom Abbott
August 15, 2021 2:26 pm

The 1.07C figure in AR6 is the 2010-2019 average. Per UAH the 2010-2019 anomaly is +0.12C. Over the most recent 10 years the figure is +0.15C for a change of +0.03C. That means we are currently sitting at 1.10C using the AR6 baseline and adding in the warming UAH recorded.

Last edited 1 month ago by bdgwx
Tom Abbott
August 15, 2021 4:27 am

From the article: “AR6, which runs to over 4,000-pages, claims to have accurately quantified everything including confidence ranges for the ‘observation’ of 1.07 °C. Yet I know from scrutinising the datasets used by the IPCC, that the single temperature series inputted for individual locations incorporate ‘adjustments’ by national meteorological services that are rather large. To be clear, even before the maximum and minimum temperature values from individual weather stations are incorporated into HadCRUT5 they are adjusted.”

Yes, this is the whole problem.

The Data Manipulators are adjusting the temperatures in order to create a false, scary scenario where the Earth is getting hotter and hotter and is currently at the hottest temperature in human history. NOAA just made that claim yesterday, saying last July was the hottest July in human history. It felt rather pleasant to me, compared with other Julys I have experienced.

The truth is, going by “unmodified” regional surface temperature charts, the Earth is NOT at the hottest point in human history. There are several periods in recent recorded history that were just as warm as today, in the 1880’s and the 1930’s. And there were other periods in history that were warmer than today such as the Roman warm period. Claiming today is the hottest in human history is a total distortion of the temperature record. The Data Manipulators are lying through their teeth in an effort to sell their Human-caused Climate Change scam.

Tom Abbott
August 15, 2021 4:30 am

From the article: “According to the Australian Bureau of Meteorology (BOM), which is one of the national meteorological services providing data for HadCRUT, the official remodelled temperatures are an improvement on the actual measurements.”

What a joke!

A very costly joke.

Tom Abbott
August 15, 2021 4:35 am

From the article: “Then, just in time for inclusion in this new IPCC report released on Tuesday, all the daily values from each of the 112 weather stations were remodelled and the rate of warming increased to 1.23 °C per century in ACORN-SAT version 2 that was published in 2018. This broadly accords with the increase of 22% in the rate of warming between the 2014 IPCC report (Assessment Report No. 5) which was 0.85 °C (since 1850), and this new report has the rate of warming of 1.07 °C.”

This is not science. It is Fraud posing as science.

tygrus
August 18, 2021 4:43 pm

1) More should be done to validate the formulas used to adjust and infill data.
2) Ensure meticulous records are kept of changes to, and influences on, recorded data.
3) Ensure overlapping recording of data when sites move.
4) Replicate the old measuring methods with newer equipment and data for comparison, e.g. use new data to replicate the alternative time-of-day measurements, compare this with data collected under current standards, and compare both with the models applied to the old data.
5) Quantify the size of the adjustments and their error range/behaviour.
