HadCRUT5 shows 14% more global warming since 1850 than HadCRUT4

By Christopher Monckton of Brenchley

They’re at it again. The old lady of temperature datasets – HadCRUT, the only global dataset to reach back to 1850 – has released its revised monthly global mean surface temperature anomalies for 1850-2020. The earlier dataset (HadCRUT4) showed a least-squares linear-regression trend of 0.91 K on the monthly anomalies from 1850-2020 – only just over half a degree per century equivalent.
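
For anyone who wants to reproduce such a trend figure, here is a minimal sketch: fit an ordinary least-squares line to the monthly anomalies and multiply the slope by the length of the record. The file name and column layout below are placeholders, not the actual HadCRUT distribution format.

```python
# Minimal sketch (not the actual HadCRUT file format): least-squares trend on
# monthly anomalies, expressed as total warming over the whole record.
import numpy as np

# Hypothetical input: one anomaly value per month, 1850 onwards, second CSV column.
anomalies = np.loadtxt("hadcrut_monthly.csv", delimiter=",", usecols=1)
years = 1850 + np.arange(anomalies.size) / 12.0        # decimal-year time axis

slope, intercept = np.polyfit(years, anomalies, 1)     # K per year
total_change = slope * (years[-1] - years[0])          # K over 1850-2020
print(f"Trend over the record: {total_change:.2f} K "
      f"({100 * slope:.2f} K per century equivalent)")
```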

This was not enough. Like the endlessly-adjusted GISS, RSS and NCEI datasets, HadCRUT5 hikes the trend – and does so by a startling 14%. The usual method is adopted: depress the earlier temperatures (we know so much better what the temperature was a century and a half ago than the incompetents who actually took the measurements), and elevate the later temperatures with the effect of steepening the trend and increasing the apparent warming.

Of course, elaborate justifications for the alterations are provided. It is beyond my pay-grade to evaluate them. However, it is fascinating that the much-manipulated GISS, HadCRUT, RSS and NCEI datasets are managed by climate fanatics, while the UAH dataset – the only one of the big five to have gone the other way – is managed by climate skeptics.

I know the two skeptics who keep the UAH dataset. They are honorable men, whose sole aim is to show, as best they can, the true rate of global warming. But I do not trust the GISS dataset, which has been repeatedly and reprehensibly tampered with by its keepers. Nor do I trust RSS: when Ted Cruz displayed our graph showing the 18 years and 9 months of the last great Pause in global temperature to the visible discomfiture of the “Democrats” in the Senate I predicted that the keeper of the RSS dataset, who describes skeptics as “climate deniers”, would tamper with it to make the Pause go away. A month or two later he announced that he was going to do just that, and then he did just that. As for HadCRUT, just read the Harry-Read-Me file to see what a hopeless state that is in.

And the NCEI dataset was under the influence of the unlamented Tom Karl for many years. I once testified alongside him in the House of Representatives, where he attempted to maintain that my assertion that there had been nearly a decade of global cooling was unfounded – when his own dataset (as well as all the others) showed precisely that.

HadCRUT5 shows a 1.04 K trend from 1850-2020, or three-fifths of a degree per century equivalent, up 14% from the 0.91 K trend on the HadCRUT4 data:

From the HadCRUT5 trend, one can calculate how much warming would eventually be expected if we were to double the CO2 in the air compared with 2020. One also needs to know the net anthropogenic forcing since 1850 (2.9 W m–2); the planetary energy imbalance caused by the delay in feedback response (0.87 W m–2); the doubled-CO2 radiative forcing (3.52 W m–2 taken as the mean in the CMIP6 models); the anthropogenic fraction of observed warming (70%); the exponential-growth factor allowing for more water vapor in warmer air (7% per degree of direct warming); and the Planck sensitivity parameter (0.3 K W–1 m2).

All of these values are quite recent, because everyone has been scrambling to get the data shipshape for IPCC’s next multi-thousand-page horror story, due out later this year. The calculations are summarized in the table. I selected the seven input parameters using three criteria: they should be up-to-date, midrange, and mainstream: i.e., from sources that the climate fanatics would accept.

The industrial era from 1850-2020 is the base period for calculating the feedback response per degree of reference sensitivity over the period. This turns out to be 0.065. Then one finds the unit feedback response for the 100-to-150-year period from 2020 (415 ppmv CO2) to 830 ppmv CO2 by increasing the unit feedback response to allow for extra water vapor in warmer air.

Finally, one multiplies the 1.053 K reference sensitivity to doubled CO2 by the system-gain factor, which is the unit feedback response plus 1: midrange equilibrium doubled-CO2 sensitivity, known as ECS, turns out to be just 1.1 K. If one were to use the HadCRUT4 warming trend, ECS would be less than 1 K. I had previously guessed that the HadCRUT5 trend would be 1.1 K, which implied 1.2 K ECS.

Compare these small and harmless midrange values with the official CMIP6 predictions: lower bound 2 K; midrange 3.7 K; upper bound 5.7 K; lunatic fringe 10 K.

One can work out how many times greater (call this ratio X) the unit feedback response after 2020 would be when compared with the unit feedback response from 1850-2020 if these absurdly inflated predictions from the latest generation of models were correct: lower bound 14, midrange 39, upper bound 67, lunatic fringe 130.
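
Here is a sketch of how those ratios appear to follow from the figures stated earlier in the post (the 1.053 K reference sensitivity and the 0.065 industrial-era unit feedback response). The ratio formula is my reading of the text, not taken from the author's table, so treat it as a reconstruction rather than the definitive calculation.

```python
# Sketch reconstructing the "X times greater" ratios from figures stated in the
# article: reference sensitivity 1.053 K per CO2 doubling and an industrial-era
# unit feedback response of 0.065. The ratio formula itself is an assumption (a
# plausible reading of the text), not taken from the author's table.
REF_SENSITIVITY = 1.053          # K per CO2 doubling (0.3 K W-1 m2 x 3.52 W m-2)
UNIT_FEEDBACK_1850_2020 = 0.065

def implied_ratio(ecs_kelvin):
    """Unit feedback response implied by a model ECS, relative to 1850-2020."""
    unit_feedback_model = (ecs_kelvin - REF_SENSITIVITY) / REF_SENSITIVITY
    return unit_feedback_model / UNIT_FEEDBACK_1850_2020

for label, ecs in [("lower bound", 2.0), ("midrange", 3.7),
                   ("upper bound", 5.7), ("lunatic fringe", 10.0)]:
    print(f"{label}: ECS {ecs:.1f} K -> ratio about {implied_ratio(ecs):.0f}")
# Prints roughly 14, 39, 68 and 131.
```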

These revealing numbers demonstrate how insanely, egregiously exaggerated are the official global-warming predictions. There is no physical basis for assuming that the unit feedback response from 2020 onward will be even 14 times the unit feedback response from 1850-2020. At most it might be about 1.1-1.2 times the earlier unit feedback response. Therefore, even the 2 K lower-bound global warming predicted by the models, which implies X = 14, is way over the top.

This is the most straightforward way of showing that the models’ global-warming predictions are without a shred of legitimacy or credibility. They are elaborate fictions. They suffer from two defects: they are grossly excessive, and they are accordingly ill-constrained.

For, as the graph shows, the ECS response to feedback fractions is rectangular-hyperbolic. The feedback fraction (the fraction of ECS represented by feedback response) implicit in the models’ ludicrous predictions generally exceeds 0.5: but there is absolutely no way that the feedback fraction could be anything like 0.5 in the near-perfectly thermostatic climate. When I first showed this graph to a group of IPCC lead authors, they suddenly stopped the sneering to which they had subjected most of my lecture. The lead sneerer fell silent, and then said: “Have you published this?”

No, I said, for at that time I had not worked out what climatologists had gotten wrong. “Well, you must publish,” he said. “This changes everything.”

So it does. But publication is going to be very difficult, not because we are wrong about this but because we are right. If there is going to be little more than 1 K of anthropogenic warming over the next century or so, there is absolutely no need to do anything to prevent it. The flight of major manufacturing industries to China, which profiteers mightily from the climate scam sedulously promoted in the West by the fawning front groups that it subsidizes, can and should be reversed.

We are taking steps to compel HM Government to pay attention to the truth that global warming will be no more than a third of current official midrange predictions and that, therefore, no net harm will come from it. Watch this space.

325 Comments
DMacKenzie
February 21, 2021 11:48 am

Back of napkin calculation….something, anything (not necessarily CO2) raises the surface temperature, say 1 degree….. the Stefan-Boltzmann equation says that corresponds to 3.7 watts more Q emitted from the surface….
Look at another important equation, Dalton’s law. A 1 degree warmer ocean surface means 7% more water molecules in the air immediately above the ocean…..say the moist air rises, and dry air falls to replace it; then we will eventually get 3.5% more clouds somewhere, some kilometers and days away by advection, or that afternoon via convection in the form of thunderstorms…let’s say cloud albedo is a low .5, so these clouds will reflect 500 W/sq.m for a couple of hours, for a daily average of about 40 watts of heat reflected. A strong negative 40-watt feedback compared to the 3.7-watt feedback…yes, I am comparing apples and oranges, nonetheless fruit.
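
A short sketch of that napkin arithmetic. Note that the Stefan-Boltzmann derivative 4σT³ depends strongly on the temperature at which it is evaluated (roughly 3.8 W m⁻² K⁻¹ at the 255 K effective emission temperature, about 5.4 W m⁻² K⁻¹ at the 288 K surface; see the reply from Monckton of Brenchley below); the cloud figures simply restate the commenter's own assumptions.

```python
# Reproducing the napkin numbers. The Stefan-Boltzmann derivative depends on the
# temperature chosen; the cloud figures below are the commenter's own assumptions.
SIGMA = 5.67e-8   # W m-2 K-4

def extra_emission(temp_k, delta_t=1.0):
    """Extra emitted flux for a delta_t warming, from d(sigma*T^4)/dT = 4*sigma*T^3."""
    return 4.0 * SIGMA * temp_k**3 * delta_t

print(f"At 255 K: {extra_emission(255):.1f} W m-2 per K")   # ~3.8
print(f"At 288 K: {extra_emission(288):.1f} W m-2 per K")   # ~5.4

# Commenter's cloud arithmetic: ~500 W m-2 reflected for ~2 hours of a 24-hour day.
reflected_w_m2, hours = 500.0, 2.0
print(f"Daily-average reflection: {reflected_w_m2 * hours / 24:.0f} W m-2")   # ~42
```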

CO2 is a bit player since its calculated effect is only a few watts per doubling, compared to our cloud that only had a 2-hour life…… thus CLOUDS control the temperature of the planet….the vapour pressure of the water that covers 70% of the planet controls the planet’s temperature….via additional cloud formation.

It is interesting that before clouds form, the additional water vapour back-radiation causes the temperature to stay a bit higher…we call the combination of humidity and temperature “muggy”. Then, oversimplifying and being somewhat facetious, it starts to rain from the clouds and Ph.D. students pack up their equipment and go home. Then they calculate that the increased cloudiness from their instrument readings resulted in a positive cloud feedback, having missed the rainy low-temperature part of the day, or maybe deciding that raindrops had invalidated their net radiometer readings….

Reply to  DMacKenzie
February 21, 2021 5:30 pm

All that…. plus the daily cooling effect of dew point from sunset through near sunrise makes this entire scenario “chaotic” and virtually impossible to model or predict – especially on a global scale.

The above list of like a thousand T stations with little or no uptrend notwithstanding… does nothing to explain why GISS continues to show near-hockey-stick uptrends.

We will never get better at this until we ALL agree on some key standard metrics:

  1. No more anomalies relative to an undefined baseline with no actual T value for zero. I use 14.0°C.
  2. No more mixing °K and °C in the same data graphs. Some are mixed up above.
  3. No more confusing heat-island T with rural T values. Remove stations in urban zones.
  4. No more altering past and present observed T’s when “updating” databases!
  5. Reduce databases down to about two: UAH and http://www.temperature.global, for example.
  6. The above two (satellite atm and rural land + air-over-sea) do not record urban heat.

There… fixed it for ya’s *(^_^)*

Monckton of Brenchley
Reply to  DMacKenzie
February 21, 2021 11:39 pm

Actually, the current best estimate of the CO2 radiative forcing is 3.52 Watts per square meter per CO2 doubling. And one cannot derive that value from the Stefan-Boltzmann equation. Instead, one uses the SB equation to derive the Planck sensitivity parameter, which is the first derivative of that equation: i.e., 288 K surface temperature divided by four times the albedo-adjusted top-of-atmosphere flux density 241 Watts per square meter, or 0.3 Kelvin per Watt per square meter. The product of the Planck parameter and the CO2 forcing gives the reference or pre-feedback sensitivity to doubled CO2: i.e., about 1.053 K.
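
A minimal check of that arithmetic: the Planck parameter evaluated as T/(4F) at the stated 288 K and 241 W m⁻², multiplied by the 3.52 W m⁻² doubled-CO2 forcing, does give roughly 0.3 K W⁻¹ m² and 1.05 K respectively.

```python
# Quick check of the arithmetic described above: Planck parameter from the first
# derivative of the Stefan-Boltzmann relation, dT/dF = T/(4F), then the reference
# (pre-feedback) sensitivity as Planck parameter times the doubled-CO2 forcing.
T_SURFACE = 288.0    # K, surface temperature
TOA_FLUX = 241.0     # W m-2, albedo-adjusted top-of-atmosphere flux density
F_2XCO2 = 3.52       # W m-2, doubled-CO2 forcing (CMIP6 mean, as stated)

planck_parameter = T_SURFACE / (4.0 * TOA_FLUX)      # ~0.299 K W-1 m2
reference_sensitivity = planck_parameter * F_2XCO2   # ~1.05 K per doubling

print(f"Planck parameter:      {planck_parameter:.3f} K W-1 m2")
print(f"Reference sensitivity: {reference_sensitivity:.3f} K")
```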

Reply to  Monckton of Brenchley
February 22, 2021 7:28 am

…agreed, “apples and oranges, nonetheless fruit” is my attempt to give the reader a sense of scope within a short comment without writing something as long as an IPCC report. If I wrote about wide-band and narrow-band water vapor and CO2 absorption coefficients versus altitude, the average reader would “tune out” and end up not exposed to the main message at all. BTW, great ECS chart in your article….

bonbon
February 21, 2021 12:09 pm

The good LMB just cannot deal with the Great Reset green Prince Charles right in his own backyard and blames “China”. See Xi’s Davos address, and in fact Russia’s Putin address – they are not playing along with jolly ol’ Britain.
Petitioning HM Gov, well, what a farce! Look at BoJo’s betrayal of Brexit!
Problem is, in all this the Amis think it is a TX problem.
Nuts!

Monckton of Brenchley
Reply to  bonbon
February 21, 2021 3:22 pm

In response to bonbon, I have spoken to Prince Charles and have indicated to him that the science behind the global warming storyline is dubious, though transiently fashionable. However, he wishes to adopt a partisan political stance on this question, as on many others. Whether his abandoning the iron impartiality of our present Sovereign (whom God preserve for as many decades as possible) will protect or endanger the monarchy is more than I can say: but I fear his partisanship will endanger it.

And of course I am not “petitioning HM Government”: I am compelling it to respond.

CO2isLife
February 21, 2021 12:12 pm

To demonstrate just how dishonest NASA GISS is, this is what they show as the Global Temperature Graph. It shows a clear uptrend.
https://data.giss.nasa.gov/gistemp/graphs_v4/graph_data/Global_Mean_Estimates_based_on_Land_and_Ocean_Data/graph.html

The problem is, it is extremely hard to find any stations that show any warming at all that isn’t due to the Urban Heat Island Effect or Water Vapor.
https://wattsupwiththat.com/2021/02/21/hadcrut5-shows-14-more-global-warming-since-1850-than-hadcrut4/#comment-3190000

To perpetuate the myth, NASA GISS had to work hard to find a station that shows warming. The station they found to highlight is literally synonymous with data corrupted by the Urban Heat Island Effect: NYC Central Park. Here is the graphic they use: [image]

That is how NASA presents their argument, with a station known to be corrupted by the UHI Effect.

Here is NYC Central Park:
New York Cntrl Pk Twr (40.7789N, 73.9692W) ID:USW00094728
https://data.giss.nasa.gov/cgi-bin/gistemp/stdata_show_v4.cgi?id=USW00094728&ds=14&dt=1

Here is basically the same location, but not impacted by the UHI Effect, West Point.
West Point (32.8789N, 85.1808W) ID:USC00099291
https://data.giss.nasa.gov/cgi-bin/gistemp/stdata_show_v4.cgi?id=USC00099291&ds=14&dt=1

NASA literally deliberately chooses corrupt data to manufacture warming. If they simply chose locations sheltered from the UHI and WV they would discover there is no warming.

Columbus (39.1661N, 85.9228W) ID:USC00121747
https://data.giss.nasa.gov/cgi-bin/gistemp/stdata_show_v4.cgi?id=USC00121747&ds=14&dt=1

Reply to  CO2isLife
February 21, 2021 3:12 pm

How NASA GISS shows temperatures: [image]

The station they chose to demonstrate this is corrupted with the Urban Heat Island Effect: [image]

This is West Point, which is basically the same location (60 miles North) but shielded from the UHI Effect: [image]

NASA deliberately chooses a corrupted data set to make their case for warming.

DHR
February 21, 2021 12:29 pm

“Then one finds the unit feedback response for the 100-to-150-year period from 2020 (415 ppmv CO2) to 830 ppmv CO2 by increasing the unit feedback response to allow for extra water vapor in warmer air.”

But water vapor in air has not been increasing, at least according to the data presented by climate4y.com.

Care to comment my Lord?

DHR
Reply to  DHR
February 21, 2021 12:30 pm

climate4you.com

Monckton of Brenchley
Reply to  DHR
February 21, 2021 3:26 pm

Yes, indeed. The source cited by DHR is Professor Ole Humlum’s excellent monthly climate data update. Specifically, he cites the updated NOAA specific humidity dataset (Kalnay et al. 1996). The data are presented at three pressure altitudes – the near-surface boundary layer; the lower troposphere above the boundary layer; and the mid-troposphere. In the lower troposphere there is no trend in specific humidity; in the mid-troposphere, specific humidity has been declining, directly contrary to the predictions of all the climate models. However, in the boundary layer, where we live and move and have our being, the specific humidity is increasing at the rate of 7% per degree of atmospheric warming – which is precisely the rate of increase that one would expect given the Clausius-Clapeyron relation, one of the very few proven results in the slippery subject that is climatology.

Reply to  DHR
February 21, 2021 3:43 pm

Anyone who says you can increase temperature without also increasing water vapor just isn’t thinking straight. If that assertion were true you would never get water condensed on the lid of a heated pot!

Rob_Dawg
February 21, 2021 12:30 pm

Here’s an idea. Climate insurance pools. No payouts for ±5% prediction outcomes. Then $1t per percent falling outside those bounds. Publishing authors to purchase this insurance from their grant money and backed by their organizations and personal assets.

Reply to  Rob_Dawg
February 21, 2021 1:54 pm

A nice idea for a Properly Functioning Situation but that is the last thing that’s going on here.
Quite totally dysfunctional.

These ‘Scientists’ are behaving like spoilt brat children, racing around the playground shouting fire fire fire. When someone asks ‘Where? I see no fire’, they go racing off somewhere else to point, laugh and hurl insults like ‘Haha, how can you be so stupid’.

Then they change the rules of the game, move the goal posts and adjust the data basically, and race around again shouting fire fire fire.

There has to be some control exerted, in a normal situation, by their parents or teachers, but the Parent in this situation is our very own Government(s).

Hence why the ‘publishing author’ bit and ‘insurance’ won’t gain any traction:
Government effectively is the publishing author and the insurance provider.
The brats are thus perfectly safe to carry on doing what they’re doing because left-leaning Governments don’t have the guts to slap these brats down.

Now we see where Mr Trump ‘went wrong’, even though he didn’t do any wrong.
The brats could see what was coming, viz: A Slap Down or Swamp Draining, and every effort was made to avert that happening.

Despite Biden adding one halfpenny per litre to the price of petrol every day since taking office, Trump Derangement Syndrome is still raging – in the BBC, the Grauniad and most of, haha, Polite Society here in the UK.

ResourceGuy
February 21, 2021 12:45 pm

Stalin would have been so proud.

Nick Stokes
February 21, 2021 1:03 pm

“The usual method is adopted: depress the earlier temperatures (we know so much better what the temperature was a century and a half ago than the incompetents who actually took the measurements), and elevate the later temperatures with the effect of steepening the trend and increasing the apparent warming.”

As usual, no evidence cited. HADCRUT does not generally adjust station readings, but accepts them as reported by the Met offices. The actual reason for the change is the long-overdue use of a correct averaging methodology. Until HadCRUT5, they made no attempt to properly estimate the temperature of missing grid cells. They just omitted them from the average, which has the effect of assigning to them the average of the cells that have data. As Cowtan and Way pointed out back in 2013, this leads to a major bias. The gaps are more frequent in the Arctic, and this is a region that has been warming rapidly. Assigning average behaviour to those missing cells artificially dilutes that warming. Taking account of regional behaviour is the right thing to do, and it took them another 7 years to do it.
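
A toy numerical illustration of the averaging point being made here: if fast-warming high-latitude cells are simply omitted, they are implicitly assigned the average of the cells that do report, which dilutes the global trend relative to an average that estimates them from neighbouring cells. The cell counts and per-cell trends below are invented for illustration, not taken from HadCRUT.

```python
# Toy illustration: omitting fast-warming Arctic cells dilutes the global trend,
# because the omitted cells are implicitly assigned the mean of the reporting cells.
# Cell counts and per-cell trends are invented for illustration only.
import numpy as np

N_CELLS = 100
arctic = np.arange(N_CELLS) < 10              # 10% of cells are fast-warming Arctic cells
true_trend = np.where(arctic, 0.40, 0.15)     # invented per-cell trends, C/decade

observed = ~arctic | (np.arange(N_CELLS) < 3)   # only 3 of the 10 Arctic cells report

# Old-style average: missing cells are dropped, i.e. implicitly given the global mean.
old_style = true_trend[observed].mean()

# Infilled average: missing Arctic cells estimated from the Arctic cells that do report.
infilled = true_trend.copy()
infilled[~observed] = true_trend[arctic & observed].mean()
new_style = infilled.mean()

print(f"Reporting cells only:    {old_style:.3f} C/decade")
print(f"With regional infilling: {new_style:.3f} C/decade "
      f"(true area mean {true_trend.mean():.3f})")
```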

Derg
Reply to  Nick Stokes
February 21, 2021 1:15 pm

Settled science indeed 😉

Clyde Spencer
Reply to  Nick Stokes
February 21, 2021 1:38 pm

Stokes
And infilling with some procedure, using nearby stations, replaces missing data – which shouldn’t be used for calculating averages – with extra weight given to the good stations. Weighted areal interpolations were developed for a parameter that, while varying with distance, does not vary with time. Done properly, the uncertainty should increase when infilling with temperatures that vary with time as a result of moving air masses and abrupt boundaries such as at a cold front.

Reply to  Clyde Spencer
February 21, 2021 3:57 pm

You took the words right out of my mouth. Using temperatures north of a river valley to infill temperatures south of the same valley can be off by 1C or even more. Regularly! Meaning the average is biased high!

The answer is more measuring stations, not more infilling.

Weekly_rise
Reply to  Tim Gorman
February 22, 2021 1:53 pm

You can’t go back in time and install more measuring stations, so that is not a realistic solution.

Also, the HadCRUT temperature analysis is done using anomaly fields, not absolute temperatures, so the influence of siting is not significant and covariance extends across much larger regions.

Reply to  Weekly_rise
February 22, 2021 2:51 pm
  1. I told you the scientific way of handling this. You use the data you have. You calculate an uncertainty estimate for the analysis and show how you estimated it. Then that analysis can be compared to an analysis of newer, more populated data sets with their own uncertainty analysis. YOU DON’T MAKE UP DATA!
  2. As you have been told over and over, anomalies destroy any possible weighting. A 0.1C anomaly in Miami is far different than a 0.1C anomaly in Point Barrow.
  3. As you have been told repeatedly, temperatures on one side of a geographical feature can be quite different than temperatures on the other side. E.g. north and south of the Kansas River valley. Thus infilling data from a station on one side into a grid on the other side distorts the data set irrevocably.
  4. Exactly what is the covariance between two single, independent temperature measurements from two different sites? My guess is that you don’t have a clue!
Weekly_rise
Reply to  Tim Gorman
February 22, 2021 3:42 pm

1. Interpolation can be a perfectly valid component of that analysis. There are no newer datasets containing more stations than exist – none of the groups compiling these records can time travel, so far as I’m aware. All are working with the same station networks.
2. You’ve said that to me before, but it didn’t make sense then and doesn’t make sense now.
3. That’s why the anomalies are used. The absolute temperature will be quite different but the anomaly is not likely to be.
4. The covariance is the climate.

Carlo, Monte
Reply to  Weekly_rise
February 22, 2021 4:14 pm

Contrary to the common claims, subtracting a baseline value does NOT reduce uncertainty.

Weekly_rise
Reply to  Carlo, Monte
February 22, 2021 4:53 pm

I believe the benefit of using anomalies is that the anomaly represents a much broader geographic area than does the absolute temperature and it allows you to work with a station network whose composition changes through time, not so much that it reduces uncertainty.

Carlo, Monte
Reply to  Weekly_rise
February 22, 2021 5:02 pm

Averaging Pt. Barrow and Kuala Lumpur then subtracting Cleveland does not give you the “climate”.

Regardless, if you don‘t have a handle on the true measurement uncertainty, you are just fooling yourself.

Reply to  Weekly_rise
February 23, 2021 9:26 am

How in Pete’s name does using anomalies represent a much broader geographical area than using absolute temps? Once again you make a wild claim and have no math to back it up. It’s always just religious dogma with you!

If you take 30 stations in a state, take their measurements at 3PM in the afternoon and get an average, and then delete ten of the stations and take an average, will the resulting average change? If so, why? If you add ten more stations and take an average, will their average change? If so, why?

Why would using anomalies from a baseline being subtracted from the temperatures change any more or less than the change in the absolute average?

What happens with the uncertainty in each situation?

Weekly_rise
Reply to  Tim Gorman
February 23, 2021 1:23 pm

The anomaly represents a broader geographic area than the absolute temperature because the anomaly removes the influence of the specific point location where the measurement was taken. A hot summer in a mountainous region will likely be hotter both at high altitudes and in the mountain valleys, but the two places will have very different temperatures.

Also importantly, the anomaly normalizes the temperature value, allowing records to be combined from a network whose composition is changing through time. It would be a big problem averaging together a record from a mountain and a record from a low valley if the mountain record is half the length of the valley record, unless we use the anomaly. It would introduce a spurious trend.

We can make a really simple example to illustrate this. I created two fake temperature series for a station on a mountainside (StationA) and a station in a valley (StationB). Each station has the same long term trend of 0.1 degrees per year baked in, and each has some random noise thrown on top. They differ by 7 degrees in mean annual temperature, and by the fact that StationA came online in 2005, while B came online in 1990:

[image]

If we simply average the absolute values together, we will get a spurious cooling trend introduced into the series because the composition of the records differs:

[image]

Whereas if we average the anomalies, the trend reflects the underlying common signal between both records:

[image]
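
For readers who want to reproduce the experiment described above without the spreadsheet, here is a minimal sketch in Python. The 0.1 °C/yr trend, the 7 °C offset, the 1990/2005 start dates and the 2005-2020 baseline follow the comment; the noise level and random seed are my own assumptions, so the exact numbers will differ from the graphs.

```python
# Two synthetic stations with the same warming trend but different mean levels and
# different record lengths. Averaging absolute temperatures lets the change in
# network composition leak into the trend; averaging anomalies against a common
# 2005-2020 baseline does not. Noise level and seed are assumptions.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1990, 2021)

def station(start_year, mean_level):
    """Fake record: 0.1 C/yr trend plus noise; NaN before the station comes online."""
    temps = mean_level + 0.1 * (years - 1990) + rng.normal(0.0, 0.3, years.size)
    return np.where(years >= start_year, temps, np.nan)

station_a = station(2005, 5.0)    # cooler mountain site, short record
station_b = station(1990, 12.0)   # warmer valley site, long record

def trend(values):
    """OLS slope in C/yr, ignoring missing years."""
    ok = ~np.isnan(values)
    return np.polyfit(years[ok], values[ok], 1)[0]

baseline = (years >= 2005) & (years <= 2020)             # common reference period
anomaly_a = station_a - np.nanmean(station_a[baseline])
anomaly_b = station_b - np.nanmean(station_b[baseline])

absolute_mean = np.nanmean(np.vstack([station_a, station_b]), axis=0)
anomaly_mean = np.nanmean(np.vstack([anomaly_a, anomaly_b]), axis=0)

print(f"Trend of averaged absolutes: {trend(absolute_mean):+.3f} C/yr (spurious)")
print(f"Trend of averaged anomalies: {trend(anomaly_mean):+.3f} C/yr (close to +0.1)")
```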

Weekly_rise
Reply to  Weekly_rise
February 23, 2021 2:52 pm

I realized too late after posting that the Average Anomaly graph above is only showing the anomaly for Station A. Here is a corrected series of graphs with the correct Average Anomaly (the numbers are generated randomly each iteration, so they’re similar but not identical).

Reply to  Weekly_rise
February 23, 2021 3:26 pm

Do you *really* understand what you did here?

You imposed a rising trend on top of a series with no trend! And you are trying to claim that this is a proper scientific process?

This is what arises when you try to superimpose uncorrelated data on top of each other.

And why *shouldn’t* you get an overall cooling trend if one data series is much less than the other? What you are trying to say is that averaging temperatures in Minnesota with temperatures in Kansas shouldn’t show cooling if Minnesota is cooling and Kansas isn’t!

You’ve just proven the fallacy of trying to come up with a global temperature!

BTW, just exactly how did you calculate the anomalies? Did you use a common base? Or did you just calculate the differences between the values in each set? If you used a common base then why? Is the base in each location the same? If you just subtracted the values then you didn’t actually create an anomaly, just a difference!

Show me the data sets. Visually the blue data doesn’t seem to actually show an increasing trend, the trend looks to be zero!

If these *are* uncorrelated data sets then tell me exactly what you think you have shown!

And if these are uncorrelated data sets then why wouldn’t taking a data set with larger numbers and subtracting a data set with smaller numbers result in an increasing trend? Jeesh, that’s just basic arithmetic, it’s not even statistics!

I *really* don’t know what you think you have shown here. My belief that you know any statistics at all gets less and less with every message you post.

Weekly_rise
Reply to  Tim Gorman
February 23, 2021 5:46 pm

Tim, I appreciate that these might be new concepts for you. I’ll try to explain them more carefully.

The anomaly is computed by taking the differences between a series and a reference baseline mean. For both series, the same baseline period from 2005-2020 was used to compute the anomaly. For a series, take the mean of the series from 2005-2020. The anomaly for a given year is then the difference between the temperature value and this mean.

You should not get a cooling trend in the mean when the underlying signal for both series is a warming trend – the trend for the absolute values is spurious – we know that it does not represent reality because these are artificial series that we’ve baked a warming trend into. The cooling trend is occurring because the series with a cooler mean temperature doesn’t start until 2005. Using the anomaly prevents this issue because both series are normalized.

Also note that in the updated graphics I’ve plotted the trend lines, so you can see that both series have a positive trend, and we’d expect their average to have a positive trend as well.

Reply to  Weekly_rise
February 24, 2021 8:09 am
  1. So where is your baseline that you used to calculate the anomalies? I don’t see it anywhere. Obviously you made it up as well as the data series. Why are you unwilling to show it?
  2. Using a *global* baseline to calculate anomalies still propagates the uncertainty of the global baseline into the anomalies. Why do you show no uncertainty intervals with your data series and the baseline? Talk about being a new concept. So what is the uncertainty associated with each data point? I can’t even make out the vertical scale on your anomaly graph. It appears to have markings 1, 1, 3, and 5 on the positive side and 1, 5, and 7 on the negative side. Your anomaly for 1992/1993 is about a -7, meaning your baseline had to be about a positive 34. With a baseline of 34, your anomaly for 2015 should have been negative: (34-34) = 0 and (28-34) = -6. The average of the two is -3 but you are showing about a +1; I can’t really tell because of the crazy vertical index on your graph. I simply can’t tell how you came up with your average anomaly graph. You might want to check your math again!
  3. Again, why wouldn’t you get a cooling trend when averaging a hot data series with a cold data series? You are trying to find an AVERAGE CLIMATE. If you add in a cold data series then why wouldn’t that change the overall trend? There is so much wrong with your analysis. First, these are not stationary data series. Therefore doing a linear regression can be confusing, especially after combining two independent non-correlated data sets. You should try calculating the first differences and see what that shows. That tends to move a non-stationary series into a pseudo-stationary series (see the sketch after this list). That’s also a problem with *all* of the AGW advocates.
  4. Anomalies do *not* normalize anything. Anomalies only hide data. There is simply no doubt that the climate for the blue data series is *much* colder than for the yellow series. How do the anomalies tell you this?
  5. I simply do not know why you keep claiming a data set with a positive trend when added to another data set with a positive trend should give you a positive trend. You are combining data sets that are non-congruent! If you take the 2nd half of a long data set and subtract a significant value from all the data then it will *always* give you a negative trend overall in the whole data set. If you look at your average values from 2005 on there *is* a positive trend. You’ve just highlighted part of the problem with current temperature data sets. Most of them are non-congruent: the data sets making all this up start and stop at different intervals, different measurement techniques are used at different times, uncertainty is not applied anywhere in any data set (all temps are assumed to be 100% accurate), and then the data is bastardized by trying to use infilling and homogenization to create what someone’s opinion says it should all show. That’s about as unscientific as it can possibly get. Each temperature set should be analyzed separately, including its uncertainty, and then *compared* to other data sets. If the compared trends match then you can use the matching trends for current analysis of temperatures. If they don’t match then you have a problem. You can’t mask that problem with hand-waving magic.
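
A minimal sketch of the first-differencing idea mentioned in item 3 above: differencing a trending (non-stationary) series yields a roughly constant-mean series whose average recovers the per-step trend. The data here are synthetic and purely illustrative.

```python
# First differences of a synthetic trending series: the mean of the differences
# recovers the per-step trend, shown alongside the plain OLS slope for comparison.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(100)
series = 0.1 * t + rng.normal(0.0, 0.5, t.size)   # 0.1 per step trend plus noise

diffs = np.diff(series)                            # first differences
print(f"Mean of first differences: {diffs.mean():.3f}   (true per-step trend 0.100)")
print(f"OLS slope on raw series:   {np.polyfit(t, series, 1)[0]:.3f}")
```
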
Weekly_rise
Reply to  Tim Gorman
February 24, 2021 10:30 am
  1. Here is a version with the baselines plotted, along with the calculated slope of the trends. I’m not sure how I can safely and anonymously share the excel workbook with you, but happy to do that if you point me to a way to do it. In the meantime, here is a screenshot of the workbook, showing how the baseline is calculated for Station A.
  2. The baseline is not global, it is computed for each series using a regional climatology (in this case, the baseline is simply the mean of each station series over the 2006-2020 reference period).
  3. We aren’t trying to find average climate (which is not a meaningful quantity), we are trying to find the average change in the climate between both stations. The average trend is the thing we want to get, and the trend is positive for both stations. Notice in the image above, the average anomaly trend is close to the value of the two trends, but the trend for the average of the absolute temperatures is not even close. It is an artifact of the averaging, and doesn’t reflect the change that is occurring at both sites through time.
  4. The fact that one station is in a colder place is not relevant information for us. We want to know how the temperatures are changing through time (we want to know “Is station A getting warmer?”, not “Is station A in a warm place?”).
Reply to  Weekly_rise
February 25, 2021 8:21 am
  1. I know how to subtract.
  2. How did you use a baseline of 2006-2020 for dates earlier than 2006? This is the problem of using non-time consistent data.
  3. You can’t find the “average” change using a mid-range value. All you find is the average mid-range value or the average mid-range anomaly. Nor can you find anything for a point in-between stations. You have no data for the points in-between therefore you miss terrain, altitude, and humidity differences for the mid-points. Trying to interpolate between stations at altitude 1000ft and 5000ft is bound to be wrong. You have no idea of the altitude or pressure slope. This is why even regional averages are so uncertain!
  4. So the fact that Death Valley and Kansas City have vastly different altitudes, pressures, and humidity means nothing? The temperature anomalies will tell you everything you need to know about what is changing? ROFL! The average global temperature, even if calculated from anomalies, has HUGE uncertainties, uncertainty intervals wider than the values of the anomalies you are calculating. Remember, the uncertainty interval travels with the anomaly. If its uncertainty is +/- 0.5C then how do you distinguish a difference in the anomaly of less than +/- 0.5C? Especially if you are using a baseline that is a yearly average?
Weekly_rise
Reply to  Tim Gorman
February 25, 2021 12:18 pm
  1. The baseline is an average, not a difference. Recommend you click through the links provided above. The discussion will not be productive if you don’t even look at the materials you demanded and that I’ve provided for you.
  2. The baseline is a reference value. Given that the trend of this series is upward, we’d expect that values prior to the start of the base period would be negative, values after will tend to be positive.
  3. Those reasons are why we use the anomaly rather than the absolute temperature. You’re halfway to having a breakthrough in understanding, just need to push a bit more.
  4. It means nothing to the anomaly, which only measures how different temperature for a given year is in a given location relative to the mean temperature of that location during base period. The uncertainty in the mean is reduced by averaging (by the square root of the number of stations), which means that the uncertainty in the global mean is significantly smaller than the mean uncertainty (standard deviation) of the individual stations.
Reply to  Weekly_rise
February 25, 2021 5:31 pm
  1. If the baseline is an average then what is its uncertainty? If it is calculated from temperatures with uncertainty then the baseline has uncertainty also. And guess what the total uncertainty of a-b is? sqrt(u_a² + u_b²). The uncertainty goes UP when you form an anomaly by subtraction, just like it does by addition.
  2. I have no idea what you are talking about. When you take a data set that is trending upward over time and then you basically subtract values from the last half of the data set you are quite likely to turn an upward trend into a downward trend.
  3. See 1. The uncertainty of your anomaly GROWS when you do the subtraction. So you wind up with a result that is more uncertain than what you started with! So why wouldn’t you just use the less uncertain absolute temperature? It only makes sense if you want to ignore uncertainty, which is what most climate scientists seem wont to do.
  4. Again, your anomaly has a wider uncertainty interval than the absolute temperatures. Why would any competent scientist want to move from a more certain data set to a less certain data set?

There is no mean for independent measurements other than the value itself. Each measurement represents a population of size 1.

What is the mean of (70)? What is the mean of (20)? Their covariance is zero so they *are* independent (you *do* know how to calculate covariance, right?).

You cannot simply take independent data points that are not correlated (cov = 0) and put them in a combined data set and call them a random variable with a mean. Why is that so hard to understand?

Weekly_rise
Reply to  Tim Gorman
February 26, 2021 6:53 am
  1. Of course, if there is uncertainty in the measurements, there is uncertainty in the baseline, and uncertainty in the difference between measured temperature and baseline. The uncertainty in the difference between the measured temperature and the baseline will be larger than the uncertainty of the measured temperature alone.
  2. Taking the anomaly has no effect whatsoever on the trend. You can prove this to yourself quite easily. This is an important feature of the anomaly, since the trend is the thing we are actually interested in.
  3. You don’t use the absolute temperature because the composition of the station network changes through time, and averaging absolute temperatures will introduce spurious trends into the data that do not reflect underlying climate signal, but are artifacts of the averaging. You can use absolute temperatures if you do things like infilling, which NOAA does (they do attempt to present an absolute average), but you simply avoid all the potential pitfalls of those approaches by using the anomaly. The error in the global average is quite small because you are averaging together hundreds of stations, and the uncertainty varies inversely with the number of stations you average. GISTemp provides their uncertainty estimate, and you can see that it’s not substantial.
  4. Because the uncertainty in the global mean is quite small, and it’s far more important to do things like preventing changes in the network composition from introducing spurious trends than it is avoiding slightly increasing the measurement uncertainty.
Reply to  Weekly_rise
February 26, 2021 9:19 am
  1. true
  2. First you are trying to trend a time series that is non-stationary. Linear regression on a non-stationary series is typically not done.
  3. If the network changes then START OVER at each change. Don’t try to create data through infilling, homogenization, etc. You don’t actually avoid *anything* by using an anomaly when the anomaly has higher uncertainty than the absolute temps themselves.
  4. The global mean is meaningless. Its uncertainty is so large that it is impossible to actually discern anything. The global average is calculated using independent, uncorrelated data whose uncertainty is impossible to decrease. And how do you calculate the mean of an average?

You are *still* throwing crap against the wall hoping something will stick. It’s only smelling up the thread!

Feynman: “The first principle is that you must not fool yourself and you are the easiest person to fool.”

That describes you perfectly. You continue to fool yourself!

Reply to  Tim Gorman
February 26, 2021 10:34 am

First you are trying to trend a time series that is non-stationery. Linear regression on a non-stationery series is typically not done.

There’s no point in fitting a linear regression to a stationary time series, as by definition it has no trend. Temperature time series are trend-stationary, that is, they become stationary when you remove the trend.

Weekly_rise
Reply to  Tim Gorman
February 26, 2021 11:16 am

“First you are trying to trend a time series that is non-stationery. Linear regression on a non-stationery series is typically not done.”

The presence of a trend is what makes the series non-stationary; estimating trends for such series is done all the time.

“If the network changes then START OVER at each change. Don’t try to create data through infilling, homogenization, etc. You don’t actually avoid *anything* by using an anomaly when the anomaly has higher uncertainty than the absolute temps themselves.”

Start over from what? Stations come online and go offline throughout the network’s history, existing stations are moved to new locations, or stations simply stop reporting for a period. All of these occurrences can introduce spurious trends in the series if the anomaly is not used.

“The global mean is meaningless. It’s uncertainty is so large that it is impossible to actually discern anything. The global average is calculated using independent, uncorrelated data whose uncertainty is impossible to decrease. And how do you calculate the mean of an average?”

I just provided the uncertainty estimate for the GISTEMP global mean. It is not large, and it is perfectly possible to discern trends in the data given the uncertainty.

Reply to  Weekly_rise
February 27, 2021 9:10 am

The presence of a trend is what makes the series non-stationary; estimating trends for such series is done all the time.

A trend in time based on point values whose underlying data has different variances is, again, meaningless. You *have* to do something like first differences to identify exactly what is going on. And if estimating trends for time series is done all the time using linear regressions then it is being done WRONG all the time.

Think about it. Tmax occurs at different times because of its correlation with the sun’s travel across the sky. That’s why we have time zones, for Pete’s sake. Now, let’s assume we are trying to find the average temperature in a geographical area. That has to be done at a single, consistent point in time. If you just use Tmax without any consideration of when it happens then you will totally overestimate the average temperature of the area, because the heating represented by the temperature is a curve.

It’s the same thing over the globe. If you want to know the temperature of the globe at any one time then you have to do the measuring all at that same time. You simply can’t find the *average* temperature of the globe by adding up the maximum temperature for all points on the globe. Not every point is at maximum at the same time.

It’s the same thing with infilling temperatures. If you take Tmax at one point and infill into an area 100km away then you are ignoring that the point 100km away reaches Tmax at a different time. When it is Tmax in Kansas City it is less than Tmax in Denver. So the average at the point in time KC hits Tmax is *less* than it would be if you just used Tmax x 2.

But I can’t see where anyone accounts for this time difference. They just willy nilly infill temperatures with no regard for the time difference (i.e. longitude difference), latitude difference, or terrain difference (e.g. altitude, pressure, humidity, etc).

This makes the “average global temperature” an actual farce.

“Start over from what?”

From wherever you have to! If you must provide 15 different series, each with its own uncertainty, then so be it. At least it would comply with the tenets of physical science!

“I just provided the uncertainty estimate for the GISTEMP global mean. It is not large, and it is perfectly possible to discern trends in the data given the uncertainty.”

And the GISTEMP analysis ignores everything I’ve talked about above. It certainly ignores the fact of time differences. The usual excuse, “it uses consistent Time-of-Day”, is just a canard. It ignores the time differences in each series of temperatures and serves no real purpose in trying to describe what is going on with the thermodynamic system known as the Earth.

Weekly_rise
Reply to  Tim Gorman
March 1, 2021 6:49 am

“A trend in time based on point values whose underlying data has different variances is, again, meaningless. You *have* to do something like first differences to identify exactly what is going on. And if estimating trends for time series is done all the time using linear regressions then it being done WRONG all the time.”

Your lack of familiarity with these subjects coupled with your inordinate hubris is striking. Please read the linked Wikipedia article and let me know if you have any additional questions.

Tmax happens at the warmest point of a given day and Tmin happens at the coolest point of a given day; it doesn’t matter what time zone you are in, Tmax is a consistent variable to use. It doesn’t matter the specific hour of the day, it just matters what the hottest point in a given 24-hour period was and what the coolest point in the same 24-hour period was.

Reply to  Weekly_rise
March 3, 2021 6:34 pm

Stop with the ad hominems. You don’t prove anything with them. Address my assertions instead of evading or equivocating.

“it doesn’t matter what time zone you are in”

Of course it matters. If you want to know the condition of the earth AT ANY POINT IN TIME, which is the only *valid* way of calculating the temperature of the earth then it certainly matters what time zone you are in.

Since half the earth is in sunlight while half is in dark then the actual temperature of the earth, AT ANY POINT IN TIME, is not a mid-point average calculated by anomalies. It is a gradient in three dimensions, latitude, longitude, and time, integrated over the entire surface of the earth.

The GAT is a piss poor, meaningless value.

Weekly_rise
Reply to  Tim Gorman
March 4, 2021 12:11 pm

The thing that we want isn’t the instantaneous global temperature at every point in time, but the average daily high and low for the whole globe, which requires a 24 hour interval. Is this clearer for you?

If we had 1000 temperature stations and we were averaging together second-by-second readings, we would need to process 86,400,000 observations a day, 31,536,000,000 a year, year after year and decade after decade, all to estimate something we can just as easily get with 2000 observations per day.

Reply to  Weekly_rise
March 4, 2021 3:10 pm

You way overstate the heat content of the earth if you don’t consider both the contribution of the maximum temps *and* the minimum temps. When you only use the mid-range value it over-estimates total heat content. Averages hide reality. It’s always been so.

Reply to  Carlo, Monte
February 23, 2021 9:20 am

The uncertainties add in root-sum-square fashion regardless of whether you are adding two values or subtracting them.
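
A quick Monte Carlo sketch of the two propagation rules being argued over in this sub-thread: the uncertainty of a difference adds in quadrature, and the standard error of a mean of n readings falls as 1/sqrt(n) when the errors are independent and random (which is precisely the condition in dispute here). Synthetic numbers only.

```python
# Monte Carlo check of two standard propagation rules, for independent random
# errors only: std(a - b) = sqrt(u_a^2 + u_b^2), and std of an n-station mean
# = u / sqrt(n). Whether station errors behave this way is the point in dispute.
import numpy as np

rng = np.random.default_rng(1)
N_TRIALS = 200_000
u_a = u_b = 0.5               # assumed standard uncertainty of a single reading (C)

a_err = rng.normal(0.0, u_a, N_TRIALS)
b_err = rng.normal(0.0, u_b, N_TRIALS)
print(f"std of (a - b) errors: {np.std(a_err - b_err):.3f}  "
      f"vs sqrt(u_a^2 + u_b^2) = {np.hypot(u_a, u_b):.3f}")

n_stations = 100              # independent readings averaged together
station_err = rng.normal(0.0, u_a, (N_TRIALS, n_stations))
print(f"std of the {n_stations}-station mean: {station_err.mean(axis=1).std():.3f}  "
      f"vs u/sqrt(n) = {u_a / np.sqrt(n_stations):.3f}")
```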

Reply to  Weekly_rise
February 23, 2021 9:17 am
  1. Interpolation is *NOT* infilling data, i.e. creating data to suit one’s purpose. When you “infill” at one place with data from somewhere else that is not “interpolation”, it is guessing!
  2. Once again, a 0.1C anomaly at 10C has a relative change of 0.1/10 = .01 = 1%. A 0.1C anomaly at 20C has a relative change of 0.1/20 = .005 = 0.5%. The anomaly at 10C has twice the impact of the one at 20C. If this doesn’t make sense to you then you are being willfully ignorant about it. If you would use the percentage change instead of directly using the anomaly itself your calculated impact on the climate would be far more appropriate. Why don’t the climate modelers and scientists do this? Because they are not true physical scientists.
  3. Anomalies don’t tell you anything about the climate. Temperatures do. Why do you keep trying to deny this? If it is colder on one side of the valley than the other then how do the anomalies tell you that? Especially if you infill the colder side of the valley with a temperature from the warmer side of the valley (or vice versa)?
  4. “The covariance is the climate.” What in Pete’s name does this mean? Covariance in temperatures is driven by the sun (i.e. daily change) and the tilt of the earth (i.e. seasonal change). Factors that destroy covariance are things like latitude, geography (i.e. coastal vs inland), and terrain (e.g. river valleys).

You claimed to be some kind of mathematician and statistician but you keep making bogus statements that belie that claim! You can’t even begin to explain the covariance between two temperature measurements from two different stations!

Pat was right. It’s a waste of time discussing this with you. You are obviously ignorant of physical science tenets as well as statistics, regardless of what education you claim.

Reply to  Tim Gorman
February 23, 2021 10:37 am

Once again, a 0.1C anomaly at 10C has a relative change of 0.1/10 = .01 = 1%. A 0.1C anomaly at 20C has a relative change of 0.1/20 = .005 = 0.5%.

What happens if you measure it in Fahrenheit or Kelvin?

The same change shouldn’t have a different effect, just because you’ve changed the zero point.

Anomalies don’t tell you anything about the climate.

They don’t. What they tell you about is climate change.

If it is colder on one side of the valley than the other then how do the anomalies tell you that?

They don’t, but they will tell you if it’s warming or cooling on both sides of the valley. It’s more reasonable to assume that both sides of the valley will change by similar amounts, rather than them having the same temperature.

Reply to  Bellman
February 23, 2021 12:13 pm

Anomalies may be useful in comparing two differing stations but they become useless when trying to determine a “global temperature” through averaging. In that case the percentage of the absolute temperature IS very important. What you are doing with averaging is combining values of different distributions with different variances. When you do this, variance ALWAYS grows. In other words, the mean is less and less important as a descriptor of the appropriate value.

Reply to  Jim Gorman
February 23, 2021 1:25 pm

Which is why you don’t care about a global temperature. What you care about is a global temperature change.

You didn’t answer my question about what happens if you change the zero point of a temperature with regard to percentages. The only logical physical zero point is absolute zero, and the percentage difference between one degree added to 10°C and one added to 20°C isn’t going to be much different when translated to K.

Reply to  Bellman
February 23, 2021 2:28 pm

But they *are* different! The percentage change will be .35% vs .34% when using Kelvin.

You can dismiss those differences but they *are* important, even when working with Kelvin. You must *still* weight each contribution based on its contribution. You can’t simply say “Well they are close so I’ll just treat them as equal.”

Reply to  Tim Gorman
February 23, 2021 3:10 pm

Well yes, they’re one ten thousandth different. We’ll have to disagree about how important that difference is. If you want, do your own global estimates using percentage change from K for each station. I doubt it would make any serious difference.

Reply to  Bellman
February 23, 2021 3:38 pm

If you don’t think they are important then why do you think that a 1.5C increase over a century is important? Percentage-wise they aren’t that different.

Using your logic we could just change the scale on your auto speedometer to where the change from 60mph to 70mph would be one needle width on the gauge. Not an important change, right? 70mph to 80mph would just be one more needle width. Not important, right?

Until the HP stop you for speeding!

Reply to  Bellman
February 23, 2021 12:43 pm

No! It doesn’t matter. 0.2/A is going to be a higher percentage than 0.2/(2A).

Jeesh, how many people on there can’t do basic algebra?

They don’t. What they tell you about is climate change.”

NO! They do not! 0.2/A is a vastly different *temperature* change than 0.2/(2A). If you do not weight the change against what is changing then you aren’t learning *anything*. If point A changes from 20 to 21 and point B changes from 70 to 71 the impact of the temperature change is vastly different no matter what scale you use! It’s simple, basic algebra!

And this doesn’t even cover the whole thing! A temperature anomaly tells you absolutely nothing about pressure or specific humidity – both of which are integral to understanding actual heat content. Do you understand that 100F in Lincoln, NE with 50% specific humidity *feels* much different than 100F in Death Valley with 10% humidity or 100F in Miami with 90% humidity? It’s because of HEAT CONTENT. And heat content tells you far more about climate than temperature by itself.

“They don’t, but they will tell you if it’s warming or cooling on both sides of the valley. “

How can it? Especially if you have infilled the temperature, and therefore the anomaly, from one side into the other instead of actually measuring it? Climate is the *entire* temperature profile, not an anomaly based on a mid-range temperature. You need to integrate the *entire* temperature profile to get a handle on climate and climate change. That’s why degree-day values are *far* better measurements of climate, both in the short-term and the long term.

Reply to  Tim Gorman
February 23, 2021 3:24 pm

And this doesn’t even cover the whole thing!

Has anyone suggested global temperature change tells you everything about climate change? Of course the effects of global warming, good and bad, will be very different throughout the world.

How can it? Especially if you have infilled the temperature, and therefore the anomaly, from one side into the other instead of actually measuring it?

Well obviously if you’ve only got one thermometer in the valley it’s impossible to know for certain if both sides have warmed – but it’s not a bad assumption. It’s certainly a better assumption than both sides are the same temperature.

Monckton of Brenchley
Reply to  Nick Stokes
February 21, 2021 3:30 pm

Mr Stokes as usual pretends that “no evidence is cited”. He has only to look at the two graphs shown in the head posting to realize that the temperatures in the earlier part of the record have been reduced and the temperatures in the later part of the record have been increased, precisely as I said they had. There is the evidence, right in front of his nose, plain as a pikestaff. The data are readily available from the CRUTEM website, and Mr Stokes could – just for once – have provided some contrary evidence himself, if he had any. But he does not.

Reply to  Monckton of Brenchley
February 21, 2021 4:09 pm

The evidence is in plain statements from the CRUTEM V paper
https://ueaeprints.uea.ac.uk/id/eprint/77985/1/osborn_CRUTEM5_revision1_2019JD032352R_Merged_PDF_3.pdf

First:
“For CRUTEM we do not apply global, statistical algorithms to identify and correct for inhomogeneities: instead we utilise homogenization efforts undertaken by national or regional initiatives, which may benefit from the knowledge of local circumstances or additional observing stations.”

IOW CRUTEM does not adjust the data themselves, and the MET stations did not change their practices for this new version. Instead they explain the real reason for the change:

“Nonetheless, the global-mean timeseries of land air temperature is only slightly modified compared with previous versions and previous conclusions are not altered. The standard gridding algorithm and comprehensive error model are the same as for the previous version, but we have explored an alternative gridding algorithm that removes the under-representation of high latitude stations. The alternative gridding increases estimated global-mean land warming by about 0.1°C over the course of the whole record. The warming from 1861–1900 to the mean of the last 5 years is estimated as 1.6°C using the standard gridding (with a 95% confidence interval on individual annual means of -0.11 to +0.10°C in recent years), while the alternative gridding gives a change of 1.7°C.”

As they say, if you use the old gridding practice, there is little change in trend. It is only if you properly treat missing cells that the higher (correct) trend emerges. 

 

Carlo, Monte
Reply to  Nick Stokes
February 21, 2021 6:13 pm

These bozos can’t calculate the “climate” in the past; what makes you think they can calculate the future?

fred250
Reply to  Nick Stokes
February 21, 2021 9:53 pm

Thanks for CONFIRMING that they ADJUST past data..

…. and smear URBAN WARMING all over the planet over huge areas where it doesn’t belong.

It really becomes a MANIC CASE of GIGO, with the aid of all that data manipulation, doesn’t it.

Monckton of Brenchley
Reply to  Nick Stokes
February 21, 2021 10:32 pm

Mr Stokes, having been caught out in a gross error, wriggles futilely in his usual fashion. Have the HadCRUT4 data been changed to make HadCRUT5? Yes, they have. Have reasons been given for the alterations? Yes, they have, as the head posting fairly points out. Do these reasons constitute an attempt by the usual suspects to assert that they know better today what the global mean surface temperature was a century ago than those who took the measurements? Yes, they do. Have the changes depressed the temperatures in the earlier part of the record and elevated those in the later part, as the head posting states? Yes, they have.

Mr Stokes has a touching faith in the rightness of data that uphold the collapsing notion of rapid, dangerous global warming in which he has placed his faith. He is entitled to his religion, but in this column we do science. The UAH data, for the period since December 1987 when the two datasets overlap, produce far less warming than the HadCRUT data. You pays your taxpayers’ money and you takes your choice.

And, as the head posting says, anyone who places his touching religious faith in Adjustocene tamperings with data has only to read the HarryReadMe file, written by one of the keepers of the HadCRUT dataset, to realize just how much of a fiction that dataset is.

fred250
Reply to  Monckton of Brenchley
February 21, 2021 11:45 pm

LCMofB, I have coined a new term for the mental illness that has infected much of climate science.

I refer to it as ACDS

(Anti-CO2 Derangement Syndrome)

or, since “they” don’t know the difference….

(Anti-Carbon Derangement Syndrome)

Reply to  Monckton of Brenchley
February 22, 2021 2:05 am

“Do these reasons constitute an attempt by the usual suspects to assert that they know better today what the global mean surface temperature was a century ago than those who took the measurements? Yes, they do.”
No, they don’t. Again, as the quote very clearly states, CRUTEM does not adjust the past readings. They rely on the organisations that actually took the readings to make any adjustments needed.

But as they also say, the same calculation performed with HAD5 data in the old style gives basically the same trend. The change to data, which is mainly the addition of new stations, has not changed the trend. They did not depress the past and raise the present. The numbers say so.

As they state, it is the change of averaging method which raised the change since 1850 by 0.1°C. 

“Mr Stokes has a touching faith in the rightness of data”
Well, I do believe in getting data right. But I also have faith in correct mathematical methods, and that is what is at stake here. HADCRUT got their averaging method right, and that made the difference.

fred250
Reply to  Nick Stokes
February 22, 2021 10:52 am

“They rely on the organisations that actually took the readings to make any adjustments needed.”

.

needed to meet the AGW “standard”, you mean, hey Nick !!
.

“HADCRUT got their averaging method right,”

.

Only in your ACDS-infected mind.

No evidence it is any more correct than random chance.

You can’t “average” temperatures that are sparse, urban-affected, and highly tainted with all sorts of other issues, and pretend to get something that is anything more than a massive load of GIGO !!

Monckton of Brenchley
Reply to  Nick Stokes
February 22, 2021 12:37 pm

Read Harry-Read-Me and then proclaim once more your touching faith in the rightness of the HadCRUT data.

Clark Johns
Reply to  Monckton of Brenchley
February 22, 2021 3:06 pm

Do your homework. Ian Harris (‘Harry’) never worked on HADCRUT, his working notes referred to the legacy CRU TS 2.1 product and have precisely zero implications for the quality of the dataset being discussed here. Not that somebody’s privately expressed opinions should mean much; I worked for 30 years in software QA and not once did it occur to us to examine the developers’ personal workbooks as a meaningful measure of anything.

Nick is right, you have nothing of substance here. 

Reply to  Nick Stokes
February 24, 2021 5:43 pm

Except there is no reliable test of those averaging methods. That is the first problem. So no-one can prove definitively either way whether they are right or wrong.
The second problem is that the trends the climate scientists produce for individual countries and regions very often do not resemble in any way a basic average of that country’s temperature records. For example check out this for Texas: https://climatescienceinvestigations.blogspot.com/2021/02/52-texas-temperature-trends.html
An average of the temperature anomalies for all the 220 longest records in Texas which are all over 60 years long (the shorter ones are nigh on useless or just repeat the results of the longer records) shows NO WARMING since 1840. Yet the Berkeley Earth adjusted data for those same stations when averaged produces a warming of 0.6-1.2 K, despite 70% of those 220 records having stable or negative trends before adjustment.
Now I don’t expect a simple average of temperature records to give 100% the same results as say HadCRUT5 (or Berkeley Earth), but I do expect them both to at least be in the same ballpark.
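For what it is worth, the “basic average” being described is easy to set down. The sketch below uses synthetic station series rather than the actual Texas records, so it only illustrates the procedure, not the result:

```python
# A rough sketch of the "basic average" described above: average the anomaly
# series of many long station records and fit a linear trend.
# The data below are synthetic placeholders, not the actual Texas records.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2021)
n_stations = 220                                   # the comment's station count

# Hypothetical station anomaly series: no common trend, just noise, so the
# "true" trend of this synthetic ensemble is ~0 by construction.
stations = rng.normal(0.0, 0.6, size=(n_stations, years.size))

ensemble_mean = stations.mean(axis=0)              # simple unweighted average
slope_per_year = np.polyfit(years, ensemble_mean, 1)[0]
print(f"trend of simple station average: {slope_per_year * 100:+.2f} °C / century")

# A gridded or homogenised product does more than this (area weighting,
# breakpoint adjustments, infilling), which is why the two approaches can
# disagree; the comment's complaint is about the size of that disagreement.
```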

Lrp
Reply to  Nick Stokes
February 21, 2021 10:38 pm

Properly treat? Through alternative gridding and homogenisation? That’s data alteration, and it’s misleading. If they were just doing it for sport or as a class exercise it wouldn’t matter, but these people publish this garbage as science supporting public policy.

Reply to  Nick Stokes
February 23, 2021 11:58 am

NS —> When are you going to move forward and begin to publish more geographically focused temperature data? For one, I am tired of reading peer-reviewed papers that “assume” the GAT applies everywhere on the globe. Agencies and governments really need to be making targeted plans for mitigation of the effects on a local and regional basis.

If there are already areas not experiencing warming, everyone needs to know this and prepare appropriately. For instance, farmers need to plan for new varieties or even new crops. HVAC folks need to plan for new equipment. By the way, it is funny that we never see any HVAC people publishing data about how their industry is undergoing massive changes in installed capacities.

More and more people are investigating temperature changes in more focused areas. Few are finding the massive growth in local and regional temperatures that the GAT implies. This should begin to worry you about what is wrong with the GAT.

We can go over the many statistical missteps in how a GAT is determined. What is more worrying is that no attention has been given to doing time-series analysis of the trends and their causes. The simple task of projecting a time series requires that the underlying parts be stationary. That means constant means, variances and other statistical parameters.

Combining this many different time series can generate spurious trends, and I fully expect this is some of what is going on with the growth in the GAT. One way that composition effect can show up is sketched below.
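The sketch uses synthetic data only: two groups of stations, neither with any trend of its own, yield a trending “average” simply because the station mix changes over time.

```python
# Toy illustration (synthetic data) of how combining many series with a
# changing station mix can manufacture a trend even when no individual
# series has one.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 2021)

# Two groups of stations with different means (e.g. cool rural vs warm urban
# sites), but neither group has any trend of its own.
cool = rng.normal(14.0, 0.3, size=(50, years.size))   # hypothetical °C
warm = rng.normal(16.0, 0.3, size=(50, years.size))

# Station availability changes over time: warm-sited stations are gradually
# added to the network.
warm_share = np.linspace(0.1, 0.9, years.size)
naive_mean = (1 - warm_share) * cool.mean(axis=0) + warm_share * warm.mean(axis=0)

slope = np.polyfit(years, naive_mean, 1)[0]
print(f"spurious trend from composition change: {slope * 100:+.2f} °C / century")

# Working in anomalies from a common baseline (as the gridded products do) is
# intended to remove exactly this artefact; the comment questions whether the
# remaining statistical assumptions (stationarity, etc.) are ever checked.
```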

fred250
Reply to  Nick Stokes
February 22, 2021 12:10 am

ROFLMAO

Nick ADMITS that the Met Office adjusts data

So funny. Nick, how do you breathe with both feet in your mouth ??

Certainly we know that basically ALL worldwide data is DELIBERATELY ADJUSTED to try to meet the AGW meme.

fred250
Reply to  Nick Stokes
February 22, 2021 2:08 am

“They just omitted them from the average,”

.

And now they JUST MAKE THEM UP !!

Which leads to the BIAS that they choose.

Thanks for the admission, Nick

Stop chewing both big toes at once.

You look very STUPID. !

Dave Fair
Reply to  Nick Stokes
February 22, 2021 9:55 am

Frankly, none of this matters. One can derive general thermometer temperature patterns since the late 19th Century to help in understanding decadal patterns, nothing else. Assigning most or all of the warming since the mid-20th Century to mankind is a modeling exercise and not worth the attention.

Chris Hanley
February 21, 2021 1:06 pm

It’s comic, just more confirmation that the entire enterprise is absurd.

Bob Johnston
February 21, 2021 1:24 pm

At some point there must be consequences for the liars.

fretslider
February 21, 2021 1:37 pm

Why not save time and skip to hadCRUT 10?

fred250
Reply to  fretslider
February 22, 2021 12:14 am

“skip to hadCRUT 10”

.

Future frozen Arctic ice temperatures will be way over 40ºC

And Death Valley past summer maximums will be well below freezing.

fred250
Reply to  fretslider
February 22, 2021 2:10 am

“skip to hadCRUT 10”

.

Future frozen Arctic ice temperatures will be way over 40ºC

And Greenland Ranch past summer maximums will be well below freezing.

fred250
Reply to  fred250
February 22, 2021 2:11 am

if the other post gets released from auto-moderation,

… sorry for the two very similar posts..

Don’t think auto-mod liked the first word of “De*th Valley”

tom0mason
February 21, 2021 1:38 pm

No doubt HADCRUT5 has, like HADCRUT4, included a little more Arctic area in its data-set and still has ‘sparse’ coverage of the Antarctic. As https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2011JD017187 puts it for HADCRUT4 —
“6. Improvements to Global Coverage

[65] Both the land and sea components of HadCRUT4 have benefited from additional historical temperature data, as described in section 2. Many of these additional measurements are from regions of the globe that were poorly represented by Brohan et al. [2006]. The resulting improvement in global coverage can be seen in Figure 5. Much of the improvement in coverage in the early record is due to the digitization of additional SST data. The new land station data sourced for CRUTEM4 has greatly improved observational coverage across Russia. Arctic coverage has improved notably (particularly in Russia and Canada) throughout the record. Measurement coverage in the Southern Ocean and the Antarctic remains sparse.”

[My bold – the final sentence]

We know that the Arctic temperature variation is mostly governed by the oceanic cycles and therefore waxes and wanes greatly (currently warmish). What is missing, though, is the current and historical variation of the Antarctic region (currently it is still cooling). Thus I would say that both this and HADCRUT4 have some warm bias in them.

son of mulder
February 21, 2021 2:13 pm

So they’ve got it wrong at least 4 times. I can’t wait for Hadcrut6.

February 21, 2021 2:18 pm

GISS regularly makes around 300 changes, corrections or adjustments to its Land Ocean Temperature Index (LOTI) every month. Here’s what that looks like since January 2020:

2020 Number of changes to GISTEMP’s LOTI:
Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
319 240 313 340 298 404 319 370 303 389 381 370

2021
Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
330
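A count like this could be produced by diffing two successive monthly snapshots of the LOTI table. The sketch below assumes a hypothetical CSV layout and file names, not GISS’s actual distribution format:

```python
# A sketch of how a count like the one above could be produced: compare two
# successive monthly snapshots of the LOTI table and count the entries that
# changed. File names and the CSV layout here are hypothetical placeholders,
# not GISS's actual distribution format.
import csv

def load_loti(path: str) -> dict:
    """Map (year, month-column) -> reported anomaly string."""
    values = {}
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if not row or not row[0].isdigit():
                continue                      # skip header / footer lines
            year = row[0]
            for col, cell in enumerate(row[1:], start=1):
                values[(year, col)] = cell
    return values

def count_changes(old_path: str, new_path: str) -> int:
    old, new = load_loti(old_path), load_loti(new_path)
    return sum(1 for key in old.keys() & new.keys() if old[key] != new[key])

# Example usage (hypothetical file names):
# print(count_changes("LOTI_2021-01.csv", "LOTI_2021-02.csv"))
```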

Off topic, I’m pretty sure You Tube has censored me:

My second reply to Jacques O’Garoy shows up briefly as posted “1 second ago” and disappears when the page is refreshed. Tried to post three times. I made a screen print of it. It’s not nice to be censored, it’s like receiving very bad personal news. I’d like to find out that, “No, Steve, they don’t allow long involved conversations in a comment thread.”  

ResourceGuy
February 21, 2021 2:19 pm

In this corner we have HadCRUTXX and in this corner we have the AMO. Place your bets.

MarkW
February 21, 2021 2:46 pm

He who controls the present controls the past.
He who controls the past, controls the future.

George Orwell

February 21, 2021 2:51 pm

Facebook is continuing to prevent WUWT articles from being disseminated from within Australia. That is after it says it is only restricting Australian news sites.

Alan M
Reply to  Streetcred
February 22, 2021 3:32 am

Why are you using Facebook to look for whatever?

February 21, 2021 3:00 pm

In the last 170 years, after paving tens of thousands of miles of heat-absorbing asphalt, building tens of thousands of square miles of heat islands (cities), consuming enough energy to bring Lake Michigan to a boil every three years, and with over 7 billion small furnaces (people) at 98 degrees F, the earth has increased in temperature by less than 1/3 of 1% since the end of the Little Ice Age. Actually that is fabulously stable!

February 21, 2021 3:05 pm

When I look at the chart, I see down 0.3 degrees over 70 years… then up 0.7 degrees over 30 years… down 0.2 degrees over 40 years and finally up 1.0 degree over 40 years, for a total of 1.2 degrees over 180 years. I have little confidence in the data being accurate to better than 1.0 degree per century. We have warmed from the Little Ice Age and history says look out for cooling ahead… do not expect the temperature in 2100 to be warmer than today… do not expect the temp in 2050 to be warmer than today. I am not concerned about temp increases, but about the people who claim that temp increases are dangerous and caused by CO2. The CCP is dangerous… the Extremist Left is dangerous.

Roger Knights
February 21, 2021 3:22 pm

It’ll be funny if the current real-world temperature trend turns down just as their alarmism is ramping up. The alarmists are heading for a header.

Monckton of Brenchley
Reply to  Roger Knights
February 24, 2021 10:48 am

I’d expect temperature to continue to drift upward very slowly, as it has over the past few decades. The notion peddled by the likes of Attenborough that we are close to some unspecified “tipping point” is mere Communist drivel, except to the extent that the climate behaves as a mathematically-chaotic object and anything can in theory happen. But warmer weather is no more likely to cause a sudden phase transition than cooler.

Frank Hansen
February 21, 2021 3:51 pm

I knew a family 55 years ago who were the custodians of a castle. One extra duty was to record the temperature twice a day for the meteorological office. This was long before climate was any concern. The afternoon temperature was supposed to be recorded at 6 PM, but if they had guests for dinner, who knows, the measurement might be postponed to 8 PM. That would make a huge difference in the average recorded temperature. How can we be sure that all the temperature measurements were faithfully recorded? These people were not scientists, but civil servants who may not have grasped the importance of keeping the records absolutely accurate.

Tom Abbott
Reply to  Frank Hansen
February 22, 2021 10:59 am

Computer manipulation does not make the original data any more accurate.

And many here would argue that computer manipulation of temperature data as is being done today is not only less accurate, it actually distorts the temperature record so much it makes the global temperature record not fit for purpose.

Law of Nature
February 21, 2021 4:23 pm

Well.. I know other climate scientists do it as well, but.. without an uncertainty analysis, the factors calculated in the 2nd half of this article are utterly meaningless!
A phrase like “the world is warming another 1 K by 2100” is quite meaningless until it can be shown how the number was derived and how certain the numbers it is based on are.

Reply to  Law of Nature
February 21, 2021 4:28 pm

Don’t hold your breath waiting.

Reply to  Law of Nature
February 21, 2021 5:22 pm

Another 1 K over the next 80 years is nothing, if it were true… certainly nothing requiring action by world governments. It is absurd to project temps based on CO2… it is not definitely known that CO2 strongly affects temp, and it is certain that other factors affect temp and some factors are probably unknown. Climate history tends to repeat until it doesn’t, and the next repeat would be temps going back towards the Little Ice Age.

Law of Nature
Reply to  Anti_griff
February 21, 2021 6:31 pm

Aww okay, I guess now I have to make sure that I did not attribute such a statement to Christopher, but merely tried to give an example of how numbers can be quite meaningless without a scientific context. We really need an uncertainty analysis before discussing his numbers.

Monckton of Brenchley
Reply to  Law of Nature
February 21, 2021 11:05 pm

No, one does not need an uncertainty analysis when discussing midrange values only. The point being made is that the midrange estimates in the models are, on the basis of the latest mainstream midrange data, three times what they should be.

Reply to  Monckton of Brenchley
February 22, 2021 6:57 am

I don’t agree with this. Mid-range value implies a stated value. All stated values have an uncertainty. Simply picking a mid-range value doesn’t mean it is the true value. Propagating the uncertainty through the calculations allows a judgement to be made about the accuracy of the calculated value using uncertain inputs.

Monckton of Brenchley
Reply to  Tim Gorman
February 24, 2021 10:46 am

Midrange means midrange. Look at the diagram of the rectangular-hyperbolic curve of system response with feedback fractions. The segment where observational evidence shows the midrange ECS to lie allows considerable variability in feedback fractions for remarkably little change in ECS.
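That shape is easy to reproduce. The sketch below uses the standard feedback relation ECS = S_ref / (1 − f); the reference sensitivity and the two feedback-fraction intervals are placeholder values chosen only to show how flat the left-hand end of the curve is, not the posting’s derived figures:

```python
# A small illustration of the rectangular-hyperbolic response curve:
# ECS = S_ref / (1 - f), where f is the feedback fraction.
# S_ref and the f-intervals below are placeholders chosen only to show the
# shape of the curve, not the head posting's derived values.
import numpy as np

s_ref = 1.05                         # hypothetical direct (no-feedback) sensitivity, K

def ecs(f):
    return s_ref / (1.0 - f)

# Small feedback fractions (left-hand, nearly flat end of the curve):
f_small = np.array([0.00, 0.05, 0.10, 0.15])
# Large feedback fractions (the steep end of the curve):
f_large = np.array([0.60, 0.65, 0.70, 0.75])

print("f small:", [f"{ecs(f):.2f} K" for f in f_small])   # ~1.05 … 1.24 K
print("f large:", [f"{ecs(f):.2f} K" for f in f_large])   # ~2.6 … 4.2 K

# The same 0.15-wide spread in f moves ECS by ~0.2 K at the flat end but by
# ~1.6 K at the steep end, which is the asymmetry the comment appeals to.
```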

Reply to  Monckton of Brenchley
February 25, 2021 8:24 am

That still doesn’t mean the mid-range is the true value. All you seem to be saying is that there is very little coupling between the mid-range value and the ECS.

Monckton of Brenchley
Reply to  Tim Gorman
February 25, 2021 7:28 pm

O, for Heaven’s sake apply a little common sense just for once. Given that the entire interval of ECS requires to be divided by about 3, upper-bound ECS will be below 2 K. Now, that – to all but futile nit-pickers – would indicate, would it not, that midrange ECS is going to be well below the currently-imagined 3.7-3.9 K. Try to concentrate on the main point.

Our paper will provide a particularly thorough treatment of uncertainty. The head posting gives you more than enough evidence that the ECS interval will be far narrower than that which is currently imagined. The chief reason is that the entire ECS interval falls at the left-hand end of the rectangular-hyperbolic response curve, where even large changes in feedback response deliver only small changes in ECS.

The data are sourced in the table in the head posting. Do your own homework. If common sense is not enough for you, it really won’t take very long to do a Monte Carlo distribution. You will just have to accept that in a very brief account in a posting chiefly about the temperature record there is limited room for anything other than the briefest indication of the narrowness of the uncertainty interval, and that we have to be very careful not to allow the climate Communists to prevent publication of our result – which is fatal to their dismal cause – by asserting that we have already published so many details of it that publication in a journal will not be new.

So stop whining, stop being lazy, and just do your own work.

Monckton of Brenchley
Reply to  Law of Nature
February 21, 2021 10:57 pm

In response to Law of Nature, reading the head posting with due care and attention will assist. It is made plain that the analysis leading to the conclusion that ECS is not 3.7 K but 1.1 K is comparing midrange values. In other words, in a short posting I have not gone through each of the seven items of climatological data to consider the uncertainties in each, so as to generate a probability distribution. I have simply confined the analysis to the midrange. However, common sense (confirmed by the more detailed analysis for which there was no space here) demonstrates that just as official climatology’s midrange values should be divided by 3 so the upper and lower bounds must also be divided by approximately 3.

As one indication of how direct calculation of ECS from up-to-date, mainstream, midrange data constrains the interval, the head posting does consider the question of the ratios of unit feedback responses, demonstrating that if one were to assume that official climatology’s interval of ECS guesstimates were correct they would imply a manifestly untenable and unphysical order-of-magnitude increase in the feedback response per Kelvin of direct reference sensitivity between the two periods 1850-2020 and 2020 onwards.

Why do the models produce such obviously excessive predictions of global warming? The chief reason, stated over and over again in the journals (see e.g. Lacis et al., 2010), is that climatologists, borrowing feedback formalism from control theory, a branch of engineering physics with which they were not and are not familiar and in which they were not and are not expert, had falsely assumed that feedback processes (chiefly water vapor feedback) respond only to perturbations in global temperature and not also to the input signal, emission temperature. Lacis et al. explicitly state that 25% of global warming comes from reference sensitivity and 75% from feedback response, which is why models have tended to assume that the 1 K direct reference sensitivity to doubled CO2 will become about 4 K ECS.

In reality, however, nearly all of the 22-24 K total preindustrial feedback response was attributable to the fact that, even with no greenhouse gases in the air at the outset, the emission temperature of 255 K (an order of magnitude greater than the 8-10 K preindustrial reference sensitivity to naturally-occurring noncondensing greenhouse gases) itself engendered a feedback response. Or, to put it another way, to melt much of the ice that would have covered most of the globe at emission temperature one needs not only the warming from the preindustrial greenhouse gases but also emission temperature itself.
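As a back-of-envelope check on that argument, using only the round numbers quoted above (255 K emission temperature, roughly 8 K of preindustrial reference sensitivity, roughly 24 K of total preindustrial feedback response), one can compare the two implied system-gain factors. This is simply a restatement of the comment’s arithmetic, not the paper’s method:

```python
# Back-of-envelope arithmetic for the argument above, using the round numbers
# quoted in the comment (255 K emission temperature, ~8 K preindustrial
# reference sensitivity, ~24 K total preindustrial feedback response).
# A restatement of the comment's reasoning only, not the paper's calculation.
emission_T = 255.0        # K, with no greenhouse gases in the air
ref_sens_ghg = 8.0        # K, direct warming from preindustrial noncondensing GHGs
feedback_resp = 24.0      # K, total preindustrial feedback response

# If feedbacks are assumed to respond only to the GHG perturbation:
gain_perturbation_only = (ref_sens_ghg + feedback_resp) / ref_sens_ghg
# If feedbacks respond to the whole input signal (emission temperature too):
gain_whole_signal = (emission_T + ref_sens_ghg + feedback_resp) / (emission_T + ref_sens_ghg)

print(f"system-gain factor, perturbation only : {gain_perturbation_only:.1f}")   # ~4
print(f"system-gain factor, whole input signal: {gain_whole_signal:.2f}")        # ~1.09

# Applied to a ~1 K direct sensitivity to doubled CO2, the first factor gives
# the ~4 K ECS the comment attributes to the models, the second something
# nearer 1 K, which is the contrast the comment is drawing.
```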

Climatology, then, has perpetrated a grave error of physics, which has only been noticed quite recently because of knowledge barriers between increasingly specialized scientific realms. When I showed an eminent professor of climatology who is also a formidable expert in control theory IPCC’s definition of “climate feedback” and the “climate feedback parameter” he was visibly furious. He bluntly stated that the definition was “nonsense”. It is because of that nonsense definition that climatology expected and thus predicted high equilibrium climate sensitivities.

Now that Law of Nature knows what climatology’s central error was, he can do a little homework for himself, look up the stated data sources for the seven input parameters to our algorithm and then, if he is capable of programming a computer, run the algorithm and try different values of those parameters to see what effect they have. And, if he has sufficient facility with statistics, he can even put together a probability distribution.

The central point, however, is that the modelers have been doing probability distributions for decades – but they were simply wrong, because they were based on the central assumption that midrange ECS would be about three or four times reference sensitivity. Well, it won’t, which is the main reason why IPCC’s original midrange medium-term estimate of anthropogenic global warming has proven to be 2.4 times observed anthropogenic warming since then.

So don’t maunder on about probability distributions until you have at least observed and acknowledged the elephantine error in the room.

Law of Nature
Reply to  Monckton of Brenchley
February 22, 2021 1:04 pm

Well, aeh, thank you for your answer; it seems I hit a nerve!
>> he can do a little homework for himself, look up the stated data sources
>>So don’t maunder on about probability distributions
Let’s hope that, if I ever post an article here, I will graciously face my critics and try to implement their scientific input, not tell them what to do!

I think the article was quite clear in what you were trying to do, and your reiteration adds little. I repeat: numbers without uncertainties have little to no meaning in science.
I would assume that this is how the modellers got their numbers wrong as well; however, right now they are not the ones on the spot.

Let me tell you once more, then, that you are in serious error in ignoring the uncertainty of your calculations!
This is especially true when you divide two almost equal numbers and subtract one from the result, a textbook example of how to blow up error bars!
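The general point can be shown numerically. The sketch below uses placeholder quantities, not the posting’s table entries, to show how dividing two nearly equal uncertain numbers and subtracting one inflates the relative uncertainty:

```python
# Quick numerical illustration of the general point just made (placeholder
# numbers, not the posting's table entries): when two nearly equal uncertain
# quantities are divided and 1 is subtracted, the relative uncertainty of the
# result balloons.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

x = rng.normal(1.00, 0.02, n)      # hypothetical quantity, ~2% uncertainty
y = rng.normal(0.95, 0.02, n)      # nearly equal quantity, ~2% uncertainty

ratio_minus_one = x / y - 1.0      # e.g. a unit feedback response, A - 1

mean = ratio_minus_one.mean()
sd = ratio_minus_one.std()
print(f"x/y - 1 = {mean:.3f} +/- {sd:.3f}  (~{100 * sd / abs(mean):.0f}% relative uncertainty)")

# x and y each carry ~2% uncertainty, but their near-cancellation leaves a
# result whose relative uncertainty is tens of times larger: the "blown-up
# error bars" the comment refers to.
```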

Monckton of Brenchley
Reply to  Law of Nature
February 23, 2021 4:37 pm

It seems I hit a nerve. Although I had carefully explained that we had allowed – and allowed elaborately – for uncertainties, “Law of Nature” continues to maunder on and presume to lecture us. Why? Well, climate Communists, fellow travellers and those who see science through a relentlessly totalitarian lens are trained to attack any serious threat to their pathetic orthodoxy by saying that those who have good reason to question the orthodoxy have failed to take due account of a) complexities; b) nonlinearities; or c) uncertainties.

I repeat again: we have allowed for uncertainties. In our current paper out for peer review, we have allowed for them explicitly and in several ways. Therefore it is not we who are in “serious error”.

And it seems that Law of Nature is unfamiliar with the relationship between the system-gain factor in a feedback amplifier and the unit feedback response. The system-gain factor is the ratio of the differential output to the differential input, and the differential unit feedback response is the differential system-gain factor less 1.

The fact that the output signal and the input signal are near-identical is merely an indication that the feedback response is very small, for the reasons set out in the head posting.

Interesting that Law of Nature cannot bring himself to state that climatology is in grave error, but instead prefers to use one of the three standard climate-Communist methods of attempting to derail an argument which, simple though it is, appears to be well above his pay-grade.

Law of Nature
Reply to  Monckton of Brenchley
February 23, 2021 7:08 pm

>> I repeat again: we have allowed for uncertainties
This is the first time I have heard about it!
I was under the distinct impression that your article ignored uncertainties entirely, and that you even tried to make the point that you would not need them!

Uncertainties are part of any real-world evaluation, and in this case they have fatal consequences for the conclusions you drew!

But it is good if you have them; please kindly remind me what uncertainties the values you use in lines S4, F1 and F2 carry?
(These are the calculations I call textbook examples of how to blow up error bars, and you do it twice consecutively.)

>> Interesting that Law of Nature cannot bring himself ..[blabla]
You seem confused about whose work is under evaluation here!
I raised a valid concern and so far you have been more than evasive!
Please stay on topic: the omission of uncertainties in YOUR article!

Monckton of Brenchley
Reply to  Law of Nature
February 24, 2021 8:55 am

As I suspected, Law of Nature is being merely vexatious. It was made quite plain in the head posting that midrange data had been used in order to derive a midrange ECS estimate. That midrange ECS estimate was only one-third of official climatology’s estimate, suggesting something wrong with the latter.

I had also made it quite plain in the head posting that, because the curve of system response with feedback fractions is rectangular-hyperbolic, the interval of ECS projections for the small feedback fractions derivable from midrange observational data is nothing like as broad as the currently-imagined ECS interval. A very plain diagram in the head posting makes this quite clear.

As if that were not enough, I also explained to Law of Nature, in an earlier comment in this thread, that my co-authors and I had carried out a more detailed analysis, for which there was and is no space here, to determine that the observationally-derived bounds on ECS, like the midrange estimate, are approximately one-third of official climatology’s bounds. The diagram will instantly reveal intuitively why the bounds are so much tighter where the feedback fraction is found to be far smaller than official climatology had imagined.

Therefore, Law of Nature’s allegation that he had not previously been told we had conducted an uncertainty analysis is self-evidently and wilfully false.

Since Law of Nature is thus demonstrated to be merely tendentious, and since there is no space for the detailed analysis of the bounds on ECS that is in our paper currently before a leading journal for peer review, there is little further that I can usefully add. One is weary of the constant attempts by those who will not see the obvious to maunder on about complexities, nonlinearities and uncertainties, all of which we have fully allowed for in our paper (which was originally rejected on the ground that it was too long). The short calculation set forth in the head posting is plainly a brief summary. If Law of Nature wants to do his own uncertainty calculations based on the data values and sources helpfully provided in the head posting, he is free to do so. Or he can wait for our paper, which has now been before the journal for several months, indicating that at least there is nothing glaringly wrong with it, or it would have been thrown back by now.

So, no more whining. Do your own uncertainty calculation. By an ingenious manipulation of the quantities, you can come up with any result you want, in accordance with the von Neumann principle. In climatology, however, there is a long and respectable tradition of papers, including a recent important one by leading true-believers, that focus on the midrange, and (unlike our paper) sometimes only on the midrange. If Law of Nature does not like that, his quarrel is not with us but with official climatology.

Law of Nature
Reply to  Monckton of Brenchley
February 24, 2021 11:46 am

Most of your writing seems to stray far off topic.

But it seems that you do claim to have considered confidence intervals for your numbers; do I read your long, aeh, illustrations correctly?
Can we agree that omitting them would be highly unscientific?
That seems trivial enough, right?

I cannot seem to find them anywhere in your extensive writings. Of course I take your word that you have done so, and thus I asked you to please provide them for me.

And please skip all the other irrelevant things about my person or the length of some paper somewhere and such nonsense; you must understand that it might look as though you are trying to talk your way out of a very unscientific oversight on your part.

Just the numbers please!? Thank you!
(for those values in the lines S4, F1 and F2 if you would)

Monckton of Brenchley
Reply to  Law of Nature
February 24, 2021 6:50 pm

Most of your writing seems to stray far off topic.

But it seems you have failed to consider my repeated statement that we have indeed considered the uncertainties.

Can we agree that I have indeed repeatedly explained these facts to you, despite your earlier downright dishonest attempt to deny it?
That seems trivial enough, right?

And please skip all other irrelevancies except the glaring fact that climatology has made a large, elementary error of physics. You must understand that it looks very much as though you are merely using a tactic for which paid climate Communists are trained.

Just the acknowledgement, please, that I have told you to wait until our paper is published.

By all means do your own uncertainty calculation, if it amuses you.

Law of Nature
Reply to  Monckton of Brenchley
February 24, 2021 7:58 pm

“But it seems you have failed to consider my repeated statement that we have indeed considered the uncertainties.”

Very good, what are they for those values in the lines S4, F1 and F2?

“By all means do your own uncertainty calculation, if it amuses you.”
There is nothing amusing about your posts!
Without the so-far-missing uncertainties, the statements you made in the article are just wrong!

Monckton of Brenchley
Reply to  Law of Nature
February 25, 2021 7:18 pm

Law of Nature will have to wait for our paper. Climate Communists often try to smoke out more detail than we can safely give without prejudicing eventual publication in a learned journal. If Law of Nature does not like the fact that official climatology often publishes papers concentrating – as I have in the head posting – near-exclusively on the midrange, then his complaint is with official climatology and not with me. There is nothing wrong with publishing a posting – or, for that matter, a paper – that concentrates chiefly on the midrange position.

I have already tried to encourage him to think a little, and simply to look at the rectangular-hyperbolic curve of system response to feedback fractions that is plainly shown in the head posting. Given that the unit feedback response from 1850-2020 is very small based on the tolerably well-constrained industrial-era data, and given that nonlinearity in feedback response with temperature is also very small, and given that the left-hand end of the system-response curve rises very slowly, it should be self-evident to anyone interested in the objective truth rather than in climate Communism that the equilibrium-sensitivity interval will be small. I have explained to Law of Nature that it is about one-third of the currently-imagined interval.

So let him do his own homework – if, that is, he is interested in the objective truth, which his mendacious interventions here give us reason to doubt. The words “Monte Carlo” may be of assistance. It is no good maundering on about how midrange values are meaningless without the uncertainty bounds. IPCC, for instance, publishes uncertainty bounds, but the entire interval is overstated because IPCC has profiteered by adopting climatology’s mistake. One should not, therefore, imagine that uncertainty bounds are either necessary or sufficient. The purpose of the calculation in the head posting is to give a midrange worked example demonstrating how simple it is to show that official climatology’s predictions are monstrously excessive. Like it or not, that is the main point – a point from which Law of Nature seems unbecomingly anxious to distract attention. Why, one wonders.

Law of Nature
Reply to  Monckton of Brenchley
February 27, 2021 6:23 am

You may think that now you are on page two this does not matter much, and you might be right.

But here you just admitted that you made the statements in your article without uncertainties, so you really do not know what you are talking about! (Claims about “fixing it all in a later paper” are irrelevant.)

Your statements are uncertain or flat out wrong!
You behave like a charlatan!

Monckton of Brenchley
Reply to  Law of Nature
February 27, 2021 10:43 pm

Law of Nature has now become hysterical as well as mendacious – hallmarks of climate Communism.

I have made it quite plain that the head posting does contain evidence of the most powerful constraint on uncertainty in our estimates of equilibrium climate sensitivity. I have also said nothing about “fixing it all in a later paper”. Instead, I have made it quite plain that in our long paper that has been in the hands of a leading journal’s editor for many months there is a full treatment of uncertainties. We have also produced a shorter paper, which we may also submit shortly, because it is sufficiently different in its approach to constitute a different paper. In that short paper, too, we make explicit provision for uncertainty in the usual way.

I repeat, since the point seems to have escaped Law of Nature: the bounds of the 2-sigma uncertainty interval that we have derived are about a third of the bounds of the CMIP6 interval, just as the midrange is about a third of the CMIP6 midrange.

If Law of Nature would bother to look at the head posting, he would see at once from the graph of the rectangular-hyperbolic system response to feedback fractions precisely why it is that our uncertainty bounds are so well constrained, and official climatology’s uncertainty bounds are so ill constrained.

As I have pointed out upthread, paid climate Communists reveal themselves in various ways, one of which is that they are trained by their Marxist handlers to brush off any serious challenge to the Party orthodoxy by saying that the challenger has failed to take account of the complexities, or of the nonlinearities, or of the uncertainties. Law of Nature has been pre-programmed to behave thus, and has chosen – unwisely – to pretend that we have not taken account of uncertainties, when the head posting and our subsequent comments demonstrate, in surely sufficient detail, that we have.

Crucially, we have derived both the midrange and the bounds on ECS properly, using recent mainstream data and applying a standard statistical method to derive the bounds and the midrange. In the head posting, however, we have confined ourselves, brevitatis causa, to showing a calculation for the midrange, and to showing a graph that shows at a glance why it is that our uncertainty bounds are about one-third of official climatology’s bounds. And there is absolutely nothing wrong with that.

If Law of Nature were a dispassionate observer rather than a trained and paid climate Communist, he would realize that his criticism would be much more justified if it were directed at official climatology, which has erred so flagrantly. Our paper, now under review not only at a journal but also in the corridors of power, makes it quite plain for all who have eyes to see and ears to hear that the entire climate scam is founded upon an elementary and catastrophic error of physics. That is the main point, and no amount of wriggling will avail the climate Communists anything. They have nailed their red flag of tyranny, torture, death and destruction to a sinking ship. The truth is about to emerge into the light of day, whether Law of Nature or any other paid climate Communist likes it or not.

Law of Nature
Reply to  Monckton of Brenchley
March 1, 2021 6:23 am

>> criticism would be much more justified if it were directed at official climatology

Did they post an article here? No, you did!

Now you have had ample chances to correct your mistakes and give uncertainties for the values in lines S4, F1 and F2, but you chose not to do so; that makes you wrong and the statements in your article false.

ResourceGuy
February 21, 2021 6:14 pm

What’s the monetary payoff for this pledge of allegiance?

Giordano Milton
February 22, 2021 3:42 am

Lowering levels from the past to make today’s temperature look higher . . .

Hmmm. Just more evidence that “#science” has no meaning and we’ve gone back to the days when the church (the new church now) decides what is truth. I guess if they didn’t have DIShonesty they wouldn’t have any sort of honesty at all.

February 22, 2021 8:35 am

Welcome to the Adjustocene period of Earth’s climate history.

February 22, 2021 9:16 am

I could, perhaps, understand the necessity to make some adjustments to the record at some point, but what possible (rational) reason could there be to continually adjust it?