HadCRUT, Numbers 4 And 5

Guest Post by Willis Eschenbach

The HadCRUT record is one of several main estimates of the historical variation in global average surface temperature. It’s kept by the good folks at the University of East Anglia, home of the Climategate scandal.

Periodically, they update the HadCRUT data. They’ve just done so again, going from HadCRUT4 to HadCRUT5.

So to check if you’ve been following the climate lunacy, here’s a pop quiz. What did the new record do to the HadCRUT historical temperature trend?

It decreased the trend, or

It increased the trend?

Yep, you’re right … it increased the trend. You’re as shocked by that as I am, I can tell.

So … here’s the old record and the new record.

Figure 1. HadCRUT4 and HadCRUT5 temperature records. The yellow/black and blue/black lines are lowess smooths of each dataset.

Let’s take a closer look at the changes. Here are just the lowess smooths, which give us a clear view of the underlying adjustments. I’ve added the University of Alabama Huntsville microwave sounding unit temperature of the lower troposphere (UAH MSU TLT) for comparison.

Figure 2. Lowess smooths of HadCRUT4 and HadCRUT5 surface temperature records, and the UAH MSU satellite lower troposphere temperature record. The yellow/black, blue/black, and orange/black lines are lowess smooths of each dataset. The red/black line shows the adjustments made to the HadCRUT4 dataset.
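For readers curious how a smooth like the ones in the figures is computed, here's a minimal sketch of a single-pass lowess (locally weighted regression, without the robustness iterations that a full implementation adds). The toy data, the `frac` value, and the function itself are my own illustrative choices, not necessarily what was used for the figures.

```python
import numpy as np

def lowess(x, y, frac=0.2):
    """Single-pass locally weighted linear regression: for each point,
    fit a tricube-weighted straight line through its nearest neighbours.
    A sketch only -- not the exact smoother used for the figures."""
    n = len(x)
    k = max(int(frac * n), 2)
    fitted = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:k]                       # k nearest points
        w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3   # tricube weights
        coeffs = np.polyfit(x[idx], y[idx], 1, w=np.sqrt(w))
        fitted[i] = np.polyval(coeffs, x[i])
    return fitted

# Toy annual "anomaly" series: a linear warming trend plus noise
years = np.arange(1850, 2021, dtype=float)
rng = np.random.default_rng(0)
temps = 0.006 * (years - 1850) + rng.normal(0.0, 0.2, years.size)

smooth = lowess(years, temps, frac=0.2)
```

The `frac` parameter controls the width of the smoothing window; a larger fraction gives the kind of multi-decadal smooth shown in the figures, at the cost of flattening shorter wiggles.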

There were a couple of surprises in this for me. Normally, the adjustments are made to the older data and reflect things like changes in the time of observation, or previously overlooked older records being added to the dataset. In this case, on the other hand, the largest adjustments are to the most recent data …

Also, in the past adjustments have tended to reduce the drop in temperature from ~ 1942 to 1970. But these adjustments increased the drop.

Go figure.

Anyhow, that’s the latest chapter in the famous game called “Can you keep up with the temperature adjustments”. I have no big conclusions, other than that at this rate the temperature trend will double by 2050, not from CO2, but from continued endless upwards adjustments …

After I voted today (no on tax increases), the gorgeous ex-fiancee and I spent the afternoon wandering the docks down at Porto Bodega, and looking at a bunch of boats that I’m very happy that I don’t own. She and I used to fish commercially out of that marina, lots of great memories.

My best regards to all,

w.

Stephen Wilde
March 3, 2021 10:07 am

A graph going back to the Mediaeval Warm Period would be rather more useful.

John Tillman
Reply to  Stephen Wilde
March 3, 2021 10:18 am
Tom Abbott
Reply to  John Tillman
March 3, 2021 7:45 pm

All our current alarmist climate scientists have seen these charts which show it was as warm or warmer in the past, yet now they pretend they never existed and try to sell the narrative that we are living in the warmest times in human history.

Alarmist climate scientists are lying to us.

Reply to  Stephen Wilde
March 4, 2021 3:07 am

Hi Stephen.
I just posted something after you about Eddy. Let me know what you think. BW

Weekly_rise
March 3, 2021 10:20 am

The differences between HadCRUT v4 and v5 have nothing to do with adjustments to the station data; they reflect changes to how HadCRUT treats grid cells that are missing station data entirely. Previously these cells were simply dropped from the global average (which has been the major difference between HadCRUT and other temperature analyses like GISTEMP), and starting with v5 they are infilled using data from nearby grid cells. In particular, this change increases spatial coverage in the Arctic, where station coverage is sparse but where the planet is experiencing the most rapid warming. All of this is described in the documentation, so it is not a surprise in the slightest:

https://www.metoffice.gov.uk/hadobs/hadcrut5/HadCRUT5_accepted.pdf
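The idea being debated here, borrowing information from neighbouring grid cells to fill an empty one, can be sketched crudely. This is only an illustration of the concept; HadCRUT5's actual infilling is a far more sophisticated statistical reconstruction, and the grid values below are invented.

```python
import numpy as np

def infill_from_neighbors(grid):
    """Fill each empty (NaN) cell with the mean of its non-empty
    neighbours. A crude stand-in for illustration only -- not
    HadCRUT5's actual method."""
    filled = grid.copy()
    rows, cols = grid.shape
    for i in range(rows):
        for j in range(cols):
            if np.isnan(grid[i, j]):
                block = grid[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
                vals = block[~np.isnan(block)]
                if vals.size:                   # leave the cell empty
                    filled[i, j] = vals.mean()  # if no neighbour has data
    return filled

# Toy 3x3 anomaly grid; NaN marks a cell with no stations
grid = np.array([[0.4, 0.5, 0.6],
                 [0.5, np.nan, 0.7],
                 [0.6, 0.7, 0.8]])
filled = infill_from_neighbors(grid)
```

The filled centre cell is simply the average of its eight neighbours; whether that counts as "adjustment," "infilling," or "fabrication" is precisely what the thread below argues about.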

Reply to  Weekly_rise
March 3, 2021 10:25 am

“infilled using data from nearby grid cells”

I think adding station data where there wasn’t any station data before is “adjusting the station data” lol

Andrew

Weekly_rise
Reply to  Bad Andrew
March 3, 2021 11:04 am

I would not call that adjusting station data. An adjustment would be taking the value as read from the thermometer and altering (increasing or decreasing) it. It’s fine to have private definitions for things, but it’s probably not useful in public discourse.

Reply to  Weekly_rise
March 3, 2021 11:08 am

“I would not call that adjusting station data.”

Weekly_rise,

So what would you call adding new station data to station data?

Enhancing? Expanding? Modifying?

How would you phrase it?

Andrew

Weekly_rise
Reply to  Bad Andrew
March 3, 2021 11:25 am

HadCRUT v5 is not adding new station data, it’s using statistical methods to increase the coverage of the analysis with existing station data. Think of it in this way: suppose we have two adjacent grid boxes, one of which has a station juuuuust near the edge of the box, at the boundary between both boxes, while the other has no station inside of it. HadCRUT v4 and earlier would have thrown up its hands and said, “welp, guess we have no way of knowing what on earth the temperature inside that empty grid box might be!” HadCRUT v5 would say, “the grid box boundary is an arbitrary delimiter. The fact that a box doesn’t have a station inside it doesn’t mean we have no idea what the temperature there would be. We can use all the stations in nearby boxes to get an idea of what it should be.”

This is clearly a superior approach and one that HadCRUT ought probably to have done from the beginning. I don’t have the historical context to understand why it wasn’t done this way before, but it’s a good change that makes the analysis better.

Reply to  Weekly_rise
March 3, 2021 11:41 am

Weekly_rise,

I’ve heard all this before. What you are saying is that the analysis pretends there is more information now, to cover a larger area where there is no data.

Seems like this leaves room for some creativity. If you know what I mean.

Andrew

Weekly_rise
Reply to  Bad Andrew
March 3, 2021 12:07 pm

Saying, “the CRU hypothetically could be committing fraud,” is a rather worthless position, in my view. The methodology described in the documentation is valid, and would produce good results. It’s possible the CRU is committing fraud and not actually following the methodology as written, but such an allegation seems… fanciful, at best. You need some evidence of the fraud you’re alleging.

Gary Pearse
Reply to  Weekly_rise
March 3, 2021 12:35 pm

That uber warming of the Arctic needs explanation after an outflow from there severely froze the central plains, notably Texas and Mexico, breaking records across a vast area by double digits. Similar were the new record lows in 2019 in Illinois, Ohio, Indiana… the year when sharks, frozen solid, washed up on Massachusetts beaches.

Bryan A
Reply to  Gary Pearse
March 3, 2021 2:57 pm

That’s because the 3° F warming in the Arctic took it from mind-numbing cold to freeze-your-nards-off cold

Jim Gorman
Reply to  Gary Pearse
March 3, 2021 3:48 pm

I wondered the same thing but haven’t had a chance to search further. I know some of the arctic locations like Iceland and Greenland show little if any warming.

bit chilly
Reply to  Gary Pearse
March 3, 2021 5:28 pm

The uber warming in the Arctic is a result of the increased exposure of the Arctic ocean surface, particularly in winter, due to the reduction in sea ice cover.

The sea ice acts as an insulator, reducing the amount of heat that radiates to space. Less sea ice means more heat in the Arctic atmosphere as it passes through on its way out of the planet. Low Arctic sea ice levels are a planetary cooling mechanism due to the net negative energy balance in the region.

Rory Forbes
Reply to  Weekly_rise
March 3, 2021 2:44 pm

. It’s possible the CRU is committing fraud and not actually following the methodology as written, but such an allegation seems… fanciful, at best. You need some evidence of the fraud you’re alleging.

The “climategate” emails have already proven the fraud. There are hundreds of pages available. It wasn’t prosecuted because they managed to drag their feet long enough to pass the deadline for the statute of limitations.

Izaak Walton
Reply to  Rory Forbes
March 3, 2021 3:51 pm

Rory,
In the UK there is no statute of limitations for criminal cases, so that excuse doesn’t fly.

Rory Forbes
Reply to  Izaak Walton
March 3, 2021 5:55 pm

Although there is no official statute of limitations for criminal cases in the UK (unlike many other EU countries and America), limitation periods do apply to many aspects of business and consumer litigation, including debt recovery.

All that was needed was to show there was no mens rea to obviate criminal intent. A limitation then pertains within the statute.

The decision follows a comprehensive investigation by the force’s Major Investigation Team, supported by a number of national specialist services, and is informed by a statutory deadline on criminal proceedings.

https://wattsupwiththat.com/2012/07/18/climategate-investigation-closed-cops-impotent/

Weekly_rise
Reply to  Rory Forbes
March 4, 2021 5:27 am

Or perhaps it wasn’t prosecuted because multiple independent investigations found no evidence of fraud or misconduct? Guess we’ll never know…

Rory Forbes
Reply to  Weekly_rise
March 4, 2021 9:39 am

It wasn’t prosecuted for the reasons I provided, no more, no less. There were NO “independent investigations”. All were obviously partisan and laughably conducted whitewashes. The evidence of fraud was as plain as the nose on your face and is still available for scrutiny on these pages, unless you prefer guesswork and speculation, like the rest of the “science” you believe in.

Oriel Kolnai
Reply to  Weekly_rise
March 4, 2021 9:56 am

‘No evidence’? Really? What about Wegman:

‘Overall, our committee believes that Mann’s assessments that the decade of the 1990s was the hottest decade of the millennium and that 1998 was the hottest year of the millennium cannot be supported by his analysis.’ ?

AND:

‘A cardinal rule of statistical inference is that the method of analysis must be decided before looking at the data’

(https://climateaudit.files.wordpress.com/2007/11/07142006_wegman_report.pdf)

Deciding what data to include and which to reject after collecting it sounds like ‘fraud’ to me. As does constant refusal of data requests and Jones whinges about people wanting to find ‘something wrong’ with his data.

Weekly_rise
Reply to  Oriel Kolnai
March 4, 2021 10:21 am

The Wegman report was published in 2006, the CRU email hack occurred in 2009. You might also be surprised to discover that Mann is not and never has been an employee of the CRU.

Jim Gorman
Reply to  Weekly_rise
March 3, 2021 3:46 pm

Their explanation of the database needs to explicitly lay out that the final result is dependent upon modified and created data. It should only be used if it meets the specific needs of the user. An easy method of accessing the original, unmodified data should be available.

Lrp
Reply to  Weekly_rise
March 4, 2021 12:20 am

Selling fiction as real is fraud

Reply to  Weekly_rise
March 3, 2021 11:49 am

We can use all the stations in nearby boxes to get an idea of what it should be.
[…]
I don’t have the historical context to understand why it wasn’t done this way before, but it’s a good change that makes the analysis better.

Sorry, but you make me laugh 😀
Taking a “maybe” temperature doesn’t make the analysis better; no, it makes the analysis worthless.

Steve Z
Reply to  Krishna Gans
March 3, 2021 12:50 pm

We should NOT try to interpolate temperatures in boxes with no measurement stations between those measured in neighboring boxes.

Suppose, for example, that there is no measurement station in Olympic National Park in Washington State. We could have measurements in Seattle to the east, on Puget Sound, and measurements along the Pacific coast to the west.

But we should not try to interpolate a temperature for Mount Rainier between those on the coast and in Seattle. The stations on the coast and in Seattle are near sea level, while Mount Rainier is thousands of feet above sea level, and would likely be much colder (and have more precipitation) than the stations along the coast and in Seattle.

Weekly_rise
Reply to  Steve Z
March 3, 2021 12:56 pm

What you describe would be an issue if the absolute temperature were used instead of the anomaly. But while the mean temperature of Mount Rainier is lower than the surrounding lower areas, changes in the mean temperatures are very likely to be consistent between the nearby regions.
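The anomaly argument being made here can be illustrated with a small numerical sketch. The two "stations" and all the numbers below are invented for illustration: a warm valley and a cool hilltop that share the same regional year-to-year variability.

```python
import numpy as np

# Two hypothetical stations sharing the same regional weather signal
# (all values invented for illustration)
rng = np.random.default_rng(1)
regional = rng.normal(0.0, 0.5, 30)                 # shared signal
valley = 15.0 + regional                            # ~15 C mean
hill = 5.0 + regional + rng.normal(0.0, 0.1, 30)    # ~5 C mean, local noise

# Anomaly = departure from each station's own baseline mean
valley_anom = valley - valley.mean()
hill_anom = hill - hill.mean()

# Absolute temperatures differ by ~10 C, yet the anomalies track
# each other closely
correlation = np.corrcoef(valley_anom, hill_anom)[0, 1]
```

This is the claim in a nutshell: subtracting each station's own baseline removes the fixed offset due to altitude or terrain, leaving correlated departures. Whether real stations share a signal this cleanly is, of course, exactly what the critics in this thread dispute.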

Nicholas McGinley
Reply to  Weekly_rise
March 3, 2021 1:59 pm

The existing stations have always been used to account for the whole area, so what has been done amounts to nothing more than giving an area of sparse coverage greater weight.
No matter how you slice it, some people changed how they arrived at a GAST, and as has been the case 100% of the times they have done it previously, the new result conformed to the ideas promoted by these same people.

Streetcred
Reply to  Weekly_rise
March 3, 2021 2:23 pm

Nonsense … each station has its own micro-climatic influences … UHI, poor maintenance, vegetation, etc. Anomalies will not resolve the bad ‘data’.

Weekly_rise
Reply to  Streetcred
March 4, 2021 5:36 am

Using anomalies does not solve all potential data issues. Anomalies just solve the problem of one station being on a hill while another nearby station is in a valley, or of the two station records being unequal in length. It’s the specific problem that Steve Z raises above.

MarkW
Reply to  Weekly_rise
March 3, 2021 7:08 pm

Even anomalies can vary when the terrain is different. If there is a slope, different humidity levels and different wind directions can cause clouds to form.
Lakes can create lake effects that will affect different areas depending on the wind direction.
Reality is not as simple as the AGW activists want to believe.

Weekly_rise
Reply to  MarkW
March 4, 2021 5:40 am

There will certainly be random variability captured in the anomaly that does not reflect regional climate, but the long-term trend in anomalies will certainly capture regional climate change, which is the thing we want to measure to begin with.

John Tillman
Reply to  Steve Z
March 4, 2021 7:24 am

Mt. Rainier, 14,411 feet, lies to the east of Puget Sound, in the Cascades. The highest point on the Olympic Peninsula is Mt. Olympus, at 7,980 feet.

But of course your point is valid.

Rob_Dawg
Reply to  Weekly_rise
March 3, 2021 11:54 am

> “The fact that a box doesn’t have a station inside it doesn’t mean we have no idea what the temperature there would be.”

Yes.It.Does. The very definition of “not data.”

Weekly_rise
Reply to  Rob_Dawg
March 3, 2021 12:13 pm

Not having a direct measurement of a quantity doesn’t mean we’re blind about what values the quantity might take on.

Reply to  Weekly_rise
March 3, 2021 12:17 pm

“two blocks down”

’Cause that’s what this is about, right? Guessing what’s two blocks down.

Nothing new here. Just a Warmer named Weekly_rise making noise.

Andrew

mrsell
Reply to  Weekly_rise
March 3, 2021 12:40 pm

“Not having a direct measurement of a quantity doesn’t mean we’re blind about what values the quantity might take on.”

Unless you take the measurement, you don’t know what it is.

So, yes. Without a measurement you are blind.

Rory Forbes
Reply to  mrsell
March 3, 2021 2:47 pm

The warmists just don’t seem to understand what empirical evidence is.

bit chilly
Reply to  Rory Forbes
March 3, 2021 5:33 pm

Unlike most other scientific disciplines climate science doesn’t appear to be too hot on empirical evidence.

In some cases empirical evidence has been discarded because it didn’t fit preconceived notions of what it should be showing.

Rory Forbes
Reply to  bit chilly
March 3, 2021 6:00 pm

Exactly! The entirety of the basis for the greenhouse effect, which underpins true believer climate “science” is conjecture and speculation. Therefore all they seem to require is the same standard for their supporting evidence. Hell, these same people fully believe that consensus is somehow a function of the scientific method … sort of “majority rules” science.

In fact, there is absolutely no empirical evidence to support the greenhouse effect. CO2 is NOT the control knob of climate.

MarkW
Reply to  Rory Forbes
March 3, 2021 7:12 pm

These are the same guys who consider the output of computer models to be data.

Rory Forbes
Reply to  MarkW
March 3, 2021 8:07 pm

Yep … odd isn’t it? It drives them mad when you point that fact out to them.

Weekly_rise
Reply to  mrsell
March 4, 2021 5:51 am

We physically can’t place a weather station at every single point on Earth’s surface, so the best we can ever do is use a finite number of stations to “represent” the temperature of all the places we don’t directly measure. This is a fundamental shortcoming of living in the actual real world and needing to make actual real measurements of things.

Carlo, Monte
Reply to  Weekly_rise
March 4, 2021 6:59 am

Do you not see your circular reasoning here?

This is not science.

Rob_Dawg
Reply to  Weekly_rise
March 3, 2021 1:33 pm

> “…might take on.”

What if we did that at the grocery store? Average your purchases with the person in front and behind you in line? Effectively giving them each 1.5x weight and you 0 weight? And that’s what we have here. Arctic temps rising 2x the world average getting more weight by creating synthetic temperature records.

Weekly_rise
Reply to  Rob_Dawg
March 4, 2021 5:54 am

The prices of several people’s groceries are independent quantities unrelated to each other; this is not true of regional climate. The climate of one area is fundamentally connected to the climate of an area a few kilometers away.

Carlo, Monte
Reply to  Weekly_rise
March 4, 2021 7:02 am

a few kilometers

Is your extrapolation still valid over thousands of kilometers?

Jim Gorman
Reply to  Weekly_rise
March 3, 2021 3:53 pm

Look at your local weather map on TV some night. Temps will vary by AT LEAST 1, 2, and even 3 degrees within a square mile, or even 5 miles. That means anomaly differences of the same magnitude even within a grid square, and probably less. How do you choose which to use? An average? That is made-up data with NO BACKUP other than “we think this may be correct!”

MarkW
Reply to  Jim Gorman
March 3, 2021 7:15 pm

The AGW’ers assume that whatever the variance between these locations is today, it will remain the same tomorrow, next week, next year.
Anyone who watches weather reports for a few weeks will be able to see that this assumption is not true.

Reply to  Jim Gorman
March 4, 2021 3:01 am

[image]

Example of what you said, measured data

Derg
Reply to  Weekly_rise
March 3, 2021 3:54 pm

MIGHT 😉

fred250
Reply to  Weekly_rise
March 3, 2021 4:08 pm

What a load of GARBAGE

You are blind, certainly within a few degrees either way.

And of course once urban areas are smeared over vast tracts of non-urban areas, trends will be HIGHLY AFFECTED.

Bryan A
Reply to  Rob_Dawg
March 3, 2021 3:01 pm

Exactly, propagating bad data into an adjacent cell with no data just spreads bad data

MarkW
Reply to  Rob_Dawg
March 3, 2021 7:11 pm

Even the best guess, is still a guess.

Bob Rogers
Reply to  Weekly_rise
March 3, 2021 12:07 pm

This “clearly superior” approach was presumably made up by someone without a lot of experience in the outdoors. If I turn left at the end of the street and walk 1/2 mile it can be as much as five degrees cooler than if I turn right and walk 1/2 mile.

Weekly_rise
Reply to  Bob Rogers
March 3, 2021 12:45 pm

HadCRUT is using anomalies, which are representative of a much larger area than the absolute temperature is. The mean temperature 1/2 mile away might be different than the mean temperature at your house, but a change in the temperature at your house is almost certainly accompanied by a change in the temperature 1/2 mile away.

Streetcred
Reply to  Weekly_rise
March 3, 2021 2:24 pm

BS.

Carlo, Monte
Reply to  Weekly_rise
March 3, 2021 2:46 pm

Hand waving.

Rory Forbes
Reply to  Weekly_rise
March 3, 2021 2:51 pm

HadCRUT is using anomalies, which are representative of a much larger area than the absolute temperature is.

Which is the reason why they have little understanding of climate and their models fail continually.

but a change in the temperature at your house is almost certainly accompanied by a change in the temperature 1/2 mile away.

That is “almost certainly” wrong … for a long list of reasons.

Krishna Gans
Reply to  Weekly_rise
March 3, 2021 3:08 pm

[images]

You know what weatherfronts are, right ? (left pic, cold from east to west, right pic snow forecast)
On the one side, you have warm or hot air, on the other cold(er) air.
That front doesn’t move 24/7. It stops or changes the direction.
Enjoy interpolating the right way. 😀

WXcycles
Reply to  Weekly_rise
March 3, 2021 3:29 pm

What’s wrong with saying that when you have no data you just say, “Sorry, we have no data there, we don’t know the record or the trend.”

“Yes! We have no bananas! We have no bananas today!”

How do you know people would not respect that, given it is the truth?

Obtaining basic scientific honesty should not be like pulling-teeth, it should come easily, with no effort. It takes real effort to keep on lying and avoiding that you actually have no bananas today.

Lets interpolate an imaginary banana from this real banana, 2,000 km away … and call it a real banana!

That’s genius level science!

Guess what?

You still don’t know. And you are now lying to everyone, and calling it science … which is another lie.

And this is why you are not, and never can be respected.

MarkW
Reply to  WXcycles
March 3, 2021 7:35 pm

Having no data is actually the best possible result for the AGW crowd. The less data they have, the more opportunities they have to generate the data that they need.

Ulises
Reply to  WXcycles
March 4, 2021 8:26 am

>>What’s wrong with saying that when you have no data you just say, “Sorry, we have no data there, we don’t know the record or the trend.”<<

When you discard grid cells with no data, you lose the correct area weighting, which might affect the result.
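The weighting point can be sketched numerically. Grid cells shrink toward the poles, so a global mean weights each latitude band by the cosine of its latitude, and dropping empty cells implicitly assumes they behave like the average of the cells you kept. All the anomaly values below are hypothetical, invented purely for illustration.

```python
import numpy as np

# Cell-centre latitudes for 5-degree latitude bands
lats = np.arange(-87.5, 90.0, 5.0)
weights = np.cos(np.radians(lats))   # band area shrinks toward the poles

# Hypothetical zonal anomalies: strong Arctic warming, mild elsewhere
anoms = np.where(lats > 65, 2.0, 0.5)

# Area-weighted global mean over all bands
full = np.average(anoms, weights=weights)

# Drop the Arctic bands, as if they had no stations at all
keep = lats <= 65
partial = np.average(anoms[keep], weights=weights[keep])
```

With these invented numbers, the partial average simply equals the anomaly of the kept bands (0.5), while the full average is pulled slightly higher by the warm Arctic bands; which direction the bias runs in the real record depends entirely on whether the dropped regions warm faster or slower than the rest.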

WXcycles
Reply to  Ulises
March 4, 2021 11:37 pm

Try not making a map with inadequate data then.

Trying to Play Nice
Reply to  Weekly_rise
March 3, 2021 3:54 pm

Weekly_rise, you are an idiot. You say that “a change in the temperature at your house is almost certainly accompanied by a change in the temperature 1/2 mile away.” If that were true, then the entire planet would change temperature in lockstep all the time. That is obviously not the way it works in the real world. Yes, I know there was a peer-reviewed paper making that claim. That means there are a bunch of idiots out there besides you.

Jim Gorman
Reply to  Weekly_rise
March 3, 2021 4:02 pm

Give it up!

“anomalies, which are representative of a much larger area than the absolute temperature is.”

Where do you come up with this sh**te? Why would an anomaly be representative of a larger area? That doesn’t even make logical sense!

Do you ever, ever wonder why these modifications always come out with higher temps? Because you are averaging a high one with a low one. Show us where the infilled data actually uses the lower temp.

If you have one at +2 and one at -2, what do you use? A value that is always higher than the low one. Do you think that might, just might bias the infilled data?

Weekly_rise
Reply to  Jim Gorman
March 4, 2021 6:06 am

Because an anomaly represents a change from the regional climatology at a location. It is unlikely, nigh on impossible, that the climate can change locally over a single point on the earth’s surface and not also have changed in the surrounding region.

fred250
Reply to  Weekly_rise
March 3, 2021 4:10 pm

“which are representative of a much larger area than the absolute temperature is.”

UTTER RUBBISH, weakling.

A large proportion of temperature sites are in urban-affected areas.

The TRENDS ARE NOT REPRESENTATIVE of the surrounding areas.

bit chilly
Reply to  Weekly_rise
March 3, 2021 5:36 pm

That right there is the biggest load of nonsense i have ever heard. I seriously hope you are not involved in climate science in any way although it would explain some of the junk science it generates.

MarkW
Reply to  Weekly_rise
March 3, 2021 7:19 pm

It is your assumption that all three locations will warm or cool by the same amount in unison. In the real world, that assumption is rarely valid.

rbabcock
Reply to  Weekly_rise
March 4, 2021 5:40 am

No it is not. Go for a ride on a motorcycle down a road and you will constantly go through warmer and cooler air, so by just riding over a hill into a dip in the road the air temperature can change noticeably. Pass through woodlands and open fields and the same thing happens.

Even in my yard I currently have daffodils in bloom in one part and just coming up in others because of uneven temperatures.

Also how about measuring the direct temperature in the Arctic or Antarctic? How many sensors are there per 1000 km2? But HadCRUT has no issues giving us the air temperatures to the nearest hundredth of a degree. I never see error bars on any of what is published.

Weekly_rise
Reply to  rbabcock
March 4, 2021 7:36 am

The absolute temperatures vary quite a bit over short distances, just because of changes in landscape, but the long term regional climatology does not vary so much, and changes in the climate at one point in a region are almost certainly echoed in changes in the climate at other points within the region. Anomalies are measuring change from the average regional climate (is it hotter or colder than normal for this spot?).

While I’m cautious about introducing additional confusion into the discussion, imagine that instead of the regional climatology, our “baseline” is simply the annual average temperature from January to December for a region. We might expect that a spot on a hill will be cooler in the summer than a spot in a nearby valley because of differences in altitude, so the absolute temperatures might be quite different. But both points will be experiencing summer at the same time, so the anomaly for both can be expected to be higher than the annual-average “baseline.” The same goes for long-term climate: different points in a region might have different absolute temperatures due to local differences in terrain, land cover, altitude, etc., but their long-term climate will be evolving similarly over time.

The Hadley Centre does certainly publish error estimates for the HadCRUT dataset, you can view them here, or download the data to play with yourself.

Mr.
Reply to  Weekly_rise
March 3, 2021 12:16 pm

Here’s another analogy Weekly –

I had to have treatment for skin cancers on my arms recently.
The medicos identified dozens & dozens of little buggers all up & down my arms.
They prepared a plan to treat each spot with a variety of treatments over a few weeks.
I said – “why do a spot-by-spot exercise? Wouldn’t it be more practical to just assume that all the skin on my arms needs treatment, and have at it?”
They said – ” just because one small area presents evidence of a skin cancer, we can’t assume that the adjoining skin areas are in the same condition. That would be overreaching. But even if there were contiguous detections, they could be in wholly different stages of development, and require different approaches”

So, as the old saying goes –
“when you ASSUME, you make an ASS out of U and ME”

Editor
Reply to  Weekly_rise
March 3, 2021 1:13 pm

Adjustment is when the set of measured values is changed. In this case you start with a data value that hasn’t been measured, i.e., it’s not in the set of measured values. You end with a data value that has been constructed by a computer, i.e., the set of measured values has been changed. It’s an adjustment.

Nicholas McGinley
Reply to  Weekly_rise
March 3, 2021 1:55 pm

Area above the Arctic circle: ~4% of the globe.
Area between the Tropics: ~40% of the globe

Bryan A
Reply to  Weekly_rise
March 3, 2021 2:54 pm

Think of it this way…
All the temperature sites in the L.A. basin go off line and return 9999. So the data is appended with nearby grids like Mojave and Death Valley. Or instead of land based grids, they utilize the grids due west in the Pacific. What could go wrong with that?

Ulises
Reply to  Bryan A
March 4, 2021 8:49 am

Make an educated choice. I’d go for the Pacific, but have never been there. The worst thing to do would be to calculate with 9999, something that has happened many times in the history of computers.

bit chilly
Reply to  Weekly_rise
March 3, 2021 5:22 pm

Weeklyrise, this explanation is utter nonsense, as is the pha method and any other method that involves pseudo data as opposed to physical measurement in the area specified.

My front door faces north-east, my back door south-west. I can easily see a 10 °C difference in temperature at certain times of year, depending upon prevailing conditions.

Anyone claiming a measured temperature applies to any other spot on the planet other than where the measuring equipment is sited is flat out wrong. If climate science was held to the same rules as all the other scientific disciplines the error bars on these published trends would make a mockery of their claimed significance.

Fin
Reply to  Weekly_rise
March 3, 2021 6:56 pm

By statistical methods, do you mean kriging, nearest-neighbour analysis, interpolation, Gaussian smoothing, or something else? If so, then the increase in HadCRUT v5 should be marginal, as most of those methods use a best linear unbiased estimator…

Loren C. Wilson
Reply to  Weekly_rise
March 3, 2021 7:02 pm

What it should be? Or what it is?

pHil R
Reply to  Weekly_rise
March 4, 2021 9:25 am

You apparently don’t understand statistics, their limitations and their blatant misuse.

menace
Reply to  Weekly_rise
March 5, 2021 8:24 am

Yeah, but wasn’t HadCRUT v4 also doing the same sort of infilling for uncovered regions? Why is the greater degree of infilling in v5 (driving global averages for current temps 0.2 °C higher) somehow “better” than the technique used for infilling in v4?

You should understand why it looks fishy to a layperson.

guidoLaMoto
Reply to  Weekly_rise
March 3, 2021 2:20 pm

You’re right: that’s not “adjusting data.” It’s “fantasizing data where none exists.”

Rory Forbes
Reply to  guidoLaMoto
March 3, 2021 2:56 pm

That’s it exactly. An interpolation between two measured locations might be useful in some respects, but when they average two interpolated loci, they’re wandering into fantasy land.

The Canadian Arctic has few measuring stations. The Antarctic has even fewer per given area. Yet they somehow manage to be able to provide near certainty with temperatures to the 100th of a degree. Amazing (if you believe them)

Jim Gorman
Reply to  Weekly_rise
March 3, 2021 3:05 pm

That is normally addressed as “correcting” data and requires detailed records of why and where.

Shanghai Dan
Reply to  Weekly_rise
March 3, 2021 4:16 pm

Perhaps the word you’re looking for isn’t infilling or adjusting, but fabricating.

But I’m just an engineer…

Loren C. Wilson
Reply to  Weekly_rise
March 3, 2021 7:01 pm

So how does the new estimate compare with the average of stations without infilling? If it is the same, then no need to infill – you are just making up data to claim better coverage. If it is different, how do you know it is more accurate? By your own admission, the stations are sparse and don’t always report. Hard to use only part of them to infill and then compare your estimated temperatures to reality. In any other discipline, infilling is condemned as poor research at best, fraud at worst.

Lrp
Reply to  Weekly_rise
March 4, 2021 12:18 am

It’s called making up stuff

Streetcred
Reply to  Bad Andrew
March 3, 2021 2:19 pm

Clearly they’ve learned this from the Australian BoM … create infill ‘data’ from other artificially ‘enhanced data’ … do not under any circumstances use ‘real’ data which does not support the narrative. For example … https://kenskingdom.wordpress.com/2020/10/05/more-questionable-adjustments-cape-moreton/

Gerald Machnee
Reply to  Willis Eschenbach
March 3, 2021 10:43 am

Willis, I am no longer surprised.
Tony Heller has been calling it fraud for years. Over 50% is now infilled and created. Soon we will not need observations. Look at Heller’s correlation of temp adjustment and CO2 increase – a 97 to 98% correlation coefficient. And some are still calling that science.

Weekly_rise
Reply to  Gerald Machnee
March 3, 2021 11:32 am

Heller is quite incorrect in the way he treats the surface station data, and his claims about the infilling done by NOAA are quite wrong (and not relevant to the recent changes to HadCRUT). There is a really nice article here that lays out the arguments against Heller’s approach. Recommend giving it a read.

Krishna Gans
Reply to  Weekly_rise
March 3, 2021 11:53 am

Out of your link:
When fact checkers at Polifact

LOL, reason to stop reading further.

Weekly_rise
Reply to  Krishna Gans
March 3, 2021 12:58 pm

You should have kept reading just a few words further. The author goes on to say, “Unfortunately, they [politifact] didn’t give a clear rebuttal.” And then goes on to deliver the clear rebuttal that Politifact failed to.

Krishna Gans
Reply to  Weekly_rise
March 3, 2021 2:46 pm

Sorry, couldn’t find any rebuttal, what I found was a false rectification.

Weekly_rise
Reply to  Krishna Gans
March 3, 2021 6:35 pm

I eagerly await your explanation of what was false about it.

Krishna Gans
Reply to  Weekly_rise
March 4, 2021 3:02 am

Everything about it is false, so simple 😀

fred250
Reply to  Weekly_rise
March 3, 2021 6:56 pm

Lots of blatant arm-waving , devoid of science.

Weekly_rise
Reply to  fred250
March 4, 2021 6:13 am

Can you be more specific? I’m genuinely interested to hear a factual rebuttal of the information in the article, since it seems to me to be quite compelling. Snappy lines get you more brownie points from your peers, but they don’t contribute to meaningful discussion.

fred250
Reply to  Weekly_rise
March 3, 2021 7:01 pm

The whole article basically ADMITS THAT THEY FABRICATE DATA.

Thanks for making our point for us, moron!!!

You are doing a “griff”, by presenting articles that back up our facts.

In-filling DOES NOT GIVE YOU THE RIGHT DATA.

It gives you FABRICATED DATA

It CANNOT IMPROVE any results.

MarkW
Reply to  Weekly_rise
March 3, 2021 7:44 pm

The problem isn’t that Politifact isn’t clear. The problem is that Politifact is a heavily biased source.

fred250
Reply to  Weekly_rise
March 3, 2021 6:55 pm

“Heller is quite incorrect in the way he treats the surface station data”

WRONG..

You just DON’T LIKE IT because it exposes the FRAUD of the HadCrud fantasy fabrication.

The fact that this clueless joker writing the article references zeke horsefather tells you all you need to know.

Zeke is ALL IN on data fabrication and mastication.

Tony Heller uses WHAT WAS RECORDED.

Get over it.

Weekly_rise
Reply to  Willis Eschenbach
March 3, 2021 10:51 am

The change in temperature is reflecting the fact that the Arctic is warming faster than any other place on earth, and the Arctic was essentially not represented in HadCRUT V4 and earlier. You’re not seeing an increased trend in the recent decades because of increasing sampling through time, you’re seeing an increased trend in the present day because the Arctic has been warming the fastest in the past few decades.

Is that any clearer?

Dave Fair
Reply to  Willis Eschenbach
March 3, 2021 11:47 am

The use of anomalies hides many sins.

philincalifornia
Reply to  Dave Fair
March 3, 2021 12:37 pm

Exactly, it’s a form of scientific fraud in itself – pretending that a large area, that requires less energy to increase the temperature, is reflective of the global energy budget.

Did they infill the missing grids in the Antarctic, where it has been cooling?

Weekly_rise
Reply to  Willis Eschenbach
March 3, 2021 11:51 am

The anomaly is a combination of trend+noise, so picking out individual years and asking why they’re higher or lower in a particular version is not a fruitful exercise. The anomaly in the Arctic now has proper spatial weighting, so it actually represents the area of the globe covered by the Arctic. Whatever the anomaly for the Arctic region might have been in a particular year, it’s now receiving accurate representation in the global average.

Also, this is a dataset of anomalies, it doesn’t matter whether you’re increasing coverage in a warmer or colder part of the world, it only matters whether you’re increasing coverage for a part of the world that is warming or cooling.

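For readers wondering what “proper spatial weighting” amounts to in practice: gridded products typically weight each cell by its area, which on a latitude-longitude grid scales with the cosine of latitude. Here is a minimal sketch with made-up zonal anomalies (the numbers are synthetic; this is not HadCRUT’s actual grid or method):

```python
import math

# Synthetic zonal-mean anomalies (deg C) for six latitude bands.
lats  = [-75.0, -45.0, -15.0, 15.0, 45.0, 75.0]   # band centres, degrees
anoms = [0.1, 0.2, 0.3, 0.3, 0.5, 1.5]            # polar band warmest

# A naive mean treats every band as covering the same fraction of the globe.
naive = sum(anoms) / len(anoms)

# Area weighting: a latitude band's area scales with cos(latitude),
# so the small polar bands count for less than the large tropical ones.
weights  = [math.cos(math.radians(lat)) for lat in lats]
weighted = sum(w * a for w, a in zip(weights, anoms)) / sum(weights)

print(f"naive mean: {naive:.3f}   area-weighted mean: {weighted:.3f}")
```

With these made-up numbers the naive mean (0.483) overstates the area-weighted mean (0.385), because the warm polar band covers little actual area; infilling a previously empty polar band can of course move the weighted mean in either direction.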
fred250
Reply to  Weekly_rise
March 3, 2021 12:12 pm

HadCrud an AGENDA DRIVEN FABRICATION that bears little resemblance to measured temperatures in MOST of the world.

philincalifornia
Reply to  Weekly_rise
March 3, 2021 12:43 pm

While I appreciate your comments, just ask yourself this – if the Arctic was cooling faster, do you think this exercise would have been produced? It’s called the Texas sharpshooter fallacy.

Jim Gorman
Reply to  Weekly_rise
March 3, 2021 4:53 pm

Part of the problem with anomalies is that when averaged, you get a linear view of what the earth looks like. That’s why it appears that everywhere is warming the same. That’s how averages work. On the earth, that simply isn’t true.

Here is what a gradient for earth might look like. As you can see, temperatures are not linear. This is to be expected as you move from a hot temp to a cold temp. As you integrate delta T’s, they would seldom turn out linear. This means the GAT never describes the gradient properly, and is why a variance with an average is important.

Screenshot 2021-03-03 184041.jpg
lee
Reply to  Weekly_rise
March 3, 2021 6:26 pm

“The anomaly is a combination of trend+noise” So how do they separate trend from noise? Is the noise perhaps constant? If so can you provide evidence of this?

“so it actually represents the area of the globe covered by the Arctic” I think you meant “misrepresents”.

Louis Hunt
Reply to  Weekly_rise
March 4, 2021 5:36 am

“…it’s now receiving accurate representation in the global average.”

If it is now “accurate,” are they through changing the methods they use to derive the global temperature? Is this the gold standard for now and the future? Or are they going to change their methods again because today’s methods, like previous methods, will be deemed subpar? Until they stop changing the data, we cannot rely on today’s data, because it’s going to change again. Who knows by how much. That makes it virtually worthless.

Weekly_rise
Reply to  Louis Hunt
March 4, 2021 7:14 am

They will continually update the surface temperature analyses as improved methodologies are developed and as they’re able to add more data. The changes always provide incremental improvement through time.

Note, though, that this particular change just brings HadCRUT into line with methods other groups have been using for a very long time, so in that way they have been lagging the pack.

KevinM
Reply to  Weekly_rise
March 5, 2021 7:14 am

Repetitive commentary is drowning an interesting debate here between Willis and Weekly. Please resist the temptation to post variations of the same comment you’ve posted thirty times in every thread for the last 3 years. I get it. You think they’re liars.

Bellman
Reply to  Willis Eschenbach
March 3, 2021 12:16 pm

“Weekly, let’s take the year 1970. During that time, the Arctic was warming faster than the rest of the planet, just like today.”

Was it? GISS shows the Arctic as being colder than average that year.

amaps.png
Joel O'Bryan
Reply to  Weekly_rise
March 3, 2021 11:43 am

“…the fact that the Arctic is warming faster than any other place on earth,”

I call that conforming the data to meet expectation. Your discipline has decided what is fact and then adjusts the data to agree. It’s pseudoscience crap.
No other discipline of science would even consider just making up missing data to “fill in the blank.”
So many of today’s climate “scientists” are frauds, and most don’t know it or won’t admit that what they are doing is junk science. They just continue to perpetrate a fraud on the public’s confidence. A con game to keep the grants and prestige flowing for professional advancement, while satisfying the political agenda of their grant masters.

Reply to  Joel O'Bryan
March 3, 2021 11:46 am

Christopher Monckton posted an article on this HadCRUT update a couple of weeks ago. I highlighted in a snapshot what Hadley CRU has done to the past 150 years.

There simply can be no ethical explanation for this crap.

HadCRUT46v50.jpg
Tim Gorman
Reply to  Joel O'Bryan
March 3, 2021 5:34 pm

If you don’t have data across most of the Arctic then how do you assume that it is warming faster than any other place on earth? That’s an opinion and not a fact.

fred250
Reply to  Weekly_rise
March 3, 2021 12:08 pm

“The change in temperature is reflecting the fact that the Arctic is warming faster than any other place on earth,”

Except IT ISN’T

COOLING in the Arctic from 1980-1995


Then a step up at the 1998 El nino

Then NO WARMING THIS CENTURY until the 2015 El Nino Big Blob event, gradually subsiding.


fred250
Reply to  Weekly_rise
March 3, 2021 12:09 pm

And the Arctic was actually similar or WARMER in the 1940s than now.



Carlo, Monte
Reply to  Weekly_rise
March 3, 2021 2:48 pm

How do you know this? Extrapolation into a huge region?

Jim Gorman
Reply to  Weekly_rise
March 3, 2021 4:09 pm

Nick Stokes recently criticized Andy May for the same explanation – that more Arctic stations in the northern Pacific artificially bias the temperature trend lower. It apparently never crossed his mind that the lower temperatures may be more correct.

Isn’t it funny how COLDER Arctic temperatures show up in one place, but suddenly they become warmer in another?

Dave Fair
Reply to  Weekly_rise
March 3, 2021 10:41 am

Then why no change to the 1930s through ’40s “data?” Smell test, anybody?

MarkW
Reply to  Weekly_rise
March 3, 2021 10:58 am

surprise, surprise, surprise
When making up data, they manage to create data that makes the trend look worse.
Don’t worry boys, we’ve made sure that our phony baloney jobs are safe for another year.

Charles Higley
Reply to  Weekly_rise
March 3, 2021 11:29 am

If we assiduously collected station data from existing sites and then used ONLY that data, we would have a global average that may not be exact, but changes over time would be clear and reliable. After all, it is the trend that they worry about. It’s the misbegotten idea that we have to have a true, cobbled-up global average that makes this all an exercise in making up data.

John Andrews
Reply to  Charles Higley
March 3, 2021 11:34 pm

http://temperature.global

I check this every day.

Joel O'Bryan
Reply to  Weekly_rise
March 3, 2021 11:35 am

That’s called “making shit up” in my science book.
No other discipline of science would allow such data creation in officially produced products that others are to rely on, especially not for public policy.

That the effect of “adjustments” always goes in one direction, i.e. cooling the past, warming the present, suggests ill motives at work, not ethical data science.
That these “updates” happen with shrugging acceptance in the climate world demonstrates how far lost the discipline is into a la-la fantasy land of pseudoscience.

Weekly_rise
Reply to  Joel O'Bryan
March 3, 2021 11:45 am

I can’t speak for all fields of science, but interpolation is quite a common technique, as far as I’m aware, and this goes also for spatial interpolation of geographic data. A GIS can take a set of elevation points and interpolate between them to construct a continuous elevation surface, and I think very few people would call such an exercise “making shit up,” or would argue that the terrain is more accurately represented as a landscape of point-scale spikes in elevation present only where we have data points.
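To make the GIS analogy concrete, here is a minimal inverse-distance-weighting sketch in Python. IDW is one of the simplest spatial interpolators a GIS offers; it is emphatically not HadCRUT5’s actual infilling method, and the points below are synthetic:

```python
import math

def idw(x, y, stations, power=2.0):
    """Inverse-distance-weighted estimate at (x, y) from (sx, sy, value) points."""
    num = den = 0.0
    for sx, sy, value in stations:
        d = math.hypot(x - sx, y - sy)
        if d == 0.0:
            return value            # exactly on a station: use its reading
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

# Three synthetic "stations" holding elevation (or anomaly) readings.
stations = [(0.0, 0.0, 100.0), (10.0, 0.0, 200.0), (0.0, 10.0, 300.0)]

# An estimate between the stations is a blend of their values; it can
# never fall outside the range of the surrounding readings.
print(idw(5.0, 5.0, stations))
```

Note that the interpolated value is always bounded by the surrounding data points, which is the substance of the terrain objection raised in the replies: interpolation cannot see a ridge (or a cold pocket) that no station sampled.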

Joel O'Bryan
Reply to  Weekly_rise
March 3, 2021 12:00 pm

You’re dealing with historical data, not physical geography. If I interpolate elevation between where I live in Tucson Arizona (2,500′) and Oracle Arizona (4,500′), Mt Lemmon at 9,100′ is going to smack my ass on the trip.

These historical temp adjustments are simply Orwellian manifestations and not to be trusted.

“Who controls the past controls the future. Who controls the present controls the past.” – 1984, Orwell.

Hadley CRU is simply adjusting the past in the present to attempt to control the future. No other plausible explanations exist at this point.

Weekly_rise
Reply to  Joel O'Bryan
March 3, 2021 12:40 pm

There are good reasons to believe that the climate of nearby regions is connected, and that the anomaly represents a region with a radius of many kilometers around the station. And, as I mentioned earlier, the gridding is an arbitrary convention – the fact that a station sits in a single grid cell does not mean it only represents the region bounded by that cell. In fact such an assumption is certainly untrue, so the fact that HadCRUT no longer makes this assumption is a good thing.

Carlo, Monte
Reply to  Weekly_rise
March 3, 2021 2:55 pm

Wow, you are a true believer in this stuff.

Congratulations.

Simon
Reply to  Carlo, Monte
March 3, 2021 3:43 pm

Not just a true believer but my radar is telling me a true understander.

Derg
Reply to  Simon
March 3, 2021 4:00 pm

Is this like Russia colluuuusion Simon?

Is that truth 😉

fred250
Reply to  Simon
March 3, 2021 4:17 pm

No, a totally clueless ACDS non-entity, trying to defend the indefensible

Maybe on an extreme LOW level of comprehension,

brain-washed into IGNORANCE

…. just like you are Simple Simon

MarkW
Reply to  Simon
March 3, 2021 7:49 pm

Yes, he understands that what he is doing is wrong, but he is better than average at coming up with ever more imaginative excuses for doing so.

Krishna Gans
Reply to  Weekly_rise
March 3, 2021 3:25 pm


Antibes, between Nice and Menton (nearer to the mountains)

Take a coastline: the Mediterranean sea on the south and, 1.5–2 km to the north, the rising Maritime Alps, with even snow no more than about 10 km away. What will you correctly interpolate there? It is not even the same climate zone, yet you would have us believe the climates of nearby regions are connected. There is no connection at all.

Shanghai Dan
Reply to  Krishna Gans
March 3, 2021 4:25 pm

I love walking the beaches here in Ventura, CA, when it’s sunny and 70 – and looking at the snow-capped Topa Topa mountains just 30 miles away.

Tim Gorman
Reply to  Krishna Gans
March 3, 2021 5:40 pm

Take a look at Denver or Colorado Springs versus Pikes Peak. Look at the north side of the Kansas River Valley and then the south side of the Kansas River Valley. Look at San Diego and Ramona (25 miles east of San Diego).

The infill makes *NO* adjustments for geographical or terrain differences between points.

Bryan A
Reply to  Weekly_rise
March 3, 2021 3:31 pm

Which is of course why San Francisco can be at Average Temp on a given day and Oakland can be 2d above average on the same day while Santa Rosa can be 5d above average. Using Oakland to infill SF produces a temp anomaly increase of 2d while infilling with Santa Rosa will give SF an anomalous increase of 5d

WXcycles
Reply to  Weekly_rise
March 3, 2021 3:42 pm

You sir, are a loon.

fred250
Reply to  Weekly_rise
March 3, 2021 4:14 pm

WRONG,

Urban areas are affected by urban warming

There is NO REQUIREMENT for regions to be homogeneous

That is a LIE to allow the corruption of data.

Jim Gorman
Reply to  Weekly_rise
March 3, 2021 5:04 pm

What kind of anomalies would you expect from this map? I see a difference of 12 degrees within 150 miles. There are many more of 2 or 3 degrees. Using a common baseline, the anomalies are all over the place. Remember, this is in the Great Plains, with only small hills for terrain. What would you interpolate some of the surrounding temps to be? We can look them up too!

kansas temp map.jpeg
Weekly_rise
Reply to  Jim Gorman
March 4, 2021 6:19 am

The anomalies will contain a combination of signal+noise (climate+weather); you have to look at them over long periods of time to determine whether there is a trend.
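As an illustration of why window length matters for separating trend from noise, here is a toy series – a small imposed trend plus much larger random “weather” – fitted by ordinary least squares. The data are entirely synthetic; nothing here is anyone’s actual processing:

```python
import random

random.seed(42)

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

TRUE_TREND = 0.02   # imposed "climate" trend, degrees per step
NOISE_SD = 0.25     # much larger step-to-step "weather" scatter

years = list(range(150))
temps = [TRUE_TREND * t + random.gauss(0.0, NOISE_SD) for t in years]

print(f"slope from all 150 points: {ols_slope(years, temps):+.4f}")
print(f"slope from first 10 points: {ols_slope(years[:10], temps[:10]):+.4f}")
```

Over 150 points the fitted slope lands close to the imposed 0.02; over 10 points it is dominated by the noise term and can come out almost anywhere, sign included. That is the uncontroversial part; whether the residual “noise” in real station series is actually well-behaved is a separate question.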

Carlo, Monte
Reply to  Weekly_rise
March 4, 2021 7:06 am

Why must there be a trend?

Plot the standard deviations of all this averaging versus time, maybe it will show you something. Hint: averaging does not remove uncertainty.

Weekly_rise
Reply to  Carlo, Monte
March 4, 2021 7:43 am

There will not necessarily be a trend, if the climate is not changing.

“Hint: averaging does not remove uncertainty.”

An average will certainly have lower uncertainty than a single measurement. In fact the uncertainty of the mean is inversely proportional to the number of observations used to compute the mean.
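For reference, the textbook result both sides are circling is that the standard deviation of the mean of n independent measurements of the same quantity is σ/√n – it falls with the square root of n, not with n itself. A quick self-contained simulation of that idealized case (synthetic Gaussian readings, nothing to do with any real station data):

```python
import random
import statistics

random.seed(0)

SIGMA = 1.0      # standard deviation of a single simulated reading
TRIALS = 20000   # repeated experiments for each sample size n

for n in (1, 4, 16, 64):
    # Spread of the mean of n independent readings of the same quantity.
    means = [statistics.fmean(random.gauss(0.0, SIGMA) for _ in range(n))
             for _ in range(TRIALS)]
    observed = statistics.pstdev(means)
    print(f"n={n:2d}  sd of the mean: {observed:.3f}  (sigma/sqrt(n): {SIGMA / n ** 0.5:.3f})")
```

None of this settles whether single readings of many different measurands can be pooled the same way, which is the point actually in dispute in the replies.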

Carlo, Monte
Reply to  Weekly_rise
March 4, 2021 12:46 pm

Wrong.

Go learn some statistics and then go study the GUM.

Weekly_rise
Reply to  Carlo, Monte
March 4, 2021 3:33 pm

Thank you for your guidance, I just took a look at the GUM and it confirmed my earlier comment. In particular:

“The experimental variance of the mean s²(q̄) and the experimental standard deviation of the mean s(q̄) (B.2.17, Note 2), equal to the positive square root of s²(q̄), quantify how well q̄ estimates the expectation μq of q, and either may be used as a measure of the uncertainty of q̄.”

Jim Gorman
Reply to  Weekly_rise
March 5, 2021 8:55 am

Let’s take a look at the reference from the GUM that you are giving.

As you can see it says,

for a series of n measurements of the same measurand, the quantity s(qk) characterizing the dispersion of the results and given by the formula: 

In case you have trouble understanding this, it means multiple (n) measurements of the same thing (measurand).

When ‘n’ = 1, then ‘s’ becomes undefined, as it should be for a one time, single measurement. You have been told this multiple times. Single measurements DO NOT have a probability distribution.

Experimental standard deviation means building up a probability distribution from many measurements of the same thing, each of which has random errors. The mean of this distribution is the “true value” as given by the measurement device. The standard deviation describes the “dispersion” of possible values around that mean.

For single measurements that have no standard deviation, one must use the best estimate available and that is plus/minus half the interval of the next digit. Thus, 75 +/- 0.5 F. See the attached next definition in the GUM.

See in Note 1, “the half-width of an interval having a stated level of confidence.” That is the +/-0.5.  

Screenshot 2021-03-05 at 105014 AM.jpg
Jim Gorman
Reply to  Jim Gorman
March 5, 2021 9:03 am

The stupid first screenshot didn’t post.

Screenshot 2021-03-05 at 101318 AM.jpg
Jim Gorman
Reply to  Weekly_rise
March 4, 2021 3:36 pm

Answer this with a description of what the “uncertainty of the mean” means along with references that discuss it in mathematical terms.

Jim Gorman
Reply to  Weekly_rise
March 4, 2021 3:35 pm

Temperature trends are not climate and do not indicate climate. They are a very poor proxy for climate. Consequently, “climate + weather” is meaningless.

Station temperature series are time series data. As such they must be analyzed as time series. Averaging a multitude of different time series and doing some kind of a regression hides multiple problems like spurious correlations.

Here is a link to some info on time series analysis. There is much more on the internet about this.

6.4. Introduction to Time Series Analysis (nist.gov)

Jim Gorman
Reply to  Weekly_rise
March 5, 2021 7:58 am

And just exactly what makes up that trend? Tell us if it is something different than daily temperature, anomalies in your case, that is averaged together. It doesn’t matter if you are infilling absolute temps or anomalies, you are by default infilling made up temps.

Tim Gorman
Reply to  Weekly_rise
March 3, 2021 5:38 pm

There are good reasons to believe that the climates of nearby regions are *NOT* 100% connected. Yet no weighting is done with the adjustments. That’s magic, not science.

Krishna Gans
Reply to  Weekly_rise
March 3, 2021 12:12 pm

Imagine the fact that we had 2 or 3 points in Germany with night temps below 0°C in May, in June and in July (it really happened last year), while surrounding data may have been 5–10°C. That will give a nice interpolation, don’t you think?

fred250
Reply to  Weekly_rise
March 3, 2021 12:15 pm

Yes weakling, they are smearing URBAN WARMING all over the globe, where it doesn’t, in reality, exist

Infilling and homogenisations are total bastardisations of reality !

Carlo, Monte
Reply to  Weekly_rise
March 3, 2021 2:54 pm

What you are doing is not interpolation, and you can’t invent new information where none exists.

Like the magical image enhancement software on TV that can take four pixels out of a reflection and produce a perfect image of the car license plate.

Trying to Play Nice
Reply to  Carlo, Monte
March 3, 2021 4:11 pm

You mean the CSI TV shows aren’t always completely true? That blows a big hole in most science believers’ knowledge of the world.

Carlo, Monte
Reply to  Trying to Play Nice
March 3, 2021 8:50 pm

But it sure makes it easy to get the perpetrators within the allotted 45 minutes.

WXcycles
Reply to  Weekly_rise
March 3, 2021 3:40 pm

… as far as I’m aware, and this goes also for spatial interpolation of geographic data.

We used to call that the creation of cumulative error, within geographic overlays.

Now people like you pass it off as actual “data”.

Contemptible … and pathetic … in the extreme.

Circus is in town!

Carlo, Monte
Reply to  Joel O'Bryan
March 3, 2021 2:50 pm

AKA dry-labbing.

Rob_Dawg
Reply to  Weekly_rise
March 3, 2021 11:51 am

Creating new cells that previously had no trend (no data) and infilling with adjacent data known to be rising twice as fast as the world average is very much an example of adjusting the station data.

John Tillman
Reply to  Weekly_rise
March 3, 2021 12:25 pm

Alaska’s all-time high temperature is 100 F, set at Fort Yukon in 1915. Fairbanks reached its all-time high of 99 F in 1919. Despite increasing urbanization, these records still stand.

The hottest temperature ever recorded in Greenland was also in 1915, of 86.2 F. A supposed new record in 2013 was promptly shown to be phoney.

John Tillman
Reply to  John Tillman
March 3, 2021 12:45 pm

The record high for Siberia is 109.8 F, set in 1898 at Chita. Granted, it lies only a bit north of 52 N. I couldn’t find a record high for Belushya Guba, the largest settlement on Novaya Zemlya, or for Uelen on the Chukotka Peninsula.

bit chilly
Reply to  John Tillman
March 3, 2021 5:47 pm

John, I am afraid Weekly_rise won’t understand your post, as it uses actual data to demonstrate a point that goes against the narrative they would like to promote.

John Tillman
Reply to  bit chilly
March 3, 2021 6:06 pm

Yet I thank Weekly Rise for commenting here. We need CACA adherents to show the complete and total depravity and lack of scientific support for the catastrophic, man-made global warming scam, which has cost the world so many lives and so much treasure.

To bed B
Reply to  Weekly_rise
March 3, 2021 12:40 pm

I’ll agree with you. It’s clear that station data is adjusted to better reflect the temperature trend of the grid, which is independent of station data.

It’s good to find some common ground.

To bed B
Reply to  To bed B
March 4, 2021 12:11 am

“which is independent of station data”

I was expecting a giggle rather than -8.

Nicholas McGinley
Reply to  Weekly_rise
March 3, 2021 1:49 pm

There is no one who is surprised that they have the sophistry all worked out in advance.
No one.
BTW…when there were fewer stations, those stations were automatically interpolated/extrapolated to the places where there are no stations.
Because at that time they represented all of the data for that region.

Carlo, Monte
Reply to  Weekly_rise
March 3, 2021 2:43 pm

An exercise in extrapolation, got it.

Reply to  Weekly_rise
March 3, 2021 3:00 pm

WR
If more new data is added (infilled) in regions that are warming most – the Arctic – then of course the total overall warming trend will increase. More warming signal is added to the total.

The job’s a good-un!

WXcycles
Reply to  Weekly_rise
March 3, 2021 3:18 pm

… and starting with V5 they are infilled using data from nearby grid cells.

Worked for BOM too.

Sensor surrounded by tarmac? … Check!

Interpolate for 1,500 km radius where no data exists … Check!

Instant ‘BOM-Data’™.

This was not possible before computers, because people used their brains back then, and realized it was nuts, and completely inaccurate and dishonest.

Michael Jankowski
Reply to  Weekly_rise
March 3, 2021 3:33 pm

“…Previously these grids were simply dropped from the global average (which has been the major difference between HadCRUT and other temperature analyses like GISTEMP)…”

What “major difference?” They claimed HadCRUT4 was “very consistent” with other global and hemispheric analyses.

Global surface temperature data: HadCRUT4 and CRUTEM4 | NCAR – Climate Data Guide (ucar.edu)

“…starting with V5 they are infilled using data from nearby grid cells…”

Actually, according to the link above, there are TWO V5 releases…one with infilling, and one without: “In late 2020, the Met Office Hadley Centre released HadCRUT5. HadCRUT5 is offered in two different versions. The first version is similar to HadCRUT4 in that no interpolation has been performed to infill grid cells with no observations.”

Why would they release a V5 that is the same as V4?

Michael Jankowski
Reply to  Weekly_rise
March 3, 2021 4:00 pm

I’ll take it a step further…

Lines 597 onward in your own link mention some of the differences between v4 and the non-infilled version of v5 that include adjustments to past data (albeit not land stations, but sea data). Of course these adjustments increase the warming trend as they always seem to do. Are you just ignorant, or dishonest?

My favorite part of the paper started around line 636 where they professed that there is less uncertainty with the infilled data v5 than the non-infilled v5. Are they just ignorant, or dishonest?

A C Osborn
Reply to  Weekly_rise
March 4, 2021 1:56 am

Anyone who thinks that an area the size of the Arctic can affect the whole world’s average temperature needs to explain why the USA and the Antarctic don’t.
The majority of unadjusted records show no warming, and in fact many show cooling over the last 3 decades. Take a look at the work of Kirye here
https://notrickszone.com/2021/03/02/jma-data-winter-global-warming-left-japan-decades-ago-no-warming-in-32-years/
With other posts there.

Mark BLR
Reply to  Weekly_rise
March 5, 2021 3:43 am

From page 5 of the PDF file you linked to :

… statistical analysis methods were not used in HadCRUT4 or its underpinning land and marine datasets to infer temperature changes in regions where measurements are not available. An independent application of local statistical interpolation methods to HadCRUT4, in a study by Cowtan and Way (2014), found that statistically infilled reconstructions showed recent warming over high latitude regions that is not proportionately represented in global mean temperatures calculated from the non-infilled HadCRUT4 data set.

From page 6 :

Here, two ensemble surface temperature datasets are presented. The first, the “HadCRUT5 non-infilled dataset”, adopts the gridding and ensemble generation methods of HadCRUT4 (Morice et al., 2012). The second, the “HadCRUT5 analysis”, uses a statistical infilling method to improve the representation of sparsely observed regions.

My understanding from this (and other reading around the subject) is that the “HadCRUT5 non-infilled” version corresponds to an “updated HadCRUT4” — which I think is the version used for “comparison” purposes by Willis in the ATL article, NOT the “Infilled” one (!?) — while the “Infilled” version (the “HadCRUT5 analysis”) is intended to replace the “Cowtan & Way / kriged HadCRUT4” one.

NB : It is entirely possible that my “understanding” (and “thinking” ?) is WRONG !

Valid “comparison” projects would therefore be either :
1) “HadCRUT5 Non-infilled” vs. HadCRUT4, or
2) “HadCRUT5 Infilled (/ analysis)” vs. C & W

From the HadCRUT5 webpage (direct link) you can download TWO versions of the “Ensemble means, Global (NH+SH)/2, monthly” data :
– “HadCRUT5 analysis time series” (at the top of the webpage), and
– “HadCRUT5 non-infilled time series” (you just need to scroll down a bit for this one …)

You will then need to download (and learn to use) the “h5tools” package (Linux, as I did) or application (Windows / Mac-OS).

Hint : I ended up using the following “h5dump” commands to extract the “Infilled” and “Non-infilled” datasets from my local copies of the netCDF (.nc) files.
h5dump -d tas_mean -y -w 17 -o HadCRUT5_1.data HadCRUT5_1220.nc
h5dump -d tas_mean -y -w 17 -o HadCRUT5_2.data HadCRUT5_Non-infilled_1220.nc

The first line of each [.]data file contains the monthly anomaly (w.r.t. the 1961-1990 reference period) for January 1850, the 2052nd contains the December 2020 anomaly.

ResourceGuy
March 3, 2021 10:23 am

A normalized graph of climate ‘research’ funding might also be interesting.

JP Miller
March 3, 2021 10:24 am

One wonders how the people behind this work can keep a straight face, or how any honest climate scientist cannot wonder WTF! Especially when one compares previous data sets to version 5 and sees that ALL the adjustments—in the name of getting the most accurate trends, of course—increase the trend.

Only a fully-corrupted climate science and political process (which funds the scientists) could accept these changes without intense doubt and thorough examination. Then, we have the media who are shameless cheerleaders.

Sadly, how our society thinks and acts in regard to climate-related stuff will only come back to reason when cold that is sufficient, and sufficiently long, to rock society’s foundations brings us back to reason and reality.

In the interim, we skeptics can console ourselves by poking at the insanity around us.

Chris Hanley
Reply to  JP Miller
March 3, 2021 1:34 pm

The Climate Research Unit is now run by Mr Bean.

Alan M
Reply to  Chris Hanley
March 3, 2021 4:07 pm

That must be why they make us laugh

a_scientist
March 3, 2021 10:28 am

Ah yes, the usual…
Cooling the past and warming the recent data to “enhance” the trend.

Dave Fair
March 3, 2021 10:37 am

Willis, I noted (Mk. 1 eyeballs) that ver. 4 and 5 adjusted temperature data were essentially the same during the 1930s and ’40s. On either side (earlier and later) ver. 5 was cooled additionally, up until the magic mid-1970s acceleration where it took off above ver. 4. What happened in the 1930s and ’40s that obviated the need for additional cooling adjustments made to the ver. 5 data both immediately before and after that period?

Scissor
March 3, 2021 10:44 am

“Why the blip?”

Michael Jankowski
March 3, 2021 10:44 am

HadCRUT is dead. Long live HadCRUT!

Felipe Grey
March 3, 2021 10:59 am

World population grew from 3.9 billion in 1973 to 7.8 billion in 2020 (it doubled). Twice as many people crammed into mostly urban spaces (cooking, heating, transport, air travel etc generally warming the surrounding environment where most measurements take place) affecting the surface temperature record. Urban Heat Island effect is observably at least 1 degree Centigrade higher currently in cities compared to rural areas and night-time minimums have generally increased in the records (due to concrete storing more heat). Hardly any observable impact on rural temperature or troposphere temperatures though. This is not adequately compensated for. It’s all BS really.

fred250
Reply to  Felipe Grey
March 3, 2021 12:17 pm

“This is not adequately compensated for.”

Not only is urban warming not adequately compensated for…

They actually USE that urban warming and smear it all over non-urban areas creating a TOTALLY FAKE and MEANINGLESS representation of the global temperature.

Robert W Turner
March 3, 2021 11:08 am

They seem to have forgotten about 10% of the planet as well. So they decided to interpolate from hundreds or thousands of km away in the Arctic where it’s warming and simply leave out the Antarctic where it is possibly cooling. What will they think of in revision 6 to cool the past and warm the present?

John Tillman
Reply to  Robert W Turner
March 3, 2021 11:29 am

The South Pole hasn’t warmed at all since records began there in 1958.

But recently a remote interior region previously unsampled was found to be much colder than previously imagined. Using GISS’s standard of 1200 km, this frigid, off-world-like vast area could have been filled in with “data” from a relatively balmy coastal station.

Science, post-modern.

Rud Istvan
Reply to  John Tillman
March 3, 2021 1:07 pm

JT, BEST disagrees with your observation. They have a slight warming trend. It was manufactured by their automatic data QC algorithm, as explained in footnote 25 to the essay When Data Isn’t in my ebook Blowing Smoke.

BEST rejected 26 very low temperature months in their station 166900, because the next nearest station did not also show them, thus creating a slight warming for Antarctica. The next nearest station is McMurdo, 1300km away and 2700 meters lower on the coast. You see, 166900 is Amundsen-Scott AT the south pole. It is arguably the best maintained, and certainly the most expensive weather station in the world. That is how warming trend sausage is made.

John Tillman
Reply to  Rud Istvan
March 3, 2021 6:09 pm

So, when necessary, the 1200 km standard is tossed, and 1300 km is accepted. Why am I not surprised?

Bellman
Reply to  Robert W Turner
March 3, 2021 12:26 pm

No. They use the same techniques for both Arctic and Antarctic.

Carlo, Monte
Reply to  Bellman
March 3, 2021 2:57 pm

Information creation?

Derg
Reply to  Carlo, Monte
March 3, 2021 4:03 pm

Information fabrication

Rud Istvan
March 3, 2021 11:13 am

Compare to UAH. 5 is worse than 4, and both are ‘off’ high compared to UAH in both anomaly and anomaly trend. That is not Arctic infilling. It is bollixed.

Dave Fair
Reply to  Rud Istvan
March 3, 2021 11:57 am

CliSci tells us that the “greenhouse effect” occurs in the atmosphere and is reflected back to the surface. Satellites and radiosondes show minor warming trends of about 0.14 C/decade. And this is over a period where cyclical global temperatures were on the upswing. Ignoring this will eventually bite CliSci in the posterior.

Richard M
Reply to  Dave Fair
March 3, 2021 4:38 pm

Yup, this is the big problem. Climate science views the surface as if it is different from the atmosphere. Turns out they are in the same thermodynamic system and moving energy from one area to the other does not change the temperature of the system.

Once a person realizes this they also realize that GHGs cannot warm the planet. The entire basis of greenhouse warming is unphysical.

Dave Fair
Reply to  Richard M
March 3, 2021 4:48 pm

That is not what I said nor implied. GHGs warm the planet; it is the water vapor enhancement of CO2’s minor warming potential with which I disagree.

Richard M
Reply to  Dave Fair
March 3, 2021 6:12 pm

Dave, I understood you did not say that. I was pointing out you were accepting a view that is wrong. I accepted it as well up to a couple of months ago. I had no reason not to accept it until I dug a little deeper.

Richard M
Reply to  Rud Istvan
March 3, 2021 4:35 pm

UAH matches HadSST3 almost perfectly. Personally, I would throw out all the land data as it is completely corrupted. Don’t need it anyway. Even if we are warming it has nothing to do with CO2.

March 3, 2021 11:22 am

“Normally, the adjustments are made on the older data and reflect things like changes in the time of observations of the data, or new overlooked older records added to the dataset.”

What is “normally”? There is a difference here which no one at WUWT seems to want to take notice of. HADCRUT does not adjust station data. They use the data as supplied by the Met Office source. It’s true that they incorporate new station data when they can get it. But they don’t themselves adjust old readings.

As Weekly_rise says, the reason for the change is the correction for a failing pointed out by Cowtan and Way in 2013. In dealing with cells lacking data, they did not try, as one should, to estimate using local information. Deducing a global figure from a finite set of sample points necessarily requires estimating every point that has not been measured, i.e. anywhere that is not actually a station. If you leave out empty cells, that is equivalent to estimating them as the same as the average of all cells that have data. But sometimes you know this is wrong, as with the Arctic, where there has been recent warming. HADCRUT 5 uses local information to estimate all missing cells, which reverses the artificial cooling that came from supposing that missing Arctic cells only experienced the global rise.

That is why the change is in recent averages. It isn’t because any station data was changed. It isn’t because something was different in measurement 50 years ago. It is because the Arctic warming has been properly weighted, and that warming is recent.

Dave Fair
Reply to  Nick Stokes
March 3, 2021 12:00 pm

The HADCRUT trend doesn’t matter; UAH 6 is the gold standard. You are just putting lipstick on hopeless “data.”

Bellman
Reply to  Dave Fair
March 3, 2021 12:28 pm

Why UAH 6? It seems to be the data set most at odds with the others.

Dave Fair
Reply to  Bellman
March 3, 2021 2:06 pm

Why? Because it doesn’t mix hopeless pre-ARGO sea surface data with confused and inaccurate land surface measurements. Especially considering that ARGO’s 16-year upper ocean trend, ending on a Super El Nino, is consistent with UAH’s 40+ year trend of 0.14 C/decade.

Richard M
Reply to  Bellman
March 3, 2021 4:40 pm
Dave Fair
Reply to  Richard M
March 3, 2021 5:00 pm

ARGO mixed layer trend is about the same.
[image]

Reply to  Dave Fair
March 3, 2021 12:30 pm

“UAH 6 is the gold standard”
Not for surface temperature. Nor of anything. Was UAH 5.6 also the gold standard? It told a very different story.
https://moyhu.blogspot.com/2018/01/satellite-temperatures-are-adjusted.html

Dave Fair
Reply to  Nick Stokes
March 3, 2021 2:24 pm

B.S. by moyhu blog. Please explain why one would not use UAH 6 to determine the 40+ year global atmospheric temperature trend. The justification for the satellites was that they would give a more accurate picture of GHG-driven temperature changes on a global basis.

Reply to  Dave Fair
March 3, 2021 2:44 pm

Why would you not use RSS? But you’d get a totally different trend, higher than thermometers. OK, don’t trust RSS (though we loved them during the Pause). So it isn’t just satellites, but who’s doing the calculation. But that comes back to UAH V5.6 – same people, different result.

Dave Fair
Reply to  Nick Stokes
March 3, 2021 3:54 pm

Deflection on your part, Nick. UAH 6 was an improvement, fully documented by Spencer, et al. Mears at RSS is a proven ideologue, using unsuitable models for diurnal drift adjustments. He was incensed that the unwashed were using his reconstruction to blow up CAGW and the models. It doesn’t matter anyway; both RSS and UAH show minimal temperature trends over the period of maximal CO2 increases.

fred250
Reply to  Nick Stokes
March 3, 2021 4:25 pm

RSS DID match UAH closely before they started making MANIC AGENDA-DRIVEN “adjustments”, using “climate models” no less.. Seriously !!!!

There are very good reasons that RSS should now be TOTALLY IGNORED. !

Again, you KNOW all that.. so why the ACDS mis-information / LIES, nick

You are TRASHING your reputation even more every day.

Now you need to be treated on the level of griff, etc as a CHRONIC MIS

fred250
Reply to  Nick Stokes
March 3, 2021 4:19 pm

The surface data is a kludged-together MESS of mis-matched, in-filled, smeared, adjusted and CORRUPTED once-was-data

UAH matches the trend of the only pristine surface data almost exactly

You are MIS-DIRECTING / LYING, yet again, Nick

fred250
Reply to  Nick Stokes
March 3, 2021 4:50 pm

Everybody needs to realise that moyhu is a site that uses every twisted bit of non-science it can to support the ACDS of its owner.

It is NOT interested in science, it is interested in AGENDA.

It is a rabid PROPAGANDA site, behind a thin FACADE of science.

fred250
Reply to  Nick Stokes
March 3, 2021 12:23 pm

“HADCRUT does not adjust station data.”

.

It takes station data that has already been adjusted….

Why always the manic misdirection to support your ACDS, nick ?

Pretending otherwise is just AGW cult behavior, UNWORTHY of a rational mind.

“as with the Arctic, where there has been recent warming”

.

Again with this BS

No warming apart from the El Nino this century.

[image]
.

Arctic was similar or WARMER in the 1940s.

[image]

M Courtney
Reply to  Nick Stokes
March 3, 2021 12:43 pm

“But sometimes you know this is wrong, as with the Arctic, where there has been recent warming.”
How do we know that when we have no measurements?
Because we assume that they have and so add in imaginary stations that show the warming.
And to complete the circle, these made-up results confirm what we already wanted to know.

It’s not best practice.

Reply to  M Courtney
March 3, 2021 1:21 pm

“How do we know that when we have no measurements?”
We do have measurements, and HADCRUT 5 now has more of them.

It’s an issue of inhomogeneity and proper weighting. As I showed here
https://moyhu.blogspot.com/2013/11/coverage-hadcrut-4-and-trends.html
you can get a result similar to Cowtan and Way by just averaging 10° latitude bands first, omitting empty cells, and then combining those bands by area. It comes back to this issue of not infilling, but just omitting empty cells. That effectively assigns to them the population average. If you average by bands first, the omission assigns to missing data cells the value for the latitude band, rather than the globe. That is already better. In fact HADCRUT has always done a limited version of this, in that they average hemispheres separately, and then combine the result. They made a point of doing that, but should have taken it further. Now they have.
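Nick Stokes’ point here — that omitting empty cells is equivalent to assigning them the mean of the sampled cells, while band-first averaging assigns them the mean of their own latitude band — can be illustrated with a toy calculation. All numbers below are invented for illustration; this is not HadCRUT’s actual grid or code.

```python
import numpy as np

# Two equal-area latitude bands, four cells each; None marks an empty cell.
# The "Arctic" band is warmer but poorly sampled.
arctic = [2.0, 2.2, None, None]          # anomalies, deg C
tropics = [0.4, 0.5, 0.45, 0.55]

all_cells = [c for c in arctic + tropics if c is not None]

# Method 1: mean over non-empty cells only. This implicitly assigns every
# empty cell the mean of the cells that do have data.
simple = np.mean(all_cells)

# Method 2: average each band over its own non-empty cells, then combine
# the (equal-area) bands. Empty Arctic cells now get the Arctic mean.
band_means = [np.mean([c for c in band if c is not None])
              for band in (arctic, tropics)]
banded = np.mean(band_means)

print(f"simple mean: {simple:.4f}")
print(f"banded mean: {banded:.4f}")
```

With these made-up numbers the simple mean under-weights the sparsely sampled warm band (about 1.02 vs 1.29), which is the direction of the effect described for the Arctic.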

We saw the same issue with the “Goddard spike”
https://wattsupwiththat.com/2014/05/10/spiking-temperatures-in-the-ushcn-an-artifact-of-late-data-reporting/
There the problem was that over a four-month period to April (US), missing data was mostly in the latest month (time delay), and so averaging without infilling assigned to these the average, which was too cold. Steve Goddard sort of fixed this by averaging by month separately, and then combining the averages. But it’s a weak version of proper infilling.

M Courtney
Reply to  Nick Stokes
March 3, 2021 3:10 pm

You have used many words to say back what I said. And I don’t think you realise it.
The question to be answered is “What to do about areas with no data?”
The answer is either
A) Admit that we do not know about that region and say we cannot discuss it.
B) Take all the trends in anomalies we can find and apply that mean to the unknown regions.
C) Take all the trends in anomalies we think are appropriate (by proximity) and apply that mean to the unknown regions.

Option A is the least useful and the most true.
Option B has the benefit of hopefully eliminating most random errors most strongly. Although it is obviously flawed for systematic error.
Option C is exactly the same as Option B but weaker. However, it has the benefit of meeting the initial guesswork by assuming that proximity means more representative. That is not justified, except by the circular reasoning I described.
No measurements are not equal to imaginary stations.

Reply to  M Courtney
March 3, 2021 3:37 pm

“The question to be answered is “What to do about areas with no data?””
I answered it. The whole world is an area with no data, except where there is, and that is at just a finite number of points. You have to estimate the rest to get a global average. This is a universal situation in science.

The cells are a distraction. It is useful to estimate specific areas first, within which local variation is small, and then put them together. But it is all derived from the sample (station) readings. The cell lines are arbitrary. All it means is that inside such an empty area, you just have to look further afield for data. But just lumping it with the global average is definitely not good, and I’m glad that HADCRUT is now doing better.

Derg
Reply to  Nick Stokes
March 3, 2021 4:06 pm

How can someone who is so bright be so dumb…geez Nick you take the cake.

fred250
Reply to  Nick Stokes
March 3, 2021 4:28 pm

WOW, you are getting so, so desperate in your DENIAL, Nick

“You have to estimate the rest,”

.

Now you are ADMITTING that they are making up data. !!

We knew that !!

It’s rather pathetic that you still try to justify the UNJUSTIFIABLE

Your ACDS is overwhelming your ability for rational thought.

fred250
Reply to  Nick Stokes
March 3, 2021 7:05 pm

“and I’m glad that HADCRUT is now doing better.’

.

EXCEPT THAT THEY ARE NOT !!

fred250
Reply to  Nick Stokes
March 3, 2021 8:09 pm

You will know they are doing “better” when their results get at least SOMEWHERE NEAR the original data.

Seems they are GETTING WORSE as they put a VOID between REALITY and their fabrications.

bit chilly
Reply to  M Courtney
March 4, 2021 1:39 am

The issue I have with all the various statistical techniques is: if it is true that there is some homogeneity by grid square/latitude band etc., why bother using these techniques at all? Surely just using the physical data we do have would give the same result?

Unless of course that result doesn’t support a narrative.

Jim Gorman
Reply to  Nick Stokes
March 3, 2021 5:22 pm

You are totally ignoring that there are areas that may or may not be at the average of a band. Without data, how do you know? Look at this map and tell everyone how temps can’t vary widely over just a few miles.

When you are calculating anomalies to the 1/100ths of a degree, you have no way to know how accurate the absolute value or anomaly truly is. You are playing a guessing game and trying to convince us that you know what you are doing!

[Attached image: Kansas temperature map]
John Tillman
Reply to  M Courtney
March 3, 2021 6:11 pm

Good question!

Carlo, Monte
Reply to  Nick Stokes
March 3, 2021 2:58 pm

Another information creator.

WXcycles
Reply to  Nick Stokes
March 3, 2021 3:54 pm

In dealing with cells lacking data, they did not try, as one should, to estimate using local information. Deducing a global figure from a finite set of sample points necessarily requires estimating every point that has not been measured, i.e. anywhere that is not actually a station. If you leave out empty cells, that is equivalent to estimating them as the same as the average of all cells that have data. But sometimes you know this is wrong, as with the Arctic, where there has been recent warming.

Honestly Nick, you missed your calling, just think of all the used cars you could have sold, you would be a very rich man.

Has it ever crossed your cerebellum that just maybe it’s obviously quite inappropriate and dishonest to be trying to squeeze a global map from the actual data available?

“But! … but … it’s a GIS man! … that’s what we do!!”

Enjoy your unbridled cumulative-error constructs mate, it’s all you’ve got as a substitute for having an intellectual foundation.

fred250
Reply to  WXcycles
March 3, 2021 7:09 pm

“just think of all the used cars you could have sold”

.

He would have tried to weasel his way around missing engines, missing wheels, no brakes

Trying to sell a LEMON… even after they have all-but rusted into dust.

In this case in the form of HadCrud surface temp FABRICATIONS

Just IMAGINE there are 4 wheels 😉

This is HadCrud….. Nick style !!

[image]

Shanghai Dan
Reply to  Nick Stokes
March 3, 2021 4:38 pm

“Deducing a global figure from a finite set of sample points necessarily requires estimating every point that has not been measured, i.e. anywhere that is not actually a station.”

False. It does no such thing. You simply use the data you have, and specify the tolerance in terms of accuracy of average (based on the tolerance of each instrument used; you cannot average away tolerance errors) and spatial resolution.

You could say “we believe the anomaly was X.X deg C (+/- Y deg C) with a spatial resolution of ZZZZ km”. Of course, that would require thinking people to actually understand what is stated. And it does not drive the agenda as much as stating “the anomaly is X.X deg C”.

In the GIS world in which I’ve participated in the past, you always specify the tolerance of your equipment AND of your geographic resolution. Failure to do both meant you did not actually report your data.
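The reporting convention Shanghai Dan describes — quote the mean, an instrument tolerance that averaging cannot shrink, and the spatial resolution — can be sketched in a few lines. The readings, tolerance, and spacing below are all hypothetical, not any producer’s actual figures.

```python
readings = [0.42, 0.55, 0.38, 0.61, 0.47]   # station anomalies, deg C (invented)
instrument_tolerance = 0.5                   # deg C; systematic, so it is NOT
                                             # reduced by averaging many stations
spatial_resolution_km = 1200                 # effective station spacing (invented)

anomaly = sum(readings) / len(readings)

print(f"anomaly = {anomaly:.2f} deg C (+/- {instrument_tolerance} deg C) "
      f"at ~{spatial_resolution_km} km resolution")
```

The key design point is that the tolerance is carried through unchanged rather than divided by the square root of the sample size, since a shared systematic tolerance does not average away.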

Jim Gorman
Reply to  Nick Stokes
March 3, 2021 5:15 pm

Adding new short-term data, either real or imagined, is going to bias the information if it is different from the past. In other words, you have no idea what the past data should have been, and are making up all kinds of trends that get into the average. If they are included after the current baseline years, they are not even included in the baseline!

March 3, 2021 11:26 am

Ah, I have had a sailboat for 49 years now, Pearson 26. Raising kids and college, etc. curtailed cruising big time. But, empty nest now and I am good to go. Planning three months of Maine cruising, going wherever.

March 3, 2021 11:31 am

“…looking at a bunch of boats that I’m very happy that I don’t own.”

Yep. A boat owner’s happiest days are the day he buys the boat, and then the day he sells the boat.

Steve Case
March 3, 2021 11:34 am

 I have no big conclusions, other than that at this rate the temperature trend will double by 2050, not from CO2, but from continued endless upwards adjustments …

It’s called re-writing history. GISTEMP does it too. The issue is that they get away with it in broad daylight. You & I and a lot of others are losing the argument. Freedom of the press is limited to those who own a printing press. These days the press is the internet and the so-called mainstream media. There isn’t any public organization that is going to come along and break up the information monopoly that is becoming more and more obvious every day.

Richard M
Reply to  Steve Case
March 3, 2021 4:56 pm

The reason we’re losing is because we are playing on their field with their ball and by their ever changing rules. Time for a change.

1) The claimed 33 C of warming from the greenhouse effect is false. There is no warming, period. The Earth’s temperature is exactly as predicted by S-B when measured properly. Radiative gases simply move energy around.

2) The atmosphere-surface model is incorrect. The correct model is closer to what is called the ecosphere. It includes the atmosphere and the surface skin where life is found. When this model is used there is no ability of GHGs to create warming.

3) The energy budget views are nothing but cherry picked nonsense. They are meaningless.

Only when these and other glaring climate science mistakes are picked up by skeptics and used against the false science that exists today will progress be made.

Editor
March 3, 2021 11:45 am

Thanks for the analysis, Willis. I found this statement of yours very interesting:

Also, in the past adjustments have tended to reduce the drop in temperature from ~ 1942 to 1970. But these adjustments increased the drop.”

I wonder if the CMIP6 models better capture that cooling.

Regards,
Bob

Mr.
March 3, 2021 11:48 am

So it looks from your 2nd graph Willis that the “settled science” was back around 1940?

dodgy geezer
March 3, 2021 11:52 am

Does anyone get to see the justification for the adjustments?

Gary Pearse
March 3, 2021 12:26 pm

Willis, I also note that they lowered the older temperatures, as every iteration seems to do. My suspicious heart suspects the steepening achieved an important goal too, since alarm is greater in this case. It may also be forestalling development of a new ‘pause’, currently standing at a 5-year decline.

UAH has constrained the jiggery-pokery on the recent end of the temperature curve up until now. With GISS jerking up their recent-end satellite temps and hatchet work by climate wroughters on UAH and satellite temps in general over the past decade, they must feel less restraint on their wants.

March 3, 2021 12:31 pm

Sorry for OT, but at Climate Audit Stephen McIntyre has a new article “Milankovitch Forcing and Tree Ring Proxies”

bluecat57
March 3, 2021 12:32 pm

HadCRUT #4 & 5? Is that what you had last night for dinner?

Walter Sobchak
March 3, 2021 12:36 pm

Welcome to the Adjustocene.

To bed B
March 3, 2021 12:37 pm

Has HadSST4 come out?

Do they swap the hemispheres back?

Do they have as massive a change to the first half of the base period as going from v2 to 3, with a barely perceptible change to other decades?

Is there a seasonal signal in Hadsst SH (or was it NH) with an amplitude increasing linearly with time after 2000?

Does it still appear as if the maximum anomaly of 1998 is used as the reference rather than the mean of individual months in the base period?

These people are a joke.

John Tillman
Reply to  To bed B
March 3, 2021 1:20 pm

A tragic joke, as at least millions of lives are at stake and trillions in treasure as a result of this antiscientific drivel, not fit for any purpose, except to enrich its perpetrators and enable globalism.

ResourceGuy
March 3, 2021 12:42 pm

Thanks again Willis. The difference plot is interesting and anomalous.

Chris Hanley
March 3, 2021 12:45 pm

Quoting Ole Humlum @ climate4you: Temporal stability of global air temperature estimates:
“… a temperature record which keeps on changing the past hardly can qualify as being correct …”.
Will they ever get it right⸮

John Tillman
Reply to  Chris Hanley
March 3, 2021 1:21 pm

“The science” (an unscientific formulation) is settled, except where it needs to be resettled, closer to the bogus narrative.

March 3, 2021 1:36 pm

“Only the future is certain; the past is always changing.”–Chinese proverb

JamesD
March 3, 2021 2:08 pm

Seems like adjustments should bring the data CLOSER to the satellite data, not farther away.

Dave Fair
Reply to  JamesD
March 3, 2021 4:29 pm

CliSci has a fundamental problem: Theory says that increasing GHGs will heat the atmosphere and drive surface temperatures higher. To get significant surface warming, the UN IPCC climate models had to invent a tropospheric “hot spot” to reflect water vapor amplification. Since the satellites detect no “hot spot,” CliSci had to discredit UAH (see Nick Stokes’ comments). Mears, a fellow traveler, adjusted RSS higher to mimic surface and, to an extent, model trends.

bit chilly
Reply to  Dave Fair
March 4, 2021 1:47 am

There is nothing wrong with the physics of how GHGs behave. The problem is that no one in climate science has spent any serious time or effort looking at how the earth responds and its natural ability to lose energy/heat.

It’s likely (note use of climate sciency term) the reason runaway warming didn’t occur in the past with significantly higher co2 levels.

KentN
March 3, 2021 2:19 pm

Willis, can you post a graph of RAW data that goes into the adjustments? That would be informative. It can be argued that RAW does not include truly needed adjustments. But it also can be argued that any adjustments that don’t increase the warming trend are dismissed out of hand, seems like. For my local station, the raw data is a noisy flat line back to 1893. Guess what the adjustments do?

March 3, 2021 2:32 pm

Temperatures took an upswing about the same time as water use (about 1960). The upswing in irrigation and other sources of increased water vapor explains the temperature increase of HadCRUT4. I don’t expect HadCRUT5 to make much difference. https://www.researchgate.net/publication/338805648_Water_vapor_vs_CO2_for_planet_warming_14

[Attached image: water withdrawal chart]
March 3, 2021 2:33 pm

Within the 4 square mile area with my house in the center, Weather Underground real-time read outs show several degrees difference, which varies from day to day, with no evidence that anomalies are consistent. Therefore, I call BS on the notion that grid infilling, whether using anomalies or actual temps, has any validity.

Compare the same site over time, that might tell you something – changing the rules and creating data tells you only that they didn’t like the old answer.

WXcycles
Reply to  Taylor Pohlman
March 3, 2021 4:09 pm

Please, don’t go bringing how the real world operates into this.

Next you’ll want error margins on all the imaginary bananas, that were interpolated from a known real banana on the other side of the continent.

If we subject the imaginary bananas to these provocative imposts of real observation, that sort of scrutiny will cause the entire fruit-shop to fold up and collapse into a bottomless chasm of figments, and we’ll be right back to having only one banana again!

Who wants that? All advancement in banana-science lost!

Science needs these bananas!

Tim Gorman
Reply to  Taylor Pohlman
March 3, 2021 5:50 pm

I’ve suggested something like this in the past. Look at each station and determine its individual trend. Assign a + or – according to the slope.

Then just go around the globe adding up all the pluses and all the minuses and see what comes out!
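Tim Gorman’s suggested tally could be sketched like this. The station records are fabricated for illustration, and `station_slope` is a hypothetical helper, not part of any existing package.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2021)            # 71 annual values per station

def station_slope(temps, years=years):
    """Least-squares trend (deg C per year) of one station record."""
    return np.polyfit(years, temps, 1)[0]

# Fabricated stations: trend of 0.01*k deg C/yr plus weather noise
trend_factors = (-2, -1, 0, 1, 1, 2, 2, 3, 3, 4)
stations = [0.01 * k * (years - years[0]) + rng.normal(0, 0.3, years.size)
            for k in trend_factors]

slopes = [station_slope(t) for t in stations]
plus = sum(s > 0 for s in slopes)
minus = sum(s < 0 for s in slopes)
print(f"stations warming: {plus}, cooling: {minus}")
```

A sign count like this throws away the size of each trend, so it is a rough diagnostic rather than a substitute for an area-weighted average.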

Gary Ashe
March 3, 2021 3:00 pm

Well paint me shocked.

Just more of the same, the same desperation that is, nature just ain’t playing along so needs must in their noble cause of saving humanity.

RickWill
March 3, 2021 3:36 pm

They keep telling us that climate models are right and they keep proving it with record homogenisation.

When does this fiddling get called out for what it really is?

Dave Fair
Reply to  RickWill
March 3, 2021 4:32 pm

When the U.S. Social Cost of Carbon (SCC) gets before the Supreme Court.

TheFinalNail
Reply to  RickWill
March 3, 2021 4:49 pm

“When does this fiddling get called out for what it really is?”
_________________

The second somebody can demonstrate, in peer-reviewed comments or rebuttals, that the various homogenisation techniques used by the many different groups, all of which produce broad agreement on trends, are all flawed. Critics have had an awfully long time to accomplish this simple task, to no avail as yet. Lots of heat; very little light.

fred250
Reply to  TheFinalNail
March 3, 2021 7:13 pm

Rusty strikes again

Here is rusty’s car he is trying to sell you

[image]

You have had an AWFUL long time to produce empirical scientific evidence of warming by atmospheric CO2

You still remain an empty sock, awaiting a hand up your **** so you can graduate to being a muppet. !

fred250
Reply to  TheFinalNail
March 3, 2021 7:31 pm

[image]

[image]

This is the sort of manic “adjustment” you condone

Are you REALLY so ANTI-SCIENCE that you think this sort of activity is ANYTHING BUT FRAUD !!

fred250
Reply to  fred250
March 3, 2021 7:38 pm

And here are the nearest sites to Alice Springs, to show they are all similar to the RAW Alice Springs data

[image]

There is absolutely ZERO reason for this sort of fraudulent adjustment

Let’s add Bourke as well

[image]

Why haven’t they “homogenised” (lol) to Deniliquin temperatures?

[image]

Joseph Campbell
Reply to  fred250
March 4, 2021 8:40 am

Dang! Those are serious “adjustments”…

fred250
Reply to  TheFinalNail
March 3, 2021 7:42 pm

“all of which produce broad agreement on trends,”

.

D’OH, of course they do.. that is the AIM of the homogenisation routines..

to CREATE WHAT YOU WANT. !

All the major groups use the same pre-adjusted data from NCDC/GHCN.

Why are you so DELIBERATELY IGNORANT about these things !

Is it you are just DUMB, or is it willful DECEIT.

Izaak Walton
March 3, 2021 4:15 pm

If people actually bothered to read the discussion paper they would find the paragraph:
“The most obvious difference is the relative warming of HadCRUT5 between around 1970 and 1980. This arises from improved estimates of biases in measurements made in ship engine rooms at that time. Engine room measurements were biased warm in the 1960s with the warm bias dropping over time, first between 1970 and 1980 and then again between the early 2000s and present. There are also changes around the Second World War, where changes to the assumptions made in HadSST4 about how measurements were taken shifted the mean and broadened the uncertainty range, reflecting the lack of knowledge of biases during this difficult period (Kennedy et al., 2019).”

https://www.metoffice.gov.uk/hadobs/hadcrut5/HadCRUT5_accepted.pdf

That provides the reason why the difference is most marked after 1970. Nothing to do with infilling.

TheFinalNail
March 3, 2021 4:19 pm

“Here are just the lowess smooths, which give us a clear view of the underlying adjustments. I’ve added the University of Alabama Huntsville microwave sounding unit temperature of the lower troposphere (UAH MSU TLT) for comparison.”
________________________

There’s not much difference between them. Willis has excluded error margins in the trends here (he has also excluded the RSS satellite TLT data). For comparison, the post-1978 trends in HadCRUT4, HadCRUT5 and UAH all show statistically significant warming and all overlap when error margins are taken into account. RSS TLT is also within the error margins of all the other producers’ trends and is in far better agreement with the surface trends than is UAH.

Dave Fair
Reply to  TheFinalNail
March 3, 2021 5:29 pm

1) UAH’s 40-year warming trend (0.14 C/decade) is less than HadCRUT’s and RSS’s, overlapping error bands notwithstanding.

2) Mears adjusted RSS to mimic surface trends.

ARGO trends (accounting for ending on a Super El Nino) are arguably closer to UAH.

UN IPCC climate model average trends exceed all of them.

The science is not settled and everyone needs to get a grip.

TheFinalNail
Reply to  Dave Fair
March 4, 2021 2:02 am

“Mears adjusted RSS to mimic surface trends.”

Any evidence to support that? RSS TLT v4 methodology is in the peer-reviewed literature. Not aware of any rebuttals.

TheFinalNail
Reply to  Willis Eschenbach
March 4, 2021 2:18 am

Willis,

“If you think the error margins overlap, why are you complaining about me not including RSS? Given what you say, there’s no reason to prefer any one of them over any other of them …”

It’s not a question of what I ‘think’; it’s a demonstrable fact that all the data sets, surface and satellite, show statistically significant warming post 1978 and that the error margins in their trends overlap.

“But that’s not the case. The HadCRUT error margins don’t overlap. They start not overlapping about 1975, and they don’t overlap at all since 2000.”
______________

Can you source this please? HadCRUT4 since 2000 shows +0.168 ±0.089 °C/decade (2σ) warming. HadCRUT5 would need to show a best estimate of at least +0.4°C/decade warming since 2000 for the lower bound of its error margins not to overlap with HadCRUT4:
http://www.ysbl.york.ac.uk/~cowtan/applets/trend/trend.html
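For anyone wanting to see where a figure like “+0.168 ±0.089 °C/decade (2σ)” comes from, here is a minimal version of the calculation on synthetic monthly data. This is plain OLS only; the York trend calculator linked above additionally corrects for autocorrelation, which widens the quoted interval, so this sketch understates the real uncertainty.

```python
import numpy as np

rng = np.random.default_rng(1)
n_months = 240                            # 20 years of monthly anomalies
t = np.arange(n_months) / 12.0            # time in years
anoms = 0.017 * t + rng.normal(0, 0.1, n_months)   # ~0.17 C/decade + noise

# Ordinary least squares: slope, and the slope's standard error
A = np.vstack([t, np.ones_like(t)]).T
coef, res, _, _ = np.linalg.lstsq(A, anoms, rcond=None)
sigma2 = res[0] / (n_months - 2)                       # residual variance
se_slope = np.sqrt(sigma2 / np.sum((t - t.mean()) ** 2))

trend = coef[0] * 10                       # deg C per decade
ci = 2 * se_slope * 10                     # 2-sigma half-width, per decade
print(f"trend = {trend:+.3f} +/- {ci:.3f} deg C/decade (2 sigma, no AR correction)")
```

Because monthly anomalies are strongly autocorrelated, the naive 2σ interval printed here is narrower than what a proper analysis would quote, which is the point Carlo, Monte raises below about not taking the last-step regression s.d. at face value.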

Derg
Reply to  TheFinalNail
March 4, 2021 3:15 am

significant 🤓

fred250
Reply to  TheFinalNail
March 3, 2021 7:22 pm

“far better agreement with the surface trends than is UAH.”,

WRONG

There is only ONE set of pristine data, and the trend matches UAH over that area very closely. Just as UAH match unadjusted balloons, even NOAA’s own satellite data matches UAH well.

RSS V4 matches the surface data BECAUSE THEY WANT IT TO.

They even use “climate models” to do their manic adjustments. It has become a TOTAL FARCE.

Which of the surface data sets show the Arctic at a similar temperature to, or slightly warmer than, now in 1940?

TheFinalNail
Reply to  fred250
March 4, 2021 2:29 am

For reference, the error margins associated with the trends in the current versions of RSS and UAH overlap. Since 1979 these are:

UAH: +0.137 ±0.051
RSS: +0.216 ±0.054

Both shown in °C/decade at 2σ confidence. http://www.ysbl.york.ac.uk/~cowtan/applets/trend/trend.html
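Plugging the quoted figures into a couple of lines confirms the claim (a minimal sketch; the numbers are exactly the ones in the comment above):

```python
# 2-sigma trend intervals in C/decade, using the figures quoted above:
uah = (0.137 - 0.051, 0.137 + 0.051)   # (0.086, 0.188)
rss = (0.216 - 0.054, 0.216 + 0.054)   # (0.162, 0.270)

# The intervals overlap if the larger lower bound sits below the
# smaller upper bound:
lo, hi = max(uah[0], rss[0]), min(uah[1], rss[1])
print("overlap" if lo < hi else "no overlap",
      f"({lo:.3f} to {hi:.3f})")
```

The shared region runs from about 0.162 to 0.188 °C/decade.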

There is no evidence to support the defamatory allegation that RSS V4 was changed to match surface trends. Their methods were published in a reputable peer-reviewed journal in 2016 and have not been rebutted or subjected to peer-reviewed comment, to my knowledge.

Carlo, Monte
Reply to  TheFinalNail
March 4, 2021 7:11 am

How were these bounds calculated?

Unless they were obtained with the uncertainty methods in the GUM, they are meaningless. Simply taking the s.d. of the last step (the linear regression) and multiplying by 2 is misleading at best.
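For what it’s worth, the “s.d. of the last step” calculation being questioned here looks roughly like this: an OLS fit to monthly anomalies, with twice the slope’s standard error reported as the margin. This is a sketch on made-up synthetic data, not the York calculator’s actual method (which additionally corrects for serial correlation):

```python
import numpy as np

# Synthetic monthly anomalies, 1979-2020: a known 0.15 C/decade trend
# plus white noise (hypothetical data, for illustration only).
rng = np.random.default_rng(0)
years = np.arange(1979, 2021, 1 / 12)
temps = 0.015 * (years - 1979) + rng.normal(0.0, 0.1, years.size)

# OLS slope and its standard error -- the "s.d. of the last step":
x = years - years.mean()
slope = (x @ temps) / (x @ x)
resid = temps - temps.mean() - slope * x
se = np.sqrt(resid @ resid / (years.size - 2) / (x @ x))

print(f"trend: {slope * 10:+.3f} +/- {2 * se * 10:.3f} C/decade (2-sigma, OLS)")
# Caveat: this treats monthly residuals as independent; serial
# correlation widens the true uncertainty, which is why trend
# calculators apply an autocorrelation correction on top of this.
```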

fred250
Reply to  TheFinalNail
March 3, 2021 7:28 pm

Does Hadcrud show NO WARMING from 1980-1997?

[chart image]

And NO WARMING from 2001-2015?

[chart image]

That is what happens when you stuff around with the raw data

You DESTROY ANY MEANING it might have once had, ending up with a meaningless load of GIGO!

E.g. the warming period around the 1940s, which has been INTENTIONALLY AND FRAUDULENTLY REMOVED.

bit chilly
Reply to  TheFinalNail
March 4, 2021 1:59 am

I know some people like to use satellite “data” to prove whatever point, but the simple fact remains that it is just as bad as any other “data” set used in climate science once the methodology used to arrive at the final numbers is examined.

March 3, 2021 7:39 pm

[Snipped. Violates site rules, and is slimy as hell.]

Last edited 7 months ago by Willis Eschenbach
Reply to  William Teach
March 3, 2021 7:50 pm

[Snipped. Same reason. Stop it.]

Last edited 7 months ago by Willis Eschenbach
Geoff Sherrington
March 3, 2021 7:56 pm

Colleagues were once at the global leading edge of knowledge about interpolation of grades and tonnes between assayed drill holes for the purpose of ore resource estimation.
Here is an important learning point from that experience.
The purpose of interpolation between drill holes was not to invent grades of ore between the drill holes. It was primarily to help to understand the uncertainty of the final estimates of grades and tonnes, by making a range of assumptions about the interpolations for sensitivity analysis, to see what the wildly high and wildly low assumptions might mean in terms of $$$ profit and loss. These uncertainty estimates were taken to the bankers who used them to understand the risk of lending money to develop the mine.
The real, useful values were the assays of intervals along the drill core. These are analogous to actual measured temperatures at stations. The interpolated block grades, just mathematical calculations, are analogous to the grid cell infilling that Nick Stokes goes on about. The actual assays were often repeated over and over to estimate their uncertainty as well. (This step cannot be done with past temperatures). When we made a decision whether to mine or to walk away, it was based on the assay results (and other factors not relevant here). It was NOT based on the mathematically interpolated values. If the Stock Exchange thought something was fishy, they asked about the assay results from the drill holes, not about the interpolated, imaginary values.
So, the lesson here is that interpolation is OK ONLY when it is used to help estimate the uncertainty of actual temperature measurements. Interpolated values have no place masquerading as though they carried credibility similar to measured values. They are not similar. One is an actual physical measurement, recorded at the time. The other is an imaginary mathematical construct that can vary, or can be manipulated, because it has no pin back to the reality of a value recorded at the moment of observation.
As to Nick Stokes saying that to make a global average one has to assume values between the measured points – that is correct, but the premise is horribly wrong. There is inadequate past data to make a global average, so that interpolation should never have been attempted (except to confirm that it was an invalid exercise).
As to uncertainty, I await a valid description about how you estimate the proper uncertainty of an interpolated value, which is a guess that you can vary at whim.
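Geoff’s sensitivity-analysis point can be made with a toy example: the same five “assay” values give a spread of whole-section averages depending on the interpolation assumption, and that spread is one crude handle on uncertainty. All positions, grades and schemes here are invented for illustration:

```python
import numpy as np

# Five 'assays' -- the only actual measurements -- along a 100 m section:
holes = np.array([0.0, 25.0, 50.0, 75.0, 100.0])     # hole positions, m
grade = np.array([1.2, 0.8, 2.1, 1.5, 0.9])          # measured grade, g/t

# Middle-of-the-road assumption: linear interpolation between holes.
xs = np.linspace(0.0, 100.0, 100_001)
best = np.interp(xs, holes, grade).mean()

# 'Wildly high' assumption: every interval is as rich as its richer end.
high = np.maximum(grade[:-1], grade[1:]).mean()

# 'Wildly low' assumption: every interval is as poor as its poorer end.
low = np.minimum(grade[:-1], grade[1:]).mean()

print(f"low {low:.2f}  linear {best:.2f}  high {high:.2f} g/t")
# The low..high spread is the sensitivity band the bankers cared about;
# none of the interpolated values is itself a measurement.
```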

Reply to  Geoff Sherrington
March 3, 2021 9:37 pm

“It was primarily to help to understand the uncertainty of the final estimates of grades and tonnes”
So what are those estimates based on, if not inference about what lies between the drill holes? People aren’t staking their investment on mining those drill holes. They are basing it on the inferred value of the whole ore body. That is what the whole business of kriging etc. is about. And yes, they want to know about the uncertainty of the estimate. But the primary purpose of drilling is to sample the ore body, to get an estimate of mass and grade, i.e. what lies between the drill holes.

“Resource estimation”. It was the phrase you used.