Meshing issues on global temperatures – warming data where there isn’t any

Guest essay by Tim Crome

The plots shown here are taken from the MOYHU blog maintained by Nick Stokes. The software on the blog allows the global temperature anomaly data for each month of the last several years to be displayed; it also allows the mesh showing the temperature measurement points to be turned on and off.

This is a powerful tool that gives excellent opportunities to plot the temperature anomalies around the globe. Looking at the mesh used in the plotting routines does however raise some questions.

Figure 1 – Arctic and Northern Atlantic plot for October 2017

 

Figure 1 shows the data for the month of October 2017, centred on the East coast of Greenland. It shows that the whole of Greenland has a temperature anomaly that is relatively high. What becomes apparent when the mesh is turned on is that this is purely the result of the density of measurement points and the averaging routines used in generating the plots. This can be seen in Figure 2, zoomed in on Greenland.

Figure 2 – October 2017 plot showing Mesh and positions of data points centred on Eastern Greenland.

Figure 2 shows the same data as Figure 1 but with the addition of the mesh and data points. If we study Greenland, it is very apparent that the temperature on the surface of most of the inland ice is, in this model, determined by one measurement point on the East coast of the country and a series of points in the middle of Baffin Bay, between the West coast of the country and North East Canada. No account is taken of the temperatures of the interior of Greenland, which are often significantly below those occurring along the coastline.

Figure 2 also shows that there is a large part of the Arctic Ocean without any measurement points, so that the few points around its circumference effectively define the plotted values over the whole area.
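
To make the effect of the averaging concrete: linear interpolation on a triangular mesh assigns every interior point a weighted average of the three surrounding nodes. The sketch below shows the arithmetic for a single triangle; the coordinates and anomaly values are made up for illustration, and this is not the actual MOYHU code.

```python
import numpy as np

def barycentric_weights(p, a, b, c):
    """Weights (w_a, w_b, w_c) such that p = w_a*a + w_b*b + w_c*c and the weights sum to 1."""
    T = np.array([[a[0] - c[0], b[0] - c[0]],
                  [a[1] - c[1], b[1] - c[1]]])
    w_ab = np.linalg.solve(T, np.array([p[0] - c[0], p[1] - c[1]]))
    return np.array([w_ab[0], w_ab[1], 1.0 - w_ab.sum()])

# Hypothetical triangle: one east-coast station and two Baffin Bay points (lon, lat)
nodes = np.array([[-22.0, 70.5], [-55.0, 69.2], [-58.0, 73.0]])
anoms = np.array([2.1, 1.8, 2.4])            # made-up anomaly values, deg C

p = np.array([-40.0, 71.0])                  # a point on the inland ice, far from any station
w = barycentric_weights(p, *nodes)
print("interpolated anomaly:", w @ anoms)    # a weighted average of the three coastal values
```

Whatever is happening on the ice sheet itself, the plotted value there can only ever be a blend of the surrounding coastal nodes.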

Similar effects can also be seen at the Southern extremities of the planet, as shown in Figure 3. There are only two points in the interior of Antarctica and relatively few around the coast. For most of the East Antarctic Peninsula, about which we often hear stories of abnormal warming, the temperature anomaly plot is clearly developed from one point close to the South Pole and two locations some distance out at sea North of the peninsula. This cannot give an accurate impression of the true temperature (anomaly) distribution over this sensitive area.

Figure 3 – October 2017 plot showing Mesh and positions of data points for Antarctica.

Another geographical region with very few actual measurements, and huge distances over which the data is averaged, is Africa, as shown in Figure 4. There is a wide corridor from Egypt and Libya on the Northern coast to South Africa with absolutely no data points, where the averages are determined from relatively few points in the surrounding areas. The same is also true for most of South America and China (where the only data points appear to be in the heavily populated areas).

Figure 4 – Plot of October 2017 data and Mesh for Africa.

 

Based on this representation of the data it is apparent that there are huge areas where the scarcity of data and the averaging routines will give incorrect results. Often the temperature anomaly distribution in these areas, especially for Greenland and the Eastern Antarctic Peninsula, is used to show that these sensitive areas of the globe are subject to extraordinary warming threatening our very way of life. Such conclusions are invalid; they are purely the result of a scarcity of good data and of the statistical practices used to fill the gaps.

335 Comments
Pat Lane
November 10, 2017 2:09 pm

All of that would be fine if it was reflected in the uncertainty.

Nick Stokes
Reply to  Pat Lane
November 10, 2017 2:24 pm

Actually the quoted uncertainty for global anomaly averages is mostly exactly that, the uncertainty resulting from interpolation. Or put another way, the variation you would expect if you sampled a different set of points. It can be quantified.

Reply to  Nick Stokes
November 10, 2017 5:17 pm

There’s a lot of uncertainty in the measurements themselves as well as in the average from which anomalies are calculated. If you add up all the uncertainties, it’s likely to be larger than the presumed anomaly. Moreover, a small error in a temperature becomes a large error in the anomaly, which when extrapolated across 1000’s of km^2 has an even larger effect on the global average.

Reply to  Nick Stokes
November 10, 2017 7:23 pm

I am working on an article about this. When your uncertainty in any given measurement is +- 0.5 deg F, how does an anomaly less than this uncertainty even occur? It seems to me there are mathematicians here, but no experts on measurements and doing calculations on measurements. Significant digits are important in measuring real world events, but apparently not real world climate temperatures!

Clyde Spencer
Reply to  Nick Stokes
November 10, 2017 9:27 pm

Jim Gorman,
Amen! Kip Hansen and I are about the only ones concerned with significant figures and the uncertainty implied by them.

Nick Stokes
Reply to  Nick Stokes
November 10, 2017 9:29 pm

“When your uncertainty in any given measurement is +- 0.5 deg F, how does an anomaly less than this uncertainty even occur?”
We’ve been through this endlessly, eg Kip Hansen. The local anomaly is not more certain. But the global anomaly average is, because errors cancel. That is why people go to a lot of trouble to get large samples to estimate population averages. Standard statistics, not climate science.
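
A toy simulation of that claim (random numbers, not anyone's actual station data) shows the standard result: independent errors in a mean shrink roughly as 1/sqrt(n), while a shared systematic error does not shrink at all.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stations, n_trials, sigma = 5000, 1000, 0.5    # +/-0.5 per-reading uncertainty

# Independent random errors: the error of the mean shrinks roughly as sigma/sqrt(n)
indep = rng.normal(0.0, sigma, size=(n_trials, n_stations)).mean(axis=1)
print("independent errors, std of the mean:", indep.std(),
      "~ sigma/sqrt(n) =", sigma / np.sqrt(n_stations))

# Fully shared (systematic) error: every station is off by the same amount, so averaging does nothing
shared = rng.normal(0.0, sigma, size=n_trials)
print("shared error, std of the mean:", shared.std())
```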

AndyG55
Reply to  Nick Stokes
November 10, 2017 10:13 pm

“Kip Hansen and I are about the only ones concerned with significant figures and the uncertainty implied by them.”

I’m more concerned that Nick doesn’t seem to have a CLUE to the quality of his data,

…. NOR DOES HE SEEM TO CARE.

Forget about significant figures etc…

…any calculations with data of unknown quality are basically MEANINGLESS.

AndyG55
Reply to  Nick Stokes
November 10, 2017 10:15 pm

And Nick again shows he doesn’t understand when and where the rules of large samples apply. So sad.

Clyde Spencer
Reply to  Nick Stokes
November 11, 2017 5:56 pm

NS,
You said, “But the global anomaly average is, because errors CANCEL.” I have yet to see a rigorous proof of the claim. You have accused commenters on this blog of engaging in “hand waving.” You can’t really hold the moral high ground on this.

November 10, 2017 2:14 pm

Steve once said in this blog that the interpolations HAD to be done, so that he’d have the “data” needed to get the global anomalies. Glad to see this dragged over the coals finally.

November 10, 2017 2:29 pm

Have been studying the surface temperature stuff for several years. There are several unrelated big problems. Microsite issues. Infilling. Homogenization (and regional expectations). Cooling the past. Uncertainty and error bars. These are all readily demonstrable, with multiple examples of each in essay When Data Isn’t. Bottom line, GAST not fit for climate purpose with the exactitude asserted by warmunists. OTOH, natural warming out of LIA is indisputable but no cause for alarm.
I think surface temp record since 1900 is not one of the main counters to CAGW. Nitpicking when there are huge ‘pillar shaking’ arguments: Model failures (pause, missing tropical troposphere hot spot), attribution, observational TCR and ECS, lack of sea level rise acceleration, thriving polar bears, cost and intermittency of renewables, and in my opinion a considerable amount of provable blatant scientific misconduct (OLeary, Fabricius, Marcott, …) and ‘tactics’ like Mann and Jacobsen lawsuits. No need for the last if the ‘settled’ climate science was robust. It is anything but, hence the warmunist tactics.

A C Osborn
Reply to  ristvan
November 10, 2017 3:39 pm

You missed the Australian BOM using their equipment incorrectly, measuring spikes and cutting off low temps.

Reply to  ristvan
November 10, 2017 5:52 pm

It’s absurd that the whole premise of CAGW as ‘settled’ science is based on a sensitivity with +/- 50% uncertainty. On top of this is even more uncertainty from the fabricated RCP scenarios. Even worse, the low end of the presumed range isn’t low enough to accommodate the maximum effect as limited by the laws of physics!

The actual limits are readily calculated. The upper limit is the sensitivity of an ideal BB at 255K (about 0.3C per W/m^2) and the lower limit is the sensitivity of an ideal BB at the surface temperature of 288K (about 0.2C per W/m^2). Interestingly enough, the sensitivity of a BB at 255K is almost exactly the same as the sensitivity of a gray body at 288K with an emissivity of 0.61, where the emissivity is the ratio between the emissions at 255K and the emissions at 288K.
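
The two limits quoted above are just the derivative of the Stefan-Boltzmann law, dT/dF = 1/(4*sigma*T^3). A quick arithmetic check of those numbers (nothing more than that):

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def planck_sensitivity(T):
    """dT/dF for an ideal black body at temperature T (K), in K per W/m^2."""
    return 1.0 / (4.0 * SIGMA * T**3)

for T in (255.0, 288.0):
    print(f"T = {T:.0f} K: {planck_sensitivity(T):.2f} C per W/m^2")
# ~0.27 C/(W/m^2) at 255 K and ~0.18 C/(W/m^2) at 288 K, i.e. roughly the 0.3 and 0.2 quoted above

print("ratio of emissions at 255 K to emissions at 288 K:", round((255.0 / 288.0) ** 4, 2))  # ~0.61
```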

The physics clearly supports a sensitivity as large as 0.3C per W/m^2, yet nobody in the warmist camp can articulate what physics enables the sensitivity to be as much as 4 times larger. They always invoke positive feedback, which not only isn’t physics, it assumes an implicit source of Joules powering the gain, which is the source of the extra energy they claim arises to increase the temperature by as much as 1.2C per W/m^2.

To illustrate the abject absurdity of the IPCC’s upper limit, a 1.2C increase in the surface temperature increases its emissions by more than 6 W/m^2. If emissions are not replenished with new energy, the surface must cool until total forcing == total emissions. 1 W/m^2 of the 6 W/m^2 is replaced by the W/m^2 of forcing said to effect the increase. The other 5 W/m^2 have no identifiable origin except for the presumed, and missing, power supply. The same analysis shows that even the IPCC’s lower limit is beyond bogus.

WR
November 10, 2017 2:29 pm

Using such a grid results in weighting measurements from remote stations much more heavily in computing the global average. This is illogical in that 1) we would have the least amount of confidence in those site measurements, and 2) these readings are the least important in terms of impact on humanity, due to their remoteness.

It is not a coincidence that these sites seem to show a much higher warming bias. It’s very easy for the modelers or other data fiddlers to adjust these values/assumptions to achieve whatever global average they want. It’s just like Mann and his tree rings.

What I don’t see in any of these models is appropriate factoring in of measurement uncertainty, which could solve this problem, or at least highlight the limitations. Of course we know that will never happen.

November 10, 2017 2:38 pm

“This cannot give an accurate impression of the true temperature (anomaly) distribution over this sensitive area.”

WRONG.

It is easy to test whether an interpolation works or not.

First you have to understand that ALL spatial statistics uses interpolation.

If I hold two thermometers 1 foot apart, average them, and report the average air temperature, the claim being made is this:

1. Take the temperature BETWEEN the thermometers and it will be close to the average. In other words, the average of the two is a PREDICTION of what you will measure in between them.

The question isn’t CAN you interpolate; the questions are:

1. How far apart can stations be, and still be used to produce an estimate that performs better than a guess?
2. What, IN FACT, is the estimated error of this interpolation method?

To do this in spatial statistics we create a DECIMATED sample of the whole. For example, we have over 40K stations (more in Greenland than Nick has). The sample will be around 5K stations.

We then use this sample to Estimate the whole field. This is a prediction.

Then you take the 35K that you left out and test your prediction.

For Greenland, Nick uses a few stations, same with CRU.. just a few

If you want to test how good his interpolation is you can always use More stations, because there are More stations than he uses.

http://berkeleyearth.lbl.gov/regions/greenland

There are 26 ACTIVE stations in Greenland proper, and 41 total historical.

So test the interpolation. Ya know, the science of spatial stats.
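
A minimal sketch of that decimation test, using synthetic "stations" and scipy's generic griddata interpolator as stand-ins (this is not Berkeley Earth's actual procedure): keep ~5K of 40K points, interpolate from them, and score the prediction at the held-out points.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)

# Synthetic "stations": random locations sampling a smooth anomaly field plus noise
n = 40_000
lonlat = rng.uniform([-180.0, -60.0], [180.0, 80.0], size=(n, 2))
truth = np.sin(np.radians(lonlat[:, 0])) + 0.5 * np.cos(np.radians(2 * lonlat[:, 1]))
obs = truth + rng.normal(0.0, 0.3, n)

# Decimate: keep ~5K stations, hold out the rest
mask = np.zeros(n, dtype=bool)
mask[rng.choice(n, size=5_000, replace=False)] = True

# Predict the held-out stations by linear interpolation over the kept ones
pred = griddata(lonlat[mask], obs[mask], lonlat[~mask], method="linear")
ok = ~np.isnan(pred)                     # points outside the convex hull come back as NaN
rmse = np.sqrt(np.mean((pred[ok] - truth[~mask][ok]) ** 2))
print("hold-out RMSE:", rmse)
```

If the hold-out error is comparable to the measurement noise, the interpolation is adding information; if it blows up as the kept set shrinks, it isn't.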

Reply to  Steven Mosher
November 10, 2017 7:42 pm

You’re talking as a mathematician, not a scientist dealing with real world measurements. Read the following: http://tournas.rice.net/website/documents/SignificantFigureRules1.pdf . Then tell us what is the average temperature between a thermometer reading 50 deg F and one reading 51 deg F. Better yet, then tell us what the average is when these are recorded values with an uncertainty of +- 0.5 deg F.

The following quote is pertinent from the above website “… mathematics does not have the significant figure concept.” Remember you are working with measurements, not just numbers on a page.
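
For the record, the textbook (GUM-style) propagation for the average of those two readings works out as below; which answer applies depends entirely on whether the +/- 0.5 deg F errors are independent or shared. A sketch only, treating the 0.5 as a standard uncertainty:

```python
import math

u = 0.5                      # stated uncertainty of each reading, deg F
x1, x2 = 50.0, 51.0
mean = (x1 + x2) / 2.0

# Independent (uncorrelated) errors: u_mean = sqrt(u1^2 + u2^2) / 2
u_independent = math.sqrt(u**2 + u**2) / 2.0
# Fully correlated errors (e.g. a shared calibration bias): u_mean = u
u_correlated = u

print(f"mean = {mean} deg F, "
      f"u = {u_independent:.2f} deg F if independent, {u_correlated:.2f} deg F if fully correlated")
```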

RW
Reply to  Jim Gorman
November 10, 2017 9:56 pm

Error propagation and uncertainties are inapplicable here according to Stokes and Mosher as the comment section on Pat Frank’s post here shows.

Ian W
Reply to  Jim Gorman
November 11, 2017 12:59 am

Then add that what they are claiming is that atmospheric temperature is a metric for atmospheric heat content, disregarding the enthalpy of the atmosphere. So now one of your thermometers is in New Orleans at close to 100% RH and the other is in Death Valley at close to 0% RH. Then Nick and Mosh are trying to construct the heat content of the air in Austin TX by infilling its ‘temperature’? Not only that, but they then proceed to provide results with a precision that is several orders of magnitude smaller than the errors and uncertainty.

Nick Stokes
Reply to  Jim Gorman
November 11, 2017 10:31 pm

“Your talking as a mathematician not a scientist dealing with real world measurements.”
Calculating a spatial average is a mathematical operation. Despite all the tub-thumping by folks who have read a metrology text, I have not seen any rational account of how they think it should be done differently.

Reply to  Jim Gorman
November 12, 2017 6:09 am

Calculating an average is not the question. How you show the uncertainty of the measurements you are using to calculate the average is the point. Do you ever show the range of uncertainties in the measurements you include? Do you ever examine your calculations to determine the correct number of significant digits in the temperatures you report?

Remember, most folks seeing your data and plots will believe that the temperatures you show are absolute, real, exact values. You are misleading them if you don’t also include an uncertainty range so they can see that temperatures may not be exactly what you say.

Nick Stokes
Reply to  Jim Gorman
November 12, 2017 2:34 pm

“Do you ever examine your calculations to determine the correct number of significant digits in the temperatures you report?”

Yes. I did a series on it earlier this year.

The tub-thumpers don’t have much to say here either, because they take no interest in sampling error, which is what dominates.

RW
Reply to  Steven Mosher
November 10, 2017 9:03 pm

Mosher. Averaging two thermometers assumes there is one true temperature. That is all. Claims about where that true temperature really is are themselves a second assumption.

Sounds like you are talking about bootstrapping, which is fundamentally limited by the data itself. It doesn’t remove systematic bias in coverage. It doesn’t conjure up more spatial resolution than is already there. Also, why do the maps neglect to depict the error associated with the estimates?

November 10, 2017 2:43 pm

It looks like Greenland has better coverage than China … or am I missing something? (Very possible 😎)

Nick Stokes
Reply to  Gunga Din
November 10, 2017 3:30 pm

You’re looking at October, which finished just 11 days ago. China results haven’t come in yet. Try looking at September.

Reply to  Nick Stokes
November 11, 2017 6:32 am

I see. Thanks.

Pamela Gray
November 10, 2017 2:46 pm

Wow. A new way of showing we are in a normal warm period when flora and fauna flourish as they ride the slight warming and cooling stable period between witch tit cold dives.

November 10, 2017 2:48 pm

For Africa… there are hundreds of Active sites

http://berkeleyearth.lbl.gov/regions/africa

People seem to forget WHY Berkeley Earth was set up.

The Number one complaint many skeptics had was MISSING DATA, or Poorly sampled regions.

CRU only uses 4-5K stations
NOAA and GISS around 7K

But it was clear that the Available data was Far Larger than this.

Before Berkeley Earth, people like Gavin argued that with 7K stations the world was oversampled.
That’s right, he argued that we didn’t even need 7K.

So what was the skeptical concern and theory?

The theory was that you could not capture the correct global temperature with such a small sample as 7K or 4K. The concern was “maybe it’s cooling where we don’t have measurements”, or not warming as fast where we don’t have them.

Well we can TEST that theory.

How?

USE MORE DATA… USE the data that other people cannot. They can’t use the data because they work in anomalies, but we don’t have to.

Well, we used more data. More data in Greenland, more in Africa, more in China, more in South America.

And we tested..

The answer: Yes, you can interpolate over large distances.
The answer: No, more stations than 7K will not change the answer.

And Willis even proved here at WUWT why this is the case.

AndyG55
Reply to  Steven Mosher
November 10, 2017 2:55 pm

Berkeley… doyen of the AGW farce… tested.

ROFLMAO.

Yes we have seen the results , Mosh !!

GIGO !!! more GI.. even more GO !!

Gabro
Reply to  Steven Mosher
November 10, 2017 2:57 pm

Mosh,

It’s not just the number, but where the stations are located.

The oceans are essentially unsampled, since there are in effect no actual “surface” data comparable to those on land, and older sampling practices even for below the sea surface are incomparable with current sampling methods.

GASTA is a fabrication and fiction, if not indeed a fantasy, meaningless and inadequate for any useful purpose.

A C Osborn
Reply to  Steven Mosher
November 10, 2017 3:51 pm

Let me remind you what Mr Mosher said on a previous post: “if you want to know what the actual temperature was look at the RAW DATA, if you want to know what WE think it should be look at the BEST final output”, or words to that effect.
I have the exact words on my computer if anyone wants them.

He also said “give me the latitude and elevation and I can tell you the temperature.”
Which is so easily disproved it makes you wonder if he actually lives in the real world.

Gabro
Reply to  A C Osborn
November 10, 2017 4:44 pm

No further proof would be needed of divorcement from reality than that imbecilic assertion, but wait, there is more!

Gabro
Reply to  A C Osborn
November 10, 2017 6:36 pm

Jacksonville, FL v. Basra, Iraq, average temperatures, ° F:

July hi: 90 v. 106.3
July lo: 74 v. 61.3

JAX is slightly farther south. Both slightly above sea level.

Nick Stokes
Reply to  A C Osborn
November 10, 2017 8:43 pm

“July hi: 90 v. 106.3
July lo: 74 v. 61.3”

And that is subtracted out in the anomaly.

A C Osborn
Reply to  A C Osborn
November 11, 2017 3:44 am

Nick, you are talking about anomalies, which have absolutely nothing to do with the Temperature at a Latitude and Elevation. The Range of Temps at that Latitude is 16 degrees on the high and 13 degrees on the low, in opposite directions.
You can find that on opposite sides of practically every Continent, let alone across the world.
Try reading what we are actually talking about.

Reply to  Steven Mosher
November 10, 2017 6:26 pm

Like Mosher said, there are hundreds of African stations.

After Berkeley Earth gets done with the data, the raw data showing no increase ends up showing 1.0C of temperature rise. The stations are chopped into little pieces and adjusted upwards by 1.0C under the biased algorithm.

Berkeley Earth can add to its credibility by showing the actual data rather than turning it into an adjusted-only regional trend. Credibility starting at zero, that is, given there are only pro-global-warming analysts working with the data.

Michael Jankowski
November 10, 2017 2:58 pm

It probably doesn’t affect results much, but there are clearly spots west of Africa where the meshing algorithm is inconsistent/failing. Lack of QA/QC? Incompetence? Laziness?

Nick Stokes
Reply to  Michael Jankowski
November 10, 2017 3:27 pm

The mesh has to be consistent, or various troubles arise. Are you sure you aren’t interpreting the Cape Verde Islands as nodes?

AndyG55
Reply to  Nick Stokes
November 10, 2017 4:23 pm

“The mesh has to be consistent,”

roflmao.. foot in mouth disease yet again, hey Nick

Your grid is NOWHERE NEAR consistent…… Africa, Greenland, Antarctica

ie .. it is RUBBISH..by your own words.

And I don’t believe you have consistent ocean data at the regular nodes your grid says you do.

You still haven’t shown us pictures of the “quality” sites at the 6 points I circled on your grid above

Is it that you JUST DON’T CARE about the quality of the data….

….. because you KNOW you are going to bend it to “regional expectations” anyway ??

Michael Jankowski
Reply to  Nick Stokes
November 10, 2017 7:20 pm

For example, start at Sierra Leone. Go due south about 5 triangles into the ocean. Instead of triangles in the mesh sharing an east-west line like all of the others out in the open ocean, there’s a pair sharing a north-south running line. Why would the algorithm treat those few triangles differently (happens on the opposite side of Africa in a spot as well), and why would nobody clean them up? They are along the equator, so presumably the north and south points for those triangles are closer than the east and west points. Seems hokey that it only happens for a select few and isn’t cleaned up to be consistent with the others.

If you go further from that spot 5-6 triangles to the southwest, there’s what appears to be an “open diamond” with blue in it…surrounded by green. The only way this could happen is if there’s a data point in there. Maybe I can see one very close to one of the mesh endpoints. Regardless, it shouldn’t be an open diamond.

Maybe it’s a problem with graphics, but there are several areas that show as open diamonds and not triangles.

Nick Stokes
Reply to  Nick Stokes
November 10, 2017 7:53 pm

“Why would the algorithm treat those few triangles differently (happens on the opposite side of Africa in a spot as well), and why would nobody clean them up? “
Well, firstly it isn’t a fault. It’s a change of pattern, but it’s still a correct mesh. But it’s true that generally in the ocean the divides go one way. That is because of the Delaunay condition (which is for optimality, not validity). The curvature of the Earth tends to stretch diamonds in the longitude. So why those exceptions?

They both lie on the Equator. The diamond is a square, and the diagonal can go either way.

As to cleaning up, there are about 1400 of these meshes, going back to 1900. They are automatically generated. I don’t check every one by eye. Not that I would intervene here anyway.

“Maybe it’s a problem with graphics, but there are several areas that show as open diamonds and not triangles.”
Yes, it is. I’m using WebGL, which draws 3D lines and triangles. It can happen that the lines are covered by the triangles. I try to avoid that by raising the lines, but it doesn’t always work. They are still there, but underneath the shading.

AndyG55
Reply to  Nick Stokes
November 10, 2017 7:59 pm

Not only that, but the points on those big triangles are on opposite sides of the continent.

Who in their right mind thinks that they can get anything within ‘cooee’ of reality with that.

It’s bizarre that Nick even thinks he can!!

I really am starting to wonder about his grasp on REALITY !!

A C Osborn
Reply to  Nick Stokes
November 11, 2017 4:00 am

Andy, just use NuSchool Earth to look at the Variation from one side of an Island or Continent to see how ridiculous that concept is. I started seeing it when I investigated BEST and then when Mosher made his stupid claim about only needing Latitude & Elevation with the season to give the Temperature.
Just take the 250 miles across the UK: currently East Coast 9.7C, West Coast 13C.
BEST then Smears these Temperatures with those of Europe for their “Final” temp; absolute rubbish.

Michael Jankowski
Reply to  Nick Stokes
November 11, 2017 8:58 am

Nobody said check all 1400 one-by-one. I saw those within a few seconds, though.

So those are the only two areas on the map where the diamond is a perfect square?

I noted they were along the equator…according to you and the algorithm, it makes more sense to have “nearest neighbors” north and south of the equator rather than running along it. That doesn’t make sense. It may not affect the calcs because so much of that open-water mesh is just an interpolation of data and doesn’t contain any data points, but it’s not a “correct mesh.”

Nick Stokes
Reply to  Nick Stokes
November 11, 2017 10:40 am

“So those are the only two areas on the map where the diamond is a perfect square?”
The Equator is. In the tropics, one diagonal is 4° of latitude; the other is 4° of longitude. Only on the Equator are those lengths equal.

The question would be, why aren’t there more changes of pattern? I add a tiny amount of fuzz to the locations so there is never an actual perfect square, which would confuse the algorithm. Maybe there is some bias in that process. I’m not going to spend time to find out.
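
A small illustration of why the perfect square is the degenerate case and why a tiny jitter settles it (using scipy.spatial.Delaunay as a stand-in; this is not the code actually running on Moyhu):

```python
import numpy as np
from scipy.spatial import Delaunay

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])

# All four corners lie on one circle, so either diagonal satisfies the Delaunay
# condition; which one the library returns is an arbitrary tie-break.
print(Delaunay(square).simplices)

# A tiny jitter breaks the tie, so the choice of diagonal becomes deterministic.
rng = np.random.default_rng(42)
print(Delaunay(square + rng.normal(0.0, 1e-9, square.shape)).simplices)
```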

Michael Jankowski
Reply to  Nick Stokes
November 11, 2017 12:37 pm

Yes, I understand why they’d be equal on the equator…and hence the mesh is not consistent when most of those have TIN lines going east-west across the equator and a few others opt to go north-south.

“…The question would be, why aren’t there more changes of pattern…”

Well, that is one way to look at it, but there SHOULDN’T be any changes of pattern along the equator. TIN lines should logically run east-west. Maybe an algorithm doesn’t know any better, but a user should.

DMA
November 10, 2017 4:12 pm

The TIN (Triangulated Irregular Network) that is here being called the mesh is the standard method of creating contour maps. The assumptions are that the data points are dense enough, and arranged such that the desired contour interval will result. Part of that assumption is that each line lies near enough to the surface that linear interpolation is the correct model of the line. In the case of land forms, connecting data points on opposite sides of terrain features erases that feature. The data collection process and data analysis process are both important to the final product. Because the TIN here is being used to model anomalies, I would expect the surface to be rather smooth, but widely spread data would require much more attention in processing. Each group of four data points defines two triangles, but the configuration of the central line controls the interpolation process. In Figure 2, a warm anomaly in eastern Greenland will shrink dramatically if the long east-west lines running to it are swapped to the north and south data points. This analysis is needed for every line that connects two data points that are very unequal.
I suspect the plots in this post and others made using this software to be generally useful for visual analysis but pretty iffy for data extraction. A contour map is checked by collecting new data randomly and testing it against the elevation extracted for those points from the map. These anomaly plots could be made using some of the data points, as discussed by Nick above, and then checking the unused ones; however, in the areas of widely spaced data this is not a meaningful check.
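
The effect of swapping that central line is easy to put numbers on. In the toy quad below (made-up anomaly values, not the real Greenland nodes), the value interpolated at the centre of the quad flips from 3.0 to 0.5 depending only on which diagonal the mesh happens to use:

```python
# Four corners of a square quad with made-up anomaly values (deg C):
# warm at the two corners of one diagonal, cool at the other two.
vals = {"SW": 3.0, "SE": 0.5, "NE": 3.0, "NW": 0.5}

# The centre of the quad lies on whichever diagonal is drawn, so linear
# interpolation there is simply the mean of that diagonal's two endpoints.
centre_if_diag_SW_NE = (vals["SW"] + vals["NE"]) / 2.0   # mesh uses the SW-NE diagonal
centre_if_diag_SE_NW = (vals["SE"] + vals["NW"]) / 2.0   # mesh uses the SE-NW diagonal

print(centre_if_diag_SW_NE, centre_if_diag_SE_NW)        # 3.0 vs 0.5 at the very same point
```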

Nick Stokes
Reply to  DMA
November 10, 2017 4:42 pm

” Each group of four data points defines two triangles but the configuration of the central line controls the interpolation process.”
It can. But that is fixed here, by the convex hull, which in turn satisfies the Delaunay condition. That means that the central line leaves the two opposite angles that add to less than 180° (the total is 360). More importantly, it means that for each point, the nodes of its triangle are in fact the closest nodes.

“in the areas of widely spaced data this is not a meaningful check”
It is if it works. But for the global average, what matters is that interpolation works on average, not exactly every time. That is why I showed above the performance of the integral as nodes are diminished. It holds up well.

Bill Illis
November 10, 2017 4:19 pm

The planet is heating up and these maps show it for sure.

Especially if you are colour-blind and see the 70% of the Earth “blue” areas as “red”.

I mean look at all these red-hot hot spots adding up to … well, we don’t know it seems.

angech
November 10, 2017 4:20 pm

“He also said “give me the latitude and elevation and I can tell you the temperature.”

We really only need one station at the average elevation of the world at a latitude North or South that lies on the equator for even yearly solar insolation, adjust for the slight bulge of the earth and average the cloudy days and do a one hundred year running average.
Mosh could knock it over before Breakfast.

Phil
November 10, 2017 4:41 pm

IIRC, at one point the earth’s surface was divided into 8,000 cells (later 16,000). The vast majority of the cells did not have ANY data. The problem is not focusing on the mean, but on the variance. Estimating the variance is important to properly determine uncertainty. One formula that estimates the standard deviation (the square root of the variance) is:

s = √( Σ(xᵢ − x̄)² / (n − 1) )

When using this formula on those thousands of cells that have no data, the denominator becomes -1. Taking the square root of -1 results in imaginary numbers. Ergo, if there is only one cell with no data, the entire averaging exercise results in imaginary numbers. So, the entire exercise of calculating global temperature anomalies results in imaginary numbers. QED. (/sarc)

Crispin in Waterloo
November 10, 2017 4:59 pm

It is obvious that the global average temperature cannot be known to 0.1 degrees precision. The % of data that is in-filled is simply too large and there must be a large uncertainty attached to those numbers.

Ethical question: is it cheating if he tells you how he did it?

Loren Wilson
November 10, 2017 5:06 pm

So it appears that most of the nodes in the ocean are not actual data; they have already been processed from whatever buoy or ship data was available. I think it would be informative to show the raw data collected for a day that go into making these nodes that then get further averaged.

RW
Reply to  Loren Wilson
November 10, 2017 8:50 pm

Loren, Tony Heller seems to be one of the few who takes raw temp values seriously. You can check out his blog or any of his YouTube videos. You can ditch half of the stations in the U.S. and get the same result because the coverage is so dense there. Raw values in the U.S. give a radically different picture of historical temperature. Undeniably cyclical, with what is probably a very strong positive correlation with the AMO and PDO. What NOAA and NASA do with the Global Historical Climatology Network data is a bit of a mystery. But it is based on anomalies. The mainstream view is fixated on anomalies. I’ve never seen a detailed analysis for why anomalies are superior to the raw data, or even an analysis of the raw data where coverage is dense and the conclusions limited to those areas. NOAA’s FAQ on “why anomalies?” is really superficial and just plain bad.

Reply to  RW
November 11, 2017 6:11 am

Anomalies are used by mathematicians to calculate one value from averages. They need to go back to school and learn how to handle measurements of real world values. An average from recorded values with a range of +- 0.5 deg F still has a range of at least +- 0.5 deg F. All you are getting is an average of recorded values, not what the actual (real world) average temperature was.

Let me say it again: an average from recorded values with a range of +- 0.5 deg F still has a range of at least +- 0.5 deg F. The conclusion is that unless you have an anomaly that is greater than the uncertainty range, you simply don’t have anything. To propagate the idea that these mathematical calculations have any relation to real world temperatures and can be used to develop public policy is unscientific and, sorry to say, approaching unethical science.

YOU CAN NOT DEAL WITH THIS DATA IN A PURE MATHEMATICAL WAY. YOU MUST DEAL WITH THEM AS A PHYSICAL SCIENTIST OR ENGINEER WOULD IN THE REAL WORLD WHILE USING MEASUREMENTS.

RW
Reply to  RW
November 11, 2017 7:41 pm

Jim. They don’t care.

I agree with you – I am currently reading a text on error analysis. Error propagation is a pretty complex topic, but it should be a required second or third year course for all undergraduate science majors.

Reply to  RW
November 12, 2017 2:56 pm

Of the few?

Berkeley Earth is all done in raw temps. And unlike Tony, we use all the data and avoid the crap (USHCN monthly) that he uses.

angusmac
November 10, 2017 5:35 pm

Great post, Tim. I agree that there is a meshing issue in the areas of the world that you highlighted.

This sort of problem happens frequently in structural engineering models, and the solution is to use a finer mesh. Additionally (not mentioned in your post), long thin triangles also give unreliable results. Ideally, these triangles should have an aspect ratio (length to breadth) of not more than 2, and preferably nearer to 1.
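
One quick way to flag the long thin triangles (using a common finite-element-style measure, longest edge over the smallest altitude; FE packages define aspect ratio in several slightly different ways, so treat this as illustrative):

```python
import math

def aspect_ratio(a, b, c):
    """Rough triangle aspect ratio: longest edge divided by the smallest altitude."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    longest = max(dist(a, b), dist(b, c), dist(c, a))
    area = abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2.0
    smallest_altitude = 2.0 * area / longest
    return longest / smallest_altitude

print(aspect_ratio((0, 0), (1, 0), (0.5, 0.9)))   # compact triangle: ratio close to 1
print(aspect_ratio((0, 0), (10, 0), (5, 0.5)))    # long thin sliver: ratio far above 2
```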

In a climate scenario this would require more weather stations in the areas with too coarse a mesh. Until that happens, the areas should be shaded grey and marked “insufficient data.”

Nick Stokes
Reply to  angusmac
November 10, 2017 6:33 pm

“This sort of problem happens frequently in structural engineering models and the solution is to use a finer mesh.”
Yes, but here the nodes are supplied measurement points. That isn’t an option.

“Additionally, (not mentioned in your post) is that the long thin triangles also give unreliable results.”
That can be true when you are solving a partial differential equation, as with elasticity, for example. Basically the problem is anisotropic representation of derivatives. But here it is simple integration. Uniform elements would be more efficient, but again, we don’t have a choice. This is a Delaunay mesh. You can’t get a better mesh on those nodes.

AndyG55
Reply to  Nick Stokes
November 10, 2017 8:05 pm

“You can’t get a better mesh on those nodes.”

GI-GO !!

November 10, 2017 6:39 pm

If you don’t have regional data, leave it grey.

That way you identify and show others how little data you have to make judgments with.

Anything else is misleading.

jim2
November 10, 2017 7:47 pm

Maybe the poor coverage explains the divergent UAH and HadCrut4 data.

Reply to  jim2
November 12, 2017 2:55 pm

Er, no.
RSS matches CRUT just fine.

UAH is the oddball.

RW
November 10, 2017 8:38 pm

Great post. I would love to see more posts drilling down on this.

Tony Heller has been pointing out the inadequacy of the global land-surface spatial coverage from land-surface stations for a long time. And I know it’s a known thing among many here on WUWT. But we need a thorough explanation for NOAA’s global heat maps that show intense positive anomalies across massive swaths of land where there are actually no land surface measurements whatsoever. To be fair, NOAA does not ‘hide’ the poor spatial coverage of the GHCN data set. You can find maps depicting cumulative coverage for different time periods. It’s jaw-droppingly bad until maybe the 80’s (and still clearly bad in areas, as we can all plain well see!) But the ‘polished & smoothed’ heat maps that get released are extremely misleading. They are then immediately broadcast out, amplified by newspaper ‘science’ writers who probably don’t even realize the coverage is as patchy as it is. It blurs the lines between science and propaganda, frankly. No real scrutiny done aside from a few sparse bloggers around the internet. Why do anomalies make for such a radical change in the time series trend compared to the raw values?

DWR54
Reply to  RW
November 10, 2017 10:15 pm

Do you prefer the satellite lower troposphere coverage? It shows virtually the same pattern as the surface data for October.

Are both wrong?

bitchilly
Reply to  DWR54
November 11, 2017 12:24 am

Do you prefer it now it matches the narrative? Mosher has pointed out the problems with the satellite “data” many times. I am afraid I have no faith in any of the “data” sets.

A C Osborn
Reply to  DWR54
November 11, 2017 4:12 am

Bitchilly, Satellite Data does NOT reflect the actual temperatures which we experience at the Surface.
Just take a look at IceAgeNow and how many new Cold Temp Records have been broken this year and how early the Snow has come as well.
We are not talking about the odd isolated incident but all over the world, including Australia and NZ.
So I am positive that the Satellites are measuring the heat being transported away from the Surface and through the Atmosphere.

Toneb
Reply to  DWR54
November 11, 2017 7:04 am

“Just take a look at IceAgeNow and how many new Cold Temp Records have been broken this year and how early the Snow has come as well.”

https://phys.org/news/2017-10-hot-weather-worldwide.html

Toneb
Reply to  DWR54
November 11, 2017 7:30 am
A C Osborn
Reply to  DWR54
November 11, 2017 11:50 am

Why are you showing me a climate activist site reporting 2016 so-called records that use Sydney as an example, when I am talking about this year?

RW
Reply to  DWR54
November 11, 2017 7:19 pm

DWR54 uses the most recent month to compare surface to satellite data and surmises they are in good agreement. I’ll take your word for it. Satellite data exists from 1979-ish onwards, and the spatial coverage of NOAA’s GHCN data set is garbage until the 80’s – and there are many who say it is still not good enough today.

The agreement between a satellite data set and a polished composite from the land surface stations in a recent month doesn’t make up for the paucity of them >35 years ago. Anomalies aren’t a magic bullet. They don’t solve the problems introduced by breaks in the time series and station changes without a lot of fine-grained scrutiny and analysis. Never mind the terrible spatial coverage. The NOAA site does a terrible job of explaining their methodology.

Reply to  RW
November 12, 2017 2:40 pm

Tony Heller looks at 1/40th of all the actual data

Nick Stokes
November 10, 2017 8:42 pm

From the article, which I should have noted earlier:
“The same is also true for most of South America and China (where the only data points appear to be in the heavily populated areas).”
That’s because it’s using October 2017 data. It’s early days. China and much of S America come in late. Look at September, or any earlier plot.

MrZ
Reply to  Nick Stokes
November 11, 2017 2:16 am

Hi Nick!
I think the reason people get upset with anomalies is that they project something that is not there. While you need to use them to calculate energy circulation, laymen read them as weather maps. A location that appears hot on your map could in reality just have gotten a bit milder. A hot place did not reach anywhere near record temps but cooled down slower during the night due to weather conditions, etc etc.
You sit on all the readings: would you agree that even though averages trend upwards, the extreme temps are actually less frequent on a global scale compared with what they used to be? When you only think in averages and anomalies, that fact gets hidden away.

Patrick MJD
Reply to  MrZ
November 11, 2017 5:11 am

He was in the business of making stuff up.

MrZ
Reply to  MrZ
November 11, 2017 8:29 am

Maybe I was too late in the thread. I will repeat the question in next discussion…

crowcane
November 10, 2017 8:50 pm

They haven’t the foggiest idea what is really going on, yet expect us to spend billions of dollars to change whatever is going on, all the while acting like they know everything about what is going on and making all of us feel like fools and idiots if we do so much as raise our hand to ask a simple question.

DWR54
November 10, 2017 10:11 pm

The Moyhu chart isn’t complete yet for October due to incomplete data, as described above by Nick, but for many of those specific areas mentioned in the above article, for instance eastern Greenland and Antarctica, the UAH global chart for October is a good match.

UAH is a satellite based data set of lower troposphere temperatures, but it covers the entire globe (apart from the very top of polar regions) and as such is pretty useful as a rough calibrator for the interpolated surface data.

The unusual warmth over Antarctica and eastern Greenland, also the anomalous cool temperatures across much of central Asia seen in the surface data are clearly duplicated in the satellite data.

A C Osborn
Reply to  DWR54
November 11, 2017 4:33 am

That semi match is close, but large areas of the rest of the globe don’t match very well at all.
Exactly how do you explain that?
It is a pity that the first globe of the post does not show the same areas as the Satellites.

DWR54
Reply to  A C Osborn
November 11, 2017 8:43 am

A C Osborn

That semi match is close, but large areas of the rest of the globe don’t match very well at all.
Exactly how do you explain that?

Which areas in particular? I see some variation but not much. Apart from areas previously mentioned, Alaska shows above average warmth in both maps, southern South America and parts of North Africa were colder than average, and Australia and South east Asia were unusually warm.

I think some of the remaining discrepancies might be explained by the anomaly base periods used: UAH uses 1981-2010 whereas Nick Stokes uses a (rather complicated) system based on “a weighted linear regression”. But the differences look quite small to me.

A C Osborn
Reply to  A C Osborn
November 11, 2017 12:21 pm

Well, how about East Africa, the sea off S America and the western UK?
The rest of the world is harder to compare due to the different views.

Reply to  DWR54
November 12, 2017 2:38 pm

UAH doesn’t cover the entire globe.
They interpolate over satellite gores and have no Arctic coverage.

November 10, 2017 11:55 pm

Ah the great Anti-Metrology Field.

Reply to  mickyhcorbett75
November 11, 2017 4:58 am

Also known as meteorology by adding E&O (Errors And Omissions).

David King
November 11, 2017 6:24 am

This discussion raises a question. Since the data is so scarce and uneven, how do the climate models deal with this lack of data? The answer is that the data, the parameters, are created by the computer. The software fills in the blanks. So one temperature is used for an area the size of Los Angeles to Vegas to Death Valley.
This climate modeling technique is often called “Garbage In, Gospel out.”

KO
November 11, 2017 8:58 am

Would anyone take any US or European temperature data seriously if it were collected from capital cities (state or national) and their suburbs only, and extrapolated for each entire continent? Oh wait…

Reply to  KO
November 12, 2017 2:37 pm

In the US alone there are 20,000 or more daily stations.
Thousands located in rural and pristine locations.

Drop every city with more than 10,000 people and you still have 1000s.
Select stations with no population and you still have thousands.
select stations with no population and you still have thousands