Meshing issues on global temperatures – warming data where there isn’t any

Guest essay by Tim Crome

The plots attached here are taken from the MOYHU blog maintained by Nick Stokes here. The software on the blog allows the global temperature anomaly data for each month over the last several years to be plotted; it also allows the mesh showing the temperature measurement points to be turned on and off.

This is a powerful tool that gives excellent opportunities to plot the temperature anomalies around the globe. Looking at the mesh used in the plotting routines does, however, raise some questions.

Figure 1 – Arctic and Northern Atlantic plot for October 2017

Figure 1 shows the data for the month of October 2017, centred on the East coast of Greenland. It shows that the whole of Greenland has a temperature anomaly that is relatively high. What becomes apparent when the mesh is turned on is that this is purely the result of the density of measurement points and the averaging routines used in generating the plots. This can be seen in Figure 2, zoomed in on Greenland.

Figure 2 – October 2017 plot showing Mesh and positions of data points centred on Eastern Greenland.

Figure 2 shows the same data as Figure 1 but with the addition of the mesh and data points. If we study Greenland it is very apparent that the temperature on the surface of most of the inland ice is, in this model, determined by one measurement point on the East coast of the country and a series of points in the middle of Baffin Bay, between the West coast and North East Canada. No account is taken of the temperatures of the interior of Greenland, which are often significantly below those occurring along the coastline.
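
To make the effect concrete, here is a minimal sketch (in Python, assuming SciPy; this is not the MOYHU code, and the station coordinates and anomaly values are invented for illustration) of how linear interpolation over a Delaunay mesh assigns values to the interior of a large triangle from its corner stations alone:

```python
# A minimal sketch of linear (barycentric) interpolation over a Delaunay
# mesh. Station positions and anomaly values below are hypothetical.
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

# Hypothetical stations (lon, lat): one on Greenland's east coast,
# three in Baffin Bay to the west.
stations = np.array([
    [-22.0, 70.5],   # east coast of Greenland
    [-60.0, 72.0],   # Baffin Bay
    [-58.0, 68.0],   # Baffin Bay
    [-52.0, 75.0],   # Baffin Bay
])
anomaly = np.array([2.1, 1.8, 1.5, 2.4])       # deg C, made up

mesh = Delaunay(stations)                      # triangulate the stations
shade = LinearNDInterpolator(mesh, anomaly)    # linear shading per triangle

# A point on the inland ice falls inside one of these large triangles and
# receives a weighted blend of the three corner stations, no matter how
# cold the interior actually is.
print(shade(-40.0, 71.0))
```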

Figure 2 also shows that there is a large part of the Arctic Ocean without any measurement points, such that the few points around its circumference effectively define the plotted values over the whole area.

Similar effects can also be seen at the Southern extremities of the planet, as shown in Figure 3. There are only two points in the interior of Antarctica and relatively few around the coast. For most of the East Antarctic Peninsula, about which we often hear stories of abnormal warming, the temperature anomaly plots are clearly developed from one point close to the South Pole and two locations some distance out at sea, North of the peninsula. This cannot give an accurate impression of the true temperature (anomaly) distribution over this sensitive area.

Figure 3 – October 2017 plot showing Mesh and positions of data points for Antarctica.

Another geographical region with very few actual measurements, and huge distances over which the data is averaged, is Africa, as shown in Figure 4. There is a wide corridor from Egypt and Libya on the Northern coast to South Africa with absolutely no data points, where the averages are determined from relatively few points in the surrounding areas. The same is also true for most of South America and China (where the only data points appear to be in the heavily populated areas).

Figure 4 – Plot of October 2017 data and Mesh for Africa.

Based on this representation of the data it is apparent that there are huge areas where the scarcity of data and the averaging routines will give incorrect results. The temperature anomaly distribution in these areas, especially Greenland and the Eastern Antarctic Peninsula, is often used to show that these sensitive parts of the globe are subject to extraordinary warming, threatening our very way of life. Such conclusions are invalid; they are purely the result of a scarcity of good data and the statistical practices used to fill the gaps.

335 Comments

jinghis
November 11, 2017 9:38 am

All of these averages and interpolations are correct. . . .

What they are showing is a regression to the mean and the sample size doesn’t really matter.

The problem is that the methodology is going to produce the mean temperature from 1750 to today, a single data point. A single data point is pretty useless.

That is what the entire climate change debate is all about, a single useless data point.

Nick Stokes
Reply to  jinghis
November 11, 2017 10:53 am

“A single data point is pretty useless.”
You can choose to reduce to a single point if you like. That’s a choice. There is plenty of information there.

Reply to  Nick Stokes
November 11, 2017 12:14 pm

“You can choose to reduce to a single point if you like. That’s a choice. There is plenty of information there.”

No, that is the point: all the “information” is lost. The averaging won’t even tell us the average global temperature, which is meaningless anyway.

What we want to know is high and low temperature trends and variances (not averages or anomalies) caused by changes in CO2 concentrations, and the natural responses to temperature change like humidity, clouds, albedo, sea surface skin temperatures, etc. NONE of these are included in temperature measurements, and measuring the other factors in isolation doesn’t tell us much either.

What the climate scientists have been doing is akin to the drunk looking under the street light for his keys, because that is the only place where he can see.

All we really need to do is establish a few discrete setups, probably along the equator, that measure all the variables continuously for ten or twenty years; then it would be easy to measure the effects of increasing CO2 levels.

If memory serves me right, the equivalent has already been done : )

Angusmac
November 11, 2017 3:44 pm

Nick, regarding your statement, “Uniform elements would be more efficient, but again, we don’t have a choice. This is a Delaunay mesh. You can’t get a better mesh on those nodes.”

This is the crux of the problem – we don’t have enough data points in the interiors of many of the large, sparsely covered areas (such as Africa, Antarctica and the Arctic). Therefore, we require more data points for accurate results. To illustrate this, I have marked up Tim’s Figure 2 with additional data points that would greatly improve the interpolation.

Until we have these additional data points the interpolation is inaccurate. In fact, it is practically an extrapolation because it estimates temperatures from coastal regions (with one type of climate regime) to interior regions (with a different type of climate regime). The temperatures in these interior regions would differ significantly from their respective coastal regions and cannot be estimated by interpolation.

I repeat my contention that those areas with sparse coverage should be shaded grey and marked “insufficient data.” When my engineers produce this sort of data, I call it a guesstimate because the results are nonsense.

Reply to  Angusmac
November 11, 2017 5:54 pm

“I repeat my contention that those areas with sparse coverage should be shaded grey and marked “insufficient data.” When my engineers produce this sort of data, I call it a guesstimate because the results are nonsense.”

We know that 90+% of the variance in monthly average temps can be explained by Latitude and elevation.
You tell me those two and any missing data can be predicted.
The test of sufficiency is NOT your eyeball; the test is mathematical.
And sparse data works just fine, because the change in average monthly temperature from place to place is highly predictable.

RW
Reply to  Steven Mosher
November 11, 2017 7:46 pm

What statistical method was used to explain the total variance as a function of those variables and can you recall what the other variables were?

A C Osborn
Reply to  Steven Mosher
November 12, 2017 7:10 am

There we have it again.
“We know that 90+% of the variance in monthly average temps can be explained by Latitude and elevation.
You tell me those two and any missing data can be predicted.”

Absolute bullshit, that is not even true for the UK, let alone across Continents and from Continent to Continent.

Reply to  Steven Mosher
November 12, 2017 2:17 pm

“Absolute bullshit, that is not even true for the UK, let alone across Continents and from Continent to Continent.”

Willis proved it here, *snip* (easy Steven)

Reply to  Steven Mosher
November 12, 2017 2:24 pm

“What statistical method was used to explain the total variance as a function of those variables and can you recall what the other variables were?”

Well, Willis did it his way; look at his post here.

We used regression. This is actually a very old technique, used in physical geography classes, but never mind.

It’s a regression that models temperature as a function of latitude (a spline), elevation, and season.
You can even include interaction terms.

Now before you complain that regression won’t work, don’t forget that this technique was used in a seminal and highly regarded paper by a famous skeptic.. hehe.

As for the other variables:

If you look at the residual of this regression, you can then say that the residual will contain the following:

1. Temperature effect due to climate change
2. Temperature effect due to land class
3. Influence of the ocean
4. Influence of cold air drainage
5. Influence of site problems
6. Instrument issues
7. Weather
8. Albedo

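A minimal sketch of the kind of regression being described (synthetic data, not the Berkeley Earth code; the |latitude| basis term here is a crude stand-in for the spline):

```python
# Sketch: monthly mean temperature regressed on latitude, elevation and
# a season*hemisphere interaction. All data below are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
lat = rng.uniform(-60, 70, n)                 # degrees
elev = rng.uniform(0, 3000, n)                # metres
month = rng.integers(1, 13, n)
season = np.cos(2 * np.pi * (month - 1) / 12)

# Synthetic "truth": cooler toward the poles and with altitude, with a
# seasonal cycle that flips sign across the equator, plus noise.
temp = (28 - 0.4 * np.abs(lat) - 0.0065 * elev
        + 12 * season * np.sign(lat) + rng.normal(0, 2, n))

# Design matrix: intercept, |lat| (stand-in for a latitude spline),
# elevation, and the season/hemisphere interaction term.
X = np.column_stack([np.ones(n), np.abs(lat), elev, season * np.sign(lat)])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)

resid = temp - X @ beta
print(f"variance explained: {1 - resid.var() / temp.var():.1%}")
# (high here by construction -- the synthetic data contain exactly
#  these terms; real station data are what the claim is about)
```
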
Nick Stokes
Reply to  Angusmac
November 11, 2017 8:59 pm

“Until we have these additional data points the interpolation is inaccurate.”
As Steven says, that is just eyeballing. You need to quantify it. Of course extra points will help, but do they make the difference between accurate and inaccurate? Quantification has been done for a long time. Hansen in 1987 said that up to 1200 km gave enough correlation to interpolate. BEST has studied this, and many others. So have I, e.g. here, and in the study I showed above where I culled points. The latter showed that you can take out a whole lot of points without much effect on the average. That suggests that putting in more points would not make much difference either.
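
The culling test is easy to sketch (synthetic data on a plane standing in for the real spherical mesh; this is not the code actually used):

```python
# Sketch: drop half the stations, re-interpolate, compare the area mean.
import numpy as np
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(1)
pts = rng.uniform(0, 100, size=(500, 2))                  # synthetic sites
vals = np.sin(pts[:, 0] / 15) + rng.normal(0, 0.3, 500)   # field + noise

gx, gy = np.meshgrid(np.linspace(5, 95, 80), np.linspace(5, 95, 80))

def field_mean(p, v):
    """Area mean of the linearly interpolated field on a fixed grid."""
    return np.nanmean(LinearNDInterpolator(p, v)(gx, gy))

keep = rng.choice(500, size=250, replace=False)           # cull half
print(field_mean(pts, vals), field_mean(pts[keep], vals[keep]))
# The two means are typically close -- IF the field is smooth at the
# scale of the gaps, which is exactly what is in dispute for Greenland.
```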

MrZ
Reply to  Nick Stokes
November 12, 2017 12:18 am

Hi Nick! Do you have time for this one from above, please?
I think why people get upset with anomalies is that they project something that is not there. While you need to use them to calculate energy circulation, laymen read them as weather maps. A location that appears hot on your map could in reality just have gotten a bit milder. A hot place did not reach anywhere near record temps but cooled down slower during the night due to weather conditions, etc., etc.

You sit on all the readings; would you agree that, even though averages trend upwards, the extreme temps are actually less frequent on a global scale compared with what they used to be? When you only think in averages and anomalies, that fact gets hidden away.

Nick Stokes
Reply to  Nick Stokes
November 12, 2017 1:10 am

MrZ,
Anomalies are often misunderstood. They are simply formed at each location by subtracting from the observed temperature some average, or normal, value for that place and time. Usually it is a historical average, often for a fixed period, like 1961-90. So I don’t think they have the properties you attribute to them. They are also usually calculated only for monthly averages or longer; rarely for daily temperatures.

The basic idea is that the anomaly has the information content of the weather. The trend of temperature is the same as the trend of anomaly, at a location.
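
That definition is a one-liner in code (a minimal sketch with a toy table; the station, years and temperatures are made up):

```python
# Sketch: anomaly = observation minus the station's 1961-90 normal for
# the same calendar month. All numbers below are invented.
import pandas as pd

df = pd.DataFrame({
    "station": ["X"] * 4,
    "year":    [1961, 1990, 2016, 2017],
    "month":   [10, 10, 10, 10],
    "temp":    [4.2, 5.0, 6.1, 6.8],    # monthly means, deg C
})

# The 1961-90 normal for each station and calendar month...
base = (df[df.year.between(1961, 1990)]
        .groupby(["station", "month"])["temp"].mean()
        .rename("normal").reset_index())

# ...subtracted from every observation gives the anomaly.
out = df.merge(base, on=["station", "month"])
out["anomaly"] = out["temp"] - out["normal"]
print(out[["year", "anomaly"]])
```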

MrZ
Reply to  Nick Stokes
November 12, 2017 1:42 am

Appreciate that, but still, do you have an opinion on what monthly average trending represents in terms of actual weather? Are we getting more or less extremes? Presenting change as uncommented anomalies alone suggests the latter. But is it true?

Nick Stokes
Reply to  Nick Stokes
November 12, 2017 2:30 pm

MrZ,
The default view is that variability is much as before, and climate change just shifts the mean. It could be otherwise, but that would need to be shown.

Angusmac
November 11, 2017 4:51 pm

The mark-up of Tim’s Figure 2 with additional data points is here:
[image]

donald penman
November 11, 2017 11:05 pm

The areas measured could be weighted by population density, which would remove the dominance of ocean temperature readings over land temperature readings, because no one lives in the oceans.

NeilC
November 12, 2017 1:26 am

According to Fig 1, the colour for the UK shows +2 deg C (maybe a little higher). From daily readings for 27 locations in the UK for October 2017, the temperature difference from the last 20 years’ average was 1.2 deg C.

0.8 deg C is quite a large margin of error for such a small land area.

Can you explain?

Nick Stokes
November 12, 2017 1:43 am

Well, the first thing to note is that it is a different anomaly base. This plot uses 1961-90. The last 20 years would have been considerably warmer, and the present anomalies less.

But the main thing is that this facility is not forming averages. It colours each triangle according to the correct colours for each station reading, and shades in between. It is a device for visualising local anomalies. If you go to the original, you can zoom in on the UK (right button drag upward), and you can shift-click on each location to find out details, including name, temperature and anomaly. Here is the expanded picture of the UK:
[image]

MrZ
Reply to  Nick Stokes
November 12, 2017 3:46 am

Here you could extend shift-click with information like the highest and lowest daily temps for the selected station and period, including how they rank among all measurements at that specific station.
If you do that you’ll see that the extreme readings are in the past.
One single station could still be an outlier, but you will see that nearby stations will have the same dates for the extremes.

Nick Stokes
Reply to  MrZ
November 12, 2017 3:50 am

It is a system for displaying the month’s GHCN and ERSST temperatures, not a climate encyclopedia.

MrZ
Reply to  MrZ
November 12, 2017 5:08 am

Of course not. It was an example to illustrate the above line of thought. Do it offline; I’m sure you most probably have already done so for other reasons. You’ll see that the dramatic temperature events happened in the past.

MrZ
Reply to  MrZ
November 12, 2017 5:18 am

At least in the US. I have not done it globally.

Nick Stokes
Reply to  MrZ
November 12, 2017 2:52 pm

I did my locality here. But I think record temperatures aren’t a very good guide, at least old ones. It only takes one aberrant thermometer (or reader, once) to set a record.

MrZ
Reply to  MrZ
November 13, 2017 2:41 am

You got it!
It gets more interesting when you group all records by year, because, as you say, a record at one single station does not say much.
Also, plot them graphically and you will see they match weather patterns. I think the odd ones disappear in the noise when you group them. Then if you finally colour them according to rank, with 1st as red and gradually 10th or worse as white, you get a very interesting result when comparing years.

My point was, though, that anomalies and averages hide the dynamics. I blame it mostly on averages.

NeilC
Reply to  Nick Stokes
November 12, 2017 4:34 am

The WMO suggests that when using anomalies, the latest 30-year period should be used, thus 1981-2010, not the colder period 1961-1990, which is always used when you want to show more warming.

Reply to  NeilC
November 12, 2017 2:16 pm

No

NASA uses 1951-1980
CRU uses 1961-1990

The reason is related to reducing noise in the anomaly calculation.

Both NASA and CRU use an anomaly method, so the best period is the period with the most stations.
That way the noise in the monthly base average is reduced.

NASA uses 1951-1980 because the number of stations is at its MAX (globally) during that period.
CRU does temperature by HEMISPHERE, and the period 1961-1990 is a better choice when you look for the decades with the maximum stations in both the Northern and the Southern hemisphere.
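
The station-count argument can be sketched like this (a hypothetical toy record set, not the actual GHCN inventory):

```python
# Sketch: pick the 30-year base period that keeps the most stations.
# 'records' maps a station id to the set of years it reports (made up).
records = {
    "A": set(range(1950, 2018)),
    "B": set(range(1965, 2018)),
    "C": set(range(1940, 1995)),
}

def stations_in_window(start, min_years=20):
    """Stations with at least min_years of data in [start, start+29]."""
    window = set(range(start, start + 30))
    return sum(1 for yrs in records.values() if len(window & yrs) >= min_years)

# 1961-1990 wins for this toy inventory: all three stations qualify.
best = max([1951, 1961, 1981], key=stations_in_window)
print(best, stations_in_window(best))
```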

Reply to  NeilC
November 12, 2017 2:16 pm

arrg, flip the years for each series

robinedwards36
November 13, 2017 3:04 pm

My feeling is that people seem to forget that what actually matters to us (real people), who live, work and die on the Planet, is not what the “anomalies” happen to be for some sort of selected area but what the /actual/ temperature is or has been for given sites, which is where the real people are. It may be Valencia, or Reykjavik, or London Heathrow, or De Bilt, or Turin, or Las Palmas, or Prague, or Warsaw, or Moscow, or Krasnoyarsk, or Vladivostok, or Helsinki, or Teheran, or Mumbai, or Singapore, or Melbourne, or Santiago, or Seattle, or Winnipeg, or Fairbanks, or Kansas City, or Caracas, or Cape Town, or other real places, small and large, where people grow crops, or perhaps harvest fish. The temperatures over central Greenland, the mid and southern Pacific, Antarctica, the North Pole, the Canadian North West, North Central Siberia, the Sahara and Gobi deserts, though they may be “interesting”, are hardly of real importance when it comes down to the practicalities of growing coffee, rice, wheat, potatoes, beef, alfalfa, maize and so on – the basic foodstuffs that we need on a daily basis. Actual SITE data with measured temperatures, not anomalies from some cherry-picked base, are what matter in practice.
If these are available on a monthly basis for a substantial number of years we can readily remove “seasonality” and compute how the data are changing for sites where people live and work. We still have the actual measured temperatures that enable us to judge just how dire or otherwise our situations are, and an index of how temperatures have changed in the past. Correctly and efficiently analysed, the presence or otherwise of step changes can be investigated. These important aspects of Climate get smeared over by gridding techniques or by averaging over substantial groups of dispersed sites.
Please think in terms of real temperatures at real sites, not “anomalies”, when contemplating the potential effects of a changing climate on the real world.

Nick Stokes
Reply to  robinedwards36
November 13, 2017 3:20 pm

“Actual SITE data with measured temperatures, not anomalies from some cherry-picked base”
What is shown here, on land, are actual site monthly data, unadjusted. The anomalies are simply formed by subtracting an average value for that site and month, based on history there.

The things to look for are the spatial patterns. They are on a large scale. If Prague is warm, it is because the whole of central Europe (and beyond) is warm. And they are linked. It is now well known that if the Eastern equatorial Pacific gets warm, the rest of the world is likely to as well (El Nino).

robinedwards36
November 14, 2017 7:06 am

Yes, I understand that, and have verified it for myself countless times. Whole regions respond to whatever drives the temperature changes that we notice. What I do not understand is why climatologists seem to be unable to recognise that many climate changes can and do occur suddenly, punctuating what is an otherwise stable situation with a rise or fall that can happen over a month or two. It happens everywhere, and has done so throughout the numerical temperature records that we have. I have been aware of it for over twenty years.

angusmac
November 14, 2017 4:13 pm

To Nick Stokes & Steven Mosher
Nick, I have read Hansen and Lebedeff (1987) regarding 1200 km interpolation; however, it deals with global and hemispheric averages of temperature. It does not address the problem of interpolating from coastal regions (with one type of climate regime) to interior regions (with a different type of climate regime).

I summarise the problem of requiring interior data by using the NOAA maps for Africa. Figure A shows the temperature data points and Figure B shows the reported temperature data.

[Figure A image]
Source:
https://www.ncdc.noaa.gov/temp-and-precip/global-maps/

[Figure B image]
Source:
https://www.ncdc.noaa.gov/temp-and-precip/global-maps/201709?products%5B%5D=map-percentile-mntp#global-maps-select

Steven, are you sure that, “We know that 90+% of the variance in monthly average temps can be explained by Latitude and elevation…any missing data can be predicted”? Are you sure that the algorithm is 90+% accurate when it turns no data in Africa (Figure A) into record heat (Figure B)?

Nick Stokes
Reply to  angusmac
November 14, 2017 4:51 pm

“Are you sure that the algorithm is 90+% accurate when it turns no data in Africa”
Those NOAA maps lowball the amount of data. Your map is dated Oct 13 for September data. But Africa data comes in slowly during the month. That doesn’t mean it doesn’t exist. Even my graph, shown in the article, was made on 9 November for October data. There are certainly gaps, but there are a lot more measurements than the plot you showed indicates. Here is the current map for September 2017:
[image]

Angusmac
November 14, 2017 4:22 pm

Here are Figures A and B.
[Figure A image]
[Figure B image]

November 15, 2017 9:00 am

Here is a paper that WUWT would never cover

https://www.nature.com/articles/sdata2017169

Why?

Now, HadCRUT, GISS, Berkeley were all built without this data.

INTERPOLATION is a PREDICTION of what we would have measured had we had a thermometer in the location.

As we recover new data we test the predictions.
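
That test is straightforward to sketch (synthetic data; a leave-one-out version of the idea, not the method used in the paper):

```python
# Sketch: hold each station out, interpolate from the rest, and compare
# the "prediction" with the held-out reading. Synthetic data throughout.
import numpy as np
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(2)
pts = rng.uniform(0, 100, size=(100, 2))                  # synthetic sites
vals = np.cos(pts[:, 1] / 20) + rng.normal(0, 0.2, 100)   # field + noise

errs = []
for i in range(100):
    mask = np.arange(100) != i                 # leave one station out
    f = LinearNDInterpolator(pts[mask], vals[mask])
    pred = f(*pts[i])
    if not np.isnan(pred):                     # skip hull-edge stations
        errs.append(float(pred) - vals[i])

print(np.mean(errs), np.std(errs))   # bias and spread of the predictions
```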

[Steve, the option exists for you submit your own story. This seems a worthy candidate. Please refer to the link at the top of the home page for instructions. -mod]