Mark Fife writes:
This is my eighth post in this series where I am examining long term temperature records for the period 1900 to 2011 contained in the Global Historical Climatology Network daily temperature records. I would encourage anyone to start at the first post and go forward. However, this post will serve as a standalone document. In this post I have taken my experience in exploring the history of Australia and applied it forward to cover North America and Europe.
The way to view this study is as a statistic-based survey of the data, meaning I have created a statistic to quantify, rank, and categorize the data. My statistic is very straightforward: it is simply the net change in temperature between the first and last 10 years of 1900 through 2011 for each station.
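In code, the statistic amounts to something like the following minimal sketch. It assumes each station's record has already been parsed into a mapping from year to annual mean temperature; that structure is an assumption for illustration, not the raw GHCN file format.

```python
def net_change(station, first=range(1900, 1910), last=range(2002, 2012)):
    """Net change: mean of the last 10 years minus mean of the first 10."""
    early = [station[y] for y in first if y in station]
    late = [station[y] for y in last if y in station]
    if len(early) < 10 or len(late) < 10:
        return None  # incomplete record at either end; skip this station
    return sum(late) / len(late) - sum(early) / len(early)
```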
Below is a list of countries showing the lowest net change, the highest net change, and the number of stations per country.
This is an old-fashioned histogram showing how the stations ranked in terms of overall temperature change. The data falls in a bell-shaped curve; the underlying distribution is very close to normal, which means analysis using normal-distribution techniques will yield very reasonable estimates. That is significant to a statistician, but you don’t need any statistical knowledge to understand this.
The middle bin spans -0.5° to 0.5°. The share of stations showing an overall drop in temperature is 40%; slightly less than 60% of the stations show an increase. The absolute change is statistically insignificant in 74.6% of the stations.
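Those tallies can be reproduced from the per-station net changes with a few lines of this sort. This is illustrative only, not my actual code; the `changes` list, the 1-degree bin width, and the ±1° significance band (which the thread settles on further down) are the assumptions here.

```python
from collections import Counter

def summarize(changes, band=1.0):
    """`changes` is a list of per-station net changes, in degrees."""
    n = len(changes)
    pct_cooling = 100 * sum(1 for c in changes if c < 0) / n
    pct_warming = 100 * sum(1 for c in changes if c > 0) / n
    pct_insignificant = 100 * sum(1 for c in changes if abs(c) <= band) / n
    histogram = dict(sorted(Counter(round(c) for c in changes).items()))
    return pct_cooling, pct_warming, pct_insignificant, histogram
```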
The following graph shows a normalized look at each category: no significant change, significant warming, and significant cooling. The graph is of rolling 10-year averages, with each plot normalized so that the 1900-1910 average is zero.
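The normalization is simple to express. Here is a sketch in pandas, assuming a per-station Series of annual means indexed by year (again an illustrative structure, not the actual working code):

```python
import pandas as pd

def normalized_rolling(annual):
    """`annual` is a pandas Series of annual means indexed by year."""
    rolling = annual.rolling(window=10).mean()  # rolling 10-year average
    baseline = annual.loc[1900:1910].mean()     # 1900-1910 average
    return rolling - baseline                   # baseline plots as zero
```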
You will note that, though the overall slope of each plot differs significantly, the shape of the plots is nearly identical. A random sampling of individual station data shows the same holds for individual stations across the range. For example, Denmark’s Greenland station shows the 1990-2000 average is the same as the 1930-1940 average.
Short term changes, such as the warming into the 1930s, hold true for the clear majority of stations. Other examples would be the 1940s temperature jump, the post-1950 temperature drop, and the late-1990s temperature jump.
Long term changes vary significantly.
There are several conclusions to be drawn from this analysis.
- There is no statistically significant difference between North America and Europe. Those stations showing significant cooling are just 8% of the total, so the expected number of the 17 European stations showing cooling would be just one (17 × 8% ≈ 1.4); the number expected to show significant warming would be three. From a statistical sampling standpoint, 17 is simply not a robust enough sample size to yield accurate estimates.
- Short term changes which appear in the vast majority of stations from Canada to the US to Europe are probably hemispheric changes. However, there is no indication these are global changes, as there is no evidence of similar changes in Australia. Australia did not experience a 1930s warming trend, for example. In fact, the overall pattern in Australia is obviously different from what we see here.
- The evidence strongly suggests the large variation in overall temperature trends is due to either regional or local factors. As shown in the data table at the beginning, the extremes in variation all come from the US. As noted before, there just aren’t enough samples from Europe to form accurate estimates for low percentage conditions.
- Further evidence suggests most of the differences in overall temperature change are due to local factors. What we see from the US is that extreme warming is generally limited to areas with high population growth or high levels of development. Large cities such as San Diego, Washington DC, and Phoenix follow the pattern of significant change. Airports also follow this pattern. However, cities like New Orleans, St Louis, El Paso, and Charleston follow the pattern of no significant change.
In conclusion, based upon the available long-term temperature data, the case for global warming is very weak. There is evidence to suggest a hemispheric pattern exists. The evidence further suggests this is a cyclical pattern, evident in localized temperature peaks in the 1930s and the 1990s. However, changes in local site conditions due to human development appear to be the most important factor affecting overall temperature changes. Extreme warming trends are almost certainly due to human-induced local changes.
What is unclear at this point is the significance of lower levels of human-induced local changes. Assessing this would require examining individual sites to identify a significant sample of sites with no changes. Unfortunately, the US, Canada, and Europe are not nearly as obliging with that kind of information as the Aussies are. I must admit the Australians have done an excellent job of making site information available. Having the actual coordinates of where each testing station resides made that easy: I literally pulled them up on Google Maps and was able to survey the site and surrounding areas.
It appears this is about as far down the rabbit hole as I am going to get, at least not without a lot of work, which at this point doesn’t appear warranted.
For more, visit me at http://bubbaspossumranch.blogspot.com/
Mark Fife holds a BS in mathematics and has worked as a Quality Engineer in manufacturing for the past 30 years.
The paper by Belda et al. (2014) is probably the best to date in reconstructing the Köppen-Trewartha climate classification map from modern datasets.
The Belda maps show the climate regions of the world (except Antarctica) for two periods, 1901-1931 and 1975-2005, based on a 30-minute grid with an average cell area of about 2,500 km² (about 50,000 grid cells cover the 135 million km² land area of the Earth outside Antarctica).
Between the two periods, separated by roughly 75 years, 8% of the cells changed climate type. A scatter diagram of the distributions for the two periods shows little divergence from the straight line passing through the origin with slope unity; R² is 99.5%.
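For readers who want to check that arithmetic, scatter about the identity line can be scored like this. This is a sketch only; the per-type cell counts and the variable names are assumptions, not taken from the paper.

```python
import numpy as np

def r2_about_identity(period1, period2):
    """Score period2 values against period1, relative to the line y = x."""
    x = np.asarray(period1, dtype=float)
    y = np.asarray(period2, dtype=float)
    ss_res = np.sum((y - x) ** 2)          # residuals about the identity line
    ss_tot = np.sum((y - y.mean()) ** 2)   # total variation in y
    return 1.0 - ss_res / ss_tot
```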
The paper does not discuss error bars. However, the CRU (UK) has revised its climate data to remove a wet bias, an adjustment that would increase R², indicating even less change than these maps show.
Belda M, Holtanová E, Halenka T, Kalvová J (2014) Climate classification revisited: from Köppen to Trewartha. Clim Res 59:1-13. https://doi.org/10.3354/cr01204
http://www.int-res.com/articles/cr_oa/c059p001.pdf
“Assessing this would require examining individual sites to identify a significant sample of sites with no changes. Unfortunately, the US, Canada, and Europe are not nearly as obliging with that kind of information as the Aussies are. I must admit the Australians have done an excellent job of making site information available. Having the actual coordinates of where each testing station resides made that easy.”
GHCN has all the metadata you need. You need to actually use it.
Also, you cannot simply compare the first few years to the last. You have to control for, or adjust for, changes in observation practice.
This is the same thing that UAH and RSS do with satellite series. When the instrument changes over your time period, you have to adjust or control for it. When the location of your sensor changes (in satellites this is orbital decay), you have to account for it. And finally, when your time of observation changes (in satellites this is diurnal drift), you have to account for it.
If you dig shallow, you come up with shallow answers and conclusions.
However, folks here will like your false conclusions, so no one will question them.
“GHCN has all the metadata you need. You need to actually use it.”
No, it hasn’t. I’ve checked the metadata for stations in Sweden, where I can verify it, and the quality is abysmal.
Here we go. “You have to change the data.” Of course.
Steven: So far the presentation has actually been questioned fairly broadly, which is a hallmark of most sceptical sites and seems to be distinctive. Consensus sites don’t appear to be secure enough to venture outside their echo chambers and don’t let even mild dissenters in. There are a few contrarian individuals who, although they disagree with significant CO2-based warming, are no longer tolerated on this site because of the one-topic scientific dreck they peddle ad nauseam. How about that? Invitations are, however, open to Tamino, Mann, Schmidt, Hansen, Trenberth… but they fear to debate the subject (you’ve seen this on TV).
Remarks about “folks here” are fine for scientifically illiterate useful idiots to make, but they reflect badly on you. Scientifically literate sceptics who came to the issue uncommitted sought only the basis for the dangerous-warming case, and were denied that reasonable request.
Even you eventually adopted the strawman tactic of presenting that there was some warming and pretending that the opposition didn’t believe this, or didn’t believe there is even a greenhouse effect. This is the whole argument you all have to offer an inquirer. Indeed, you all didn’t think the warming itself was strong enough in your own minds, so the whole of Climate Science became the adjustment of data to enhance your uninformative answer.
The Climategate affair gave us the most convincing answers to our questions. No TOBS or station moves were needed to erase the LIA or the MWP. The Karlization of ocean temperatures followed because a lengthy Pause could no longer be tolerated or rationalized, and thousands of fit-for-purpose, high-tech buoys were giving inconvenient answers.
Steven, can you, in as clear a manner as possible, answer a few self-evidently legitimate questions? 1) What is the basis for the prognostication of catastrophic global warming that justifies trillions of dollars a year in expenditures, trillions more in increased power and fuel bills, and trillions more in costs for food, shelter, clothing, etc., all of which will impoverish billions of people and destroy the world economy (in these terms, it sounds more like the emergency of a 50 km diameter bolide strike)? 2) Do you or do you not believe there are very significant pluses on the benefits side of the cost-benefit analysis for carbon, and what are they? 3) Given the difficulty of getting people to go along with a drastic solution to the danger, do you think a cadre of elites in a global government, suspending freedoms and managing by decree, will be necessary to avert a planetary disaster?
I think you see what troubles thinking sceptics.
I already thought of that adjust-for-data-gathering-practices thing. One of the many things I did was to examine individual stations with an eye to abrupt changes. I detected and verified many mean shifts in individual stations. Testing is nothing more than a test for equality of the means before and after a shift. There were plenty of examples where the shift occurred at a time without evident overall trends. Detecting such a shift within an overall trend is more difficult, but not impossible: take a linear trend line and normalize the data against it, turning the series into a plot of variations from the predicted trend. You can do that with any trend model; it is the basis of error approximation.
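A minimal sketch of that kind of mean-shift test, assuming annual values and a candidate break year (illustrative code, not the analysis actually run here): detrend first, then compare the residual means on either side of the break.

```python
import numpy as np
from scipy import stats

def shift_test(years, temps, break_year):
    """Welch t-test on detrended segments before/after a candidate break."""
    years = np.asarray(years, dtype=float)
    temps = np.asarray(temps, dtype=float)
    slope, intercept = np.polyfit(years, temps, 1)   # linear trend line
    resid = temps - (slope * years + intercept)      # variation from trend
    before = resid[years < break_year]
    after = resid[years >= break_year]
    t, p = stats.ttest_ind(before, after, equal_var=False)
    return t, p   # a small p-value suggests a genuine mean shift
```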
For the Australian data, which was just 10 stations, I was able to detect significant site changes from the data. Thus I knew which sites I was going to reject before I even reviewed the sites.
I have done exactly the same thing running through numerous data sets. So this is nothing new to me.
But here is the rub: there is no abrupt change corresponding to the time-of-observation change which was supposed to have happened in the 1960s, or so I hear. Which isn’t surprising either.
What possible difference does it make if I record today’s high and low this evening versus recording this morning’s low in late morning and observing today’s high tomorrow morning? So long as I do my recording at the same time each day, I will capture the high and low for the preceding 24 hours.
It doesn’t appear in the data, and it doesn’t follow logically.
Steven Mosher April 2, 2018 at 2:58 am: “GHCN has all the metadata you need. You need to actually use it.”
From the man who has said many times that he does not use USHCN or GHCN.
e.g.
Steven Mosher August 3, 2010 at 8:23 am
Ross’s comments on the changes in sampling are but a first step.
Nobody who uses GHCN data uses all 7,280 stations. So for starters you can’t look at the entire sample of GHCN and draw any substantive conclusion. For example, Zeke and I take in all 7,280 stations and then do a preliminary screen. The first screen reduces the total to those stations that have at least 15 years of full data within the 1961-1990 period. That screen drops a couple of thousand stations, so of the original 7,280 stations you end up using about 4,900 of them.
Basically each station gets assigned to a grid cell (3 degrees by 3 degrees). When you do that with 4,900 stations you get some grids with one station and others with as many as 36.
WRT GHCN adjustments: I don’t use the GHCN adjustments. The file in question has a few minor issues that Zeke has noted, and some other issues that I’ve identified but haven’t made public.
Also, the metadata in the GHCN inventory is stale.
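The screen and gridding described in that quoted comment could look roughly like this. This is a hedged reconstruction for readers following along, not Mosher's actual code; the input structure (station id mapped to latitude, longitude, and the set of years with full data) is an assumption.

```python
def screen_and_grid(stations, min_years=15):
    """`stations` maps id -> (lat, lon, set of years with full data)."""
    grids = {}
    for sid, (lat, lon, full_years) in stations.items():
        # Screen: at least `min_years` full years in the 1961-1990 base period.
        if sum(1 for y in full_years if 1961 <= y <= 1990) < min_years:
            continue
        cell = (int(lat // 3), int(lon // 3))   # 3-degree by 3-degree cell
        grids.setdefault(cell, []).append(sid)
    return grids
```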
Meh…. what are you going to do about it?
Isn’t THIS the question? And the answer is ‘nothing’. Nothing we do will make a difference. We’ll just carry on, adapting to local/global changes as and when necessary – as we always have.
Discussions of whether it’s hotter, colder, warming, cooling, etc. are irrelevant to anyone unless you just want to ‘argue (discuss) it to death’. Fruitless. Pointless.
If only TPTB understood this.
Dr. S, I don’t know if you saw my comment thanking you for your input on the kinetic energy of a 15-micron photon, but I found my error, so I withdrew my comment about that specific thing.
Thanks again.
NASA Climate Science Fraud for all to see. We went from cooling to warming by jiggling the numbers in the computer. And now the world acts on this fraud and spends its limited resources on windmills and bureaucrats and fatcats and elites. All based on the lie shown in the charts. Someone ought to go to jail over it.
This isn’t how science works. You have to be looking for the warming in order to find it. Sheesh.
The whole conversation here, excluding a link to Pat Frank, is woolly thinking because of the comparative absence of error terms. Real errors, including bias that does not cancel positives against negatives; real errors like the difference between liquid-in-glass and electronic thermometers; real errors from systematic studies of the effects of shelters, including drift with age.
There are bits and pieces of error analysis in the literature, but they absolutely fail when compared to the best-practice recommendations of the Paris-based Bureau of Weights and Measures and associated groups, whose formal agreements with many nations are being flouted big time.
It would not surprise me to find that the full error, one sigma, for daily temperatures that go into these global sets is worse than plus or minus one degree C. Even +/- 1.5 deg C is within easy sight at many stations. So much for a “trend” of an official 0.8 deg for the last century! Dreamin’, as we say in Oz.
That uncertainty leaves great scope for creative, artistic, compositional science that can be tuned to a warming melody.
Scientifically, any body like BEST, GISS, CRU, etc. that continues to evade the correct science of error analysis does not deserve to be heard. Geoff.
Not ready for ISO 9000 certification. And how do these temperature variations compare to the last 10,000 years? The case for CAGW is virtually non-existent: just computerized extrapolation from poor data, no more. Meanwhile, almost all of the energy is stored in the ocean at 4°C. Once more, we have met the enemy and they are us.
You hit the nail on the head. Several here have written articles and have continually pointed out that much of the “science” being done is based on manipulated data presented as absolute numbers with no accuracy (or inaccuracy) information included. Just pick a data set output by someone and assume it is totally accurate and without any error range at all (or that it is even applicable to the study being done). Like you, it would not surprise me if the so-called warming is inside the actual error of past readings. As to applicability, how many studies have you read that used global temperature data sets rather than localized temperatures to justify their conclusions?
Okay, now let’s test the equivalency of the statistical distributions for this against the distributions produced by urban stations and also the top-graded quality-compliance stations.
What do you have to say about the 24-year interval 1975 to 1998, during which global temperature went up by nearly one centigrade degree? Coincidentally (or is it?), this is exactly the interval over which we leaked and sprayed CFCs into the atmosphere. Statistically, global temperature hasn’t increased since 1998. If you want to reply, please do it to davidlaing(at)aol(dot)com, because if I check the box below, I get to read ALL posted replies to this column.
So what caused the 1919-1945 warming, or the 1945-1975 cooling? Invoking CFCs for the 1975-1998 warming seems to be special pleading.
One of the CET thermometers was located at Manchester International Airport until 2008, just opposite the engine run-up bays. They have moved it recently, but calibrated the new location to the old measurements…
“Short term changes, such as the warming into the 1930s, hold true for the clear majority of stations. Other examples would be the 1940s …”
Mark, you may not be aware that US temperature manipulators greatly pushed the 1930s-40s temperatures down (GISS under Hansen) in 2007! Until that egregious revision, 1937 was still standing as the US high; 1937 was also the Canadian high. The change didn’t affect the long-term temperature trend, but it did two things. a) It reduced the sharp temperature decline after the 1950s, so that the 1980s-90s temperature rise wouldn’t be interpreted as a simple recovery to previous highs. b) It took away the much more rapid warming from the 1890s to the late 1930s, which couldn’t be interpreted as man-caused: essentially the entire 0.8°C increase took place in 40 years, not the 100 years of the revised warming. Without the revision, no warming would have been registered since the late 1930s.
Regarding it being only a hemispheric pattern, note that South Africa (Cape Town), Paraguay, Bolivia, and possibly others have the same pattern as the raw US temperature record. The ‘Not a Lot of People Know That’ website of Paul Homewood has the curves for these Southern Hemisphere examples.
Could it be that greenhouse gases are a moderating influence – with neither extremes of heat OR cold?
http://journals.sagepub.com/doi/abs/10.1177/0958305X18756670
“We find a lack of warming in the ocean air sheltered temperature data – with less impact of ocean temperature trends – after 1950. The lack of warming in the ocean air sheltered temperature trends after 1950 should be considered when evaluating the climatic effects of changes in the Earth’s atmospheric trace amounts of greenhouse gasses as well as variations in solar conditions.”
I am expecting to see quite a fall in global temps during March. With most of Russia and Europe having been below average temperature, plus a cool eastern Pacific and a colder Arctic, a drop in global temps must be on the cards. Watch what happens to this year’s spring snow extent.
UAH has March up slightly from February.
Bellman
If that is the case then it really casts doubt on just how useful it is as a guide to global temps.
Because during March there has been cooling where it matters: over the large land mass of northern Asia and in the Arctic. Any cooling over northern Asia leaves a huge land area for snow to settle and last into the spring. This is where real climate change comes from.
Reanalysis surface temperature is up by about the same (0.05°C). The cooler Arctic, northern Russia, and Europe are balanced by a band of warmth from northern China through to the Sahara.
Nick, what that suggests is that there has been less mixing of the air masses between north and south Asia. So the cold air has tended to stay in the north and the warm air in the south. If that lasts throughout the spring, then it will extend the spring snow extent, which can lead to real climate change.
And, although UAH shows a rise which you don’t accept, would you have accepted the result if it had shown a fall as you were expecting? In other words, what changed your mind about using it as a basis to confirm your prediction – the fact that it didn’t confirm your prediction? Is that how you judge the validity of something – whether it agrees with you or not?
I don’t use UAH.
I use the daily updated temp maps on the Google Arctic sea ice page.
You should ignore any data from Australia’s BOM. Via the homogenisation process they have systematically corrupted the data. Just ask Jen Marohasy.
There have been a number of very good questions, comments, and critiques. That is actually wonderful to me. Anyone should be challenged. That’s what makes us strong. And I could be wrong too. So thank you for doing me the courtesy of reading in the first place, giving it some careful thought, and taking the time to reply. I appreciate that.
This is the eighth part of a series, which really serves as a diary of sorts, taking the process from start to finish. I think most questions are answered there, but I will address a few things here and hopefully cover most concerns.
This isn’t my first look at this type of data. I have gone through a great deal of the source data and tabulated data from Berkeley Earth. I chose the GHCN because it looks like the best data available to me. Many of the other data sets are plagued by issues such as incomplete years, estimated data, duplicate data, and more. The bottom line is that the number of complete long-term stations with the data I wanted was very low, certainly fewer than 30 in total. Finding so many in the GHCN was like finding the prize Easter egg.
It is important to understand how I am using the data. Think about it this way: there is a distinct, concrete history of temperature for every square foot of surface on the planet. The aggregate square footage for which we actually know that history, even for just the past 60 years, is a poor fraction of the total; push that out to 100 years and it is abysmal. And as we have seen from my study, the variance within even a very small percentage of that area over the 112-year period is very high: from +3.5° to -2.5°.
Hence the idea of calculating an area-weighted model as an estimate of global surface temperatures from 1900 onward is pretty crazy. The idea of creating a spatial, volumetric model of the Earth from 1900 onward is even crazier. I am betting most of you don’t know this, but that is exactly what they are doing. I gleaned that information by challenging a fairly public proponent of CAGW, a climatologist, on Twitter. We had a couple of weeks’ worth of discussion, and she provided a fair amount of documentation.
The only viable method for assessing the history of temperature change is to sample that history and do as is done with all such sampling methods: project the sample results out to make inferences about the total population. In this case we are looking at a set of locations.
The limitations of this are determined by the variability within the population as a whole, which you estimate from the variability within your sample set. When you are looking at time-series data, that overall variability is compounded by the individual variability within each sampled time series.
Since the objective here is to project a result onto a larger population, it is important to identify, quantify, and account for the sources of variation which would impact that estimate. For the statistic I have created, the primary factors are location, start point, and end point.
As described above the location variability is pretty high. The standard deviation for the sample population is 1.5°.
Start-point and end-point variability depends upon what years you choose and upon the year-to-year variability. The amount of variability from year to year in the same station is pretty high. I chose to assess this by compiling and averaging the 10-year standard deviation for each station for each such period in the study. This yielded data which, as expected, fits a chi-squared distribution. The average is 1.24°.
I calculated an 80% confidence interval for my statistic as ±1.09° and a 90% confidence interval as ±1.4°. I chose to go with ±1° because it made plotting the histogram easier and represented a slight underestimation.
If you are wondering, standard errors for sampling were calculated by dividing the overall standard deviation by the square root of the sample size, using 386 for the overall statistic result and 10 for the end points.
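A sketch of that standard-error arithmetic, using the figures quoted in the text (1.5° spread over 386 stations; 1.24° ten-year spread with n = 10 endpoint years). Exactly how these pieces combine into the quoted ±1.09° and ±1.4° intervals is not spelled out above, so this only illustrates the named steps, not the final numbers.

```python
import math
from scipy import stats

def standard_error(sd, n):
    return sd / math.sqrt(n)

def half_width(se, confidence):
    z = stats.norm.ppf(0.5 + confidence / 2)   # two-sided critical value
    return z * se

se_location = standard_error(1.5, 386)   # overall statistic, 386 stations
se_endpoint = standard_error(1.24, 10)   # one 10-year endpoint average
print(round(half_width(se_endpoint, 0.90), 2))   # about 0.64 at 90%
```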
If there are any concerns as to the goodness of this process please let me know.
As you move forward, have you given any thought as to what temperature really implies? Does it adequately describe the heat/enthalpy at that station? Does a temperature average also give you a representative enthalpy average? Can you average the temperatures at various stations and then say that accurately represents the average heat/enthalpy for all the stations together?
As you can tell, I suspect that all the focus (and money) on temperature data sets and on trying to get an average global temperature is basically worthless. Over the globe, atmospheric humidity varies from 0% to 99.9%, and station altitudes (pressure) from sea level to 5,000+ feet. These variances just can’t be blithely ignored; they must be adequately addressed in any study or paper that hopes to address global warming. As far as I know, none of the accepted data sets have addressed this issue or attempted to explain how their estimate of a “global temperature data set” accurately accounts for it.
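To make that point concrete, here is a rough sketch using standard textbook constants showing how strongly humidity moves the energy content of air at a fixed temperature. The constants and function names are my own approximations for illustration, not from any data set discussed above.

```python
CP_DRY = 1005.0   # J/(kg K), specific heat of dry air at constant pressure
L_VAP = 2.5e6     # J/kg, latent heat of vaporization of water

def moist_enthalpy(temp_c, specific_humidity):
    """Approximate moist enthalpy per kg of air, in J/kg."""
    return CP_DRY * temp_c + L_VAP * specific_humidity

# Two parcels at the same 30 deg C differ by about 50 kJ/kg between dry air
# and air carrying 20 g/kg of water vapor:
print(moist_enthalpy(30.0, 0.000))   # 30150.0 J/kg
print(moist_enthalpy(30.0, 0.020))   # 80150.0 J/kg
```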
I think about it like this: a mountain range covers so many square miles, but the actual surface area is far greater once you include the shape of the mountains. Add to that the surface area created by all the trees and whatnot. Add the elevation changes, the changes in shading during the day, and so forth. Just calculating how much energy is being radiated outward is impossible: it depends upon surface area, the material properties of those surfaces, and temperature, all of which can vary quite a bit, often within just a few feet.
But how much energy is being absorbed by the air through conduction? How much energy is being absorbed by rivers or springs running off the mountains?
Then, as you say, relative humidity rises and falls a lot, often every single day in a mountainous area. Water precipitates out due to abrupt changes in temperature. Think about the enormous amount of energy exchanged in that sudden change of phase from gas to liquid.
Which brings me to what I see as the biggest failure in their energy models: water. From the earliest days of the greenhouse gas theory to now, it seems to me they are ignoring water and the fact that it is a truly unique substance. As a liquid or as a gas, fresh or salt, water has a very high specific heat. It is an energy sump, an energy battery. It absorbs energy the way ice absorbs heat. Mass for mass, water is a very efficient substance for transporting energy.
So now we have an understanding: the amount of energy contained in a cubic meter of atmosphere cannot be determined by its average temperature alone. You must know how much water is there in gas or solid form. You must know the pressure.
The amount of energy contained in what covers a square meter of the earth cannot be determined by the air temperature above it. You must know the actual surface area, the composition, the pressure, the actual surface temperature, and how much water is present both in the air and in the ground. And even that ignores interior heating or cooling of surfaces: the last time I cut down a tree, I don’t remember its center being frozen.