Guest Post by Willis Eschenbach
Over in the Tweeterverse I saw that someone said:
NASA GISSTEMP Global Mean went above 1.5C for 2 months in 2016.
Hmmm, sez I … why not since then?
So I thought I’d take a look at the major global surface temperature estimates for the 21st Century.
The datasets that I used are the surface temperature datasets of the Goddard Institute for Space Studies (GISS), the Hadley Centre/Climatic Research Unit (HadCRUT), Berkeley Earth, and the Japan Meteorological Agency (JMA), along with the Clouds and the Earth's Radiant Energy System (CERES), University of Alabama in Huntsville Microwave Sounding Unit (UAH MSU), and Remote Sensing Systems Microwave Sounding Unit (RSS MSU) satellite-based datasets.
I’ve done a structural-change breakpoint analysis for each one. I have not chosen the breakpoints myself. They are selected, and their uncertainties estimated, by the Bai & Perron algorithm as implemented in the R package “strucchange”.
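For readers who want to experiment without R, the core idea of a least-squares breakpoint search can be sketched in a few lines of Python. To be clear, this is not the Bai & Perron algorithm from “strucchange” (which handles multiple breaks and estimates their confidence intervals); it is a simplified single-break analogue that picks the split point minimizing the total squared error of two fitted segment means.

```python
def best_breakpoint(y, min_seg=3):
    """Return the index that splits y into two segments whose
    segment-mean fits minimize the total squared error.
    A simplified single-break analogue of Bai & Perron."""
    def sse(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)

    best_i, best_cost = None, float("inf")
    for i in range(min_seg, len(y) - min_seg + 1):
        cost = sse(y[:i]) + sse(y[i:])
        if cost < best_cost:
            best_i, best_cost = i, cost
    return best_i

# A toy series with a level shift at index 5; the search recovers it.
series = [0.1, 0.0, 0.2, 0.1, 0.0, 0.6, 0.7, 0.5, 0.6, 0.7]
print(best_breakpoint(series))  # 5
```

The real Bai & Perron procedure searches over multiple breaks with a penalty for extra segments, but the cost function being minimized is the same in spirit.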
Here they are, in no particular order:
Surface Station Datasets
[Breakpoint-analysis charts for the GISS, HadCRUT, Berkeley Earth, and JMA surface datasets]
Satellite Based Datasets
[Breakpoint-analysis charts for the CERES, UAH MSU, and RSS MSU satellite-based datasets]
Clearly, there has been a sea change in surface temperature trends in the 21st century. For most of the last half of the 20th century, temperatures were rising on the order of 0.15°C per decade. But this century, for a good part of the period from 2000 to the end of 2014, the rate of rise in most of the datasets was much less than that.
And in all seven of the datasets, since the breakpoint before the 2015/16 El Nino, the temperatures have either been near level or dropping …
Hmmm …
One interesting note: the records fall into two groups. The CERES, JMA, RSS, and UAH data show very little trend change up until ~2015. But three of the ground-station-based datasets, GISS, HadCRUT, and Berkeley Earth, have a distinct trend change around 2005. Why? Dunno … but it doesn’t increase my confidence in the ground-based data. My guess is that those three datasets are contaminated by urban heat island effects or excessive “homogenization”. However, that’s only a guess.
My very best wishes to all,
w.
PS—Please do us all a favor, and when you comment, QUOTE what you’re talking about. This avoids endless misunderstandings.
PPS—Please be clear that I am not predicting the future—I am reporting on the past …
The red lines don’t appear to be drawn fairly. Although there is a clear pause / slight decline, the period from around 2012 through 2018 justifies a very steep upward trend line, which never appears. Instead, there’s a gap. How is that justified?
I believe that posts like this degrade the overall credibility of our arguments.
This level of analysis is all very “sciencey”, but it seems pointless if the underlying datasets are poor quality. Analysis like this gives the datasets credibility that they don’t deserve.
I’ve been following the global warming issue since the 1980s, and I’m constantly amazed that historical temperature datasets from surface stations are still considered worthy of discussion. Worse, they are being used to justify major public policy and the diversion of large amounts of money to things like renewable energy, when that money could be used for more worthwhile purposes.
There seem to be many problems with these datasets, the need for multiple adjustments being just one. For example, the GHCN v3 dataset included a README file suggesting that GHCN staff were performing adjustments ON TOP OF other unknown adjustments. Quote:
Often it is difficult or impossible to know for sure, if these original sources have made adjustments, so users who desire truly "raw" data would need to directly contact the data source. The "adjusted" data contain bias corrected data (e.g. adjustments made by the developers of GHCNM), and so these data can differ from the "unadjusted" data.

Note that newer GHCN datasets do not include this text.
Another problem is station siting. Years ago, Anthony Watts (and others) pointed out problems with station siting, and Anthony has recently initiated another project on this issue.
On Watts Up With That and other sites, I’ve read many articles that examine surface temperature datasets, but I haven’t seen any that look at the annual average distance of weather stations from the equator, how this changes over time, and how it may affect the calculation of global temperature averages.
I have examined three publicly available datasets, from the Global Historical Climatology Network (GHCN), NASA’s Goddard Institute for Space Studies (GISS), and the University of East Anglia’s Climatic Research Unit. These datasets cover land-based measurements only, but as far as I know, this is the predominant type of measurement performed over the last 150 years.
GHCN datasets seem to be a major component of other datasets such as GISS, so I’ll use GHCN as an example. In the GHCN data shown below, the effects of changes in average station distance from the equator can be seen in the average temperature trends. I have not seen any evidence of attempts to remove this sort of bias from the data.
Note that only complete annual temperature records were used. Records with any missing monthly data were filtered out before plotting.
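The two steps described above (keeping only complete station-years, then averaging station distance from the equator per year) can be sketched in Python. The record format here is hypothetical, invented purely for illustration; real GHCN files are laid out differently.

```python
# Hypothetical record format: (station_id, year, latitude_degrees,
# [12 monthly mean temperatures, with None marking a missing month]).

def complete_years_only(records):
    """Keep only station-years with all 12 monthly values present."""
    return [r for r in records if len(r[3]) == 12 and None not in r[3]]

def mean_abs_latitude_by_year(records):
    """Average absolute station latitude (degrees from the equator)
    for each year in the given records."""
    by_year = {}
    for _sid, year, lat, _temps in records:
        by_year.setdefault(year, []).append(abs(lat))
    return {y: sum(v) / len(v) for y, v in by_year.items()}

recs = [
    ("A", 1950, 45.0, [1.0] * 12),
    ("B", 1950, -30.0, [2.0] * 12),
    ("C", 1950, 60.0, [3.0] * 11 + [None]),  # incomplete year: dropped
]
kept = complete_years_only(recs)
print(mean_abs_latitude_by_year(kept))  # {1950: 37.5}
```

If the yearly mean distance from the equator drifts over time, the raw average temperature of the network will drift with it, which is the bias being described.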
Given the many problems with surface temperature datasets outlined on this site and others, complex analysis seems to be a waste of time.
Excellent. This is the kind of analysis we need to understand the global average temperature (GAT). You have found a bias that has not been dealt with. It is why the uncertainty of the GAT is much higher than what is normally quoted. It also shows why local/regional temperatures are more relevant than a misunderstood GAT.
Willis,
These are nice observations, but they might start to fall apart a little when you introduce a form of measurement uncertainty.
The anomaly base values are not given on your graphs, but we can avoid needing them with a simple method: just take two prominent peaks, at the same time of year on each graph, and subtract their values.
Roughly, I get the temperature difference between the 2002-3 peak and the 2017 peak thus, in order of your graphs from GISS to UAH:
0.47
0.39
0.38
0.43
0.51
0.78
0.52
FWIW, these have a mean of 0.5 and a standard deviation of 0.136. This is a rough-as-bags estimate, but it shows that the measurement uncertainty is large enough to be worth considering. Its effect on the breakpoint analysis could be assessed by Monte Carlo runs with temperatures perturbed within bounds like these. I have no idea if it will swing those lines about. Geoff S
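The mean and standard deviation quoted above can be checked directly from the seven differences listed. A few lines of Python confirm them (note that 0.136 is the sample standard deviation, i.e. with an n−1 divisor):

```python
import statistics

# The seven peak-to-peak differences read off the graphs, GISS to UAH.
diffs = [0.47, 0.39, 0.38, 0.43, 0.51, 0.78, 0.52]

mean = statistics.mean(diffs)
sd = statistics.stdev(diffs)  # sample standard deviation (n-1 divisor)
print(round(mean, 2), round(sd, 3))  # 0.5 0.136
```

A Monte Carlo check of the breakpoints would then resample each series with noise on this scale and rerun the breakpoint search on each draw, watching how much the break dates move.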
Nice analysis.
The Climate Mafia has form on this. “Hide the decline using Mike’s Nature trick”. See the Climategate emails.
Why does it start in 2000 and not in 1990? Because of the decline from 1998 to 2000!
It is obvious.
What happened during 2015?
Paris Agreement .
I am serious .
Manipulation stops when a goal is reached!
“I am serious”
You are delusional.
I don’t want to say it, but I think I figured it right.
‘Of the estimated 1,500 active volcanoes on Earth, at least two dozen of them erupted in 2015.’
Did I not tell you? (click on my name)
2015’s Most Notable Volcanic Eruptions | The Weather Channel
It is the peak of the Eddy cycle and it may soon be over.
I think you should have started in 1992 instead of 2000 to get a 30-year view (the standard period for climate analysis). For the surface temperature datasets, I think you would have seen a steep slope from 1992 to 2002 and then flat to slightly rising temperatures from 2002 to 2015. The break you are currently seeing at 2005 would move back to 2002. This steep 1992–2002 rise can be explained by globally depressed temperatures in 1992–94 due to Mt. Pinatubo, followed by a spike at the end of the period due to the 1998 El Nino. 2002–2015 would be the famous “pause”, which probably just represents normality without any global-temperature-altering events. We’re now in the midst of a new pause.
A lingering question in my mind is: why have temperatures after the last two strong El Ninos leveled out at a higher level than they were before the El Nino? Perhaps other long-term factors such as the AMO or solar cycles are at play.
Dan M:
Because of Global Clean Air efforts that have been removing dimming SO2 aerosol pollution from the atmosphere. As the air gets cleaner, the sunshine gets stronger.
“But three of the ground-station-based datasets, GISS, HadCRUT, and Berkeley Earth, have a distinct trend change around 2005. Why?”
My guess is that when USCRN came online in 2005, they had to start calibrating against a high quality reference network which forced them to stop playing games with adjustments.
Back in 2015 I did a comprehensive analysis of data from 54 weather stations, in which I calculated for how long Tmax had been going down:
Now look at this:
Wood for Trees: Interactive Graphs
If it was not for that upkick in 2015, my analysis would have been correct. It probably still is.
Yes, yes.
Who or what turned up the heat? | Bread on the water
Help me out. Simply put: where did that extra heat come from?
I am saying it did not come from the sun.
see
https://wattsupwiththat.com/2022/05/11/the-recent-decline/#comment-3515772
This is just cherry picking. The following is indisputable from the data:
1. El Nino years are warming at about the same rate as La Nina years.
2. El Nino years average almost 0.2 C warmer than La Nina years (the difference in the Y direction between the red and blue lines below).
3. The AGW warming signal is about 0.2 C/decade.
4. ENSO cycles between El Nino and La Nina inside of decadal time scales.
That means you can pretty much always cherry-pick short-term “cooling” trends if you begin with a strong El Nino and end in the next La Nina, and it has nothing to do with whether the AGW warming signal is changing. 2020 tied 2016 for the warmest year on record without the benefit of being an El Nino year. The 30-year trend in HadCRUT5 is 0.225 C/decade from 1992 to 2021, and that includes both of these so-called “pauses.”
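The point about short windows can be illustrated with a toy example: a steady warming trend with an ENSO-like oscillation on top yields a negative trend over any short window running from an oscillation peak to the following trough, even though the long-term trend is positive throughout. The numbers below are invented for illustration, not real temperature data.

```python
import math

def ols_slope(x, y):
    """Ordinary least-squares slope of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# Toy series: 0.2 C/decade trend plus a +/-0.1 C oscillation of period 4 yr.
years = list(range(1992, 2022))
temps = [0.02 * (y - 1992) + 0.1 * math.sin(2 * math.pi * (y - 1992) / 4)
         for y in years]

full = ols_slope(years, temps) * 10  # C/decade over the whole 30 years

# A short window from an oscillation peak (1993) to a trough (1995).
peak, trough = years.index(1993), years.index(1995)
short = ols_slope(years[peak:trough + 1], temps[peak:trough + 1]) * 10

print(full > 0, short < 0)  # True True
```

The full-period trend stays close to the built-in 0.2 C/decade, while the peak-to-trough window shows "cooling", which is the cherry-picking mechanism being described.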
“For most of the last half of the 20th century, temperatures were rising on the order of 0.15°C per decade. But this century, for a good part of the period from 2000 to the end of 2014, the rate of rise of most of the datasets was much less than that. And in all seven of the datasets, since the breakpoint before the 2015/16 El Nino, the temperatures have either been near level or dropping …”
Omitted entirely from this description was the most pertinent feature of the temperature trend of recent decades: the large stepwise jump of 0.3 to 0.4°C at the 2015 breakpoint.
Why am I not surprised at this omission?
“I am not predicting the future—I am reporting on the past …”
Good reporting doesn’t omit all discussion of the most pertinent feature of the story.
Just sayin’ …
Wow! A whopping 1.0°C per century there. The rest is a flat line.
And the Joel-ian nonsense sadly continues.
The various datasets give an overall 21st-century rise rate thus far of somewhere around 1.5 to 2.5 degrees per century, not 1.0.
GISTEMP is pure fakery and manipulation. Anyone in climate who studies it knows that. Gavin Schmidt can be proud of the fraud he is committing on humanity. Billions will die because of his lies.
“GISTEMP is pure fakery and manipulation”
Merely because I, the Great and Powerful JOEL, have declared it to be so!
Never mind that GISTEMP is not all that different from the deniers’ beloved “pure fakery and manipulation” UAH dataset.
MGC,
You think GISS data is reliable? Have a look at the attached chart, and you’ll see a clear bias in the temperature values caused by average station latitude.
(Source: https://data.giss.nasa.gov/gistemp/station_data_v4_globe/v4.mean_GISS_homogenized.txt.gz)
Then, compare this with charts for solar activity, such as sunspots and the geomagnetic Ap index. From the 1960s onwards, you’ll see a very good correlation.
If you think that this is coincidence, I have a big bridge in Sydney that you may be interested in buying.
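An eyeballed correlation between two charts is easy to be fooled by; it can be quantified instead. Here is a minimal Pearson correlation sketch in Python; the toy series below are invented placeholders, not real GISS or Ap-index data.

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Perfectly linearly related toy series: r is exactly 1.0.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
print(round(pearson_r(xs, ys), 6))  # 1.0
```

Even a high r between two trending series can be spurious (both may simply share a trend), so detrending before correlating is usually advisable.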
George, who fed you this steaming pant load of phony baloney nonsense that GISS temperature supposedly “correlates” with solar activity since 1960?
Here’s reality:
https://climate.nasa.gov/climate_resources/189/graphic-temperature-vs-solar-activity/
MGC, emotional responses like that suggest that I am probably on the right track.
Sunspot counts are only a very basic indicator of solar activity, but they give a rough indication of solar magnetic activity, which may influence cloud formation and therefore surface temperatures.
The geomagnetic Ap index is apparently considered a better indicator, but it also has a fairly good correlation with sunspot counts.
Your link refers to Total Solar Irradiance (TSI).
See the attached chart for a comparison of GISS station location and solar activity. You will see the clear correlation after the 1960s.
Why would GISS be selecting their station data in this way? Coincidence?
Download the GISS data and see for yourself.
You’ll have to explain what you think you “see” in this graph. I don’t follow what “station distance from the equator” is supposed to mean, or why GISS supposedly “selects” such things.
I think cutting graphs into three segments with disjointed trend lines obscures the trends and distorts the results. Compare peaks, compare lows, trend for the whole period?
I prefer to have each trend line start where the preceding trend line finished.