Guest Post by Willis Eschenbach
My last two posts, one on Gavin’s claims and the other on the Urban Heat Island (UHI) effect, have gotten me to thinking about the various groups producing historical global surface temperature estimates. Remember that the global surface temperature is the main climate variable that lots of folks are hyperventilating about …
In particular, in the earlier post, Steven Mosher has been defending how the Berkeley Earth folks handle the Urban Heat Island effect. I remembered that Berkeley Earth had data about cities, so I went and got it from here. The data shows the trends for the period after 1960.
I graphed it up, but I wanted to have something to compare it to, so I also got the data for Berkeley Earth global surface temperature trend since 1960. Figure 1 shows the result:

Figure 1. Berkeley Earth trends for various cities, along with the global surface temperature trend over the same period.
There were a couple of things that I found unusual about this. First, there are some indications that the Berkeley Earth method of removing the Urban Heat Island distortion of the global temperature record is … well … perhaps not all that accurate. However, more research would be needed to determine that.
The bigger surprise to me, however, was the size of the Berkeley Earth global surface temperature trend. I had remembered the global trend as being around two-thirds of the value shown in Figure 1. I thought “Over two degrees per century? That’s over two-tenths of a degree (0.2°C) per decade! How did it get that high?”
The answer is, that’s land-only Berkeley Earth data … the global Berkeley Earth data is less than that. A lot less.
So, being addicted to data, I went and got the temperature records from a variety of organizations. I wanted to include the satellite-measured temperatures of the lower troposphere, which only started in 1979, so my analysis covered 1979 to the present. I got the data from Berkeley Earth, the Hadley Centre (HadCRUT), the Goddard Institute of Space Sciences (GISS LOTI), Remote Sensing Systems (RSS), the University of Alabama Huntsville (UAH) and the Japan Meteorological Agency (JMA). I smoothed them all with a 6-year Gaussian average and graphed them up.
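For anyone who wants to reproduce that kind of smoothing, here is a minimal Python sketch. It is not the code actually used for Figure 2, and whether the “6-year Gaussian average” refers to the kernel's full width at half maximum (FWHM) or to its standard deviation is my assumption; the sketch uses FWHM.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def gaussian_smooth(series, fwhm_years=6.0):
    """Smooth an annual anomaly series with a Gaussian kernel.

    The 6-year width is interpreted as FWHM (an assumption); scipy
    wants a standard deviation, so convert: sigma = FWHM / 2.355.
    """
    sigma = fwhm_years / 2.355
    return gaussian_filter1d(np.asarray(series, float), sigma=sigma,
                             mode="nearest")

# Hypothetical example: a noisy ~0.18 degC/decade warming, 1979-2019
years = np.arange(1979, 2020)
raw = 0.018 * (years - 1979) + np.random.normal(0, 0.1, years.size)
smooth = gaussian_smooth(raw)
```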

Figure 2. Surface and lower troposphere temperature records from six different groups.
For me, the best part of science is my first look at some batch of numbers that have been converted into a graph. It’s always so exciting waiting for the unknown surprises. In this case, I cracked up laughing when I saw the graph. If there were ever an indictment of the current state of climate science, it’s shown in that graph.
People are all up in arms about the surface temperature … but thirty years after James Hansen started madly beating the alarm bell about “global warming”, and after thirty years of endless claims about some mythical “97% scientific consensus”, the sad truth is that the climate scientists have not even been able to come to a consensus regarding how much the globe has warmed in the last 60 years. I mean, the answers differ by a factor of 1.5 to 1!
And mainstream climate scientists wonder why people don’t pay much attention to them? …
Here’s a protip. It’s not a communications problem. If you want people to listen to what you say, first you have to centralize your fecal material.
My best wishes to everyone,
w.
My Usual: Please quote the exact words that you are discussing in a comment. Otherwise, we can’t tell what you are referring to, and misunderstandings multiply.
The best data is the worst and that’s by design.
Berkeley [Earth] represents the worst case scenario. Mama, don’t let your babies grow up to be …
Climate scientists.
Mamas, don’t let your babies grow up to be “Hansens”
Don’t let ’em pick cherries or adjust them old charts
Let ’em be honest with ethics and such
Mamas don’t let your babies grow up to be “Hansens”
‘Cos they’ll say anything to promote that ol’ “Cause”
Even to someone they love
MODS, I made a reply that hasn’t shown up. ( I did wait awhile.)
It was spoofing the chorus to “Mama, don’t let your babies grow up to be …”.
I suspect it went to auto-bit-bin because I used the word fr@ud. “But I didn’t say ‘Fudge'”.
(I did not use in reference to an individual.)
Thanks.
Guess I didn’t use the F word after all.
(But it would be applicable!8-)
Anyone who starts out calling a project BEST before they even have the data, has a hubris problem.
I was prepared to keep an open mind about Muller while he had Curry and Watts on board. When he pulled the rug out from under the collaborative effort, he lost what credibility he may have had.
Why doesn’t anybody note that the Average Daily Temperature of the Continent of Antarctica is 59 degrees BELOW Zero F? WE keep hearing how the warming is going to melt that South Pole, but NO ONE ever tells us how 5-10 degrees of warming is going to accomplish this “miracle”. Where are all the scientists with an explanation for a simple question?
See Clive Best for a gridded global approach
http://clivebest.com/
Organize the cities by Latitude? That might also be interesting.
And altitude. Mashhad and Kabul are 1,000 and 2,000 metres ASL respectively, and sit near the top of the list along with “cold country” cities.
Yup, the Figure 1 graph may be accurate … but highly misleading.
The literal fact is that near-surface temperatures are highly dependent upon both the altitude and latitude that they are recorded at. And to get real persnickety about it, longitude is also a critical factor, to wit:
New York City —– 40.7128° N, 74.0060° W
January — 39° / 26°
February – 43° / 29°
March —- 52° / 36°
April —– 64° / 45°
London England —– 51.5074° N, 0.1278° W
January — 48° / 40°
February – 49° / 40°
March —- 53° / 42°
April —– 59° / 45°
A 10.7946° difference in latitude ….. but nearly identical average temperatures.
If not for the Gulf Stream, ….. Great Britain would be as cold as Hudson Bay.
Looks like they are already sorted in latitude. That was the first thing which struck me.
Don’t take it too literally, but the top ten (with the exception of Kabul) are pretty much in latitude order.
Also, despite Aus being this year’s “canary in the ever moving coalmine”, I don’t see anything south of Seoul in the list.
In my eyes they are sorted by temperature trends value, not by latitude.
Or it’s the same order 😀
I recommend you add JRA-25 and ERA-5 2m and 850mb, 700mb and 500mb to your assessment.
Why are the 1980 data clustered so closely together relative to the present day data?
Tom, to answer your question, the base period for anomalies, according to Willis’s graph in his Figure 2, is 1979-1985.
It’s a logical choice, in this case, because he wants to show how the datasets diverge with time.
Regards,
Bob
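As a side note, re-baselining anomaly series to a common window like 1979-1985 is a one-line operation; here is a minimal sketch (my own illustration with hypothetical inputs, not anyone's production code):

```python
import numpy as np

def rebaseline(years, anoms, base=(1979, 1985)):
    """Shift an anomaly series so its mean over the base window is zero."""
    years = np.asarray(years)
    anoms = np.asarray(anoms, dtype=float)
    in_base = (years >= base[0]) & (years <= base[1])
    return anoms - anoms[in_base].mean()
```

Series shifted this way necessarily agree over the base window, so any disagreement accumulates toward the present, which is exactly the divergence-with-time effect Bob describes.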
Thanks. I should have thought about that a little harder…
It occurs to me that maybe the base period should be close to the present instead, and we should look at where the data starts. After all, it’s been pointed out here many times that certain “adjustments” have been made to historical records which amazingly all seem to be going in the same direction. So we should see how cold it really was in 1960, should we not? Some of us were even around at the time.
Absolutely, yes. The best data we have are USCRN, and that data is only contiguous over the last ~10 years.
By all means, set every base period as 2008 forward until we have a full 30 year base, then let it slide forward each year.
It might shine a new light on all the nutty adjustments being added to the past and present.
The total effect is largely in the ‘adjustments’ methinks.
GlobalTemperature =
Solar Effect + Cloud Effect + Greenhouse Effect + Heat Island Effect (Urban and other anomalous) + Instrument Relocation Effect + Instrument Technology Change Effect +
Location-Area Adjustment Effect + UHI Adjustment Effect + Maintain-Uptrend Effect + Headline-Chasing Adjustment Effect + 97%-Consensus Adjustment Effect + Denier-Bashing Adjustment Effect
There is also a puzzling divergence from 2000 in the HADCRUT4 NH mean and SH mean:
http://woodfortrees.org/plot/hadcrut4nh/mean:12/plot/hadcrut4sh/mean:12
It surely needs ‘adjusting’.
A bit like their models and their ETS and TTS etc.
Of course the science is settled.
The only thing that’s really settled are the solutions. Everything else is backfill.
That’s exactly what is wrong with the whole global warming thing:
it’s a solution in search of a problem.
The solution being abolishing democracy and establishing global government (i.e. communism). The problem is non-existent, but it sounds so good and people want it to be true. It’s trust-abuse. 97% agree, remember?
Which UN biggie was quoted as saying that it didn’t matter whether global warming was true or not, because it forced people to do things that needed doing anyway?
“We’ve got to ride this global warming issue.
Even if the theory of global warming is wrong,
we will be doing the right thing in terms of
economic and environmental policy.”
Timothy Wirth,
President of the UN Foundation
For more devastating quotes, see: http://www.green-agenda.com/
Are those rates statistically different?
Hard to tell … but when they differ by a factor of two to one, they’re practically different, particularly in the effect that they have on the overheated debate.
w.
Yes. I agree. All of these methods of assessing GMST seem weak to me.
Another fine job, Willis, but dammit, I did it again.
You see, at an early age I thought it might be cool to have a large vocabulary and true to form, just looked up surfeit.
Problem is, now I’ll use the word all over the place, because I’m smart enough to understand it, but not smart enough to not use it, so end up sounding like an academic a*hole, instead of the regular kind.
It’s a sister to forfeit or forfait
Or even ‘surfait fait’
in re my prior comment: Use of surfeit in this forum is naturally apropos and shows that Willis knows how to get ‘er done, while I typically pick up the wrong bat.
Willis
“People are all up in arms…”, “hyperventilating”
“mainstream climate scientists wonder why people don’t pay much attention to them”
“cracked up laughing when I saw the graph.”
You pour scorn and derision on climate science but then it appears in your haste to cast doubt you yourself have produced a bit of “fecal matter” – “the answers differ by a factor of 1.5 to 1!” Nick’s graph below appears to totally refute this.
Are you standing by your ridicule?
Crickets.
“Are those rates statistically different?”
The 95% CI range for the AR(1) trend for GISS from Jan 1979 to Dec 2019 is 0.166 to 0.21. So the differences between land/ocean measures are not statistically significant. The differences between land/ocean and lower trop, or land/ocean and land only, are significant.
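For readers who want to check such numbers: the usual AR(1) adjustment inflates the ordinary least-squares trend uncertainty via an effective sample size derived from the lag-1 autocorrelation of the residuals. A minimal sketch of that standard recipe (my own construction, not necessarily Nick's exact code):

```python
import numpy as np

def ar1_trend_ci(y, dt=1.0 / 12.0, z=1.96):
    """OLS trend of a monthly series with AR(1)-adjusted 95% half-width.

    y  : 1-D array of monthly anomalies
    dt : time step in years, so the trend comes out in degC/year
    """
    y = np.asarray(y, float)
    n = y.size
    t = np.arange(n) * dt
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]  # lag-1 autocorrelation
    n_eff = n * (1.0 - r1) / (1.0 + r1)            # effective sample size
    se = resid.std(ddof=2) / (t.std() * np.sqrt(n_eff))
    return slope, z * se                           # trend, CI half-width
```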
AR(1) is merely a mathematically convenient, a priori model for surface temperature variations. Empirically, it’s a gross fiction. Check out the stark differences in spectral structure.
Analysis is done here, mainly by acf, with some spectra also here. The main deviation from AR(1) is an unexpectedly large oscillation in the series.
How would you suggest estimating the confidence range? Do you think it would alter the conclusion that the land/ocean trends are not significantly different, while the trends of different kinds of data are?
Don’t estimate stuff that can’t be estimated. You end up with specious results.
Isn’t that obvious?
The “unexpectedly large oscillation” provides ample proof that the acf of surface T doesn’t structurally resemble the monotonically declining, non-negative acf of AR(1). And the (Hann-windowed) power spectrum of GISP2 Holocene data clearly shows significant peaks of various bandwidths that are irreconcilable with AR(1). See: http://i1188.photobucket.com/albums/z410/skygram/graph1.jpg
The realistic way to estimate the confidence intervals for such complex processes is via Monte Carlo simulation, rather than academic presumption.
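One way to act on that Monte Carlo suggestion is with phase-randomized surrogates, which preserve the full power spectrum of the residuals instead of presuming AR(1). This is a sketch of the general idea, not 1sky1's specific procedure:

```python
import numpy as np

def mc_trend_null(y, n_sims=2000, seed=0):
    """How large a trend can spectrum-preserving 'noise' alone produce?

    Detrend the series, then build surrogates that share the residuals'
    power spectrum but have randomized phases, and collect their trends.
    """
    rng = np.random.default_rng(seed)
    y = np.asarray(y, float)
    n = y.size
    t = np.arange(n, dtype=float)
    coef = np.polyfit(t, y, 1)
    resid = y - np.polyval(coef, t)
    spec = np.fft.rfft(resid)
    trends = np.empty(n_sims)
    for i in range(n_sims):
        ph = np.exp(2j * np.pi * rng.random(spec.size))
        ph[0] = 1.0   # keep the mean component
        ph[-1] = 1.0  # keep the Nyquist component real (for even n)
        trends[i] = np.polyfit(t, np.fft.irfft(spec * ph, n=n), 1)[0]
    return coef[0], np.percentile(np.abs(trends), 95)  # trend, noise bound
```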
Nick,
To quote those figures for a 95% C.I. range, one might assume that there is a peer-reviewed paper disclosing the method of calculation, data sources, assumptions and so on. If this is so, what is the reference?
I have been rebuffed for several years by your BOM after repeatedly asking for their similar figures for their routine temperature data, more primary than your AR(1) trends, with their responses seeming to insinuate that it is too complicated to reduce findings to a simple number or two.
I am particularly interested in how the AR(1) number analysis treats the uncertainty calculations for invented, guessed T where T numbers now exist in places distant from the nearest measurements. Geoff S
Geoff,
The data is just the published time series, same as Willis is plotting. The methods are fairly standard, and are described in detail here and here.
“where T numbers now exist in places distant”
The trend uncertainty is calculated for the time series of global averages, and uses the observed apparent variance. The uncertainty due to location belongs within the global average, and is appropriate if you need to assign an uncertainty to those figures.
Nick,
It’s my understanding that Confidence Interval is a statistical measure and does not reflect the full uncertainty in the global temperature estimates. From what I recall, HadCRUT presents a monthly uncertainty derived from an ensemble approach for estimating uncertainty, although my subjective feeling is that their uncertainty estimates are still too low.
I also recall that you have done quite a bit of work as presented on your blog about different methods for compositing (integrating) global temperature measurements to produce an estimate of global temperature anomalies. From what you have presented, it seems to me that much of the differences between the estimates from various agencies are more from the integration method and handling of data sparse areas than anything else, since they all use slightly different methods in these regards. This is especially true in the Arctic area where probably most of the warming has occurred in recent decades and mainly in the Arctic winter (Arctic night) and that is a data sparse area with large high temperature anomalies that could drive much of the differences.
For instance, HadCRUT tends to be on the lower end of the global warming estimates and, from what I recall, they do not attempt to estimate temperature anomalies in grid cells where measurements are not available. Thus much of the Arctic region, being data sparse, is essentially not included, which effectively assigns these areas the global average and probably causes a low bias.
Most of the groups, except BEST from what I recall, use NOAA’s GHCN homogenized monthly surface air temperature measurements for land in conjunction with gridded sea surface temperature estimates based on ship and buoy data and may or may not include satellite sea surface temperature estimates. More room for discrepancies, depending on the choices made. I also recall that you have produced estimates using the raw GHCN data in conjunction with ERSST sea surface data and using a variety of integration methods.
As far as urbanization effects, I’m still not convinced that any of the current homogenization methods provide a very accurate removal of these effects. I call it “urbanization” rather than “urban heat island” because urbanization effects can occur even at relatively rural sites. What is important are the changes over time, which can be very localized urbanization such as construction of a building or paved parking area near the monitor, or on a larger scale from slowly encroaching suburban and/or urban development over a nearby range of one to 10 km or more. I suspect this effect is not especially large on a global scale, but could possibly account for a significant portion of the man-made influence on recent global warming. I recall an important study that Anthony Watts et al conducted examining station siting issues that could affect temperature anomalies if there have been significant changes over the time scales of interest. If there are no changes in siting issues and/or other urbanization effects over time, then there should be little effect on temperature anomaly trends over that time period.
Bryan,
“does not reflect the full uncertainty in the global temperature estimates”
The CI for a time series trend reflects a probability model fitted to the numbers. It does not try to work out the underlying sources of error. Because they are based on a large number of readings, it is expected that random error variation in the underlying data will be reflected in the standard error of the weighted mean, which is the trend.
“From what you have presented, it seems to me that much of the differences between the estimates from various agencies are more from the integration method and handling of data sparse areas than anything else, since they all use slightly different methods in these regards.”
Yes, that is true. The first requirement of a good method is that you make a proper estimate for all the data; not leaving patches out. If you do that, then while it is possible to improve the integration method, it won’t make a huge further difference.
“More room for discrepancies, depending on the choices made. I also recall that you have produced estimates using the raw GHCN data in conjunction with ERSST sea surface data and using a variety of integration methods.”
Yes, that is true too. I quoted above the trend for TempLS, which is based on unadjusted GHCN. For that period it was almost exactly the same as NOAA and HADCRUT. In fact I think they have a slightly lower trend because of weak treatment of the Arctic (cf C&W), so the corresponding reduction in TempLS could be attributed to using raw data. It is not nothing, but not much.
“I recall an important study that Anthony Watts et al conducted examining station siting issues that could affect temperature anomalies if there have been significant changes over the time scales of interest.”
It’s worth bearing in mind here the very close agreement between USHCN, USCRN and ClimDiv for ConUS for the period since 2005. Since USCRN has good, non-urban stations then if UHI or siting are having an effect, it must be an unchanging effect over that period.
Nick,
Call me a conspiracy theorist, but as I read here (memory may be faulty) the USHCN has been suspended since the USCRN came on line.
They are now forced to be the same by the powers that be, since the USCRN is unimpeachable and cannot be “adjusted”.
And over the history of the USCRN (which deserves better press coverage) there is NO discernible warming. Though not global in its reach, it does indicate that the changes in US temperatures are not dangerous or scary.
“but the USHCN has been suspended since the USCRN came on line”
Memory is faulty. USHCN was replaced by ClimDiv in 2014. Both have a substantial overlap with USCRN. And they all agree very well.
a_scientist
“And over the history of the USCRN (which deserves better press coverage) there is NO discernible warming.”
AHA.
Of course we can’t expect very relevant trend info when we inspect a time series averaging about 200 stations over a period no longer than 15 years.
Nevertheless, the linear estimate for the average of the raw data of all CRN stations during 2004-2019 is: 0.34 ± 0.18 °C / decade.
*
“Though not global in its reach, it does indicate that the changes in US temperatures are not dangerous or scary.”
Indeed. But… the US is about 2% of the Globe’s surface.
Rgds
an_engineer
UHI and other land-use effects cannot be meaningfully assessed from records only a decade or two long. The presence of strong multi-decadal oscillation requires records at least an order of magnitude longer for SECULAR trend differences to emerge. The inability (or unwillingness) to cope with this fundamental analytic requirement renders all sanguine conclusions drawn from short records quite meaningless. Their apparent linear trends are themselves oscillatory, rather than steady.
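The point is easy to demonstrate numerically: a pure oscillation with zero secular trend produces large apparent linear trends on short windows. A quick sketch with made-up numbers:

```python
import numpy as np

# A 60-year oscillation of +/-0.2 degC amplitude and NO secular trend.
years = np.arange(1900, 2020)
series = 0.2 * np.sin(2 * np.pi * (years - 1900) / 60.0)

window = 15  # years, roughly the length of the USCRN record
trends = [np.polyfit(years[i:i + window], series[i:i + window], 1)[0] * 10
          for i in range(years.size - window)]
print(f"15-yr trends span {min(trends):+.2f} to {max(trends):+.2f} degC/decade")
# roughly -0.2 to +0.2 degC/decade from the oscillation alone, comparable
# in size to the secular trends under dispute above
```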
Bryan – oz4caster
“I recall an important study that Anthony Watts et al conducted examining station siting issues that could affect temperature anomalies if there have been significant changes over the time scales of interest. If there are no changes in siting issues and/or other urbanization effects over time, then there should be little effect on temperature anomaly trends over that time period.”
This is a major point that I looked into by performing a little experiment last year.
NOAA published years ago on this page:
https://www.ncdc.noaa.gov/ushcn/station-siting
a list of 71 USHCN stations acknowledged as well-sited by Anthony Watts’ surfacestations.org:
ftp://ftp.ncdc.noaa.gov/pub/data/ushcn/v2/monthly/ushcn-surfacestations-ratings-1-2.txt
All these 71 stations exist within the GHCN daily station set
ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/daily/ghcnd-stations.txt
as well, so I compared them with GHCN daily’s entire CONUS subset:
https://drive.google.com/file/d/1pbQCHFwTTy1HIns9pDNj6mDQ85Vau7NC/view
As you can see when looking at the running means, the 71 ‘well-sited’ stations show a slightly higher trend than the average of all (over 8000 actually) stations located in CONUS (AK wouldn’t change much).
I did the same with these ‘pristine’ CRN stations, by comparing the average of all CRN stations (about 200) available in GHCN daily with all CONUS stations:
https://drive.google.com/file/d/1zg9M-GZwNoIBln404Ay0voAL8V4PmSdK/view
Here too, we see that the discrepancies are minimal.
Of course, although it was subject to heavy V&V, this GHCN daily data processing is and remains layman’s work.
Feel free to do the same job and compare the results 🙂
Rgds
J.-P. Dehottay
Bindidon,
Thanks for sharing your two graphs. I also looked at the graphs provided by NOAA, where they use a gridded model to produce CONUS annual temperature anomalies for comparing USHCN, USCRN, and nClimDiv. They used a baseline for the anomalies that was prior to the start of the USCRN and used relationships with nearby stations to estimate a baseline for the USCRN. Their results are similar to your second graph.
While some sites probably do have significant urbanization changes over time, the amount is likely to be variable from one site to another and for different time periods, adding complexity. However, in areas like CONUS and Europe the station density is so high now that any effects may be hidden in the noise. In areas with low station density, urbanization effects could potentially play a larger role in influencing regional temperature anomaly trend estimates, especially for sites in or near rapidly growing urban areas that show steady long-term population growth encroaching around the sites. Regardless, I doubt the effect is very significant on a global basis, but it may not be negligible for land only temperature trend analyses and especially for some regional and local trend analyses.
You did anomalies not temperatures.
The urban heat island effect certainly does not look to have been removed from those cities, and exactly how much does spreading that temperature over thousands of square miles and homogenizing it with rural datasets raise the average?
Berkeley BEST is just another group of warmists.
I remember when they were first brought forward, they were going to be the good guys, and they were going to correct the lies. It seems as though they have amplified the lies.
Good cop, bad cop
EPA used to have a lot of stuff up about UHI; they say temp changes are a whole lot more than what Berkeley claims…
EPA > “The annual mean air temperature of a city with 1 million people or more can be 1.8–5.4°F (1–3°C) warmer than its surroundings. In the evening, the difference can be as high as 22°F (12°C).”
https://www.epa.gov/heat-islands
====
ex: EPA says a +15 degree difference between rural and city Atlanta > https://www.epa.gov/sites/production/files/2014-07/documents/epa_presentation_oct05.pdf
Heck, this morning there was a 41 F difference in temperature between Traverse City, MI and Grayling, MI. They are only about 55 miles apart and almost directly east/west from each other. That is one of the largest differences I have ever seen with cities that close together.
What happened in 2000? Irrespective of whether the numbers are in any way connected to reality, they are reasonably coordinated for the first 20 years, then in the year 2000 they diverge wildly, and in the last 20 years they go their own way. What happened?
Looks to me like the longer the analysis, the more the divergence…tough to diverge wildly close to the beginning.
Ron
There was the massive El Nino-La Nina event from 1997-1999, which shifted a few million cubic km of warm sea water from the equator toward the poles, perturbing global climate and raising global temperatures, as Bob Tisdale has explained in previous posts.
“What happened in 2000? Irrespective of whether the numbers are in any way connected to reality, they are reasonably coordinated for the first 20 years, then in the year 2000 they diverge wildly, and in the last 20 years they go their own way. What happened?”
In 1998, global temperatures reached their warmest level since the 1930’s.
The CAGW proponents back in the 1980’s claimed human-caused global warming was going to cause the temperatures to increase, and their predictions were correct right up until around the year 2000. The beginning of their claims for CO2 warming was in the 1970’s, one of the coldest periods in recent history, so it was not much of a stretch to say the temperatures would probably climb from there.
And the temperatures did climb right up to the El Nino high of 1998, so the CAGW proponents’ predictions were proceeding the way they predicted. But then the warming stopped and the cooling started, and that was about the time the CAGW proponents thought they needed to bastardize the global temperature record some more in order to make it look like things were getting hotter and hotter year after year, instead of flatlining or cooling.
During this time they turned 1998 from the warmest year since the 1930’s into an insignificant year.
They did this so they could claim that the years following 1998, were the “hottest year evah!” in order to continue the CAGW narrative. As you can see below, if they had used the UAH satellite chart instead of the fraudulent Hockey Stick chart, they couldn’t make the claims that six or eight years in the 21st century were hotter than 1998. UAH shows only one year was hotter than 1998, and that was 2016, which was one-tenth of a degree warmer than 1998, a statistical tie.
Note how the fraudulent Hockey Stick chart eliminates the warmth of the 1930’s and the cold of the 1970’s and the warmth of 1998. It’s a bastardized temperature record created to push a political/personal agenda.
Fraudulent Hockey Stick chart:
UAH satellite chart:
http://www.drroyspencer.com/wp-content/uploads/UAH_LT_1979_thru_December_2019_v6.jpg
There are two confounding factors within these data. If the BEST global surface trend is from land+ocean, then any data from land sensors, such as cities, would be hotter than the ocean. Also, as Clive Best (not related) showed, anomalies such as these are biased by the higher volatility at higher latitudes.
Ron Clutz
Correct!
“What happened?” Desperation kicked in.
markl,
+1
Hi Willis,
(1) Zoe Phin posted these yesterday regarding BEST. Thoughts?
https://phzoe.wordpress.com/2019/12/30/what-global-warming/
https://phzoe.wordpress.com/2020/01/17/precipitable-water-as-temperature-proxy/
(2) “… I smoothed them all with a 6-year Gaussian average and graphed them up…”
In the graph it looks like 3 end in 2020 (i.e., through 2019), 2 end in 2019, and BEST ends 2017?
Brilliant!
Equation (1) in http://globalclimatedrivers2.blogspot.com produced a 96.7% match, 1895 to 2018, with 5-yr-smoothed measured average global temperature by accounting for water vapor (Total Precipitable Water), SSN and the net effect of all ocean cycles approximated by a saw-tooth profile with a 64-yr period. IMO the clouds are accounted for with the SSN anomaly time-integral (sort of like Svensmark). The equation divvies up the ~0.9 K temperature rise since 1909:
sun 17.8%,
ocean 21.7%,
WV 60.5%.
I see that I’ve used the Berkeley Earth land data instead of the land+ocean data … corrected now. The spread is less but the point remains … the spread in the results is far too large.
w.
Willis
The inclusion of ocean temps damps the variation because of the large difference in specific heat. Cities are on land. Why would you want to compare them with ocean temps? I think that you did it the right way the first time.
Clyde, land cities are compared to land temps in Fig. 1.
Global temperature records are compared to global temperature records in Fig. 2.
w.
Willis,
Are you using the correct data for other data sets?
I’ve looked at my downloaded data using the same base period as you, and I cannot see how GISS is warmer than RSS. It should be very close to BEST unless I’m doing something wrong.
I also cannot see how HadCRUT is now below JMA.
Bellman
Correct! JMA has a much lower trend than all other surface series. It is partly due to lack of interpolation, which imho is particularly noticeable within their COBE-SST2 sea surface stuff.
And GISS has indeed, as you wrote below, a trend of 0.19 °C / decade.
“how GISS is warmer than RSS”
Yes, I got 0.188 °C/decade for GISS and 0.208 °C/decade for RSS. Incidentally I made a small mistake with trends quoted earlier, in that I requested 1979-2019, but that gives a result ending Dec 2018, so the last year is missing. Since 2019 was warm, the full trends are about 0.004 °C/decade higher, but the order doesn’t change.
Here’s my attempt at a similar graph
There definitely seems to be something wrong with the ends of the graph in the head posting. GISTEMP should be below RSS, and very close to BEST. JMA is currently almost identical to UAH and HadCRUT is somewhere between GISTEMP and JMA.
Willis,
You are still exaggerating trend differences. The Gistemp trend 1979-2019 is 0.19 C/decade, not 0.21.
The HadCRUT4, NOAA, and JMA datasets have coverage bias, JMA also lacks the ship/buoy adjustment.
RSS has a slight warm bias, avoiding most of cool Antarctica.
The true and only outlier is UAH, but their analysis of microwave radiances is subjective, since they have made significant choices and adjustments not supported by data.
Datasets that are apples to apples, decadal trends 1979-2019:
Berkeley l/o 0.189
Gistemp loti 0.188
Cowtan&Way 0.187
Very good agreement, right?
Willis,
“For me, the best part of science is my first look at some batch of numbers that have been converted into a graph.”
You can make such a plot interactively here. And here is a snapshot. The agreement is much better than you say.
” I mean, the answers differ by a factor of two!”
Well, it helps if you are comparing the same thing. Your highest is Berkeley land only. The lowest is UAH lower troposphere. If you split them into groups according to what they are measuring, I get, from 1979, in °C/decade:
Land/Ocean:
BEST 0.186
GISS 0.184
HADCRUT 0.170
NOAA 0.170
TempLS 0.169 (My own calculation)
Lower Trop
RSS 0.204
UAH 0.127
Land only
BEST 0.282
The variation factor for land/ocean is 1.1, not 2.
Thanks, Nick. I already noticed that and fixed it. My point remains. Look at the differences, not just in the trends, but in the patterns of the changes in each one. It’s a pit of unrelated snakes. Wildly different.
In any other science what they’d likely do is select a group of experts to examine each of these and determine a) why there are differences and b) what can be done to bring them into harmony.
In climate science, on the other hand, groups just each make their own claims and keep going.
Finally, I appreciate you (and many others) checking my work. It’s one of the beauties of writing for the web—my errors get pointed out in record time.
w.
Willis,
” It’s a pit of unrelated snakes. Wildly different.”
It isn’t if you compare apples with apples. Here is a plot of just land/ocean indices. I have replaced HADCRUT with the Cowtan/Way index, since that is a case where scientists identified the issue causing discrepancy and corrected it. It is more like a pit of mating snakes. It uses 12 month running means of monthly data.
Thanks, Nick. I don’t understand that. It looks like land-only. Is that correct? Also, a boxcar filter like you’ve used is a bad choice … I’ve posted your graph in-line for ease of discussion.
w.
Willis,
Thanks for posting. The plots are land/ocean. The trends are as shown in my comment above, but in C/Cen units. The boxcar filter for smoothing monthly to annual will give similar results to others, and won’t create agreement where none existed.
I made a mistake in requesting the trends here and got 1979-2018 instead of 2019. Since 2019 was warm, trends for 1979-2019 (inclusive) are a little higher:
Land/Ocean:
BEST 0.188 C/decade
GISS 0.188
HADCRUT 0.172
NOAA 0.173
TempLS 0.173 (My own calculation)
Lower Trop
RSS 0.208
UAH 0.132
Nice work Nick, you eliminated the data sets with the most discrepancy (without comment) and replaced the third lowest with a “corrected” version.
I suppose you have to pick cherries if you want to make cherry pie.
Greg,
“you eliminated the data sets with the most discrepancy”
As I said several times, I gathered the sets that were actually measuring the same thing – land/ocean surface temperature. The discrepancy with the LT sets is not an issue of measurement; it is a different place.
Not that there are no issues of measurement. The surface measures largely agree; one LT set is above them all, and one below.
“replaced the third lowest with a “corrected” version”
In the spirit of Willis’
“b) what can be done to bring them into harmony.”
The reason for the error of HADCRUT was identified and corrected.
Agreed, TLT is in a different place, as land and sea are different places. Why the hell anyone would want to take the “average” of sea and land air temps and pretend it has physical meaning is beyond me, but it seems to be fashionable because it warms more.
If TLT is rising less than pseudo surface average temps you need to explain why before concluding it is “not an issue of measurement”.
If there was an “error ” in HadCRUT, Had and CRU would have corrected it. The “error” is that they use the data they have instead of including zones where they don’t have data.
Greg,
“and pretend it has physical meaning is beyond me but it seems to fashionable because it warms more”
People have carefully tracked how the air just above the sea closely tracks SST. SST is used because we have a better history. It actually warms less than land only, and also less than the extended land that GISS started with and maintained until last year.
“If TLT is rising less than pseudo surface average temps you need to explain why”
No, you don’t. It’s just an observation of two different places. Just as sea warms more slowly than land.
“The “error” is that they use the data they have instead of including zones where they don’t have data.”
No, they are failing to use the data they have. To get something you can call global temperature you need your best estimate of every point on the globe. Every point where you don’t actually have a thermometer is an estimate. HADCRUT’s error is that they apply an arbitrary grid, and then assign to cells a temperature equal to the average of the remainder. That is what simply omitting
empty cells does. It is much inferior to an estimate based on nearby information, which is what is used everywhere else.
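The empty-cell point is easy to verify numerically: averaging only the occupied cells is algebraically identical to first filling every empty cell with the mean of the occupied ones. A toy one-dimensional (zonal-band) sketch with made-up numbers:

```python
import numpy as np

lats = np.arange(-87.5, 90.0, 5.0)             # zonal band centres
w = np.cos(np.radians(lats))                   # area weights
temps = np.random.normal(0.5, 0.3, lats.size)  # made-up anomalies
temps[lats > 70] = np.nan                      # e.g. no Arctic coverage

have = ~np.isnan(temps)
avg_skip = np.average(temps[have], weights=w[have])  # omit empty cells

infilled = np.where(have, temps, avg_skip)     # empty cells = global mean
avg_fill = np.average(infilled, weights=w)

assert np.isclose(avg_skip, avg_fill)          # the two are identical
```

So if an un-sampled region is warming faster than the global mean, omitting its cells biases the global trend low; that is the coverage issue Cowtan & Way addressed for HADCRUT.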
“best estimate” is not Data.
Just as the “Average” daily or monthly is not the mean and loses very important information.
But of course it tells the required story when you torture it enough.
Greg,
Do you think that is the case? There are two TLT data sets, one shows temp rising faster than global surface temp, the other less. Why do you choose to accept one of those data sets and disregard the other?
40 years straight up, the graph is a work of art. I have to try harder.
This graph is a perfect example of why I keep saying that arguing about the error bars is a red-herring, and a lost cause.
I can clearly see the ’98 El Nino spike. But… and I have been monitoring this website for a long time… I clearly remember the huge temperature drop followed by the long plateau with essentially no rise for ~15 years. Now, this graph shows the temp matching the ’98 El Nino spike in only 5-ish years. That’s just baloney. That’s not what was being posted back then.
The problem with the data isn’t the error bars… the problem with the data is the data… or rather the constant adjustments in the data. So long as we simply accept those adjustments, we will always be on the defensive.
What was being posted back then? You could try the Wayback Machine, though I didn’t have any luck with that myself. I seem to recall seeing temperature exceed the 1998 super El Nino year in 2005 and then again a few years later. I think 1997 was a record year, at the time, and that has been exceeded many times. Starting with a super El Nino year is bound to make it seem that temperature stalled for a while after that, but studies have shown that, statistically, there was not even a slowdown. Of course, data series are revised from time to time as more is learned about interpreting the raw data, so some of the numbers will change over time.
Nick Stokes:
I have just examined your graph with respect to my claim that Earth’s temperatures are controlled by the amount of SO2 aerosol emissions in the atmosphere, and that the Rule of Thumb, or Climate Sensitivity Factor, is .02 deg of warming for each net Megaton of change in global SO2 aerosol emissions.
Between 1980 and 2014, as near as I can determine, you show an anomalous temp. rise of 0.53 deg C. Between 1980 and 2014, SO2 aerosol emissions fell by a reported 24 Megatons.
Thus, 24 x .02 = an expected temp. rise of 0.48 deg. C (within .05 deg. C of actuality) over a 34-year period. Compare that with projections from any model based upon CO2!
If NASA GISS Land-Ocean Jan-Dec anomalous temperatures are used, their reported temperature rise is 0.48 deg. C, precisely as predicted.
In view of the above, how can you continue supporting the “Greenhouse Gas” hoax?
Mr. Eschenbach,
Excellent work, sir. I trust you are amply prepared to give Mosher the “source, codes and data” he will inevitably demand if and when he shows up here? On the other hand, he may choose to ignore your report and article, since defending Berkeley Earth may be quite difficult for him.
Can I just say, before Mosher gets here, that whatever he says, it won’t even be wrong? I can’t say that? Too late, I’ve said it.
Several months ago there was an article on fitness for purpose about the databases. I came to the conclusion that they are not. I have not changed my mind. Tmax/Tmin/Tavg ends up hiding so much info it’s not funny. Averaging these into bigger and bigger intervals hides more and more info. I never see variances, standard deviations, and uncertainty for the populations of data addressed. I see coastal stations with a data population variance of 5 degrees simply averaged with interior stations with data population variances of 10 or 20 degrees.
If you plot daily temps for these longer periods for individual city/rural stations, you get different graphs than averages of large geographic areas. I don’t think appropriate trending theory and tools are applied.
Combining a large number of stations with God only knows what differing uncertainties is asking for trouble. These stations were designed with capabilities for weather forecasting, not for determining regional and global heat values.
And all that is before they make their “adjustments”.
If climate science can’t predict the past what is the hope for predicting the future?
And here is a snapshot of the ever changing temperatures:
https://realclimatescience.com/2020/01/alterations-to-giss-surface-temperatures-from-2001-to-2015/
Thanks, I have a gif somewhere of the changing fortunes of Hansen’s dataset, but those newspaper clips from the 70s are priceless: almost 0.4 deg. C of cooling in the post-war period, which is now presented as totally flat at best.
“those newspaper clips from the 70s are priceless”
And worthless. They are based on a few hundred mostly (or all) NH land stations. No area weighting. You are comparing with a modern land/ocean measure.
You missed the whole point didn’t you?
NASA’s records changed, why? Did more stations show up in the period mentioned after they published the 2001 report? Did Nebraska not experience cooling temps during the period? Has NASA appended area weighted temps onto earlier non-weighted temps?
You really don’t explain anything.
You don’t stick to the subject. I was responding to “those newspaper clips from the 70s are priceless”.
Gerald,
The global temperature changes, in the blink of a lie…
Gerald Machnee
What Heller aka Goddard never would tell you is that between 2001 and 2015, considerable amounts of met data were accumulated, especially in the SST corner, but also for land surfaces.
Let me give you a simple example.
I started processing NOAA’s GHCN daily in early 2017, after 2 years of using GHCN V3.
While V3 has had 7280 stations since around 2010, GHCN daily is growing and growing, moving from ~35,000 temperature stations in 2017 to over 40,000 as of this January.
Do you really think that this has no influence on the global record?
Being fully unprepared for such an increase over time, due to V3 remaining unchanged over the years, I unfortunately never kept old data.
What a pity!
Bindidon January 20, 2020 at 4:03 pm
“Gerald Machnee
What Heller aka Goddard never would tell you is that between 2001 and 2015, considerable amounts of met data were accumulated, especially in the SST corner, but also for land surfaces.”
I was impressed by your information as to the considerable amounts of met data that were accumulated between 2001 and 2015, which enabled NOAA to revise their estimates of global temperatures between 2001 and 2015.
I would have been more impressed if you had also published an explanation as to why so many pre-1940 estimates had been reduced and why so many post-1949 estimates had been increased as a result of the increased volume of data.
The discrepancies are so obvious that any competent statistician would have investigated
their accuracy before drawing any conclusions as to trends.
Within the bounds of uncertainty, it seems as if the difference between 1.5 and 1 is nil. Whether 1.5 or 1, how much the globe has warmed is indistinguishable from zero, and so, in a strange way, there IS a consensus: How much the globe has warmed in the last 60 years is indistinguishable from zero.
They just don’t know that they are in agreement, and they just don’t know that they are saying this.
It’s funny how the answer can be right before our eyes, and those who determine the answer cannot even understand that they HAVE the answer. It’s zero! Hence, NO GLOBAL WARMING ! [yeah, I’m yelling this time]
How do you get GISS warming at 0.21°C / decade over the last 40 years? I assume you meant 40 and not 60 years. I make it 0.19°C / decade. I don’t think any of the surface based data set trends are statistically different from each other, and the only real deviations are between the two satellite data sets.
The books certainly have been overcooked. It looks like there has been some perverse competition to see who can be the most alarmist; may be linked back to funding?? Which is great for us realists, as they are running out of road to play on before the inevitable crash and burn (they have no choice but to be even more extreme).
Hi Willis,
In urban areas, does the anthropogenic built environment’s localised reduction of precipitation and surface-water infiltration also contribute to the urban heat effect?
I ask this because deforestation and modification of the natural landscape have been shown to increase temperatures by 2 to 3 degrees in summer (over thousands of square kilometres, not just a city) and reduce precipitation 10 to 20%, and in some circumstances more. Australian deforestation (40% since the 1780s) stretches over 3,000 km; the entire western side of the Great Dividing Range is gone.
http://forestsandclimate.org.au/cms/wp-content/uploads/deforestation-and-cleared-land-map-png-1.jpg
Or am i barking up the wrong tree here ?
Willis: “… I smoothed them all with a 6-year Gaussian average and graphed them up…”
It’s very gratifying that you are now an advocate of Gaussian filters, but how do you pad the 3 years at the start and end? Those upticks may not be there next time you look. Not a fan of undeclared Mannian padding techniques.
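The endpoint question is a fair one: the last half-width of a Gaussian smooth depends entirely on how the series is extended past its ends, and the padding rule used for Figure 2 is not stated. A quick sketch of how much the endpoint can move under different (assumed) padding modes:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

years = np.arange(1979, 2020)
series = 0.018 * (years - 1979) + np.random.normal(0, 0.1, years.size)

# The interior of the smooth barely changes, but the last ~3 years can
# shift noticeably with the choice of end padding.
for mode in ("nearest", "reflect", "mirror", "constant"):
    sm = gaussian_filter1d(series, sigma=6 / 2.355, mode=mode)
    print(f"{mode:9s} endpoint = {sm[-1]:+.3f}")
```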
Why do all graphs show a temp decline from 1941 to 1979 while CO2 went up year after year? That is the question that needs to be answered.
T.C
“That is the question that needs to be answered.”
CO2 forcing had not yet overcome the negative forcing of aerosols……
http://www.climatechange2013.org/images/figures/WGI_AR5_Fig8-18.jpg
And at that time the PDO had a significant influence on GMST …..
http://2.bp.blogspot.com/-Fkg790Q3b8o/VMRGN17t2oI/AAAAAAAAHwo/GTCVnmku248/s1600/GISTempPDO.gif
PDO is based on the difference between N. Pacific and global. If you want to say that was the “cause” of the post war cooling, then it is also the “cause” of late 20th c. warming which is what caused all the end-of-the-world panic. You just removed the final evidence of ANY detectable AGW.
How convenient that the estimate for aerosols cancels the estimates for CO2 for that period.
Your base period of 1979-1985 is too short.
Willis – knowing your love of graphs, a good one to try is one of our longest standing continuous surface temperature records – from central England (https://www.metoffice.gov.uk/hadobs/hadcet/cetml1659on.dat). Plot the series 1533 – 1711, 1712 – 1890, 1891 – 2069 and overlay on the same graph. You will see that at about 30 and 70 (and possibly 125 years) into this cycle there appears to be a superimposed temperature drop followed by a broad recovery where the recovery rate approximates 0.11 – 0.24 degC/decade between the major drops and plateaux, except for the Maunder minimum where temps dropped at -0.3 degC/decade. Could this be a clue to our natural pre-industrial warming rate (for central England anyway)?
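For anyone wanting to try that overlay, here is a sketch. It assumes the usual layout of the HadCET monthly file (header lines, then year, 12 monthly values and an annual mean, with large negative values flagging missing data); verify the header count and missing-value flag against the actual file before relying on it. Note the record starts in 1659, so the first segment can only be drawn from 1659 onward.

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumed file layout: check skip_header and the missing-value flag
# against the actual cetml1659on.dat header before use.
data = np.genfromtxt("cetml1659on.dat", skip_header=7)
years, annual = data[:, 0], data[:, -1]
annual = np.where(annual < -90, np.nan, annual)

for start, end in [(1533, 1711), (1712, 1890), (1891, 2069)]:
    m = (years >= start) & (years <= end)
    plt.plot(years[m] - start, annual[m], label=f"{start}-{end}")

plt.xlabel("years into segment")
plt.ylabel("CET annual mean (degC)")
plt.legend()
plt.show()
```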
Willis I’ll try again and I don’t want to be a pest, so here’s my previous inquiry.
Willis, another very interesting post, thanks for having the nerve to take on the true believers. BTW have you heard of the work of the Connollys over the last 5 years? Nic Lewis has promised to look at the Connollys’ work when he has the time. He has met them and seems impressed. See his answer at Climate Audit.
They’ve looked at co2 as the driver of climate and actually checked the atmospheric data and evidence over a long period of time using the balloon data.
Here’s the links. http://oprj.net/
https://blog.friendsofscience.org/wp-content/uploads/2019/08/July-18-2019-Tucson-DDP-Connolly-Connolly-16×9-format.pdf
It’s not immediately clear what the problem is, but your link to friends of science.org got to their home page, but said the file doesn’t exist. Searching for “Connolly” did get a link. That led to another link. That downloaded the pdf from https://blog.friendsofscience.org/wp-content/uploads/2019/08/July-18-2019-Tucson-DDP-Connolly-Connolly-16×9-format.pdf — which looks the same as your link to me.
No clue why I had trouble or if others will also.
Don K–
I had the same results: a 404 error on Neville’s link, but yours worked.
Neville’s link worked for me w/Firefox using “save link as”.
I have to correct myself: Neville’s link only downloaded a partial (unviewable) pdf; Don K’s link downloaded the pdf properly.
Neville’s original link has a UTF-8 code for a multiplication symbol, ‘×’, while Don’s working link has a plain ‘x’ in the ’16x9′ portion of the URL.
Neville: OK, I read through the Balloons in the Air pdf. It’s very well done. The math is minimal and seems clear. A lot of effort seems to have gone into explaining clearly and avoiding BS.
A lot of the material is clearly solid, and some of the criticisms of conventional climate modeling seem likely to have at least some merit. Or at least to be worthy of discussion. Some stuff is going to take rereading and some research.
It’ll be interesting to see what others think.
Neville: I looked at the Connolly stuff some more before bedtime last night. Much of it seems well done, but there are a couple of areas that seem sort of hazy.
1. They, the Connollys, make some assertions about how climate modeling is done that may or may not be correct. For example, they seemed to assert that climate modeling improperly assumes a constant lapse rate. Could be … or not. I have no way to check. Even if they are right, is the discrepancy significant?
2. More important, they seem to be invoking a previously unknown energy transfer mechanism in the atmosphere. That claim is clearer in this paper
http://oprj.net/oprj-archive/atmospheric-science/25/oprj-article-atmospheric-science-25.pdf than the main link. Basically, they seem to be claiming that substantial amounts of energy can move around in the atmosphere by a mechanism that is neither conduction, convection, nor radiation. Is that possible? Way beyond my pay grade. But to the extent that I understand it, it seems rather an extraordinary assertion. Surely, if true, it would show up in the lab, and would at least be mentioned as an odd experimental anomaly? Or maybe I completely misunderstand.
Anyway, thanks for bringing the matter up.
Don K,
Both of your links worked for me. Interesting read. I hope to read a couple of areas again if I can find time. Connolly is clearly looking outside the box and seems to back up his claims/hypothesis. A lot of groundbreaking thought and research.
Thanks Neville for bringing this forward.
It will get worse if you believe alarmist views:
Global heating: London to have climate similar to Barcelona by 2050
Looking at the trends Willis is showing us, Tom Crowther can’t be right at all.
I’m sure Londoners are quaking in their wellies at the prospect.
Yeah, instead of turning Japanese, they’d be turning Spanish.
I can’t figure out why HADCRUT and your own UAH curves show little difference at 2019 but the decadal warming rate is so different. Don’t the curves provide an almost identical warming rate?
Fuzzy data give fuzzy answers. The precision claimed by CAGW advocates is unfounded. Let me count the ways …
Average global temperature trend is up because the water vapor trend is still up. https://watervaporandwarming.blogspot.com
That’s what Joe Bastardi says — increased water vapor from the oceans is the main driver behind current warming, particularly in the Arctic & subarctic.
http://www.weatherbell.com/premium
The WV increase is greater than POSSIBLE from ocean warming. The extra is from increased irrigation.
If there was a political bias (conscious or not) on the temperature adjustments and hence the final temperature trends… where would it show?
My guess is that the “typical” city would be culturally significant to the ‘west’. Therefore the centre of the trend spread should be the mean of London and New York.
What is the centre of the UHI trends? And what cities-being-averaged hits that point?
I guess I don’t understand. Isn’t the bar graph showing an increase of 2.2 degrees C/century (global surface), while the spaghetti graph indicates about 0.7 per decade, which is 7 degrees C/century?
The key says .20 +/- degrees C/decade or 2 degrees C per century.
Obviously I am not reading the spaghetti graph correctly?
“the spaghetti graph indicates about 0.7 per decade”
It indicates about 0.7°C in 40 years, i.e. roughly 0.175°C per decade, which is consistent with the key.
A more basic approach would report temperatures as measured, not anomalies, and without man-made adjustments. That is the starting point for calculations of differences. Then you introduce adjusted data, firstly after acceptable adjustments like rejection of 5-sigma outliers. Then you introduce adjustments peculiar to the managing body, like GISS and Hadley. Then you look at anomaly data.
Willis in no way am I critical of your essay here. The managers of the global warming scare cherry pick their methods to add (false) credence to their manipulated numbers. Geoff S
Geoff Sherrington
1. “A more basic approach would report temperatures as measured, not anomalies…”
Mr Sherrington, this makes no sense at all. How do you want to compare data measured at the surface with data measured in the lower troposphere, if you use absolute values instead of departures from a common mean? They differ by over 20 K.
The same applies when comparing the lower troposphere with the lower stratosphere far above it.
*
2. “… without man-made adjustments… ”
Do you really want, for example, to compare bulks of station data coming from places where there are over 300 stations per 100,000 km² with corners having a couple of them?
The very first man-made ‘adjustment’ therefore is averaging over a grid.
I generate time series out of raw GHCN daily data. Such time series have a trend lower than those computed by professionals.
Why? Simply because I don’t interpolate anything, which automatically results in a bias: the grid cells containing no data let the whole behave as if their data were the average of that whole. This is bad work.
Rgds
J.-P. D.
P.S. Recently we had here a little discussion with Nick about the very small effect of baseline modifications due to inclusion / modification / exclusion of data sources.
Nick was obviously right! A look at my own software was quite convincing.
Bindidon,
Nick tends to regard errors as being those shown by statistical variation, spending less of his effort on fundamental uncertainties such as the real errors involved in reading a variety of thermometers in a variety of housings by a variety of observers. Whereas the total-error approach seems to give a range of about +/- 1 degree C for routine daily values, the application of optimistic and enthusiastic numerical methods seems to reduce this range to +/- 0.1 degrees C, depending on the author.
Society is already paying a huge penalty for climate research methods like poorly restrained extrapolation (making up numbers where none exist) and improper consideration of the total error of original observations.
One thing is clear: the land-based indices are warming faster than the global indices indicate.
I guess that’s because the slowness with which the oceans warm up delays the global index.
Willis–
How did Seoul fall so far from #1 on your UHI graph to below the global increase?
A “surfeit” of temperature?
Sorry, this sounds a bit too polemic for me.
And… what is this strange 6-year reference period, please? Does not look very professional imho.
I am a layman, but anomaly construction using such a baseline? Who would do that?
Anyway I was, like at least one other commenter, wondering a bit about your trend estimate of 0.21 °C / decade for GISS-LOTI.
This trend is wrong: the correct one is
0.19 ± 0.04 °C / decade.
The trends of HadCRUT4.6 and JMA are wrong as well: the correct ones are
0.17 ± 0.03 °C / decade for HadCRUT and
0.14 ± 0.04 °C / decade for JMA.
The reason for JMA’s lower trend compared with other surface series is simply that the Tokyo Climate Center does not interpolate. That results in each ‘grey’ grid cell getting the global average. This is their choice.
As has been known for a long time, UAH6.0 LT disconnects from the rest around 2003. Why? Who knows!
The other trends for BEST, RSS and UAH are correct.
That may sound like pedantry, but if we criticize the data resulting from other people’s work, we should imho do it on the basis of a correct interpretation of that data.
I added NOAA land+ocean with a trend of 0.17 ± 0.04 °C / decade.
*
Here is a chart showing the monthly anomalies – of course wrt the mean of 1981-2010:
https://drive.google.com/file/d/1tPi5YqHMe3jslz7zdXEah-Gg1JZytkO7/view
This looks by far less dramatic than your graph upthread.
*
Sources
GISS
https://data.giss.nasa.gov/gistemp/tabledata_v4/GLB.Ts+dSST.txt
BEST
http://berkeleyearth.lbl.gov/auto/Global/Complete_TAVG_complete.txt
NOAA
ftp://ftp.ncdc.noaa.gov/pub/data/noaaglobaltemp/operational/timeseries/aravg.mon.land_ocean.90S.90N.v5.0.0.201907.asc (the suffix behind 90S.90N unluckily changes on every update)
HadCRUT
https://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/time_series/HadCRUT.4.6.0.0.monthly_ns_avg.txt
JMA
https://ds.data.jma.go.jp/tcc/tcc/products/gwp/temp/list/csv/mon_wld.csv
RSS
http://images.remss.com/data/msu/monthly_time_series/RSS_Monthly_MSU_AMSU_Channel_TLT_Anomalies_Land_and_Ocean_v04_0.txt
UAH6.0 LT
https://www.nsstc.uah.edu/data/msu/v6.0/tlt/uahncdc_lt_6.0.txt
Regards
J.-P. Dehottay
“If you want people to listen to what you say, first you have to centralize your fecal material” it would be easier to centralize if the subject wasn’t held hostage by CO2 emissions politics…..
Arrg.
1. The city data we post is pretty old, circa 2013. In 2013 we lost our access to the Lawrence Livermore supercomputer, and basically an update that would do charts and graphs for all the cities and all the states and all the countries (hundreds of thousands of plots) would take months. Send money!!
2. How did you process UAH?
The reason I ask about UAH is that there are some little-known and rarely discussed issues with UAH over land. Did you consider all of UAH? Or UAH over land only? Or UAH over land only where the signal is not contaminated? (Psst, I have never seen anyone do it right, not even Roy.)
Lastly. Folks still don’t get spatial averaging.
“There were a couple of things that I found unusual about this. First, there are some indications that the Berkeley Earth method of removing the Urban Heat Island distortion of the global temperature record is … well … perhaps not all that accurate. However, more research would be needed to determine that.”
Interesting list Willis.
Rule number 1.
When a skeptic shows you data, you can be SURE he will only show you data that fits his story.
That’s why WE show all the data. It lets us see who cherry picks.
Notice Willis's list ends at a city showing 2°C.
What did he leave out?
Tokyo 1.63°C
Ho Chi Minh City 1°C
Pune, India 1.26°C
Cape Town 1.39°C
Chengdu 1.29°C
Chongqing 1.1°C
Rangoon 1.06°C
Calcutta 0.98°C
Santiago 1.26°C
Bangalore 1.36°C
Bangkok 1.1°C
Dhaka 0.84°C
Bogotá 1.32°C
And a bunch of others.
The city list is a trap of sorts. We list all the largest cities and then wait for skeptics to show their skills at picking the highest numbers to "prove" their point.
Next point: the method does not REMOVE all the UHI. What the method does is REDUCE THE BIAS.
It does that in two ways.
A) Sites in urban areas are compared with sites in rural areas. IF the urban site shows artefacts that are inconsistent with the surrounding rural areas, the urban site is WEIGHTED DOWN for its quality.
This downweighting doesn't change the data; it changes how much weight is ascribed to the station.
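A toy numerical sketch of that downweighting idea (mine, not Berkeley Earth's actual algorithm): a station whose trend disagrees badly with its rural neighbours keeps its data but loses almost all of its weight in the regional average.

```python
import numpy as np

neighbour_trends = np.array([0.15, 0.17, 0.14, 0.16, 0.18])  # rural sites, C/decade
urban_trend = 0.45                                           # hypothetical urban site

all_trends = np.append(neighbour_trends, urban_trend)
expected = np.median(neighbour_trends)   # what the region suggests a site should show
spread = neighbour_trends.std(ddof=1)

# Weight falls off as the misfit to the regional expectation grows.
misfit = (all_trends - expected) / spread
weights = 1.0 / (1.0 + misfit**2)

print("naive mean    :", all_trends.mean().round(3))                        # ~0.208
print("weighted mean :", np.average(all_trends, weights=weights).round(3))  # ~0.160
print("urban weight  :", weights[-1].round(3))                              # ~0.003
```

The urban site's data is untouched; it simply counts for almost nothing in the regional estimate, which is the distinction Mosher is drawing.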
So let's look at a station that Willis didn't mention:
Tokyo.
Here is the Tokyo AREA (emphasis on AREA, which is what the city list shows):
http://berkeleyearth.lbl.gov/locations/36.17N-139.23E
See that URL? That's the AREA we call Tokyo.
Click on it
Now go here
http://berkeleyearth.lbl.gov/station-list/location/36.17N-139.23E
That’s all the stations in that area.
Now pick a long station in that area
http://berkeleyearth.lbl.gov/stations/156178
Raw monthly anomalies: 1.46
After quality control: 1.45
After breakpoint alignment: 1.12
What's that mean?
That means the algorithm REDUCED THE WARMING to 1.12°C.
Take another long station:
http://berkeleyearth.lbl.gov/stations/156183
Raw monthly anomalies: 2.05
After quality control: 2.04
After breakpoint alignment: 1.14
OMG, we reduced the warming AGAIN!!! Stupid algorithm.
Here is a chart WILLIS WILL NEVER SHOW YOU:
http://berkeleyearth.lbl.gov/stations/156164
Raw monthly anomalies: 2.59
After quality control: 2.58
After breakpoint alignment: 0.94
WHAT THE HELL, why is Berkeley Earth REDUCING THE TEMPERATURE AT TOKYO?
Willis won't show you that. Here is what he would never do: he would never take the time to study all the data from all the cities to see those places where the algorithm was working to reduce UHI.
Me? I do that all the time. But I look for the OPPOSITE THING. I look for places where the algorithm is not working, and then try to figure out how to improve it. Trust me, if you look you will find places where the algorithm doesn't do what you expect it to do. But you will also find Tokyo. And an honest person looks at both and makes a considered judgment.
What did we do here?
http://berkeleyearth.lbl.gov/stations/156185
Reduced the warming
What about this location?
http://berkeleyearth.lbl.gov/stations/156163
Raw monthly anomalies: 0.82
After quality control: 0.81
After breakpoint alignment: 1.08
Here the algorithm found that the site was out of whack in the other direction.
Imagine that! An algorithm that looks at the data and adjusts some up and some down.
http://berkeleyearth.lbl.gov/stations/156148
Adjusted down
Oh look! An airport.
http://berkeleyearth.lbl.gov/stations/156160
Adjusted down
http://berkeleyearth.lbl.gov/stations/156188
Adjusted up
So, you get the idea. The algorithm is not designed to just add warming. Adjustments go both directions, depending on ALL THE DATA.
How else does the method REDUCE THE BIAS?
B) The method reduces the bias by AREA WEIGHTING. People tend to think that temperature averages are like other averages: if you have 10 records, and 2 of the 10 are in big cities, then 20% of your average will be infected by UHI! WRONG WRONG WRONG. Temperature averages don't work that way, because NO ONE AVERAGES TEMPERATURES! You don't add up the 10 records and divide by 10.
The records have to be spatially averaged. In short, urban areas represent a tiny area of the globe, and when you SPATIALLY AVERAGE their contribution, the bias will nearly vanish.
So TWO things work to REDUCE THE BIAS:
A) The algorithm that downweights stations that disagree with their neighbors.
B) The spatial interpolation that downweights urban areas. They are small.
So if 2 out of 10 stations are in urban areas, and urban areas are 2% of the entire land area, then that "urban bias" gets diluted to far less than 20%. A lot less.
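A minimal sketch of that dilution arithmetic (my numbers, chosen only to match the 2-of-10 stations and 2%-of-area figures above): the naive station average carries 20% of the urban bias, the area-weighted average only 2%.

```python
import numpy as np

rural_trend, urban_bias = 0.15, 0.30   # C/decade; urban sites read rural + bias
is_urban = np.array([True, True] + [False] * 8)
trends = np.where(is_urban, rural_trend + urban_bias, rural_trend)

# Naive station average: 2 of 10 stations carry the bias, so 20% of it leaks in.
naive = trends.mean()

# Area-weighted average: urban cells cover only ~2% of the land area.
urban_area_frac = 0.02
weighted = (urban_area_frac * trends[is_urban].mean()
            + (1 - urban_area_frac) * trends[~is_urban].mean())

print(f"naive mean    : {naive:.3f}  (bias leak = {naive - rural_trend:+.3f})")
print(f"area-weighted : {weighted:.3f}  (bias leak = {weighted - rural_trend:+.3f})")
```

Dropping the urban stations entirely, the test Mosher describes next, returns the rural mean exactly in this toy setup.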
How do we test this?
Simple: we REMOVE ALL THE URBAN STATIONS and recalculate the average.
So, bottom line: UHI is not a problem in the GLOBAL MONTHLY RECORD because:
1. There are not that many urban stations in dense urban environments.
2. The algorithm works to REDUCE (not eliminate) that bias.
3. Area weighting reduces the residual bias to de minimis values.
Microsite bias (as Willis has argued before!) remains an open issue. UHI? Not an issue.
Interesting and informative reply, Mr Mosher, thank you.
Mr Eschenbach didn't write that your team pushes up the urban temperatures, just that he suspects the UHI is not perfectly eliminated and that further examination would be interesting.
You confirmed, essentially, that your algorithm isn't perfect. In good faith, but imperfect.
There's no proof that the errors cancel automatically, nor that they don't. We simply don't know.
So the "product" of the algorithm shouldn't be trusted 100%.
Regards
Despite everything that Mr Mosher says, they end up with a "modelled" temperature for each station, called their final data set, that bears no resemblance to the original data.
It completely fails to work for coastal vs. inland stations; they superimpose the land-mass temperature on the original data.
Take a look for yourself: start with Valentia in Ireland, then check Mumbles in Wales, Cardiff, and Greenwich Maritime.
The "finals" are all basically the same and are nothing like the actual station data.
I will quote Mr Mosher.
“Steven Mosher | July 2, 2014 at 11:59 am |
“However, after adjustments done by BEST Amundsen shows a rising trend of 0.1C/decade.
Amundsen is a smoking gun as far as I’m concerned. Follow the satellite data and eschew the non-satellite instrument record before 1979.”
BEST does no ADJUSTMENT to the data.
All the data is used to create an ESTIMATE, a PREDICTION
“At the end of the analysis process,
% the “adjusted” data is created as an estimate of what the weather at
% this location might have looked like after removing apparent biases.
% This “adjusted” data will generally to be free from quality control
% issues and be regionally homogeneous. Some users may find this
% “adjusted” data that attempts to remove apparent biases more
% suitable for their needs, while other users may prefer to work
% with raw values.”
With Amundsen if your interest is looking at the exact conditions recorded, USE THE RAW DATA.
If your interest is creating the best PREDICTION for that site given ALL the data and the given model of climate, then use “adjusted” data.
See the scare quotes?
The approach is fundamentally different from adjusting series and then calculating an average of adjusted series.
Instead we use all raw data, and then we build a model to predict the temperature.
At the local level this PREDICTION will deviate from the local raw values.
It has to.
”
This is true for every single station, not just Amundsen, they make it what they think it should be.
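To make the quoted "predict, don't adjust" distinction concrete, here is a toy sketch (my construction, assuming a shared regional signal plus a constant offset per station; this is not BEST's actual model, which uses kriging). The fitted prediction is regionally homogeneous and, as the quote says, necessarily deviates from the raw values at each station.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(30)
truth = 0.02 * years                              # shared regional warming signal
raw = np.array([base + truth + rng.normal(0, 0.1, years.size)
                for base in (12.0, 9.5, 14.2)])   # three stations, different baselines

# Pool the demeaned stations to estimate the common regional signal ...
common = (raw - raw.mean(axis=1, keepdims=True)).mean(axis=0)
# ... then "predict" each station as its own baseline plus the common signal.
predicted = raw.mean(axis=1, keepdims=True) + common

# The prediction never reproduces the raw series exactly; it is an estimate.
print("max |raw - predicted| per station:", np.abs(raw - predicted).max(axis=1).round(3))
```

Whether that prediction is trustworthy at any single station is exactly the point under dispute in this thread; the sketch only shows why raw and "adjusted" values must differ.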
"Lastly. Folks still don't get spatial averaging." (Steven Mosher)
I think I understand spatial averaging, and I know of one giant elephant: the one splashing about at the boundary between land and sea!
Living in a land "girt by sea", and having lived in several of its cities and in places so typically representative of the coastal urban sprawl of The Land Down Under, I do wonder just how anybody can spatially average the temperature of the land/sea boundary without conflating the urban heat island with the "island" itself. The local sea breezes* alone would confound measurement, making separation of the land from the sea, let alone the urban from the rural, an intractable if not impossible problem in reality!
* Sea breezes are local, emergent patterns documented to be affected, enhanced, and even produced by urban heat islands.
At last, someone has stated that, to allow for the UHI effect in the global average, simply remove all the urban stations from the average. But then you are left with a high percentage of rural stations that have encroaching urbanization. Airport stations are the worst…then newer station installations with faster electronic probes that show higher peak temps with a minor puff of convection…these effects adjusted downward by people very confident of their confirmation bias….
“When a skeptic shows you data, you can be SURE he will only show you data that fits his story.”
Goose, meet gander.
Mosher, could be that your above list of cities would’ve cooled if not for UHIE.
In many science disciplines, data with known substantial errors are rejected.
Climate researchers spend a lot of time retaining wrong data, then massaging it to get their best guesses about what it would have been without errors. Sadly, there is no way to confirm if the massaging works. The massaging goes on, regardless. In some industries, massaging like this is a reason for dismissal.
The fundamental philosophy is fatally flawed.
Geoff Sherrington
What about coming along with some real proof of what you claim here?
There is so much evidence that you need to choose your pet topic. If, for example, you choose UHI, you can see ample evidence in a WUWT post I made a year or so ago.
Your problem seems to be that you have not read enough previous evidence. Read first, then ask me to spend more of my unpaid time.
Geoff S
Geoff Sherrington
“If, for example, you choose UHI, you can see ample evidence in a WUWT post I made a year or so ago.”
“Read first, then ask me to spend more of my unpaid time.”
Present a link to something more valuable than the superficial, polemic stuff above, Mr Sherrington, and then MAYBE I'll spend more of MY unpaid time.
If at least you were able to describe exactly what you mean by "massaging"…
Just a hint.
Some years ago I had a fruitful discussion with a French woman working on highways.
She had such a big laugh about these ridiculous "pseudoskeptics" (her wording) who have never once used any kind of interpolation technique, but feel the need to criticize its use in the climate context.
I can't recall the whole list of enterprises and administrations in France (which she recited from memory) where working without e.g. kriging is absolutely unimaginable: mining exploration, highway construction, river contamination estimates, etc., etc.
You remind me of a strange WUWT commenter (I forget his "name") who was 100% convinced that grid averaging of station data was data wrangling (!!).
People like you and him are simply incredible.
J.-P. D.
The CAGW hoax survived the 19-year hiatus (mid-1996 to mid-2015) by removing many rural temperature stations from the land temp database and concentrating the land temp data in urban areas (especially at airports, with massive UHI effects: huge parking lots, massive landing strips, hot jet engine exhaust, exhaust heat from gigantic AC units, etc.).
CAGW advocates also got "lucky" from: 1) the natural 2015/16 Super El Niño event, 2) a delay in the start of the next 30-year PDO cool cycle, and 3) the absence of a strong La Niña event since 2010.
CAGW advocates’ “luck” is about to run out when: the PDO and AMO enter their respective 30-year cool cycles, a strong La Niña event occurs, and perhaps additional global cooling from a 50-year Grand Solar Minimum event which has already started…
It will also become increasingly difficult for CAGW advocates to justify the growing disparity between UAH satellite global temp data and GISS and HADCRUT datasets…
SAMURAI
1. “… especially at airports with massive UHI effects: huge parking lots, massive landing strips, hot jet engine exhaust, exhaust heat from gigantic AC units, etc.”
What a dumb comment, as far from reality as possible, but copied and pasted everywhere ad nauseam by persons who have probably never inspected any station data set.
Here is a chart using the raw GHCN daily data set, and comparing the data of
– 71 well-sited USHCN stations, selected by surfacestations.org
with the data of
– all CONUS airport stations (over 800):
https://drive.google.com/file/d/1Ifbok0sBDyz7cKMyyzQH8tvXjDRk0iaQ/view
Sources
– well-sited stations
https://drive.google.com/file/d/14_1wVIyZ1k2cuKMu6fPEs9NsvFD-OhST/view
– GHCN daily
ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/daily/
*
2. “It will also become increasingly difficult for CAGW advocates to justify the growing disparity between UAH satellite global temp data and GISS and HADCRUT datasets…”
Yes indeed.
But… if you would consider that UAH differs as much from all other satellite-based measurements as it does from nearly all surface data sets (the only exception being the Japanese JMA), then you might become motivated to think a bit about how to really interpret what you write.
But rest assured that SAMURAI-san will nonetheless continue to repeat his superintelligent message!
Bindidon-san:
“If you torture numbers enough, you can get them to confess to anything.”
The silly links you provided are evidence of the above truism..
To quantify the disparity between airport temp stations and rural temp stations, one must compare data between rural stations (in perfect compliance to temp station specs) near airports.
NOAA went back and added heat to all raw temp data to make the line go up and save the biggest scam in human history:
[embedded chart not preserved]
You’ll notice they stopped updating their temp fiddling in 2000, and because it was so embarrassing, they removed this evidence from their website in early 2017…
After this ridiculous CAGW scam crashes and burns, real scientists will have to try and correct all the fiddling CAGW grant grubbers inflicted on US land temp data— providing the raw data even exists anymore, but more likely, all the temp data somehow ended up on Lois Lerner’s or Hillary’s hard drive ..oops…
SAMURAI
1. “The silly links you provided are evidence of the above truism..”
That is the very first mechanism used by pseudoskeptics: to discredit and denigrate the work done by others instead of scientifically proving it wrong, and to call "silly" everything they are absolutely unable to do on their own.
*
2. “To quantify the disparity between airport temp stations and rural temp stations, one must compare data between rural stations (in perfect compliance to temp station specs) near airports.”
See (1) for my appreciation of your ‘thoughts’.
Here is a comparison example of an airport station at Anchorage, AK, with a rural station (Kenai, belonging to CRN) located about 50 km from that airport, in ‘the middle of nowhere’:
https://www.google.de/maps/dir/61.1689,-150.0278/60.7236+-150.4483/@60.8520105,-150.6230064,104127m/data=!3m1!1e3!4m7!4m6!1m0!1m3!2m2!1d-150.4483!2d60.7236!3e2?hl=en
To avoid bias due to possible over-homogenization by CRN processing, the comparison was made using raw GHCN daily data, a data set you very probably have never looked at (and anyway, even if you did look at it, you would claim it's "adjusted", "fudged", etc., etc.).
Anch AP: USW00026451 61.1689 -150.0278 36.6 AK ANCHORAGE INTL AP 70273
Kenai: USW00026563 60.7236 -150.4483 86.0 AK KENAI 29 ENE CRN 70342
Here is a graph comparing the two stations via absolute temperatures:
https://drive.google.com/file/d/1D6Plbj3pZiYE3kQS05B6_6mcgOLxNB5n/view
What is immediately visible is (ha ha) that the airport's station consistently measures "something like" 2 °C more than the rural site (and indeed, the average difference for 2011-2019 is 2.2 °C).
That alone leads inexperienced persons to think "Oh noes, there is something wrong in Anchorage".
But… a very first hint should be that the linear estimates are nonetheless nearly equal:
5.22 ± 3.30 vs. 5.16 ± 3.38 °C / decade. Hmmh.
{ The high standard deviations are due to (a) absolute data including seasonal cycles and (b) single-station comparison. }
*
Now let us switch to the anomalies (what a bloody word for ‘departures from a mean’).
https://drive.google.com/file/d/1OhCuDiAFUT80Ws4S8XopciaWQTp4rorn/view
The difference plot is absent, because the mean difference now is… 0.03 °C.
And the linear estimates are still nearly equal:
3.41 ± 0.81 vs. 3.34 ± 1.07 °C / decade. Hmmh.
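The collapse of those uncertainty ranges is exactly what one expects when the seasonal cycle is removed. A small sketch with synthetic data (mine, not the Anchorage/Kenai series): the same linear trend is recovered either way, but the straight-line fit to absolute temperatures has a far larger standard error because the seasonal swing dominates the residuals.

```python
import numpy as np

rng = np.random.default_rng(1)
months = np.arange(12 * 20)                          # 20 years of monthly data
seasonal = 10.0 * np.sin(2 * np.pi * months / 12)    # big seasonal cycle
temps = 2.0 + 0.003 * months + seasonal + rng.normal(0, 0.5, months.size)

def ols_trend(y, t):
    """Slope of a straight-line fit and its standard error."""
    A = np.vstack([t, np.ones_like(t)]).T
    coef, res, *_ = np.linalg.lstsq(A, y, rcond=None)
    se = np.sqrt(res[0] / (len(y) - 2) / ((t - t.mean()) ** 2).sum())
    return coef[0], se

# Anomalies: subtract each calendar month's own long-term mean.
monthly_means = np.array([temps[months % 12 == m].mean() for m in range(12)])
anoms = temps - monthly_means[months % 12]

for label, series in (("absolute", temps), ("anomaly", anoms)):
    slope, se = ols_trend(series, months)
    print(f"{label:8s}: {slope * 120:.3f} +/- {se * 120:.3f} C/decade")
```

Both fits recover the built-in trend; only the uncertainty differs, which is why single-station comparisons are usually done on anomalies.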
Years ago I computed this in Excel for numerous site comparisons within GHCN V3 (unadjusted and adjusted), and found similar results for nearly all tests.
Feel free to communicate real site data (and not what so-called experts "created" out of it) that differs from what I have shown here!
*
3. “You’ll notice they stopped updating their temp fiddling in 2000, …”
So? Are you sure? Or are you simply… guessing?
“… and because it was so embarrassing, they removed this evidence from their website in early 2017…”
So? It took me a few seconds to find it:
[embedded image not preserved]
You seem to suffer from what is termed "conspiracy syndrome".
Your last paragraph isn’t worth any answer.
As said, SAMURAI: I’m sure you will continue to discredit and denigrate other people’s work, mainly because you are unable to do what they did.
What remains for you, therefore, is to be ad vitam aeternam the gullible follower of those who impress you, regardless of their real qualifications. In Germany such people are called "Flötenspieler von Hameln" (the Pied Piper of Hamelin).
I wish you all the best!
J.-P. D.
SAMURAI,
If the CONUS trend divergence between USCRN and ClimDiv should be taken as evidence for UHI, the UHI is actually significantly negative and quite large (-0.12 C/decade) during the years 2001-2018.
https://pbs.twimg.com/media/D3pQojzXsAAJInx?format=png&name=medium
USCRN = state of the art instruments located in pristine rural areas
ClimDiv = The big adjusted met station network including mixed quality urban sites
I will update the graph as soon as NOAA updates ClimDiv through 2019.
Sorry, typo, the period for USCRN should start 2005, not 2001
The changes between versions from the same groups are a bigger eye-opener. Ignoring the swap in hemispheres, HadSST makes large changes to the base period, but large parts outside the 1940-1970 period remain the same. All four (HadSST SH and NH, v2 and v3) meet at the maximum anomaly of 1998. That is too large a coincidence when, 20 years later, it matters that 1998 not be the hottest year. They're not doing what they claim to do.
Am I wrong, or did I read recently on this blog that about 40% of the warming in the satellite era (1979 to the present) is due to two major volcanoes that erupted in the early '80s, which suppressed the baseline against which we are measuring? If so, when we take 0.6 times the average, do we get about 1.1 degrees C per century? That 40% number seemed to be supported by multiple analyses.
What am I missing?
At the end of the day, nothing scary about the temperature increases being recorded. Virtually certain that folks in the higher latitudes aren’t complaining and I can’t personally think of any time in history when all of us could be more thankful for being alive. Warming is good for the soul as well as the body.
Can anyone say why the UAH data looks very different when graphed at woodfortrees.org: http://www.woodfortrees.org/plot/uah6/from:1998/to
That is nothing like the UAH curve in the graph in this article. Why the huge difference?
Probably because Mr Eschenbach is using a 6-year Gaussian average.
Ahh, thanks for that
Matthew Sykes
1. You start in 1998. Why? Mr Eschenbach shows data starting in 1979.
2. You show monthly anomalies. He shows smoothed data to give a more comprehensive picture, but you can't get WFT to do this Gaussian filtering of the data. The best you can do there, imho, is a running mean over 72 months = 6 years:
http://www.woodfortrees.org/plot/uah6/from:1979/to/plot/uah6/from:1979/mean:72
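For anyone who wants the difference in concrete terms, here is a small sketch (my own, not WFT's code) of the two smoothers: WFT's mean:72 weights all 72 months equally, while a 6-year Gaussian average weights nearby months more heavily, so it damps noise without the boxcar's abrupt window edges.

```python
import numpy as np

def running_mean(y, window):
    """Flat (boxcar) average, equivalent to WFT's mean:72."""
    return np.convolve(y, np.ones(window) / window, mode="valid")

def gaussian_smooth(y, fwhm):
    """Gaussian-weighted average with the given full width at half maximum."""
    sigma = fwhm / 2.355                  # FWHM -> standard deviation
    half = int(3 * sigma)
    kernel = np.exp(-0.5 * (np.arange(-half, half + 1) / sigma) ** 2)
    return np.convolve(y, kernel / kernel.sum(), mode="valid")

rng = np.random.default_rng(7)
monthly = np.cumsum(rng.normal(0.001, 0.1, 500))   # noisy synthetic anomaly series
print(running_mean(monthly, 72)[:3].round(3))
print(gaussian_smooth(monthly, 72)[:3].round(3))   # 72-month FWHM, cf. the 6-year average
```

Note that I have assumed the "6-year" figure refers to the filter width; the exact width parameter Willis used is not stated in the post.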
Willis,
Figure 1: the first thing that struck me was that the further north or south a city is, or the more rapid its expansion, the greater its trend. Confirmation of your earlier post on UHI?
As long as Trump is just acting! Using false hope, the sellout of our environment will be prioritized, as well as our constitution and the prosecution of opposed citizens of just his party; that the judges are deaf, blind and stupid is the normal. God save America.
In 1966, the population of Mashhad was 400,000. Now it's 3 million.
An annual 20 million pilgrims now visit the city.
Now why would Mashhad be getting hotter?
Don’t forget to check out:
https://phzoe.wordpress.com/2019/12/30/what-global-warming/
https://phzoe.wordpress.com/2020/01/17/precipitable-water-as-temperature-proxy/
As well as understanding how all climate scientists are geothermal deniers:
https://phzoe.wordpress.com/2019/12/04/the-case-of-two-different-fluxes/
https://phzoe.wordpress.com/2019/12/06/measuring-geothermal-1/
The last 2 are paradigm changers. You won’t read about this biggest science scandal anywhere else.
Even geologists don’t understand the difference between conductive and radiative flux.
Please refer to my post “Energy causes global warming” at https://hotgas.club
It’s a bit long to post here.
Eddie Banner
Eddie Banner
As you can see in the comment above, you are obviously facing strong competition.
Willis,
It’s even worse than you thought. Rather than smoothing the noise, look at the trends for the individual stations over the 1990-2019 period, and see how many stations are negative or statistically zero. There are a few hundred out of the thousands of valid reporting stations, and it’s gotta mean something that stations with negative trends are mixed right in with stations with positive trends.
It sure doesn’t look like there’s any uniform warming going on, only warming on average.
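That per-station census is easy to run once the station trends are in hand. A sketch of the idea (synthetic stations standing in for real data; the scatter parameters are invented): fit an OLS trend to each station over 1990-2019 and count how many come out negative or statistically indistinguishable from zero (|slope| < 2 standard errors).

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1990, 2020)
t = years - years.mean()
# Hypothetical network: warming on average, with station-to-station scatter
# in both the noise and the underlying trend.
stations = (0.02 * t + rng.normal(0, 0.01, (2000, 1)) * t
            + rng.normal(0, 0.3, (2000, t.size)))

A = np.vstack([t, np.ones_like(t)]).T
negative = zero = 0
for y in stations:
    coef, res, *_ = np.linalg.lstsq(A, y, rcond=None)
    se = np.sqrt(res[0] / (len(y) - 2) / (t ** 2).sum())
    if coef[0] < 0:
        negative += 1
    elif abs(coef[0]) < 2 * se:
        zero += 1
print(f"{negative} negative, {zero} statistically zero, of {len(stations)} stations")
```

Even with every station sharing the same average warming rate, a fraction come out negative or flat from scatter alone, which is the commenter's "warming on average" point in miniature.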