By Nick Stokes
There is much criticism here of the estimates of global surface temperature anomaly provided by the majors – GISS, NOAA and HADCRUT. I try to answer those criticisms specifically, but I also point out that the source data are readily available, and it is not too difficult to do your own calculation. I do this monthly, and have done so for about eight years. My latest, for October, is here (it got warmer).
Last time CharlesTM was kind enough to suggest that I submit a post, I described how Australian data makes its way, visible at all stages, from the 30-minute readings (reported with about a 5 minute delay) to the collection point as a CLIMAT form, from where it goes unchanged into GHCN unadjusted (qcu). You can see the world’s CLIMAT forms here; countries vary in how they report the intermediate steps, but almost all the data comes from AWS and is reported soon after recording. So GHCN unadjusted, which is one of the data sources I use, can be verified. The other, ERSST v5, is not so easy, but a lot of its provenance is available.
My calculation is based on GHCN unadjusted. That isn’t because I think the adjustments are unjustified, but rather because I find adjustment makes little difference, and I think it is useful to show that.
I’ll describe the methods and results, but first I should address the much-argued question of why anomalies are used at all.
Anomalies
Anomalies are made by subtracting some expected value from the individual station readings, prior to any spatial averaging. That ordering is essential. The calculation of a global average is inevitably an exercise in sampling, as is virtually any continuum study in science. You can only measure at a finite number of places. Reliable sampling is very much related to homogeneity. You don’t have to worry about sampling accuracy in coin tosses; they are homogeneous. But if you want to sample voting intentions in a group with men, women, country and city folk, etc., you have inhomogeneity and have to be careful that the sample reflects the distribution.
Global temperature is very inhomogeneous – arctic, tropic, mountains etc. To average it you would have to make sure of getting the right proportions of each, and you don’t actually have much control of the sampling process. But fortunately, anomalies are much more homogeneous. If it is warmer than usual, it tends to be warmer than usual in both the hot places and the cold ones.
I’ll illustrate with a crude calculation. Suppose we want the average land temperature for April 1988, and we do it just by simple averaging of GHCN V3 stations – no area weighting. The crudity doesn’t matter for the example; the difference between temperature and anomaly would be similar with better methods.
I’ll do this calculation with 1000 different samples, both for temperature and anomaly. 4759 GHCN stations reported that month. To get the subsamples, I draw 4759 random numbers between 0 and 1 and choose the stations for which the number is >0.5. For anomalies, I subtract, for each station, its April average between 1951 and 1980.
The result for temperature is an average sample mean of 12.53°C and a standard deviation of those 1000 means of 0.13°C. These numbers vary slightly with the random choices.
But if I do the same with the anomalies, I get a mean of 0.33°C (a warm month), and a sd of 0.019°C. The sd for temperature was about seven times greater. I’ll illustrate this with a histogram, in which I have subtracted the means of both temperature and anomaly so they can be superimposed:
The big contributor to the uncertainty of the average temperature is the sampling error of the climatologies (normals), i.e. how often we chose a surplus of normally hot or cold places. It is large because these can vary by tens of degrees. But we know that, and don’t need it reinforced. The uncertainty in anomaly relates directly to what we want to know – was it a hotter or cooler month than usual, and by how much?
You get this big reduction in uncertainty for any reasonable method of anomaly calculation. It matters little what base period you use, or even whether you use one at all. But there is a further issue of possible bias when stations report over different periods (see below).
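For readers who want to see the mechanics, here is a self-contained R sketch of the same kind of subsampling experiment. It does not use GHCN data; the station normals and readings are invented (the numbers are chosen only so that the two spreads come out at a similar order of magnitude to those quoted above), and the only point is the contrast between the two standard deviations.

set.seed(1)
nst    <- 4759                                   # same station count as the April 1988 example
normal <- rnorm(nst, mean = 12, sd = 10)         # invented station climatologies, varying by tens of degrees
tempr  <- normal + 0.33 + rnorm(nst, sd = 1.5)   # invented April readings: normal + common anomaly + weather

mean_T <- mean_A <- numeric(1000)
for (i in 1:1000) {
  pick <- runif(nst) > 0.5                       # a random half of the stations
  mean_T[i] <- mean(tempr[pick])                 # average absolute temperature of the subsample
  mean_A[i] <- mean(tempr[pick] - normal[pick])  # average anomaly of the subsample
}
sd(mean_T)   # spread of the temperature means: dominated by which climatologies happened to be picked
sd(mean_A)   # spread of the anomaly means: much smaller

The absolute values depend entirely on the invented inputs; it is the ratio of the two spreads that carries the argument.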
Averaging
Once the anomalies are calculated, they have to be spatially averaged. This is a classic problem of numerical integration, usually solved by forming some approximating function and integrating that. Grid methods form a function that is constant on each cell, equal to the average of the stations in the cell. The integral is the sum of products of each cell area by that value. But then there is the problem of cells without data. HADCRUT, for example, just leaves them out, which sounds like a conservative thing to do. But it isn’t good. It has the effect of assigning to each empty cell the global average of cells with data, and sometimes that is clearly wrong, as when such a cell is surrounded by other cells in a different range. This was the basis of the improvement by Cowtan and Way, in which they used estimates derived from kriging. In fact any method that produces an estimate consistent with nearby values has to be better than using a global average.
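To see in miniature why leaving the empty cells out is the same as filling them with the mean of the reporting cells, here is a toy R sketch; the five equal-area cells and their values are invented purely for illustration, and this is not any index producer’s actual code.

cell <- c(1.2, 0.8, NA, NA, 0.4)   # anomalies in five equal-area cells; NA = no data
area <- rep(1, 5)
has  <- !is.na(cell)

# Option 1: drop the empty cells and renormalise by the remaining area
avg_drop <- sum(area[has] * cell[has]) / sum(area[has])

# Option 2: fill the empty cells with the average of the cells that do have data
avg_fill <- sum(area * ifelse(has, cell, avg_drop)) / sum(area)

c(avg_drop, avg_fill)   # identical: omitting a cell is implicit infilling with the global mean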
There are other and better ways. In finite elements a standard approach would be to create a mesh with nodes at the stations, and use shape functions (probably piecewise linear). That is my preferred method. Clive Best, who has written articles at WUWT, is another enthusiast. Another method I use is a kind of Fourier analysis, fitting spherical harmonics. These, and my own variant of infilled grid, all give results in close agreement with each other; simple gridding is not as close, although overall that method often tracks NOAA and HADCRUT quite closely.
Unbiased anomaly formation
I described the benefits of using anomalies in terms of reduction of sampling error, which just about any method will reflect. But care is needed to avoid biasing the trend. Just using the average over the period of each station’s history is not good enough, as I showed here. I used the station reporting history of each GHCN station, but imagined that each returned the same, regularly rising (1°C/century) temperature. Since the series is identical for every station, just averaging the absolute temperatures would be exactly right. But if you use anomalies, you get a lower trend, about 0.52°C/century. It is this kind of bias that causes the majors to use a fixed time base, like 1951-1980 (GISS). That does fix the problem, but then there is the problem of stations with not enough data in that period. There are ways around that, but it is pesky, and HADCRUT just excludes such stations, which is a loss.
I showed the proper remedy with that example. If you calculate the (incorrect) global average, subtract it from the station data (adding it back at the end), and recompute the anomalies, you get a result with a smaller error. That is because the basic cause of the error is the global trend bleeding into the anomalies, and if you remove it, that effect is reduced. If you iterate, then within six or so steps the anomaly is back close to the exactly correct value. That is a roundabout way of solving this artificial problem, but it works for the real one too.
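Here is a self-contained R sketch of that iteration, using a caricature of the test above: every station carries the same 1°C/century series, but half the stations report only the first 70 years and half only the last 70. The reporting pattern is invented for illustration, and the loop is meant only to show the structure of the fix, not TempLS itself.

# Identical 1 °C/century series at every station, with staggered reporting windows
ny <- 100; ns <- 500
trueG <- seq(0, 1, length.out = ny)
temp  <- matrix(rep(trueG, each = ns), ns, ny)
present <- rbind(matrix(rep(1:ny <= 70, each = ns/2), nrow = ns/2),   # stations reporting years 1-70
                 matrix(rep(1:ny >= 31, each = ns/2), nrow = ns/2))   # stations reporting years 31-100
temp[!present] <- NA

G <- rep(0, ny)                                  # running estimate of the global anomaly
for (k in 1:10) {
  resid  <- sweep(temp, 2, G)                    # remove the current global estimate
  offset <- rowMeans(resid, na.rm = TRUE)        # each station's own "normal"
  anom   <- sweep(resid, 1, offset)              # station anomalies
  G      <- G + colMeans(anom, na.rm = TRUE)     # update the global series
}
unname(coef(lm(G ~ seq_len(ny)))[2]) * 100       # trend per century; converges to about 1

On the first pass (G = 0) the trend comes out low, as described; after a handful of passes it settles at the imposed value.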
It is equivalent to least squares fitting, which was discussed eight years ago by Tamino, and followed up by Romanm. They proposed it just for single cells, but it seemed to me the way to go with the whole average, as I described here. It can be seen as fitting a statistical model
T(S,m,y) = G(y) + L(S,m) + ε(S,m,y)
where T is the temperature, S,m,y indicate dependence on station, month and year, so G is the global anomaly, L the station offsets, and ε the random remainder, corresponding to the residuals. Later I allowed G to vary monthly as well. This scheme was later used by BEST.
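For a dataset small enough to fit in one go, the same model can be handed directly to R’s lm(); the toy numbers below are invented, and the real system is far too large and sparse for this, which is why TempLS solves it iteratively. The point is only that the yearly coefficients play the role of G(y).

# Tiny synthetic check of temp = G(year) + L(station, month) + error
set.seed(2)
d <- expand.grid(station = factor(1:5), month = factor(1:12), year = 1:20)
offs    <- rnorm(60, sd = 5)                     # invented station-month offsets L(S,m)
d$stmon <- interaction(d$station, d$month)       # one offset per station-month pair
d$temp  <- 0.01 * d$year + offs[as.integer(d$stmon)] + rnorm(nrow(d), sd = 0.1)

fit <- lm(temp ~ 0 + factor(year) + stmon, data = d)
G   <- coef(fit)[grep("^factor\\(year\\)", names(coef(fit)))]   # G(y), determined up to a constant
coef(lm(G ~ seq_along(G)))[2]                    # recovers the imposed trend of 0.01 per year

Only differences in G matter, so the arbitrary constant shared between G and the offsets is harmless.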
TempLS
So those are the ingredients of the program TempLS (details summarized here), which I have run almost every night since then, when GHCN Monthly comes out with an update. I typically post on about the 10th of the month for the previous month’s results (October 2018 is here, it was warm). But I keep a running report here, starting about the 3rd, when the ERSST results come in. When GISS comes out, usually about the 17th, I post a comparison. I make a map using a spherical harmonics fit, with the same levels and colors as GISS. Here is the map for October:
The comparison with GISS for September is here. I also keep a more detailed updated Google Earth-style map of monthly anomalies here.
Clive Best is now doing a regular similar analysis, using CRUTEM3 and HADSST3 instead of my GHCN and ERSST V5. We get very similar results. The following plot shows TempLS along with other measures over the last four years, set to a common anomaly base of 1981-2010. You can see that the satellite measures tend to be outliers (UAH below, RSS above, but less so). The surface measures, including TempLS, are pretty close. You can check other measures and time intervals here.
The R code for TempLS is set out and described in detail in three posts ending here. There is an overview here. You can get links to past monthly reports from the index here; the lilac button TempLS Monthly will bring it up. The next button shows the GISS comparisons that follow.
Well Nick, your graph of temp anomalies clearly highlights the biggest problem faced by the observation data climate science community.
And that monumental problem is: How, in a world of declining global temperatures, can they keep adjusting the observations in order to keep the anomalies close to the models?
But I’m sure NCEI, GISS, RSS, and CRU (with appropriate coordination) are up to the task, as Trillions of dollars are riding on that preconceived outcome.
“How, in a world of declining global temperatures, can they keep adjusting the observations in order to keep the anomalies close to the models?”
My main point here is that you can calculate the average yourself, from unadjusted data, and it makes very little difference.
Don’t ever underestimate the ability of Gavin and Co. to come up with “new and improved” CAGW algore-rithms when the Cause is on the line.
By now it’s clear those CAGW cargo planes ain’t gonna land. But that won’t stop the climate cultists from making their catastrophe claims.
You can choose to be a part of that pseudoscience, or you can choose to not be.
Your call.
The most recent version of GHCN adjustments…….
REDUCE the warming
When UHI adds more than they adjust down…..yes, they can claim adjustments reduce the warming
THE go-to shell game.
Nick,
Man, it ain’t the math. It is the baseline logic.
1)It is the complete lack of actual data. There are too few data collection platforms.
2)Furthermore, until they adopt ISO or GMP or any equivalent QBD process in an open and transparent way I will always be skeptical and cynical of any and all of it. I have actually been to the NOAA station/offices in Seattle as an Instrumentation Engineer. We were asked to modify a TOC analyzer. No QBD protocols were followed. I have been to the Hive mind. I actually know what happens there.
3)Even if we fixed 1 and 2 – there is still the Signal to Noise problem.
4)Even if we fixed 1, 2 and 3 – the average and anomaly temperatures mean nothing in real terms, and it is beyond hubris to suggest otherwise.
5)A slightly warmer planet is better for almost all life.
6)We as humans adapt quite easily to most temperature conditions in real time; I have done a lot of traveling, hiking, camping. I have woken up to 15F in the morning, been in 105F by midday, and back to 15F by 2200hrs, yet I am still alive. We go from a 70F house to <30F outside during the winter months.
7)Show me any place on the Planet where climate has actually and completely changed over the past 150 years.
8)Most of this is now driven by politically motivated activists who have an aggressive agenda, and some of these folks have no real care for the Planet. This is a tool, a lever. You are a smart and talented human; do not fall victim to the “Wizard’s First Rule”: the hardest lies to overcome are the ones we tell to our inner selves.
Great riposte, JE.
Nailed it. The true deniers are those who deny that politics drives their hypothesis and motivates them to manipulate their data by financially rewarding them for creating our hobgoblins.
JEHills
A succinct, logical rebuttal of the anomalous assertions made by the Climate Change Catastrophists. Thank You!
JEHill
It is not just the baseline logic, it is also the math.
“The result for temperature is an average sample mean of 12.53°C and a standard deviation of those 1000 means of 0.13°C.”
This 12.53°C is simply an impossible level of precision. The readings are not accurate enough to be able to claim 4 significant digits. The readings are also not precise enough to report that number. (This analysis is for people who know the difference between accuracy and precision).
Where are the uncertainties? There is no way they can get an anomaly figure to a larger number of significant digits than the raw data. They can estimate where the centre of the uncertainty is, but it is “unscientific” in the true sense to claim to know what the anomaly is with a precision of 0.01 C.
Uncertainties don’t reduce, they propagate.
jehill, great summation of the situation. outside of the odd mistake i generally agree that what climate science does with the numbers is correct, it’s just that the numbers they use are not far off meaningless. ocean heat content, we haven’t a clue. energy budget at toa, we don’t know it exactly. atmospheric water content, probably one of the most important numbers in this debate, we don’t know to the required degree of accuracy. a number for a global average temperature at any point in time is a near notional concept.
Sorry. /blockquote failed.
Nailed it.
JEHill: You make some sensible points. Here are some other things to think about, which should be relatively non-controversial.
In the central continental US, mean annual temperature rises about 1 degF for every 100 miles you move south, or about 150 miles for 1 degC. The rise of nearly 1 degC in the last 50 years is equivalent to going about 150 miles south. When you say that nothing has changed much in 150 years, you are simply saying that the climate 150 miles further south is little different. However, if you add another 500 miles south to this change, things will be different. I won’t argue about whether people prefer New England or Florida. That is your call. However, I believe it is sensible to think in terms of moving 500 miles south (if the IPCC is right).
Move the climate from 500 miles south into the corn belt in Iowa (which is only about 100 miles north to south). Adaptation will certainly be required. Corn is grown further south. Winter wheat can be substituted for summer wheat.
Every 1 km higher elevation is worth about -6.5 degC, so 3 degC corresponds to roughly 1,500 feet of elevation. Move the rain/snow line up about 1,500 feet and think about what happens to mountain snow packs and run-off. Especially in California.
It was about 6 degC colder during the last ice age and sea level was about 120 m lower. I’m not going to say sea level is going to rise 20 meters for every degC it warms, and melting itself is very slow. (The increased rate of ice flow is the big danger.) The end of the LIA was the cause of the SLR detected when tide gauges first began to be widespread (1880-1900) and was probably the main driving force for SLR through 1970. So we probably haven’t really experienced most of the consequences of warming since 1970. A few meters of SLR – which appears inevitable, but not imminent, to me – are going to make a big difference in many places in the distant future.
FWIW, I’m not trying to tell you what to think – just provide some pointers about how you might think more clearly. Emissions reductions are also challenging. Better technology could help a great deal.
Nick Stokes – November 14, 2018 at 5:45 pm
Nick Stokes, now the above is “fine and dandy” …… for you and all the others that have expended literally BILLION$ …… collecting, composing, studying, adjusting, calculating, averaging, debating and/or publishing the aforesaid near-surface air temperature data, …… but what have you actually accomplished by doing so?
Have you et al, provided any redeeming social values, improved the socio-economic status of any populations, enhanced the health and welfare of poor individuals or advanced the science of the natural world even a smidgen?
If not, then the only use for that BILLION$ of DOLLARS worth of recorded temperatures, …. plus a $5 bill, …… would be to purchase a cup of coffee at most any diner or restaurant.
personal attack?
nice way to treat a guest poster
And yet, these would be the same set of questions a lawyer could use in opening or closing statements to the jury in a civil lawsuit alleging any type of legal negligence or malfeasance.
I’ve seen you and Nick get pretty brutal towards guest posters that you disagree with.
Why the double standard?
Personal attack ? … I’m not seeing it.
It’s a fair question from Samuel CC. Seriously, I would ask the same question. What good does developing a statistical “science” around a fundamentally meaningless statistic accomplish?
While these kinds of attacks occur frequently, it is specious to deny that they are unjustified. So please tone it down if possible. Nick did not write one word about any mitigation policy.
Mark, we know why the double standard. He is a lying troglodyte. You cannot expect anyone on the left to be any of the following:
1. Honest
2. Consistent
3. Moral
Their worldview doesn’t allow it. Mosh is their poster child of such childish tyranny.
OMG Mosher! Samuel really made a withering attack on Stokes there, with his “cup a coffee” smear. So bad that it triggered you, so time to get to your safe-space…
“for you and all the others that have expended literally BILLION$”
‘Others’, Mosher. Not personal.
“HA”, my “cup of coffee” pun was a modified version of what I was told upon graduating High School, which was: …… “Take that Diploma, along with a new shiny dime, down to the Drug Store and you will have enough to buy yourself a bottle of soda pop.”
And there t’wernt no PERSONAL ATTACK, ….. because I specifically addressed “for you and all the others ”
And secondly I queried …… “Have you et al,”…….
If there hasn’t been any scientific facts or evidence presented that supports the CAGW meme, ….. AND THERE HASN‘T BEEN, …. then it’s a literal FACT that BILLION$ have been and are being wasted solely for the feeding and care of tens-of-thousands of government “troughfeeders”. And I specifically stated government “troughfeeders” ….. simply because there is very little to none, ….. non-government monies being expended on “climate science”.
From universities to government agencies, taxpayers are being extorted to pay the costs.
It’s worse than we thought…
These were the bad projects. As you might see the bottom of the list was climate change. This offends a lot of people, and that’s probably one of the things where people will say I shouldn’t come back, either. And I’d like to talk about that, because that’s really curious. Why is it it came up? And I’ll actually also try to get back to this because it’s probably one of the things that we’ll disagree with on the list that you wrote down.
The reason why they came up with saying that Kyoto — or doing something more than Kyoto — is a bad deal is simply because it’s very inefficient. It’s not saying that global warming is not happening. It’s not saying that it’s not a big problem. But it’s saying that what we can do about it is very little, at a very high cost. What they basically show us, the average of all macroeconomic models, is that Kyoto, if everyone agreed, would cost about 150 billion dollars a year. That’s a substantial amount of money. That’s two to three times the global development aid that we give the Third World every year. Yet it would do very little good. All models show it will postpone warming for about six years in 2100. So the guy in Bangladesh who gets a flood in 2100 can wait until 2106. Which is a little good, but not very much good. So the idea here really is to say, well, we’ve spent a lot of money doing a little good.
And just to give you a sense of reference, the U.N. actually estimate that for half that amount, for about 75 billion dollars a year, we could solve all major basic problems in the world. We could give clean drinking water, sanitation, basic healthcare and education to every single human being on the planet. So we have to ask ourselves, do we want to spend twice the amount on doing very little good? Or half the amount on doing an amazing amount of good? And that is really why it becomes a bad project. It’s not to say that if we had all the money in the world, we wouldn’t want to do it. But it’s to say, when we don’t, it’s just simply not our first priority.
http://www.ted.com/talks/bjorn_lomborg_sets_global_priorities/transcript?language=en
Why is a flat average the proper baseline?
If the globe is naturally warming, so what?
If urban areas are heating up, so what?
The goal should be to discern unnatural CO2 based warming from all other natural and manmade (UHI, land use, etc) sources of warming.
Since the starting point for the thermometer record is the little ice age, and the starting point for the satellite record was the cold late 70s, it is reasonable to assume that the natural trend in both data sets should be up, not a flat line average from 1950 or any other baseline period you may prefer.
Is +0.33 unexpectedly warm if the 1950 – 1980 trendline is extended into the future instead of using the flatline average?
What about if you extended the trendline from all 30 year periods from 1900 to 1950 (pre-CO2 era)? Would the 2017 temperature be unexpectedly warm, or unexpectedly cool on average?
Trying not to be a pedant here Nick, but the data, Tavg, is taken from the average of the high and low readings for a 24 hr period. It is in no way a temperature. At best it is a very crude estimate of a temperature.
The WMO and its predecessors established a 30 year anomaly period as a convenience. If you want to get a number that actually has some physical meaning, use K as the base. 0 K is the same everywhere and every time. That eliminates half the error in any temperature statistic since there is no error in generating the base temperature.
Philo,
Tavg is not an average in the sense that it has an associated frequency distribution, PDF, or variance. It is a mid-range calculation that is arrived at arithmetically by the same process as a mean or median of only two values. But, an ‘average’ of only two numbers is a poor metric for statistical characterization of the sample. Calling it an “average” makes it appear to be more robust and useful than it actually is.
What’s the problem with post hoc adjustments to what they know must be happening?
http://www.climate4you.com/images/MSU RSS GlobalMonthlyTempSince1979 With37monthRunningAverage With201505Reference.gif
The graph shows 2015 to 2018.
Ooops, my bad. Brain fart.
Thank you Nick.
Still don’t understand your fierce opposition to sensible skepticism.
Do appreciate the work you put in. Any comment on how TempLS and UAH can be so alike at times and so different at others?
Also Mosher has commented in the past on the 100,000 plus stations used for estimation but you cite less than 5000.
“you cite less than 5000”
Yes. I think Mosh also says that you can use any reasonable subset and get the same answer. I too think 5000 is more than what is needed. I did a study here of how much it hurts if you use smaller subsets. Getting down to 500 is still pretty good, and down to 60 is still meaningful.
There is a crunch question coming, because GHCN V4 is now out, and has a lot more stations than V3. Should I switch? It will be disruptive, and I don’t think it will really make a difference. I’ll probably change when the majors do.
I have often suggested that one needs only a small sample.
But what is fundamental is that like is compared to like. This means that the stations in the sample (say 500) must have undergone no significant change in local environmental site conditions (eg., no encroachment from urbanisation, no position change of the measuring station etc), and then each station in the sample must be retrofitted so that it is as near identical as possible to its own individual historic past, ie., same type of enclosure, painted with the same type of paint, fitted with the same type of LIG thermometer, the thermometer calibrated as per the system applied in that country in the past. Then, once properly retrofitted, modern observations would be taken using the same system and procedures as used by the individual station in the past, eg., using the same TOB as was employed by the station in the past.
If that is done one can obtain modern day RAW data that can be directly compared to the station’s own past historic RAW data without the need for any adjustments whatsoever. Just simply a direct comparison on a station by station basis with that particular station’s own historic record.
There is no need to make any hemispherical or global construct. Just simply a list of the station’s own record and to what extent it differs from the station’s own historic highs of the 1930s/1940s.
We would quickly know whether there had been any significant warming, and the order of that warming. Comparing each station with its own historic record of the 1930s/1940s would quickly tell us whether CO2 may be a significant factor, since over 95% of all manmade CO2 emissions has taken place since the 1930s.
If this was a proper science, the first step would have been to set up a system of quality control getting good quality data that needs no adjustments whatsoever. It is easy to achieve this, and one suspects that the reason why this has not been done is that the ‘scientists’ do not want to work with cream but rather with crud, so that they are able to make whatever adjustments are necessary to fit their meme.
“I think Mosh also says that you can use any reasonable subset and get the same answer.”
That’s because you and he are averaging. Averaging disparate intensive properties is a Bozo no no.
wrong.
Not averaging.
and Essex was wrong
I’m running v4 with your code, about 27K stations.
I love your method, much faster than the 2 weeks our code takes.
if not properly balanced to zero latitude [and apparently also to a certain altitude] your 27 K stations will still give you a completely wrong result.
wrong.
provably wrong.
How do I know? Berkeley does what you suggest.
Nick does not.
The answer is the same.
Why?
Anomalies
Nick S
How many extra significant digits can you add to the average if you use 60, or 500, or 5000 readings? I for one would like to understand how to get an average that is 50 or 200 times more precise than the measurements that went into creating it. Surely that deserves a Nobel Prize?
There is a “clever bit” in the article that mentions taking an average and subtracting some baseline number, which in terms of calculating the propagation of uncertainties, makes it appear to be something precise (meaning, no additional uncertainty because it is a fixed value). But that subtracted number is actually an average of other readings and has an uncertainty which is not reported, and thus not propagated, which it should be.
Do you agree that it is not possible to get from measurements that are ±0.5°C an average that is ±0.005°C ?
If you agree (because that is how measurements and calculations work) then how can anyone support the notion that global temperature anomaly is known to within 0.01°C ?
Crispin,
“How many extra significant digits can you add to the average…”
My view on significant digits is that I am reporting the result of a calculation, and I should do so to a precision that covers whatever use people might want to make of it. So I usually give three, even though two would usually do. I think a guide to the true precision is probably in the comparison graph I showed (over 4 years). The spread is about 0.1°C. I think I do a bit better than that indicates, but it is of that order.
“Do you agree that it is not possible to get from measurements that are ±0.5°C an average that is ±0.005°C ?”
I think it is possible to damp measurement error to such an extent with many data points. The main cause of error is uncertainty in the temperature in the areas not sampled, which is not related to measurement accuracy.
In other words, “I won’t answer your question”, which is a perfectly valid answer, as long as the listener gets it.
Mosher is far less honest/competent. He merely states things like “Essex is wrong”.
Back when I was studying actual Science in college, if my answer had more significant digits than the data could support, I would get points taken off, even if all my calculations were 100% correct.
Measuring many different places with many different instruments does not improve accuracy. No matter how much your pay masters want it to.
MarkW says ” Measuring many different places with many different instruments does not improve accuracy.”
…
This is false. When you measure GAST, you are doing statistical sampling. Because you are estimating a population mean from a sample, the standard error is inversely proportional to the square root of the number of observations. So, to improve the accuracy of your estimator of GAST, you increase the number of places you measure.
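As a purely statistical illustration of that 1/sqrt(n) scaling (the population below is invented and has nothing to do with any particular temperature product), a quick R check:

x <- rnorm(1e5, mean = 15, sd = 5)            # an invented population
sd(replicate(2000, mean(sample(x, 100))))     # n = 100: standard error near 5/sqrt(100) = 0.5
sd(replicate(2000, mean(sample(x, 400))))     # n = 400: roughly half that, near 0.25

This only addresses random sampling error; it says nothing about systematic error, which is a separate question.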
Dave, no amount of temperature measurements on the equator will improve the accuracy or precision of your measurement of the temperature at the South Pole.
” the accuracy or precision of your measurement of the temperature at the South Pole”
No-one said it would. It improves your estimate of a population mean using a sample mean.
Depends on how much of the error is systematic, does it not? Stochastic noise in measurements does tend to decrease with additional sampling.
Nick, there would be absolutely no criticism about the graphs produced by you , or by anyone else in the climate change industry, IF your industry was not using those graphs to push for government policies costing billions of $$.
“by anyone else in the climate change industry”
I’m not in any climate change industry. I’m just a retired hobbyist. I use the graphs to see what is going on. Other people can work out what to do about it.
As I am not a WUWTer from the start of this blog, would you care to share your CV with us unwashed masses? Being a “Retired Hobbyist” does not reinforce any technical capability to post and defend the science.
I’m curious, do you ask for CVs for all who post? Or only those whose conclusions are that AGW is real?
Content free challenges are not a winning position.
Ah! Ray Boorman just made an extremely important and telling comment!
The graphs are criticized not because of their scientific merit or factuality, they are criticized because of their impact on policy. Finally someone is honest about this!
Kristi:
The unjustified leap of “faith” made by alarmists is that, if the temperature is going up, it must be the fault of humanity, and somehow by spending trillions and trillions of dollars to reduce CO2 we can make the temperature go down. The global average temperature IS going up. So what? It has gone up and down many times in the past without human assistance. The spot I am sitting in right now was covered by 3000 to 5000 m of ice about 20 kya, so between 20 kya and today the temperature went UP and a lot of ice melted. That was a good thing, not an environmental catastrophe. We can go back to 3.5 Mya when there were camels and beavers and fish on Ellesmere Island in the Canadian Arctic (https://www.history.com/news/giant-ancient-camel-roamed-the-arctic), so the temperature obviously went DOWN between 3.5 Mya and 20 kya, without any help from the burning of fossil fuels. (There were lots of temperature fluctuations in between; I’m just picking two points in Earth’s history.) Arctic temperatures were much warmer than today and there was no permanent Arctic ice cap. The argument that humans are solely responsible for increasing temperatures therefore requires the absurd assumption that natural climate change stopped some time around 1850 and humans took control of the climate.
Bjorn Lomborg, who is not who you would call a “climate change denier,” suggests that living up to Paris commitments will reduce the temperature by 0.05 degrees C by 2100 (https://www.lomborg.com/). He also suggests that there are far better ways to help humanity than by spending trillions to reduce CO2. I agree.
They’re criticized here because everyone in the field ignores instrumental resolution and systematic measurement error. So does Nick Stokes.
Nick further supposes that temperature sensors are not only perfectly accurate, but also have infinite resolution. See the series of exchanges starting at April 20, 2016 at 9:12 pm in the linked post.
None of the anomalies or temperatures plotted here reflect any awareness of physical error or instrumental resolution, nor display any valid physical uncertainties.
Systematic error from uncontrolled environmental variables is not constant, is not normally distributed, and does not subtract away.
An instrumental resolution limit means the instrument is not sensitive to magnitude changes below that limit. Data are absent. Resolution limits are a hard knowledge stop.
In calculating an anomaly, the uncertainty goes as the root sum square of the systematic error in the temperature and the uncertainty in the normal.
That means any anomaly temperature necessarily is more uncertain than either the originating temperature or the reference normal.
But you’ll never see standard error analysis in any published anomaly series, and certainly see none here.
And a representative lower limit of systematic error in the global averaged surface air temperature record is ±0.5 C (870 kB pdf). Those error bars alone would go right off the page of Nick’s anomaly series graphic.
The same diagnosis can be applied to the air temperature folks as Steve McIntyre observed with respect to the proxy air temperature people. Paraphrasing, they’re so incompetent that anyone of standard competence can show their work to be nonsense.
Pat Frank
Pretty sure HadCRUT publish error margins with every monthly update. Quite a wide range of them too, as I recall. Unfortunately the site seems to be down at the minute.
NOAA also publish error margins with each monthly global climate report. For instance, in September 2018 the global temperature anomaly is estimated by NOAA as +0.78 ± 0.15. https://www.ncdc.noaa.gov/sotc/global/201809
Systematic measurement error is never included, DWR54. They assume without justification that it is subject to the Central Limit Theorem, and then just ignore it.
“In calculating an anomaly, the uncertainty goes as the root sum square of the systematic error in the temperature and the uncertainty in the normal.”
Not entirely true, and for an interesting reason. It is true only if the temperature and the normal are independent. But within the base period, it is certainly not true, because each temperature is part of the sum making up the normal, so they are correlated. And within that period, the uncertainty goes down, not up. That may happen beyond, too, because of autocorrelation.
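In symbols (a standard variance identity, not specific to any dataset, and treating the base-period years as roughly independent with equal variance): if the anomaly is A = T − N, then Var(A) = Var(T) + Var(N) − 2 Cov(T,N). Root-sum-square addition is the special case Cov(T,N) = 0. Within a 30-year base period, N contains T/30 as one of its terms, so Cov(T,N) ≈ Var(T)/30, giving Var(A) ≈ (29/30) Var(T) – slightly less than the uncertainty of T itself, not more.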
“Pretty sure HadCRUT publish error margins with every monthly update. “
Yes, it does, and they are based on a 100-member ensemble. Since the anomaly base offset is calculated for each ensemble member, and the error quoted is the observed variation due to all changes, the effect of anomaly formation is definitely included.
Individual temperatures entering an anomaly are not part of the normal. Temperature error and normal uncertainty combine in the anomaly as the root-sum-square.
HadCRUT does not include that uncertainty in their calculation, and neither does anyone else.
Were they to do so, the global anomaly trend would be nearly submerged beneath the uncertainty bars. Which, perhaps, explains why they don’t do so.
I disagree, I say the graphs are criticized because they too frequently support a previously established leftist globalist agenda. It’s too perfect a fit; it makes the skeptics’ alarm bells go off bigly.
Case in point: Hide the decline.
Bingo! Plus they push selected facts or implications way too hard.
Case in point-polar bears.
Ocean acidification (absolute unscientific B.S.)
I could add about another 100
The graphs are criticized not because of their scientific merit or factuality, they are criticized because of their impact on policy. Finally someone is honest about this!
There you go again Kristi 🙂
No one disputes the planet is warming. The question is “why.” And pretty much every skeptic I’ve read here both admits the temperature is rising as well as laments the policy implementations that have/will continue to plague we the people.
I, for one, do NOT admit that “the temperature is rising”.
I might admit that a temperature construct is rising, and that this construct is essentially meaningless, and largely a waste of good talent such as that which we see in Nick S.
sycomputing,
I was only echoing what Ray said. I didn’t say everyone did this, but I think policy comes into play in interpretation of the science more than most people are likely to admit (or be consciously aware of). It goes both ways – people who want drastic policy measures implemented will tend to see more dramatic and certain climate change, too.
Actually you did Kristi, you just aren’t honest enough to admit it.
I didn’t say everyone did this
you kinda did. By saying “Finally someone is honest about this” you are implicitly saying up till now *everyone else* was lying.
As usual, Kristi assumes facts not in evidence.
Criticizing graphs because they support a political point does not exclude criticizing those same graphs for other reasons.
Another point: Kristi frequently complains when someone from our side tries to tar everyone on her side because someone on her side said something stupid.
Yet, once again, Kristi is performing the same stunt she condemns in others.
Markw, I think you are playing the man instead of the ball.
My comment about temperature graphs is aimed at the fact that the scientifically interesting information contained in them, showing variations in our climate, is being seriously mis-used by activists for political purposes. Even worse, those political purposes come at huge economic cost to the small group of democracies based on the Western/European tradition which has developed over the last 2000 years.
Because this activism has gained so much traction, which is another story, vast amounts of taxpayers’ money is thrown at the issue. The money results in ever larger numbers of people jumping on the bandwagon to get their hands on it. Naturally, the reported results often reinforce the arguments of the activists, perpetuating the cycle.
The data on which these temperature ‘anomalies’ are based is so diverse and rough, so fragmented and spotty that to quote the accuracy of the result other than in whole degrees is a systematic misrepresentation.
WELL STATED, Nicholas.
Any pretence of accuracy with the huge disarray of erratic UNKNOWN data quality is a joke.
And its all TOO EASY to fudge to match an agenda.
I can’t find the original WUWT article by Anthony where he lambasted infilling and pointed out examples where a location was infilled by data from a station across a range of mountains. The figure, 1200 (km or miles?) is mentioned in a number of comments over the years.
In digital signal processing it isn’t uncommon to employ a process similar to infilling. The thing is that a simple signal in a well behaved, linear, bandwidth limited channel is miles away from the conditions you have with temperatures on the surface of the planet.
What am I saying? Just because something sounds reasonable, that doesn’t mean it actually is. The difference between the satellite (and balloon) datasets and those derived from surface station data is disturbing.
Infilling seems like an overly simplistic approximation when you consider the factors that can change surface temperature between a temperature station and an infilled location. Factors could include elevation differences, latitude differences, wind speed and direction, topographical variations, average cloud cover, differences in precipitation, vegetation, etc. etc. (not to mention UHI bias).
As just one example, elevation differences on land will result in about 3.5 F difference w/ 1000 ft elevation change. This is for an adiabatic system where no energy is added or removed and the only difference is the absolute air pressure (or altitude).
If the many factors are actually accounted for in the data infilling calculation, this process quickly becomes very complicated – mathematical gymnastics so to say.
Farmer Ch E retired,
In a word, “microclimate.”
I know about that. Grew sweet cherries on the eastern shore of Flathead Lake in Montana for several years. It was made possible by the microclimate created by the lake and adjacent mountains.
The grapes grown in the Okanagan Valley in BC differ between the east and west sides of Okanagan Lake because of microclimatic differences. The lake is a few kilometres wide.
you can infill quite well
The temperature at any location is a function of latitude, altitude and season.
willis proved this.
here on wuwt
you should read it
Don’t tell the cherries and grapes that all that is needed is latitude and altitude. They might argue with you.
In January of this year, daily highs in Calgary ranged from -23 to +10. In January 2012, to pick another January at random, they ranged from -32 to +15. On January 30, 1989, the high was +12; on January 31, it was -27, a 39 degree drop in the daily high in 24 hours. From 1885 through 2017, the average difference between the monthly high and the monthly low across all twelve months of the year was 34 C; since 1885 there have been 30 months in which the difference exceeded 50 C. The AVERAGE daily high temperature for the month of January has ranged from -20 to +6. It is not uncommon for winter temperatures in this part of the world to differ by 30 C within 100 km as Chinook winds warm part of the province. So, what is the “normal” temperature for Calgary in January? Temperature infill does not work, at least in this part of the world.
Randy Stubbings says for Calgary –
“The AVERAGE daily high temperature for the month of January has ranged from -20 to +6.”
Here is the solution to your cold temperatures – we will infill the Calgary temperature using temperature stations in the Okanagan Valley (~260 miles WSW) and the Flathead Valley (~230 miles SSW) and you will instantly experience much milder winters. The mathematics is “beautiful” /s
Some questions for those familiar w/ data infilling: Are land locations used to infill locations over arctic seas and arctic ice? If so, is there any correction for the change from a land environment to a maritime environment? How is the variation between air temperature (land readings) and water temperature (ocean readings) correlated? Seems like a challenge.
“Seems like a challenge.”
Yes, it is. It is a significant component of the uncertainty.
I would have thought you also need to add orientation to that list of influences on temperature.
Here in Oz, a locale in inland Canberra on the south side of a mountain blocking the sun produces freezing temperatures in the southern winter.
Similarly, the temperatures of south-facing locales in southern Australia are affected by the prevailing cold south-westerlies.
Similarly, proximity to the ocean can affect temperatures at a given locality dependent on whether the wind is from the ocean or from the inland.
What about this, Bob?
https://wattsupwiththat.com/2018/04/18/an-interesting-plot-twist-call-it-an-anomaly/
> Anomalies are made by subtracting some expected value from the individual station readings…
That there invalidates the data. The very worst error in all of science is expectations bias.
Feel free to explain that you didn’t mean it this way, but don’t deny that expectation of results is a serious issue.
“Feel free to explain you didn’t mean it this way”
The point is that you calculate the difference from expected. This is close to the classic definition of information, and also corresponds to what we want to know in everyday life. If I tell you the average in Athens for October was 19.45°C, what can you make of that? Nothing much, unless you know what is normal there for October (19.6).
No. I said something you cannot deny and instead you make up some excuse that still uses the word “expected.”
Expectations color results. It is disappointing that you cannot bring yourself to acknowledge this maxim of science.
The best way to use anomalies is probably the way that UAH does it. They take 75% of the whole timeline and average that as the base. No expectation involved. The difference between UAH satellite data (which is validated against balloon data) and the other 4 datasets is shocking. The climate establishment has no answer for this but Tony Heller, Paul Homewood, Anthony Watts …………etc have an answer, and it is not one you want to hear Nick Stokes.
UAH use a 30 year anomaly base period. The fact that it’s 75% of their time series is incidental. 30 years is considered to be a period of ‘climatology’ according to the WMO. A period over which natural influences on climate such as ENSO, other longer ocean cycles such as AMO and PDO, etc, and solar cycles tend to even out; where there are roughly equal cold and warm phases. GISS and HadCRUT4 also use 30 year periods as their anomaly base for this reason.
The very starting point of Nick’s (and all the others’) method has a problem.
Taking the sum of the highest and lowest readings and dividing by 2 (Tav) in no way represents the Average Temperature of a day, week, Month or Year.
This is simple to show.
Take 2 days one is sunny in the morning and reaches 20 C and is then cloudy for the rest of the day, thus has a cool evening.
The next day is sunny all day, it still only reaches 20 C but the whole of the evening and most of the night are warmer than the first day. Both days end up with the same Cold temperature in the morning.
The second day’s actual “average” Temperature is higher than the first day’s.
Similarly one day could have a min of 0 and a max of 20, another day -5 and 25, they are not the same conditions.
The calculations should only be carried out on the Highest and Lowest values and presented separately; using the “average” loses too much data and can imply a uniformity that is not in the data.
As for combining SST with Land Temperatures, that is really ridiculous.
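A small R sketch puts numbers on the point above about two days sharing the same max and min; the hourly profiles are invented purely for illustration.

day1 <- c(rep(10, 18), 12, 16, 20, 16, 12, 10)   # touches 20 C only briefly
day2 <- c(rep(10, 6), rep(20, 18))               # sits at 20 C for most of the day
(max(day1) + min(day1)) / 2                      # Tavg = 15 for the first day
(max(day2) + min(day2)) / 2                      # Tavg = 15 for the second day too
mean(day1); mean(day2)                           # true hourly means: about 11.1 and 17.5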
Alan: 75% of the whole satellite period is 30 years, which is the same length baseline period as everyone else uses.
HadCRUT trend since 1/1979: 1.70 +/- 0.24 K/century
UAH6 trend since 1/1979: 1.28 +/- 0.38 K/century (75% of HadCRUT)
RSS TLT V4 since 1/1979: 1.96 +/- 0.38 K/century (consistent with HadCRUT)
Not as much difference as there was before problems posed by satellite drift were recognized.
The UAH satellite data was not validated by radiosonde data. Radiosonde data was used to help make corrections for orbital drift. Any error in the radiosonde trend has probably been transmitted into the UAH trend. They are no longer two independent assessments.
Tony Heller relies on simple averaging of real temperatures, and the average latitude of stations changes with time. AFAIK, our host has never published anything showing that station quality influences station trend.
@Rob_Dawg
You have it backwards: mathematically, the results always determine the expected value.
https://en.wikipedia.org/wiki/Expected_value
The mathematical notion of expected value is very formal and rigorously defined. It is not the same as the informal term ‘expectation’ that we use in everyday language, which leads to unfortunate ambiguity.
So, you could have an (informal) expectation on what the (formal) expectation might be, but they won’t always agree. (So it is best not to mix the two contrary definitions like this)
Same situation applies to the term ‘anomaly’. In meteorology it simply means the difference between an actual value and an expected value (formal). In everyday English it usually means a “problematic” difference. But in meteorology, the intent is not to highlight some kind of “problem”, but merely state in a formal way that a value departs from some expected value.
For example, in meteorology, anomalies of zero are routinely reported. That merely states that the actual value was the same as the expected value. But in everyday English it might sound ludicrous: “We have an anomaly!” “What is it?” “Nothing!”
“You have it backwards: mathematically, the results always determine the expected value.”
Unless you’re Mann or Briffa, or any number of climate scientists, and you have to find that one dataset that gives you the results you were looking for.
The article you referenced says “The expected value is also known as the expectation, mathematical expectation, EV, average, mean value, mean, or first moment.”, which contradicts what you said about expectations.
The article also says “The same principle applies to an absolutely continuous random variable, except that an integral of the variable with respect to its probability density replaces the sum”. How do you define temperature? Continuous random variable? Discrete random variable? Better yet, tell us what function you would integrate and what probability density you would use?
@Gorman
“…which contradicts what you said…”
Perhaps I was not clear in stating that there are both _formal_ and _informal_ definitions. I even stated that the formal and informal definitions of ‘expectation’ don’t agree in general. Where is the contradiction?
The function used to compute expected values is also called the ‘expectation operator’ E and is linear. So, E(aX + bY) = aE(X) + bE(Y), which means it is not fussy about the ordering of the inputs. Random variables may be continuous or discrete, in theory. But, in practice, all measurements tend to be discrete rationals.
It doesn’t make much difference how you sum them, as long as you are careful about the weighting coefficients. Temperatures are intensive quantities, so they must be normalized in time and space to compute mean temperature values. For example, if you have a swimming pool at 20C and a glass of water at 30C, you cannot simply compute the average as 25C. You have to weight the pool value 20C by multiplying by the pool capacity in glassfuls.
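To put rough numbers on that (the pool size here is invented purely for illustration): if the pool holds 10,000 glassfuls at 20C and you add one glassful at 30C, the properly weighted mean is (10000 × 20 + 1 × 30) / 10001 ≈ 20.001C – nowhere near 25C.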
A lot, enough to see if the world is warming or cooling
michael
if you can’t do it with one, you can’t do it.
” If I tell you the average in Athens for October was 19.45°C, what can you make of that? Nothing much, unless you know what is normal there for October (19.6).”
I still cannot “make” much of that, because you didn’t specify how “normal” was derived, or the margin of error, or a standard deviation, or anything that describes a set of numbers instead of a single numerical result. This is the problem with climate “scientists” in general (not referring to you, but the professionals getting paid to do this). Some, maybe many, do not understand the limitations of statistics, or how to apply them. They just apply procedures to data sets to get numbers they like.
The same for the computer models. I am an expert on many aspects of computers including programming and numerical methods. I understand what models can and cannot tell us. ANYONE who believes you can run a simulation (i.e. a model) containing millions of calculations and iterations, where many of the processes are guessed at, non-linear, chaotic, and poorly understood, and derive an average temperature over 100 simulated years just does not understand computer modeling. PERIOD.
It does not matter that it is warming… The Earth warms and cools for reasons we still poorly understand. We cannot know if or how much we contribute to any warming. Warming is better than cooling – History teaches us this if we listen. More of the Earth becomes comfortable, habitable, and productive. Trying to stop warming is just nonsense – you would do as much good as trying to stop continental drift.
So, your analysis is interesting. It tells us something about past behavior. It has NO VALUE in predicting future behavior unless it can be conclusively linked to a cause. CO2 might be PART of a cause, maybe – but we already know that there are many other natural causes that explain part or all of the behavior – we know this because History (Geologic in this case) teaches us this. This is the lesson that so many people miss.
Robert: IIRC, the HadCRUT “model” (not the Had AOGCM) is constructed by dropping out a random subset of stations 100 times and seeing how much that changes their results. They also consider other sources of error, including the issue of data homogenization.
According to HadCRUT4, the planet has experienced 0.9 degC of warming since 1950. You are correct that the planet has always been warming and cooling in the Holocene, but warming and cooling between glacials and interglacials is clearly driven by changes in the Earth’s orbit. So how much warming and cooling have we experienced during the Holocene and how does that compare with the last half century? If you want to use ice cores as temperature proxies, recognize that “polar amplification” makes the amplitude of changes in ice core data about twice the change experienced globally. How often did you find changes of 0.9 degC in a half century? How likely is it that 0.9 degC of warming happened to arrive by chance in the same half-century as changes that are unprecedented in the Holocene? And compared with the LIA, the mid-20th century was already a relatively warm period to begin with.
Nick, first, thanks for even trying. BUT … the trouble is in the chain of “trust.” If you followed Jennifer Marohassy’s and Joanne Nova’s discussions of problems with the collection of raw data in Australia, or Anthony’s project evaluating USHCN site quality, then you know that there are issues even with purportedly “raw” data. This is compounded when historical data, especially pre-WWII data, are adjusted downward, not once, but repeatedly. Doing that alone is nearly guaranteed to introduce or even alter the sign of a trend. Your methods are not the issue. The trouble lies in the data and the methods of recording the “raw” data. Harking back to “Climategate”, the Harry_read_me file is a key example. The “raw” data Harry is dealing with is hand-entered from paper records (no typos?). Notionally that first entry is the raw entry, but Harry had to process it before it could be used, and in the actual code there are constants employed on some sections of data with no explanation. There are also documented areas such as Africa, where there is simply NO SURFACE DATA, yet high positive anomalies are imputed to areas which are not even represented in the data; anomalies that are higher than in neighboring areas where there IS data. You cannot calculate those anomalies from the raw data! It does not exist in any form; it was never collected. Even if the neighboring areas were employed to impute estimated surface temperatures, and the anomalies were calculated using those imputed measures, it won’t work. The “mean” against which the anomaly had to be calculated was imputed as well. Such anomalies are pure fiction, yet they are employed in calculating the global anomaly. It is not your methods, but the raw data itself which is troublesome.
Duster,
“This is compounded when historical data, especially pre-WWII data, are adjusted downward, not once, but repeatedly.”
Again, I deal with unadjusted GHCN data. For historic data, it is the same as when it first appeared on CD in about 1992 (except for a few added locations).
“The “raw” data Harry is dealing with is hand-entered from paper records”
Well, yes. Paper records are what we have, pre about 1980. Harry is talking about problems with CRUTEM. I don’t know if his complaint is justified, but it is at a later stage of processing than the data I use.
“There are also documented areas such as Africa, where there is simply NO SURFACE DATA,”
The entire world has no surface data, except where it doesn’t – at a finite number of stations. The error due to spatial sampling can be estimated – I write about that here. Incidentally, the claims about Africa are exaggerated; you can see coverage in detail here.
“Even if the neighboring areas were employed to impute estimated surface temperatures, and the anomalies were calculated using those imputed measures, it won’t work.”
Generally, anomalies are calculated based on each station’s record. So there is no need to impute; if you have a temperature measured at a location, you also have a history there (for GHCN stations at least). Sometimes there is a problem with having data in the base period – my method avoids that.
Nick,
Thank you for the explanation of the use of anomalies.
My difficulty is that, like most of the population, I don’t really understand statistics (though I did have to study the basics at uni). My gut feeling therefore is that the further one gets from plain data by selection and adjustment the less useful the end result. Now statisticians understand the subject and can’t see why it doesn’t make sense to the rest of us. There is a vast comprehension gulf which is not helped by journalists and politicians making lurid headlines out of what might have been quite sober results.
I doubt whether anything can be done about this difficulty.
Susan
One point re Africa: it is an area that doesn’t even pay attention to basics like commercial airline safety, and African airlines flying outside Africa generally have someone on board with a briefcase full of money to pay for fuel, because they have ripped off so many suppliers over the years. To suggest that anywhere in Africa can produce reliable temperature records over decades is a step too far when they can’t manage the basic day-to-day requirements we here in the West take for granted.
And that normal is actually a rolling thirty year average. So the difference between 19.45 and 19.6 is meaningless in the scheme of things.
Yet you still know nothing. A 30-year average doesn’t tell you what the temperature “should” be. Two things come to mind, whether absolute values or anomalies are quoted. What is the error range? If the average uses temperatures that are only accurate to +- 1 degree, then the current temperature may very well be within the error bars. Trying to discern whether the temperature is hotter than normal or colder than normal in this case is not worthwhile. Anomalies have the same problem. They may appear more precise and accurate, yet they are also subject to the accuracy and precision of the measuring devices.
Some of you guys who have never worked in fields that require determinations of accuracy and precision in measurements just don’t ever seem to pick up on the concepts of accuracy and precision and how they can affect the things you do. You all never, and I mean never, discuss these issues when you are giving answers. That is one reason so many people seem to disbelieve you. I have yet to see you give a dissertation on the errors associated with anomalies of temperature readings nor quote how accurate any given stream of data may be. It makes one wonder if you or Mosher have ever sat down and done a rigorous mathematical analysis of the errors associated with the raw data and how it affects your results. If you have, please share it with the group.
“done a rigorous mathematical analysis of the errors associated with the raw data and how it affects your results. If you have, please share it with the group”
Yes. Here is a study of how perturbing actual daily data affects the monthly. Here is a study of its effect on a spatial global average.
“A 30 year average doesn’t tell you what the temperature “should” be.”
We’re not claiming that it is what the temperature should be. We aren’t called Warmists for nothing :). The claim is that it reduces exposure to sampling error.
I am going to ONLY use Dr. Spencer’s data, end of story.
The UAH satellite data which is validated by balloon measurements is the only temperature data that both sides trust.
Alan Tomalty
Hardly, Alan. There is an ongoing academic ‘discussion’ regarding how UAH interprets data from the NOAA-14 satellite, which is warm relative to NOAA-15. UAH regards the NOAA-14 warming as spurious. RSS say we can’t currently know which of these 2 satellites is providing wrong data. Maybe NOAA-15 is spuriously cool? Maybe both are a bit off in each direction? No one knows for sure. So RSS split the difference between data from these two satellites. There’s no doubt that the trend in RSS TLT is closer to the surface data than is UAH TLT (see Nick’s chart above). To suggest that “both sides trust” UAH TLT is simply wrong. There is considerable suspicion about it and the whole question remains very much up for debate.
“The UAH satellite data which is validated by balloon measurements is the only temperature data that both sides trust.”
Peeps on here keep saying that.
Neither is true.
Further to DWR’s comment
There is a large disparity between UAH V6.0 and Radiosonde data (RATPAC A) ….
It’s running colder from the time of the sensor change from NOAA 14 to 15 ….
True. I don’t trust the SATS either.
Salvatore Del Prete
Nothing like a bit of a la carte skepticism.
Nick, I have a problem with anomalies. Your graph shows a high temperature anomaly 0.8-1.1 degrees in March(?) 2016. Does it mean
a) that the global temperature in March 2016 was that much higher than a global average temperature in 1981-2010 , OR
b) that the global temperature in March 2016 was that much higher than a global average March temperature in 1981 – 2010?
I had a problem with Bob Tisdale’s post https://wattsupwiththat.com/2018/11/05/do-doomsters-know-how-much-global-surface-temperatures-cycle-annually/ which shows a global average temperature peaking in July. The Earth is closest to the Sun in January and furthest in July, so why would the global average temperature peak then? I guess that his method might be skewed towards the northern hemisphere. Your result, with a peak in March(?), looks more like it.
George,
On your a/b, it means b. Anomaly should be calculated relative to the best prior estimate for that number.
“why would the global average temperature peak then?”
Anomalies can’t tell you anything about that effect – because each March (or whichever) is relative to previous Marches, all of which are similarly affected.
Thank you Nick, so March 2016 was unusually warm for March but not necessarily the warmest month of 2016.
George,
In most years the warmest month by absolute is August, maybe July. So learning that in 2018 August was the warmest month conveys little information about 2018. But the high anomaly in March 2016 does tell you something, mainly about El Nino.
I believe this is an important question. Could it be to do with a greater number of weather stations in the northern hemisphere?
I suspect it would be because of the greater landmass in the NH. The sea takes longer to heat up, so summer temperatures do not have as much effect on the overall temperatures. That also highlights the problems with using a ‘global temperature’, since such a thing does not really exist, nor does it mean anything if made up.
Once you start it and don’t change the rules midway, it can give some indication of whether the difference you end up with years later is significant or not. However, the time period of natural cyclical climate change is so long that measuring this far into the future becomes an impossible task if you want the result to drive public policy. Even if measured for the next 1000 years, that would not tell us whether the earth is heating up naturally or not. All that can be said is that if we do get a cold spell in the next decade, we can confidently say that even if AGW were true, the natural cycle is so much more important as to negate any worrying about man-made effects. I will start to believe in AGW if all of the ice in the Arctic melts. Alarmists, however, always refuse to state what their bottom-line belief is.
The reason July (or August) is the hottest month in absolute terms globally is because of the land distribution on earth. The northern hemisphere has a much smaller land/ocean ratio than the southern hemisphere (about 1/1.5 NH vrs 1/4 in SH). Land warms more quickly than water, so earth as a whole tends to be warmest during the NH summer.
Wouldn’t that be a much LARGER ratio (.67 vs. .25)?
“But fortunately, anomalies are much more homogeneous.”
Not necessarily. If you examine anomalies for local regions, they’re all over the place. When one place is anomalously cold, another is anomalously hot, even across meaningful intervals of time. Even otherwise similar parts of the globe can have wildly different anomalies, and they can even differ between adjacent micro-climates. When integrated over time, homogeneous anomalies can only result when the sample space is a normal distribution. If you already have a normal distribution of sample sites, why bother with anomalies? This same flaw applies to Hansen/Lebedeff homogenization, which is only valid when homogenizing a normal distribution of sites. Keep in mind that it’s this same distribution of sites that establishes the predicted behavior which is subtracted from the measured data to produce the anomaly.
A big problem with anomalies is that they remove seasonal variability which eliminates an objective perspective about what change means relative to reality and the perception thereof. Furthermore, the differences between hemispheres are so large and they respond in such an independent manner from each other, any kind of averaging across them will misrepresent what’s actually occurring which is only visible when analyzing hemispheres or portions of hemispheres by themselves.
“Once the anomalies are calculated, they have to be spatially averaged.”
Spatially averaging temperature is an issue by itself, even when averaging anomalies. A linear average of temperature is the temperature that would arise if you uniformly combined all the matter from which the temperature arises, assuming that it all has the same heat capacity and the mixing process added no new heat. A linear average of anomalies is even worse, since the baseline temperature goes away, a 1C change from 270K is considered the same as a 1C change at 300K, even though a 1C change at 300K requires 50% more forcing to achieve and maintain.
Neither of these is the average that’s relevant to the physical mechanics of how the climate operates, where W/m^2 are linear to each other and are all that matters relative to the energy balance and the sensitivity; thus local temperatures must be converted into W/m^2 of emissions, which are then linearly averaged and the result converted back into an EQUIVALENT temperature. In fact, the entire analysis should be done in the linear domain of W/m^2 and converted into a change in degrees K only at the very end. W/m^2 are linear to what satellite sensors are measuring directly anyway, and this is the natural way to process weather satellite data.
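For what it is worth, the arithmetic being argued for above can be sketched with made-up numbers; whether an index should be built this way is exactly what is in dispute here, and nothing below reflects how the published indices are actually computed.

```python
# Invented numbers: two regions at very different absolute temperatures.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

temps_k = [270.0, 300.0]

# Plain linear average of temperature.
t_linear = sum(temps_k) / len(temps_k)

# Average the blackbody emissions instead, then convert back to a temperature.
mean_flux = sum(SIGMA * t ** 4 for t in temps_k) / len(temps_k)
t_equivalent = (mean_flux / SIGMA) ** 0.25

print(f"linear mean temperature:     {t_linear:.2f} K")
print(f"flux-equivalent temperature: {t_equivalent:.2f} K")  # a bit warmer
```

Because emission goes as the fourth power of temperature, the flux-equivalent value comes out slightly warmer than the simple linear mean whenever the inputs differ.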
“Not necessarily, if you examine anomalies for local regions, they’re all over the place. When one place is anomalously cold, another is anomalously hot, even across meaningful intervals of time.”
I drive approximately 32 miles one way to/from work, about 14 miles as the cow flies. I’ve seen temperature differences of up to 27 degrees between work and home, in a 45 minute time frame.
If a sensor near my house went away, and someone used the sensor at work to “estimate” data near my house, well, I’m not sure what you’d call that, but “accurate” is not a term that would be appropriate.
The problem with anomalies for understanding the issue of “climate change” is that they zero base the discussion.
Say the mean temperature of the troposphere at sea level is 288.5K, and that it represents an anomaly of 0.5K over an earlier period. If the mean temperature increases to 289K, the anomaly has doubled from 0.5K to 1K. This leads to hysterical newspaper articles saying that a key indicator of global warming has increased 100% and that we are all gonna die.
In fact, the mean temperature of the troposphere at sea level in that scenario has increased by 0.2%, or as we say in Bankruptcy Latin, bupkis. But, there are no hysterical headlines from 0.2% and no sit-ins in Nancy Pelosi’s office.
Anomalies may be useful, but they represent a real communications issue between the world of scientists and the world of the unwashed. One, that scientists must be a lot more careful about.
“This leads to hysterical newspaper articles saying that a key indicator of global warming has increased 100% and that we are all gonna die.”
I hope not, and I don’t think that does happen. It never makes sense to express a temperature as a percentage of another. Maybe a temperature change, but that 100% is dependent on, for example, what anomaly base you choose. The 0.2% is also meaningless. After all, if your body temperature increases by 2% on that scale, you really are gonna die.
The bigger issue with zero basing is that it makes a 1C change in the poles equivalent to a 1C change at the equator. Most of the change we observe comes from polar regions which ends up being weighted too high relative to the whole and the energy balance.
Bingo!
Is what I said as well. You have to balance it all out to zero latitudes.
We have seen it within the last month. Almost nobody understands how tiny the changes you people are talking about are. The so-called 1.5° limit the press is touting is actually a change of about 0.2%.
I always enjoy Nick Stokes informative posts.
Thanks, DP
Seconded.
Agreed
I suppose the same approach could be used to find anomalies by averaging all of the telephone numbers in the world, and the result would be just as spectacularly meaningless and useless.
Unfair comment because temperature does have a linear meaning. Telephone numbers have no meaning when compared against each other except for area and country code equivalency.
Temperature only has a linear meaning relative to stored Joules, while its meaning relative to Watts/m^2 is very non-linear, where W/m^2 are proportional to degrees K raised to the fourth power, and W/m^2 are all that matters relative to forcing, the radiant balance and how these change over time. Consensus climate science assumes approximate linearity between W/m^2 and degrees K in their definition of the ECS, and this invalid assumption is one of the many serious errors that was canonized in the first IPCC report. The other big one was the misapplication of Bode’s linear feedback analysis to support the conjecture of massive amplification by unspecified positive feedback.
Temperature in one location has little to no meaning when compared to temperature measured elsewhere. Averaging them is meaningless.
Land Surface Air Temperature Data Are Considerably Different Among BEST‐LAND, CRU‐TEM4v, NASA‐GISS, and NOAA‐NCEI https://doi.org/10.1029/2018JD028355
The mean LSAT anomalies are remarkably different because of the data coverage differences, with the magnitude nearly 0.4°C for the global and Northern Hemisphere and 0.6°C for the Southern Hemisphere. This study additionally finds that on the regional scale, northern high latitudes, southern middle‐to‐high latitudes, and the equator show the largest differences nearly 0.8°C. These differences cause notable differences for the trend calculation at regional scales. At the local scale, four data sets show significant variations over South America, Africa, Maritime Continent, central Australia, and Antarctica, which leads to remarkable differences in the local trend analysis. For some areas, different data sets produce conflicting results of whether warming exists
The anomalies map for October 2018 displays the most red near the poles, where the least temperature data are collected. We just had a report of the lowest temperature ever recorded in Antarctica. Apparently, when they extrapolate the temperatures around the South Pole, they use the warmer readings from hundreds or thousands of miles away to do the extrapolation.
“Apparently, when they extrapolate the temperatures around the South Pole”
Again, they don’t extrapolate the temperatures, for the kinds of reasons you are suggesting. They interpolate the anomalies, of which the most influential there is the Pole itself.
Yes, there was a report of the lowest temperature ever recorded. They looked in a very cold place. What that report didn’t say is whether it was any colder than it usually is there. Anomaly.
A funny thing about the people who say they don’t like anomalies –
if Spencer had reported the UAH TLT temperature for October as an absolute value, say –11.24 C, they would immediately ask how that compared to previous Octobers. Warmer, colder or business as usual?
The reason many don’t like anomalies is that far too many people are misled to think that the numerical value of the anomaly has a correspondence to an actual trend and that the same presumed trend will continue forever based on the unwarranted assumption that the trend can only be a consequence of increased CO2 concentrations. This is also why alarmists seem to like anomaly analysis as it can effectively obfuscate inconvenient facts.
The definition of an anomaly is, “something that deviates from what is standard, normal, or expected.” and it’s clear that what’s standard, normal or expected about the climate is not known with enough certainty to turn insignificant anomalous mole hills into mountains of catastrophic climate change. The analysis itself is relatively sound as long as you understand its limitations, conform to its requirements and don’t try to apply it beyond its capabilities or extend its results beyond reason.
notevil
And after all that complaining and bluster, you would still want to know how a given month’s absolute value compares to previous months.
Snape,
You’re missing the point. You’re trying to substantiate a failed hypothesis by pointing to an apparent trend and claiming that it will continue forever; moreover, depending on when you start the trend, you can make it go in any direction you want it to. The problem is not with anomaly analysis itself, but with how it’s applied and the inferences made relative to the non-problem of catastrophic climate change caused by CO2.
This is not science. This is flailing to avoid answering hard questions. For example, the nominal ECS claimed by the IPCC is 0.8C per W/m^2 of forcing, yet an 0.8C increase in temperature increases surface emissions by 4.3 W/m^2. One of the many hard questions that nobody on your side can answer is this: the first W/m^2 of the 4.3 W/m^2 of incremental emissions is offset by the forcing, so what is the origin of the power that offsets the other 3.3 W/m^2? Another unanswerable question: each W/m^2 of existing solar forcing results in only 1.62 W/m^2 of surface emissions, so how can the climate system distinguish the next incremental Joule from all the others, so that it can be so much more powerful at warming the surface than any other Joule?
The most common bogus claim is positive feedback, but that just illustrates ignorance as any system whose positive feedback (in this case 3.3 W/m^2) exceeds the forcing (in this case 1 W/m^2) is unconditionally unstable and this certainly doesn’t describe the Earth’s climate.
The other approach to deflect truth is to insult the messenger, for example, calling people paranoid for speaking the truth. This is a feeble attempt at rationalizing why it’s OK to ignore a truth that’s so significant and so catastrophic to your belief system that it would collapse if you were to acknowledge it.
The most important truth that far too many deny is that climate science is horribly broken, and the reason is the conflict of interest at the IPCC, where they require a significant effect by man to justify their existence. This is not a made-up charge and they are very transparent about their agendas, biases and charter, but you do need to pay attention. They consider the ends to justify the means, but the means of destroying science can never be justified for any ends, and even more so when those ends are the destructive and repressive agenda of the UNFCCC. Once more, this is not a shallow charge: they are very transparent about how they want to equalize the wealth of the developing world and the developed world by making energy prohibitively expensive for the developed world, while subsidizing it for the s-hole countries. The Paris BS was about this and nothing else.
The IPCC/UNFCCC is pursuing a very anti-west agenda driven by greed and envy over western success, and it’s disturbing that anyone can be so oblivious to this obvious truth. They and their followers are deluded into thinking they’re saving the world, when in fact they’re out to destroy everything that makes the world worth saving.
This is why I thoroughly enjoy co2isnotevil. Thank you for your precise, succinct appraisal of reality. This is my biggest gripe with catastrophists… They haven’t the grit or personal integrity to challenge themselves.
It took me nearly 5 years of questioning to finally understand the game, and it’s much more interconnected than most think..
The end game is depopulation. Full stop.
Then the catastrophists claim that’s a hoax while simultaneously claiming overpopulation.
This is why Generation Z is trending more conservative. They see through the inconsistency and hypocrisy.
honest liberty,
I think the end game is power and control, as it is with any political goal, and this power comes from controlling money. Population control is in there somewhere, but where it’s most important is in those countries on the receiving end of climate reparations. The GCF ‘solution’ only makes the population problem worse, as the decreased population in the developed world will be more than offset by increases in the developing world.
If you look at how the Green Climate Fund is structured, the World Bank is acting as an intermediary (money laundering) agency, removing the need for transparency and accountability for where the money goes and how it’s spent. Just look at all the World Bank affiliations among those responsible for the IPCC’s Summary For Policymakers, which its authors are said to have joked was really a Summary By Policymakers. This represents the pinnacle of absolute power and control, and it’s all in the hands of UN bureaucrats. It’s bad enough that many of its member states exert this level of control over their own people, but granting this kind of power to a global bureaucracy goes against everything the free world stands for.
And don’t forget the same lies and deception caused the Ozone scam as well. Both scams are destroying science. All caused by the UN.
I do get a laugh when you show data like this; the whole concept of an anomaly of 0.2 degrees in the way you have analyzed the data is funny.
Perhaps you can add the expected natural-variation error bars for your technique over the data for us. We aren’t even talking real errors here, just the error margins in your data analysis. If you don’t know how to do it, perhaps I can give you a hand. So let’s do it.
Whatever you are using as your base, 1951-1980 or 1981-2010, you need to analyze the error in it using the same technique you are going to use on the data section. Take the highest value and subtract the lowest value from your base data, and that is the error range of your base. You get the logic here: we are applying the same technique we are using on the new section we are interested in to the base section, to see what the natural variation is in our base data. Basically we are checking what signal variation we would expect to see if our new data were nothing more than an extension of the base data. If you want to be over-the-top accurate you can repeat the process, dropping one year from the start and then one year from the end, so you cover all start and end years on your base data and you have extracted every ounce of possible statistical variation.
Now add the analysis error bar to each point of your new-section data graph. The total error is worse than that, as that is just the error in the statistical analysis itself 🙂
I should add that if your base already has a slope in it we need to do a little more, because we are trying to pick man-made variation away from the background slope. You aren’t even entitled to use the mean value of the base if it has a slope in it. So let me know if you need me to explain how to deal with that; at the moment I am assuming you are happy that your base is an absolute fixed value.
LdB,
I do quite a lot of error analysis. Recent posts are here, here and here. But it is important to break down the error estimate; just one ± number really isn’t enough.
An example of this is the base error that you mention. It actually isn’t such a big component, because of the stabilising effect of a 30 year average. But it also matters less than some others. The reason is that it is, for the most part, just a number that is added to all the data. So it doesn’t change trends, or the various topography of the results. In fact, you shouldn’t have to care what the base was, since the choice is arbitrary.
But there is one aspect that does matter, which is month to month variation. Say in those 30 years, it just happened that you had a run of cold Junes. Then you’ll find in current anomalies that Junes tend to be warm relative to, say, May or July. That could matter. That is the chief reason for insisting on 30 years, otherwise you’d get most of the benefit from just using one.
Spatial interpolation error is the main one, and I do quite a lot of study of that. It’s part of the reason why I track four different integration methods.
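To illustrate the month-to-month base-error point above with synthetic numbers (my own rough sketch, not TempLS output): the error in a single calendar month's climatology shrinks as the base period lengthens, which is the stabilising effect referred to.

```python
import random
import statistics

# Invented numbers: one calendar month (say June) with a fixed long-run
# climatology and independent year-to-year noise.
random.seed(1)
TRUE_JUNE_MEAN = 15.0   # hypothetical long-run June climatology, degC
NOISE_SD = 0.8          # hypothetical year-to-year June variability, degC

def base_offset_error(n_years, trials=2000):
    """Spread of the base-period June mean around the true climatology."""
    errors = []
    for _ in range(trials):
        base = [random.gauss(TRUE_JUNE_MEAN, NOISE_SD) for _ in range(n_years)]
        errors.append(statistics.mean(base) - TRUE_JUNE_MEAN)
    return statistics.stdev(errors)

for n in (1, 10, 30):
    print(f"{n:2d}-year base: sd of the June offset error ~ "
          f"{base_offset_error(n):.2f} degC")
```

A run of unusually cold Junes in a short base would show up as a persistent warm bias in later June anomalies; a 30-year base makes that residual much smaller.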
This is really, really basic: take a standard sine wave. If I use your technique I get an anomaly even if my values exactly match the sine wave … why is pretty obvious: because the mean is zero, or for non-even waves, the average of the partial wave left over beyond a whole cycle.
So let’s not even consider climate science. I am going to give you a series of normal, natural cyclical wave shapes; your anomaly analysis must be able to give me the deviation from the natural cycle.
Do you agree your current analysis fails dismally in that situation, in that it reports the normal natural cycle as the anomaly? That is a yes/no answer; you can’t Stokes-defense dodge it.
You say you have a background in this, so how you fix that problem should be well known, so do it please. Then re-run your analysis instead of the stupidity above.
“This is really really basic take a standard sine wave if I use your technique I get an anomaly”
No, you don’t. The anomaly at each point in time is the difference from expected. If you expect a sine wave and get it, the anomaly is zero.
That is why monthly anomalies are calculated; it takes out most of the seasonal variation.
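A small sketch of that point, using an invented station whose record is a pure repeating seasonal cycle: anomalies taken against each calendar month's own climatology come out as zero.

```python
import math

# A made-up station whose record is a pure 12-month cycle, no trend, no noise.
years, months = 40, 12
temps = [[10 + 8 * math.sin(2 * math.pi * m / 12) for m in range(months)]
         for _ in range(years)]

# Climatology: the mean of each calendar month over a 30-year base period.
base = temps[:30]
climatology = [sum(year[m] for year in base) / len(base) for m in range(months)]

# Anomaly = value minus the same calendar month's climatology.
anomalies = [temps[y][m] - climatology[m]
             for y in range(years) for m in range(months)]

# With a repeating cycle and nothing else, every anomaly is zero.
print(max(abs(a) for a in anomalies))  # 0.0
```

Only departures from the repeating cycle (a trend, noise, an El Nino spike) survive the subtraction.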
OK, then I expect some strange base, and your display shows exactly what I expected, and it is useless .. and I am serious 🙂
I should add that it is your responsibility to prove your baseline, by the norms of science. This is the problem with soft sciences: they forget what they have to show, especially if they want me to make very expensive decisions based on it.
“to prove your baseline in science norms”
The correct baseline is … whatever works. The objective is to minimise the variance of the residuals. It isn’t to provide some kind of halcyon period. You do have to make sure that the set of offsets that you subtract is orthogonal to whatever you are trying to calculate – in particular, doesn’t take some of the trend with it.
So if there is a repeated cycle, as with annual, you use fixed phase points in the cycle to form the expected value.
Your baseline doesn’t work .. stop trying the stupid Stokes-defense rubbish, you are insulting people’s intelligence. If you want to see how stupid your analysis is, use it on a period in the middle of your baseline period … it will show you how utterly stupid your analysis method is … an anomaly in the middle of a baseline, anyone? 🙂
The real problem is that there are likely a number of very large, long-timespan signals in your baseline; it probably already has a warming slope in it. Therefore calling your result an anomaly is actually a lie in the first place.
At this stage you can’t even honestly deal with issues and I don’t know why you wasted your time.
If you need help .. the top line output might help you 🙂
Thanks for submitting the article so quickly Nick. Your unflappable persistence in the face of antagonism is quite admirable. I certainly can recall times when I attacked your points vigorously.
But not on this subject. Good on ya.
Thanks, Charles, for the suggestion and your promptness in posting. There is a back story to my early response. I’ve posted the October result, which I usually compare with GISS when it comes out, which should be soon. I see Bob T is waiting on it. GISS probably won’t rise quite as much, because TempLS dipped a bit more last month, so part of the October rise (0.16°C) could have been catch-up.
I see GISS is now out and I was wrong there. GISS rose even more (0.25°C).
“GISS rose even more (0.25°C).”
For which point on the planet? Average temps are meaningless.
Jeff, if the temp in your kitchen is 73 degrees, the temp in your bedroom is 71 degrees, the temp in your living room is 72 degrees, and the temp in your bathroom is 70 degrees, what is the temperature inside your house?
Thank you for the analysis, Nick Stokes.
I appreciate the effort to dig into the data even if we may not agree on policy.
I will second that, SN; Nick’s efforts are always appreciated by me.
I don’t like the prejudicial term anomaly. Who decides what is standard or normal? All should use the term difference, as in: these data differ by such-and-such an amount versus this baseline. Anomaly, by definition, implies there is a known ‘true’ value. I would hope that all could drop these weighted terms and then talk about whether there really is a significant change in any data set taken over the long term.
So your base period is 1981 – 2010, and your anomalies for every subsequent year from 2011 to 2018 are warmer than the base period.
Seems to me that in a gradually warming environment, whether naturally induced or not, this would always be the case.
I would wager that if your base was 1951 – 1980 , the temperature anomaly of years 1981 to 1988 would be warmer every year.
Further, if your base was 1881 – 1910, the temp anomaly of years 1911 to 1918 would also be warmer every year.
If you had similar data going back to 1650 and set your base year 1651 – 1680 your anomalies of years 1681 to 1688 would be warmer as well.
All you have done is prove it is still warmer today than it was 30 years ago, though the same can be said for almost every 30-year period since the nadir of the LIA.
Nick,
You said, “For anomalies, I subtract for each place the average for April between 1951 and 1980.”
I’d like to be clear about the process, and what you are calling an “average.” From other things that I have read here, I’m of the impression that the way the monthly ‘average’ values are calculated is to take the diurnal highs for every day and determine the arithmetic mean. The same is then done for the diurnal lows for every day. Then, the median (special case of two values only) or mid-range value is calculated
( https://en.wikipedia.org/wiki/Mid-range ) from the two arithmetic means. Is that correct? If not, can you please explain just what you mean by “average?”
It isn’t just a matter of semantics, because as the Wikipedia article notes, using mid-range values comes at some cost with respect to utility and robustness.
Clyde,
“I’d like to be clear about the process, and what you are calling an “average.” “
Like GISS, HADCRUT and the other majors, my starting point is the monthly average, TAVG, published by GHCN and ERSST. It’s true that GHCN also publishes average TMAX and TMIN for the month, and as a matter of arithmetic, usually TAVG=(TMAX+TMIN)/2. The arithmetic is actually done now by the national met offices, which submit this data on the CLIMAT forms.
But yes, you can say that it is in origin the daily midway-value. Remember too that the majority of the data is SST, for which the diurnal variation is smaller and smoother. But the arithmetic of anomalies that I describe happens after the site monthly average is calculated. And the rule for anomaly is, however the monthly numbers were calculated, compare with the same calculation for the base months.
Nick,
Here’s the thing. There has not only been criticism of anomalies, but there has been criticism of the claimed precision that the press (and agencies such as NOAA and NASA) reports. However, biological organisms don’t directly experience anomaly temperatures, they experience actual temperatures. Therefore, predictions of the effects of warming are impacted by the actual variance of those temperatures.
Let’s take a look at your example. You said, “The result for temperature is an average sample mean of 12.53°C and a standard deviation of those 1000 means of 0.13°C. … But if I do the same with the anomalies, I get a mean of 0.33°C (a warm month), and a sd of 0.019 °C. The sd for temperature was about seven times greater.”
Clearly, if we want to talk about current or future actual temperatures, we can’t remove variance simply by converting to anomalies and then converting back to actual temperatures and claim that the SD is now reduced to 1/7th of the original SD. Variance is a function of both range and magnitude of the data. Subtracting some constant from the raw data reduces the range and magnitude. Therefore, I seriously question whether one is justified in expressing current temperatures with a certainty (SD) that is associated with the anomalies derived from the temperatures.
Then there is the issue of the monthly ‘average.’ Although the process for calculating a mid-range is arithmetically equivalent to a ‘mean’ of two numbers, or even the ‘median’ of two numbers, it has none of the properties of a parametric statistic of means or medians. That is, one is not justified in increasing the precision as with the standard error of the mean. Therefore, even though the practice is questionable when the sample is from a population that isn’t limited to random variation, there is even less justification for claiming increased precision in the calculation of a mid-range value.
I have argued before that it is inappropriate to conflate SST with land surface temperatures.
“I have argued before that it is inappropriate to conflate SST with land surface temperatures.”
nobody is conflating them
its an index
Like an index of the price of fruit — by averaging the price of apples and oranges.
clyde, i think apples and cabbages would be more appropriate.
+10
Earliest snow in Houston ever recorded. https://www.khou.com/article/news/local/snow-vember-the-earliest-houston-snowfall-ever-just-happened/285-614153347
Record snowfall in Vail recently too.
The sky isn’t falling, it’s the temperatures and we’re in for another harsh cold period
Still trying to figure out the point. Damping data to an arbitrary baseline does not reduce uncertainty any more than averaging does. These processes are just anesthetics, numbing the pain naked apes experience trying to comprehend complex systems.
Ultimately, we have to deal with the complexity; diurnal and seasonal variation sometimes more than 50x the trend and all.
The line graphic holds no surprises. We all know it has cooled since the supaaare nino. We all know GISS gets wild on the high side.
Full credit for spherical triangulation. Now you get to explain how carbon dioxide produces the very peculiar distribution of warming and cooling. My suggestion: use your method on the raw data.
Gordon,
“My suggeston: use your method on the raw data.”
Thanks, but that is what I do. I use GHCN unadjusted, which can be traced back via the CLIMAT forms to the original AWS data.
The reason that anomaly gives an improvement is that you make use of the huge amount of data from the past. If you just had one month’s data to average, you’d have to accept the error that comes with sampling many different climates. But with hundreds of months of past data, you can characterise and deduct the climatology component, so that the sampling error just relates to what you want to know – what was different about that month.
I realize this is orders of magnitude more complicated, but I mean raw as in un-anomalized.
Let’s make this easy.
Day 1 avg – 53, Day 2 avg – 53, Day 3 avg – 53. Each reading has an accuracy of +- 5% and a precision of +- 0.5 degrees.
Then we have Day 4 – 54 avg. Same accuracy and precision values.
What is the baseline value of Days 1 – 3? What is the range of values due to the measurement errors?
What is the anomaly for Day 4? What is the range of values due to the measurement errors?
Please include your analysis of error for each.
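For what it is worth, here is one way to propagate the two stated error components through that example, under the assumption (mine) that the +-0.5 degree term is independent random error and the +-5% term is a single calibration offset shared by one instrument; if those assumptions don't hold, neither do the numbers.

```python
import math

# Jim's numbers, with two stated error components treated separately.
# Assumptions (mine, for the sketch): the +/-0.5 deg term is independent random
# error from reading to reading, and the +/-5% term is a single calibration
# offset shared by one instrument.
days = [53.0, 53.0, 53.0]
day4 = 54.0
random_err = 0.5      # per-reading random uncertainty, deg
cal_frac = 0.05       # calibration uncertainty, as a fraction of the reading

baseline = sum(days) / len(days)
# Independent random errors average down as 1/sqrt(N); a shared calibration
# offset does not average down at all.
baseline_random = random_err / math.sqrt(len(days))
baseline_cal = cal_frac * baseline

anomaly = day4 - baseline
# In the anomaly the shared calibration offset largely cancels; the random
# parts of Day 4 and of the baseline add in quadrature.
anomaly_random = math.sqrt(random_err ** 2 + baseline_random ** 2)

print(f"baseline: {baseline:.1f} +/- {baseline_random:.2f} (random) "
      f"+/- {baseline_cal:.2f} (calibration)")
print(f"anomaly:  {anomaly:+.1f} +/- {anomaly_random:.2f} (random; "
      f"shared calibration mostly cancels)")
```

The random component of the anomaly is never smaller than that of a single reading; only the shared systematic part drops out, and only if it really is shared.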
Gordon,
The point is that different stations have different average temperatures. You wouldn’t expect the average July temperature in New York to be the same as the average May temperature in Buenos Aires. Taking the departure from the station averages gets at how the temperature has changed in each of those locations. Because you eliminate the wide range of absolute temperatures across the world, the certainty that you are characterizing the true average is higher for anomalies, as you can see from Nick’s graph. The absolute temperatures have a much wider spread (and lower frequency of each) than the anomalies. Does that make it any clearer?
Diurnal variation is averaged, as is variation within a month. Since the anomalies are calculated for each month, seasonal variation can be retained.
(Not all research does it this way, though; it depends on the study. Some studies look at changes in daily highs and lows over time, since CO2-induced climate change makes specific predictions about them.)
Kristi,
You said, “Some studies look at changes in daily highs and lows over time, since CO2-induced climate change makes specific predictions about them.”
Yes, looking at highs and lows separately, instead of the ‘average,’ can be quite instructive. See my graphs at the following:
http://wattsupwiththat.com/2015/08/11/an-analysis-of-best-data-for-the-question-is-earth-warming-or-cooling/
RSS gets wild, not GISS. My bad.
Nick,
When you start with CLIMAT or ERSST, you cannot deduce much about accuracy, because there have already been mistakes made before then.
Whereas you claim that most stations from Australia use Automatic Weather Station (AWS) data, there are errors in that also. Witness our year-old discovery that BOM were setting a limit on cold minimum temperatures, and witness the debate – far from resolved – about why BOM use recording times for AWS temperatures that do not comply with international standards, like the use of a 1-second integration. But these errors might be small compared with what is perhaps the largest source of error in the methodology now used, namely the selection of which stations will go into the CLIMAT files. You are familiar with past assertions that historically, if you use stations from cold places, then drop them in favour of stations in hotter places, you will change the apparent warmth of the record, its trend, and so on down the road to poor data. I have not seen these claims about the dropping of stations refuted. It is likely the cause of the difference between the oft-quoted early Hansen graph showing a big temperature dip roughly 1945-75 and the latest reconstructions, from which the dip has disappeared.
Even more fundamentally, in the case of the dog that did not bark, I have for a year now been repeatedly asking your BOM for the figure or figures that the BOM use to express total uncertainty in temperature data. I have asked, inter alia: if you say that a new temperature sets a new record, how much different from the former record-holder does it have to be, to be statistically significantly different? The answer, after about 5 emails, has been … crickets.
(Of course I am not suggesting that you, Nick, are responsible for BOM conduct, but a neutral scientist should be aware of it and be calling out problems from their beginnings.)
Can I suggest to you that an analysis of uncertainty about numbers, such as you are presenting here, has little meaning unless you include expression of the fundamental uncertainty in the numbers? When I write uncertainty, I mean primarily the old saws of accuracy and precision as taught to science students for the 50 years before 2000, at least to my poor understanding of what is lately taught about this fundamental concept. See the Nic Lewis comments about the Resplandy et al ocean heat paper. The error they made, overall, was to neglect formal error analysis (or to do it poorly). So I suggest that your contribution here might benefit from formality also.
Later, I have more to say about how and why standard deviations change in the ways you illustrate, plus how incorrect it is to assert that by using the anomaly method you are increasing the quality of the data you are tormenting. Later.
Cheers Geoff.
Geoff,
Speaking of “crickets,” Stokes hasn’t replied to my concerns expressed last night about the issues of precision and uncertainty.
Sometimes people think they are so smart, that they outsmart themselves.
Nick, an intelligent man, has tricked himself into believing this isn’t a religion, when all evidence illustrates it as exactly that.
These people are religious, period.
Nick,
There are dozens more comments I could make about your essay, but I shall be selective.
Here is one. You write ” For anomalies, I subtract for each place the average for April between 1951 and 1980.”
That procedure is only valid if the factors that cause variation in the 1951-80 period are the same as those for the rest of the series. To me, that is a hard test to pass. It would be wrong to assume that all was well.
Geoff.
During 1951-1976 the Southern Oscillation was mainly on the La Nina side of neutral, meaning cooler and wetter conditions for most of the world. Surely that should rule it out as a sensible period over which to average the data.
(I know, it’s a bit like claiming that the pre-industrial temperature should be calculated from 1850-1900 data, which for land-based data meant data mainly from Europe, which was recovering from the Little Ice Age at the time.)
Nick, first, thanks for your contribution. Most interesting.
Next, a question. You say:
My question relates to something that I rarely see addressed—the uncertainty of the anomalies. The problem, which you haven’t addressed in your comments, is that people generally do NOT carry the uncertainty through the procedure of calculating any given anomaly.
In that procedure, you calculate the mean value for each month, and subtract that month’s mean value from all of the records for that month.
However, what is NOT generally carried forward is the standard error of the mean value for the month. For example, you have not included this error in your calculations described above.
Comments?
w.
Right to the heart of the matter as usual, Willis. 🙂
When one brings up measurement errors, people in the field just gaze off into the distance and start whistling.
“Averaging” temperatures is a filtering operation that removes high variability terms. The reduction of calculated standard deviation is the result simply of removing higher variability terms. There is no actual reduction in uncertainty. The standard deviation of 0.13°C includes the higher variability terms. The standard deviation of 0.019 °C does not. In either case, a smoothing operator has no effect on other error terms such as measurement error, which cannot be reduced by appealing to statistical miracles.
“Averaging” temperatures is a filtering operation that removes high variability terms. The reduction of calculated standard deviation is the result simply of removing higher variability terms.
As an illustration: below is a chart with two temperature signals superimposed on each other (Darrington, WA). The blue curve is the daily temperature sampled every 5 min round the clock. The red curve is the monthly average of daily (Tmin+Tmax)/2.
Subhourly and monthly temperatures.
Willis,
“The problem, which you haven’t addressed in your comments, is that people generally do NOT carry the uncertainty through the procedure of calculating any given anomaly.”
There was an interesting discussion some years ago at Climate Audit, going into some detail on this. It was about Marcott et al, where they had graphed the uncertainty and people were perturbed by the anomaly effect. It was rather different in that paleo case, because the base period was relatively short, and the low resolution created a lot of autocorrelation (and so a “dimple”), so the error was relatively more significant. The interesting aspect is that the anomalies are reduced in uncertainty within the anomaly base period (because the mean being subtracted is correlated with the data) but increased outside.
However, these are small (and quantifiable) effects. They are reduced by the usual damping of noise by the process of taking a 30 year average. And as I mentioned elsewhere in the thread, it isn’t even a damaging uncertainty. The reason is that the base average is subtracted from all the temperatures, so error won’t affect trends or other shape features of the profile. It could be as if the error for GISS corresponded to using a 1952-81 period when you thought you were using 1951-80. Would that really matter?
There is a rather specific effect that is small but does matter, which is month to month fluctuation. This is smoothed in the base period, but still has persistent effects. I have tabulated these and their corrections here.
“For example, you have not included this error in your calculations described above.”
The way I deal with the whole process is by fitting a linear model for both the global average G and the offsets L (which are subtracted to create anomalies) jointly. So the residuals give you the estimate of error of both L and G, and you can unravel all that by studying the matrix that the model generates. That can give you a lot of information about the modes; I’ve written on that here.
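For readers unfamiliar with that kind of model, here is a bare-bones, unweighted toy of fitting temperature as station offsets L plus a global series G by alternating least squares, on synthetic data. It is my own sketch of the structure, not Nick's TempLS, which uses area weighting and handles missing data.

```python
import random

# Toy fit of T[s][m] ~ L[s] + G[m] by alternating least squares.
random.seed(2)
n_stations, n_months = 50, 120

true_L = [random.uniform(-20, 30) for _ in range(n_stations)]   # station normals
true_G = [0.002 * m for m in range(n_months)]                   # slow drift
T = [[true_L[s] + true_G[m] + random.gauss(0, 0.5)
      for m in range(n_months)] for s in range(n_stations)]

L = [0.0] * n_stations
G = [0.0] * n_months
for _ in range(20):  # alternate until the two sets of offsets settle down
    for s in range(n_stations):
        L[s] = sum(T[s][m] - G[m] for m in range(n_months)) / n_months
    for m in range(n_months):
        G[m] = sum(T[s][m] - L[s] for s in range(n_stations)) / n_stations
    # Pin down the arbitrary constant: make G average to zero over the record.
    shift = sum(G) / n_months
    G = [g - shift for g in G]
    L = [v + shift for v in L]

# The change built into the synthetic data was about 0.24; the fit recovers
# that plus noise, and the residuals T - L - G carry the error information.
print("recovered change in G over the record:", round(G[-1] - G[0], 2))
```

The residuals of such a fit are what give a joint handle on the uncertainty of both the offsets and the global series, which is the point being made above.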
You are still ignoring the measurement errors and glossing over their effect on the results.
If I average 54 degrees and 54 degrees obtained from devices that are 5% accurate and have a precision of +- 0.5 degrees, what are the possible values of an average temperature, i.e. a baseline?
Now the third day has a reading of 55 degrees from an instrument with identical accuracy and precision. What is the range of the anomaly?
Please explain how anomalies can remove these measurement errors by averaging.
Here is my hypothesis. You keep proposing that the average of a large set of data reduces the error. To me it reduces the error of the average, not the average of the error.
You were probably raised in the digital era, not the analog era. To many of us who were raised in the analog era some of these things are second nature. Just one example: if a power plant’s generator was off by 0.001%, how many seconds per day would be lost or gained on a clock using a synchronous motor designed for 60 Hz? How many Hz would a transmitter at 1 MHz be off when using an oscillator set with a fully analog oscilloscope synced to a 60 Hz power line that was off by 0.001%? We learned that accuracy and precision are important things and can’t be wiped away through statistics. I couldn’t average a +0.001% and a -0.001% and say, hey look, I have an accurate average!
Jim,
“Please explain how anomalies can remove these measurement errors by averaging.”
Anomalies don’t reduce measurement error; they reduce sampling error.
“To me it reduces the error of the average”
Yes. It’s the average that we are calculating.
“can’t be wiped away through statistics”
Statistics predated digital (so did I).
@willis
The error in the mean temp value includes a fixed calibration error, which is subtracted out when the anomaly is computed. For example, an uncalibrated mercury thermometer might have an absolute error of several degrees, but still be fairly accurate for computing _temperature differences_. So I think that is why taking anomalies does reduce uncertainty under certain conditions. The largest part of the error seems to be correlated with the mean value itself, which is subtracted out (as Nick pointed out).
So, has anyone validated the use of anomalies to compute mean values for climate parameters by applying a post-analysis step which adds the computed mean anomalies back into the mean temperatures, and then analyze any resulting errors? For example, if two adjoining regions had distinct mean temps, say 18C and 16C but shared a common anomaly of +1C, then we should see 17C and 15C in the mean base interval values, for comparison. Does that make sense?
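The calibration point above can be illustrated with a toy station; the bias value and noise level are invented for the example.

```python
import random
import statistics

# A made-up station with a fixed +2 degC calibration offset on every reading.
random.seed(3)
TRUE_NORMAL = 15.0
BIAS = 2.0

readings = [TRUE_NORMAL + random.gauss(0, 1) + BIAS for _ in range(40)]

base_mean = statistics.mean(readings[:30])           # the station's own normal
anomalies = [r - base_mean for r in readings[30:]]   # the fixed bias subtracts out

print("mean absolute error vs truth:",
      round(statistics.mean(readings) - TRUE_NORMAL, 2))
print("mean anomaly (bias cancels): ",
      round(statistics.mean(anomalies), 2))  # near zero
```

A fixed offset survives in the absolute average but drops out of anomalies taken against that station's own base period; a drifting or changing bias would not cancel this way.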
Willis,
I think that you have missed some of the comments. The monthly temperature is NOT an arithmetic mean, but it is instead, a mid-range calculation of the average of two diurnal extremes. One is not justified in applying a standard error of the mean to a mid-range ‘average!’ A mid-range value does not have a frequency distribution or probability distribution function. Therefore, there is little that can be said about statistical significance.
Willis,
I think that you have missed some of the comments. The monthly temperature is NOT an arithmetic mean, but it is instead, a mid-range calculation of the average of two diurnal extremes.
Some time ago I had a look into NOAA USCRN data. If I remember correctly, for their monthly temperature series they average (arithmetic mean) the daily (Tmin+Tmax)/2. So it’s a sort of averaging of daily mid-range values. Not sure how other temperature series are constructed, whether it’s also an averaging of mid-range daily values or they actually use mid-range monthly values (i.e. monthly [Tmin+Tmax]/2). That would be bizarre, I would say.
Paramenter,
Unfortunately, the exact procedure is rarely addressed. That is why I asked Stokes to verify my understanding of the procedure. There is too much going on behind the curtain! Whether an average of mid-ranges, or mid-ranges of averages are used, or different agencies use different procedures, it is all rather sloppy! What is needed is a standard that all involved agree to abide by, and make it widely available. Unfortunately, the use of mid-range temperatures at any stage of the calculations has unwanted consequences that are cavalierly ignored.
( https://en.wikipedia.org/wiki/Mid-range )
Hey Clyde,
Clarity on procedures is required indeed. I reckon Nick kindly confirmed that our understanding is correct. So, the procedure is as follows:
1. Compute daily (Tmin+Tmax)/2
2. Per each month average (arithmetic mean) daily (Tmin+Tmax)/2
3. Computed that way monthly averages are basic ‘unit blocks’ for any large scale climatic analysis as calculating baseline anomalies.
Whether an average of mid-ranges, or mid-ranges of averages are used, or different agencies use different procedures, it is all rather sloppy!
It is. My constant impression is that they believe that having hundreds of thousands of records and applying anomalies to them makes all the noise (measurement uncertainty, rounding errors, spatial sampling problems, poor historical records, etc.) somehow vanish.
Thanks for mid-range values definition (used in climatic industry) – even sloppy Wikipedia says:
‘The mid-range is rarely used in practical statistical analysis, as it lacks efficiency as an estimator for most distributions of interest, because it ignores all intermediate points, and lacks robustness, as outliers change it significantly. Indeed, it is one of the least efficient and least robust statistics.’
But – do not worry. Averaging it and applying anomalies converts mud into gold!
Paramenter,
You said, “2. Per each month average (arithmetic mean) daily (Tmin+Tmax)/2”
That is not my understanding. It is my understanding that the monthly ‘average’ is obtained from averaging all the daily highs, averaging all the daily lows, and then calculating the mid-range value from those two numbers!
Paramenter,
P.S.
The situation gets worse! Those 12 monthly mid-range anomalies are then used to calculate an annual mean for the station. The general rule of thumb for statistical significance is 20 to 30 samples, not 12!
“It is my understanding that”
That does happen. There is an example of a B-91 form here, where the max’s and min’s are averaged (and no mention of TAVG). But it could well be done the other way too; you get the same arithmetic answer if all data is present. It can make a difference if there are days with max but no min entered, or vice versa; it depends on how missing data is handled.
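Nick's arithmetic point is easy to check; the numbers below are invented, and the second half shows how an unpaired missing value makes the two recipes diverge.

```python
# Invented daily values for a short "month" of four days.
tmax = [20.0, 22.0, 21.0, 23.0]
tmin = [10.0, 11.0, 12.0, 10.0]

# Recipe A: average the daily (Tmax+Tmin)/2 values.
daily_mid = [(hi + lo) / 2 for hi, lo in zip(tmax, tmin)]
a = sum(daily_mid) / len(daily_mid)

# Recipe B: midrange of the monthly mean Tmax and mean Tmin.
b = (sum(tmax) / len(tmax) + sum(tmin) / len(tmin)) / 2
print(a, b)  # identical when every day has both readings

# Now a day with a max but no min: recipe A drops the incomplete day,
# recipe B keeps the extra Tmax, and the two answers diverge.
tmin_short = tmin[:3]
daily_mid2 = [(hi + lo) / 2 for hi, lo in zip(tmax[:3], tmin_short)]
a2 = sum(daily_mid2) / len(daily_mid2)
b2 = (sum(tmax) / len(tmax) + sum(tmin_short) / len(tmin_short)) / 2
print(round(a2, 3), round(b2, 3))  # no longer equal
```

So the difference between the two orders of operation only shows up through how missing days are handled.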
“Those 12 monthly mid-range anomalies are then used to calculate an annual mean for the station”
Yes. What else? The months are not statistical samples from the year. The year does not acquire statistical significance or otherwise from the fact that you have decided to divide it into 12 months.
Nick,
Those annual means, composed of 12 calculated mid-range anomalies, are then used to create a 30-year baseline and a recent annual time-series, where parametric statistics are used unjustifiably to pronounce how much hotter this year is compared to last year. You should be glad that your bank doesn’t handle your money so carelessly and with so little regard for standard accounting practices!
This is why you are so frustrating. You never admit that there is anything wrong in the ‘counting house,’ and always come to the defense of those who are guilty of behaving cavalierly. It isn’t “turtles all the way down,” it is sloppy work all the way down!
This is the heart of the problem of attempting to calculate global temperature.
Essentially two important differences are conflated and then glossed over.
Spatial sampling is a three dimensional problem, while anomalies may deal – nominally (per-say!) – with altitude issues, they don’t deal with directionality, or more accurately, the symmetry of the two dimensional temperature distribution.
It is assumed that because anomalies are used, the temporal correlation between any two points will have the same spatial scale in any direction. However, spatial anisotropy in the coherence of climate variations has been well documented and it is an established fact that the spatial scale of climate variables varies geographically and depends on the choice of directions. (Chen, D. et al.2016).
The point I am making here is completely uncontroversial and well known in the literature.
What Nick and all climate data apologists are glossing over is that despite the ubiquity of spatial averaging, its application – the way it is applied, particularly – is inappropriate because it assumes spatial coherence. But climate data has long been known to be incoherent across changing topography. (Hendrick & Comer 1970).
In layman’s terms (although I am a layman!), station records are aggregated over a grid box assuming that the fall-off or change in correlation between different stations is constant. So conventionally, you would imagine a point on the map for your station and a circle (or square) area around it overlapping other stations or the grid box border. However, in reality this “areal” area is actually much more likely to be elongated, forming an ellipse or rectangle stretched in one direction – commonly, and following the topography, north/south in Australia.
But it is actually worse than this in reality because unless the landscape is completely flat, coherence will not be uniform. And that is an understatement because to calculate correlation decay correctly, spatial variability actually has to be mapped in and from the real world.
Unfortunately, directionality would be a very useful factor in the accurate determination of UHI effects, due to the dominant north/south sprawl of urban settlement. Coincidentally, all weather moves from west to east and associated fronts with their troughs and ridges typically align roughly north/south.
The other consequence of areal averaging is that it is a case of the classical ecological fallacy, in that conclusions about individual sites are incorrectly assumed to have the same properties as the average of a group of sites. Simpson’s paradox – confusion between the group average and total average – is one of the four most common statistical ecological fallacies. If you have the patience, it is well worth making your own tiny dataset on paper and working through this paradox as it is mind blowing to apprehend!
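For anyone who wants to try the suggested tiny-dataset exercise, here is a minimal sketch with invented numbers (not real station data) showing Simpson’s paradox: each site type warms between two periods, yet the pooled average drops, simply because the mix of sites changed.

```python
import numpy as np

# Tiny invented dataset: temperatures (deg C) from two site types in two periods.
lowland_p1  = np.array([20.0, 20.0, 20.0, 20.0])  # 4 lowland readings, period 1
lowland_p2  = np.array([20.5])                     # 1 lowland reading,  period 2
mountain_p1 = np.array([5.0])                      # 1 mountain reading, period 1
mountain_p2 = np.array([5.5, 5.5, 5.5, 5.5])       # 4 mountain readings, period 2

# Each site type warms by 0.5 C ...
print(lowland_p2.mean()  - lowland_p1.mean())   # +0.5
print(mountain_p2.mean() - mountain_p1.mean())  # +0.5

# ... yet the pooled average "cools", because the station mix changed.
pooled_p1 = np.concatenate([lowland_p1, mountain_p1]).mean()   # 17.0
pooled_p2 = np.concatenate([lowland_p2, mountain_p2]).mean()   # 8.5
print(pooled_p2 - pooled_p1)                                   # -8.5
```

Whether anomalies and gridding fully remove this kind of composition effect is, of course, exactly the point being argued in this thread.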
What I believe this all means is that the temperature record is dominated by smearing generally, and by latitudinal (i.e. east/west) smearing in particular. And this means, for Australia and probably the US as well, that the UHI effect of north/south coastal urban sprawl is tainting the record.
Either way, if real changes in climate are actually happening locally, then this local effect will be smeared into a global trend – by the current practice – despite, or in the absence of, any real global effect.
So, yes I do think the globe has warmed since the LIA or at least the last glaciation but I don’t believe it is or can be detected in any of the global climate data products.
Scott,
+1
“But climate data has long been known to be incoherent across changing topography.”
Hendrick and Comer was about rainfall, which is of course dependent on orography. But there are no fancy assumptions about directional uniformity. Just that the best estimate for a cell is the average of the stations within, if gridding.
Hendrick and Comer characterised the spatial variations in daily precipitation in terms of inter-station correlations.
They derived a spatial correlation function and used it to determine the required rain gauge density and configuration for their desired accuracy.
The problem of accurately representing the spatial variability of climate variables arises frequently in a variety of applications, such as in hydrological studies. Such spatial variability is usually quantified as the spatial scale of correlation decay.
I made some notes about this a while ago; I think I was actually quoting Chen, D. et al.* But I first read about correlation in Hansen & Lebedeff (1987), where right at the get-go – three lines in – they state their very necessary assumption that stations separated by less than 1000 km are highly correlated.
They did make an attempt to understand and quantify inter-site correlation, as did Jones et al. (1986). It is known that correlation among sites is affected by such factors as a grid box’s location (Haylock et al. 2008), orientation and weather patterns (Hansen and Lebedeff 1987), seasonality (Osborn and Hulme 1997), and site density and homogeneity.**
*Chen, D. et al. Satellite measurements reveal strong anisotropy in spatial coherence of climate variations over the Tibet Plateau. Sci. Rep. 6, 30304; doi: 10.1038/srep30304 (2016).
**Director, H., and L. Bornn, 2015: Connecting point-level and gridded moments in the analysis of climate data. J. Climate, 28, 3496–3510, doi:10.1175/JCLI-D-14-00571.1.
***Jones, D. A., and B. Trewin: The spatial structure of monthly temperature anomalies over Australia. National Climate Centre, Bureau of Meteorology, Melbourne, Australia.
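As an illustration of what a “spatial scale of correlation decay” actually is, here is a minimal sketch of my own, using synthetic anomaly series rather than the Hansen & Lebedeff or Chen et al. data; it builds inter-station correlations and crudely estimates an e-folding decay length:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic network: 50 stations on a 3000 km line; anomalies share regional
# "modes" whose influence decays with distance (true scale 1000 km).
n_sta, n_months, true_L = 50, 480, 1000.0
x = rng.uniform(0, 3000, n_sta)                  # station positions (km)
centers = np.linspace(0, 3000, 8)                # centres of 8 regional modes
modes = rng.normal(size=(n_months, 8))           # monthly amplitude of each mode
weights = np.exp(-np.abs(x[:, None] - centers) / true_L)
anom = modes @ weights.T + 0.5 * rng.normal(size=(n_months, n_sta))

# Inter-station correlation versus separation
corr = np.corrcoef(anom.T)
i, j = np.triu_indices(n_sta, k=1)
sep, r = np.abs(x[i] - x[j]), corr[i, j]

# Crude decay-length estimate: slope of log(r) against distance (positive r only)
keep = r > 0.05
slope = np.polyfit(sep[keep], np.log(r[keep]), 1)[0]
print("estimated correlation e-folding scale ~", round(-1.0 / slope), "km")
```

The papers cited do this far more carefully (directionally, seasonally, and with real data); the sketch is only meant to show what quantity is being argued about.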
Nick,
You remarked, “Hendrick and Comer was about rainfall,…” But that rainfall has important consequences for temperature. Consider a moist air mass moving up over the Sierra Nevada mountains (or any N-S mountain chain). It will cool at the moist adiabatic lapse rate (after initially cooling at the dry rate until saturation). It will commonly release the moisture as precipitation, and then, when the dried air mass descends on the other side of the crest, it will warm at the dry adiabatic lapse rate. Thus, two weather stations, even at the same elevation on opposite sides, will not only be expected to have different temperatures, but the rate of change with elevation will be different because of the different lapse rates. Clearly, interpolating temperatures across topographic barriers will give poor results – and it is because of orography!
While it may be that “the best estimate for a cell is the average of the stations within, if gridding” it leaves a lot to be desired and one should be careful not to have undue confidence in the accuracy.
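The foehn-style asymmetry described above is easy to put rough numbers on. A minimal sketch with typical textbook lapse rates and an assumed condensation level (all values illustrative, not site-specific):

```python
# Rough foehn-effect arithmetic with typical lapse rates (illustrative values only)
DRY_LAPSE   = 9.8   # deg C per km (dry adiabatic)
MOIST_LAPSE = 6.0   # deg C per km (typical moist adiabatic; varies with temperature)

T_surface = 25.0    # deg C at windward sea level
lcl_km    = 1.0     # assumed lifting condensation level
crest_km  = 3.0     # assumed crest height

T_crest = T_surface - DRY_LAPSE * lcl_km - MOIST_LAPSE * (crest_km - lcl_km)
T_lee   = T_crest + DRY_LAPSE * crest_km        # dry descent back to sea level

print(f"crest:        {T_crest:.1f} C")         # ~3.2 C
print(f"leeward foot: {T_lee:.1f} C")           # ~32.6 C, ~7-8 C warmer than windward
```

A several-degree difference between two sea-level stations on opposite sides of a range is entirely plausible, which is the interpolation problem being described.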
The focus on temperature as the single variable of interest has a fatal flaw, that being the poor quality of the instrumentation and particularly the siting. We know from Anthony’s crowd survey that the fraction of stations meeting all the criteria for accurate measurement of temperature is on the order of 2%, as I recall. Stations that meet most of the criteria bring us up to 11 or 12%. Most stations were not designed to measure temperature as a climate variable. They are in airports to help pilots and flight controllers.
The only system that was designed to measure temperature as a climate variable is the USCRN, with triplicate measurements using best equipment. But that is only about a dozen (?) years old and only in the US at 2% of the global surface area. So global land-based temperatures do not meet basic scientific criteria.
This leaves satellites, but RSS and UAH have made different choices about which satellite instruments they trust, and thus we have RSS 4.0 trends about 50% higher than UAH v6.
I conclude that temperature is not a useful variable. Energy would perhaps be better, but I don’t think we have a proper handle on the total energy leaving the earth. But suppose we can measure energy in and energy out with some accuracy. Then we might say that if energy in is greater than energy out we are warming. However, to get good mixing in the ocean might require 1000 years, so it would not necessarily be true that an imbalance in energy over periods of a year or a decade or even a century can give us a picture of the actual trend.
This leaves models based on physical mechanisms. But we know the model resolution is far below what is needed to deal with, say, thunderstorms. See Christopher Essex on that. Not to mention the overestimation problem that has become evident in recent years, even to the IPCC.
Maybe a few extra centuries of observations would give us a better picture of where we are going, or maybe we would just find that we are in a chaotic process and can’t really predict anything at all.
Keep in mind phase changes, which can change the total energy of a defined sample of matter without changing its total internal kinetic energy. I’ll also say that we shouldn’t conflate the thermodynamic temperature, which is a proxy for the internal kinetic energy, with a color or brightness temperature. Those will agree only if the energy source for the radiation is a conversion of internal kinetic energy, and even then, unless the radiation is broadened, so to speak, radiation does not have to transfer power.
1. If the adjustments make little difference, why is there so much time, effort and money spent on them? The people doing so seem to think it makes a significant difference…
2. You claim the difference is small, but in a narrative where a few tenths of a degree over a century supposedly spells catastrophe, a little difference isn’t so little.
3. That the SD of the anomalies gives you a smaller number than the SD of the temps is not relevant. What is relevant is which gives a better measure of the change in the energy balance of the earth. After all, is that not what we’re trying to understand? Isn’t the rise in CO2 purported to cause a change in energy balance of 3.7 W/m2 per doubling of CO2, plus feedbacks? If that is the goal, then both temps and anomalies are unfit for purpose, because W/m2 varies with T^4, so an anomaly of 1 degree at -40 C has a completely different value from an anomaly of 1 degree at +40 C.
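On point 3, the T^4 effect is easy to quantify with the Stefan–Boltzmann law. A minimal sketch for an ideal black body (emissivity and atmospheric effects ignored), comparing the flux change implied by a 1 C anomaly at -40 C and at +40 C:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def flux_change(T_celsius, dT=1.0):
    """Black-body flux increase (W/m^2) for a dT warming at base temperature T."""
    T = T_celsius + 273.15
    return SIGMA * ((T + dT) ** 4 - T ** 4)

print(flux_change(-40.0))  # ~2.9 W/m^2 per 1 C at -40 C
print(flux_change(+40.0))  # ~7.0 W/m^2 per 1 C at +40 C
```

So the same 1 C anomaly corresponds to quite different radiative changes at different base temperatures, which is the point being made.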
“why is there so much time, effort and money spent on them”
Well, is there? It seems to me it’s mostly done by Matthew Menne, Williams and a bit of computer time. A lot of people then use the results; it doesn’t cost them anything. But you don’t have to. I’m sure more time and effort goes into complaining about the adjustments than into making them.
The basic issue is that you know that various non-climate things have influenced temperature measurement (station moves etc). Chances are that it will all balance out, but it mightn’t. So they work it out. Turns out that it does balance, pretty much, but you can’t be sure till you’ve tried. And I’m sure that if they didn’t do anything about it, they would be criticised for that too.
As to energy balance of the earth, that has quite a few aspects. Global average anomaly is one aspect that we can actually calculate, and should be done correctly. It’s a building block.
Karl had to write a whole paper to justify adjustments to the Argo buoys. Adjustments come from a lot more than just the source you claim.
The point you missed was that the adjustments ARE significant, which is why they are done, so your claim that they make little difference is at best misleading.
As for energy balance, no: if it were a building block, then all temperature readings would be converted into W/m2 and THEN the same trending and anomaly analysis applied. The building block you’ve used over-represents the arctic regions, high altitudes, and winter, while under-emphasizing the rest.
Two quotes from the essay:
…I find adjustment makes little difference…
And:
I used the station reporting history of each GHCN station, but imagined that they each returned the same, regularly rising (1°C/century) temperature. Identical for each station, so just averaging the absolute temperature would be exactly right. But if you use anomalies, you get a lower trend, about 0.52°C/century. It is this kind of bias that causes the majors to use a fixed time base, like 1951-1980 (GISS).
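The quoted experiment can be mimicked with a toy simulation. This is my own sketch with an invented station history (each station reporting for 40 years, starting at a random year), not the actual GHCN reporting history used in the post, and it takes anomalies against each station’s own record mean; the exact number differs from the 0.52°C/century in the quote, but the downward bias from an incomplete station history is the same effect.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1900, 2021)
signal = 0.01 * (years - years[0])         # identical 1 C/century rise everywhere

# Invented reporting history: 500 stations, each reporting for 40 years,
# with start years spread through the record.
n_sta, length = 500, 40
starts = rng.integers(1900, 2021 - length + 1, n_sta)
active = (years >= starts[:, None]) & (years < starts[:, None] + length)
temps = np.where(active, signal, np.nan)   # every station sees the same signal

# Anomaly against each station's OWN record mean, then average over stations
anom = temps - np.nanmean(temps, axis=1, keepdims=True)
global_anom = np.nanmean(anom, axis=0)

ok = ~np.isnan(global_anom)
trend = np.polyfit(years[ok], global_anom[ok], 1)[0] * 100
print(f"recovered trend: {trend:.2f} C/century (true value 1.00)")
```

If instead you subtract a common fixed-base-period mean (for stations that have data in that period), the anomaly is the same function of time for every station and the full trend is recovered, which is the reason for the fixed 1951-1980 style base mentioned in the quote.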
Over time the adjustments made to existing data (maybe that isn’t what the essay was talking about) do make a difference.
This graph [graph link not shown] compares the change to the trend due to adjustments/corrections made to the data since 1997. It shows that the 1950-1997 trend, as depicted in 1997, was 0.75°C/century, and that by 2018 adjustments had increased that 1950-1997 trend to 1°C/century.
This graph [graph link not shown] shows the distribution of positive and negative adjustments over the period 2002-2018. One can’t help but notice that all adjustments (annual average) to data since 1970 are positive. (Why is that?)
Links to data used in the above graphs:
1997
2002
Current
Anomalies are a good and valid method of comparing changes to “What it was then to what it is now.” However, changing the anomalies as reported decades ago doesn’t make a lot of sense. Every month GISS changes monthly entries on their Land Ocean Temperature Index including those that are over 100 years old. The results of that over time finally affect the overall trend as shown in the graphs linked above.
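For anyone who wants to reproduce that kind of vintage comparison, here is a minimal helper sketch; the file names and two-column (year, anomaly) layout are hypothetical placeholders, since the real GISS LOTI files have a different layout and would need parsing first:

```python
import numpy as np

def trend_c_per_century(years, anoms, start=1950, end=1997):
    """OLS trend of annual anomalies over [start, end], in deg C per century."""
    years = np.asarray(years, dtype=float)
    anoms = np.asarray(anoms, dtype=float)
    sel = (years >= start) & (years <= end) & ~np.isnan(anoms)
    slope = np.polyfit(years[sel], anoms[sel], 1)[0]
    return slope * 100.0

# Hypothetical usage with two vintages loaded as (year, anomaly) columns:
# old = np.loadtxt("loti_1997_vintage.csv", delimiter=",")   # placeholder file
# new = np.loadtxt("loti_2018_vintage.csv", delimiter=",")   # placeholder file
# print(trend_c_per_century(old[:, 0], old[:, 1]))
# print(trend_c_per_century(new[:, 0], new[:, 1]))
```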
Can anyone link to the method used to post an image on these boards? Obviously it can be done.
LdB November 14, 2018 at 10:54 pm
The “Test” link on the WUWT header says:
If WordPress thinks a URL refers to an image, it will display the image
Ha ha – does that have a familiar ring to it or what (-:
Well, anyway, I’d like to post images instead of just the links.
Steve,
“Obviously it can be done”
I don’t think it can (except by people with special permissions). What you linked to is a movie, which does still work.
If you look at an old post that used to show images in comments, they aren’t visible now.
I don’t think it can (except by people with special permissions).
After my post I groaned when I noticed it was a YouTube link. I have noticed that Willis E. can post images and, as you said, I wondered if he has a special handshake or whatever to get his stuff posted and I don’t.
Nice of you to ignore the graphs and comments of my top post.
Top post? Well, the first one compares the earliest of GISS land/ocean with current. It only goes back to 1950, because that is as far as GISS could go at the time. And that was with some difficulty. From Hansen’s 1999 paper:
“We use the SST data of Reynolds and Smith [1994] for the period 1982 to present. This is their “blended” analysis product, based on satellite measurements calibrated with the help of thousands of ship and buoy measurements. For the period 1950-1981 we use the SST data of Smith et al. [1996], which are based on fitting ship measurements to empirical orthogonal functions (EOFs) developed for the period of satellite data. “
There is plenty of scope for improvement there, and they have improved.
As to the pattern of adjustments pointed out – that is mainly from the adoption of adjustments like TOBS, which has a warming effect for known reasons.
But the real test of all this should be not going back to early calculations with limited data, but to compare what we now know, with and without adjustment. That is why doing a complete calc along those lines is the proper answer to the effect of adjustment.
Thanks for the reply. The GISTEMP LOTI product continues, each and every month, to change about 25% of the monthly entries from 1880 to present. The September 2018 edition of LOTI, compared to the August 2018 edition, changed 26.4% of the data. So what’s going on? Are they still bumping it up due to the Time of Observation?
Going all the way back to that 1997 edition there have been over 80,000 adjustments.
This past month there were 440 changes and 65 of them were to data from 1880 to 1900. Of those 65, 54 were negative and 11 were positive.
For the past 20 years of data (1999-2018) August vs. the September edition there were 96 positive changes and only one negative.
This sort of thing goes on nearly every month.
“So what’s going on? “
Flutter. The algorithm is somewhat unstable, in that if one reading goes up, it tends to push the next one down, and so on. I think it is a fault, but seems to balance out, so it doesn’t affect the spatial average. I’ve shown typical patterns in that link.
Nick Stokes November 15, 2018 at 9:31 pm
“So what’s going on? “
Flutter. The algorithm is somewhat unstable, in that if one reading goes up, it tends to push the next one down, and so on. I think it is a fault, but seems to balance out, so it doesn’t affect the spatial average. I’ve shown typical patterns in that link.
“… I think it is a fault…”
I think there’s something wrong too.
“…but seems to balance out…”
It seems to me that one chart shows that all adjustments since 1970 are upward, and the other shows an increase in the trend over the last two decades.
Steve, there is plenty of code around for your PC – IIR and FIR filter components – and you can also use MATLAB.
If you want a reasonably easy starter
https://www.embedded.com/design/configurable-systems/4025591/Digital-filtering-without-the-pain
The climate data sequence I can get hold of is pretty short, so the tricky part – to get more certainty – is to modulate that data onto various carrier waves. Ideally you pick frequencies that are likely to be there, so for climate data you use yearly cycles; you can do, say, 1-year to 1000-year cycles. The complication is that you need to train the analysis so you end up with an optimal matched filter. However, even a badly trained filter is probably not as bad as Nick Stokes.
If you want some ideas, LIGO has the same issue of cyclical natural interference, and there are some decent semi-layman resources on their filters. This is the dumbed-down way LIGO does a matched filter:
https://www.gw-openscience.org/tutorial_optimal/
It gets more complicated than that, but it is good enough for students to run the code on data themselves. The full LIGO filter details are discussed in the calibration documentation.
https://dcc.ligo.org/public/0143/T1700318/001/LIGO_SURF_Progress_Report_2.pdf
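For what it’s worth, here is a minimal FIR-smoothing sketch in Python (not MATLAB, and nothing like the full LIGO pipeline) applied to a synthetic monthly anomaly series; a 12-month moving-average FIR filter removes the annual cycle while keeping the underlying trend, which is roughly the first step before anything like a matched filter:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic monthly anomaly series: small trend + annual cycle + noise
months = np.arange(480)
series = 0.0008 * months \
         + 0.3 * np.sin(2 * np.pi * months / 12) \
         + 0.2 * rng.normal(size=months.size)

# 12-tap moving-average FIR filter: suppresses the 12-month cycle
taps = np.ones(12) / 12.0
smoothed = np.convolve(series, taps, mode="valid")

print(series.std(), smoothed.std())  # the smoothed series is much less noisy

# A matched filter would instead correlate the data against a template of the
# expected signal, as in the LIGO tutorial linked above.
```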
LdB November 15, 2018 at 11:40 am
Steve there is plenty of code around …
I have no idea what you’re trying to tell me.
Steve, it’s worse than we thought.
a·nom·a·ly /əˈnäməlē/, noun
1. something that deviates from what is standard, normal, or expected.
I guess for those who don’t get the big picture, anomalies make sense. But I started as a geology student and ended up as a climatology student, so my perspective is a bit larger than that of most “climate experts”, who seem to think that the tail end of this interglacial is somehow the standard by which all climates should be compared. What I learned through my education is that there is no “normal” in climate or weather (except for change), and IMHO, claims that there is a “normal” climate are simply ignorance (and possibly something sinister).
Bingo (2)
Nick Stokes, thank you for your essay.
davidmhoffer
It’s a fair question, but the fact is that it really does make very little difference to the overall trend. Anyone can check this for themselves, as Nick and others have been saying for years.
This really answers your first question. They have to be as meticulous as possible, given the possible policy implications and the consequent scrutiny they are under. Ironically, the care taken to get things as close to ‘right’ as possible has left the scientists open to allegations of malicious tampering.
Re 3: surely a change in temperature is a reflection of a change in energy balance in any system? This is true whether you’re measuring the temperature (which is just a measure of the rate at which the molecules jiggle) of water in a warming pot or of air in the atmosphere. If the temperature of any system is rising or falling over the long term, then it’s a fair sign that the energy balance of that system is changing.
Just how much ‘luck’ does it take to find that all adjustments to past data end up supporting ‘one view’, when statistically you would expect a mixture of supportive and unsupportive?
The bottom line is that one type of result offers grant money and career enhancement, while another may see you blacklisted and losing grant money. As a human being with bills to pay, which do you think is the more ‘attractive’? We have seen, with the recent ‘ocean warming’ paper, how easy it is not to ask too many questions when you get the results you ‘need’.
Then can you explain why the adjustments produce a trend effectively no different from the raw data?
The question is not if there is a trend but is it a trend that exceeds natural variation and is it wholly or partially due to the addition of CO2 to the atmosphere? It is a major jump to say the baseline should be from last century rather than the last interglacial or that CO2 is the main driver of warming without any empirical evidence physically linking the two.
DWR54,
Easy to answer. The tests that have been done are not comprehensive. They compare, before and after, how large the temperature adjustment was and how long it lasted, but never, AFAIK, the leverage effect from whether the adjustments were made close to or far from the pivot point where the original and adjusted series cross.
You solve very little using a simple count of number of ups versus number of downs.
Geoff
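The leverage point above can be illustrated with a toy example (entirely invented numbers, nothing to do with any actual GHCN adjustment): the same -0.3 °C step applied to a flat 100-year record changes the fitted trend noticeably when it hits the early years, and hardly at all when it straddles the middle of the record.

```python
import numpy as np

years = np.arange(1919, 2019)
flat = np.zeros(years.size)               # invented flat record, trend = 0

def trend(y):
    """OLS trend in deg C per century."""
    return np.polyfit(years, y, 1)[0] * 100

early  = flat.copy(); early[:10]   -= 0.3    # -0.3 C step on the first 10 years
middle = flat.copy(); middle[45:55] -= 0.3   # same step straddling the middle

print(trend(flat))    #  0.00
print(trend(early))   # ~+0.16 C/century from the same-sized adjustment
print(trend(middle))  # ~ 0.00 -- little leverage near the pivot
```

Whether the real adjustments behave like the first case or the second is exactly the question that a simple count of ups versus downs cannot answer.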
The need to give a value to an anomaly comes about because you are UNABLE to offer an accurate and precise measurement of what you are claiming to represent.
It is a sign of weakness at a rather basic level in your data collection method, and not an indication of ‘settled science’.
The same goes for many proxies, where you can throw as much statistics at them as you like, but the bottom line remains that they are used because ‘they are better than nothing’, which is the alternative, not because the data is valid and of high quality.
Nick
You cannot get a correct weighting of stations unless ALL [terra] stations surveyed balance out together to zero latitude. Since most of your stations are in the NH, you get a totally incorrect result.
I did this balancing act with my data sets and find that the earth is in fact cooling, not warming.
[you can click on my name to read my final report]
Why do NOAA USCRN readings arrive so late? It’s now 15 November but they still only have readings up to September; they are lagging over six weeks behind. I thought these readings were collected automatically.
Your link is showing October data (0.2°F anomaly).
But only September is there. DIY: Change the final month from October to September. The data does not change.
The time scale is set to ‘previous 12 months’ leading to October 2018. Set it to 1 month. October 2018 is there.