IPCC has never waited for 30-year trends, and they were right.
Guest essay by Barry Brill
Under pressure at a media conference following release of its Summary for Policymakers, AR5 WG1 Co-Chair Thomas Stocker is reported to have said that “climate trends should not be considered for periods less than 30 years”.
Some have seen this as the beginning of an IPCC ploy to continue ignoring the 16-year-old temperature standstill for many years into the future. But even the IPCC must know that any such red herring is dead in the water:
1. When James Hansen launched the global warming scare in 1988, there had been no statistically significant warming over the previous 30 years and the warming trend during 1977-87 was 0.0°C. The IPCC was also established that year.
Source: Woodfortrees plot
2. At the time of the first IPCC report in 1991 (FAR), the warming trend was barely 11 years old.
Source: Woodfortrees plot
3. Most significantly, the Rio Earth Summit in 1992 adopted the UNFCCC treaty on the basis of a 30-year cooling trend followed by only 12 years of warming. That treaty dogmatically redefined “climate change” as being anthropogenic and eventually committed over 190 countries to combat “dangerous” warming.
4. The latest WG1 report bases its assessment of sea level rise and ocean heat content on the trend in satellite readings which have been available for only 19 years, coupled with ARGO reports for a period less than a decade. There is no apology for the short periods.
5. In 2007, the AR4 made much of the fact that the warming trend over the previous 15 years exceeded 0.2°C/decade. In 2013, the AR5 plays down the fact that there is no significant warming at all during the previous 15 years. (But AR5 cites 0.05°C/decade without mentioning that this figure is ±0.14°C).
6. If the IPCC wants to focus on 30-year trends, why did it make no comment on the fact that the current 30-year trend has fallen to 0.174°C/decade from the 0.182°C/decade trend that was the (1992-2006) backdrop to the AR4? Particularly, as the intervening 6-year period has been characterised by record increases in CO2 emissions.
7. Dr Stocker’s criticism of short-term trends as being influenced by start and end dates, ignores that long-term trends are similar. He picked a 60-year period (1951-2010) to produce a 0.12°C/decade trend, when a 70-year or 80-year period would have shown a much-reduced trend of 0.07°C.
8. WG1 scientists found it appropriate to include a statement in the AR5 SPM that
“Models do not generally reproduce the observed reduction in surface warming trend over the last 10 –15 years.”
3 months later, this crucial sentence was disappeared by a secret conclave of politicians/bureaucrats – not by scientists.
9. Dr Jarraud, secretary-general of the World Meteorological Organisation (WMO), told journalist David Rose that his question about the standstill was “ill-posed”. The WMO issues manuals on best practice for climatology and regards itself as the premier authority on measuring temperature trends. Here is what its manual WMO GUIDE TO CLIMATOLOGICAL PRACTICES (3RD EDTN) has to say about 30-year periods:
Chapter 4.8.1 Period of calculation“A number of studies have found that 30 years is not generally the optimal averaging period for normals used for prediction. The optimal period for temperatures is often substantially shorter than 30 years, but the optimal period for precipitation is often subtantially greater than 30 years.”And (at page 102):“The optimal length of record for predictive use of normals varies with element, geography, and secular trend. In general, the most recent 5‑ to 10‑year period of record has as much predictive value as a 30‑year record.”
Prior to release of the SPM, Bloomberg reported that some countries (notably Germany) wanted to wholly ignore the temperature standstill and pretend that the 20-year-old paradigm was still intact.
Few expected that would happen, predicting a sharply-reduced best estimate of sensitivity and rueful acknowledgment that natural factors had been under-estimated. The fact that days of debate culminated in this absurd canard about 30-year trends is a powerful indicator of just how desperate the climate establishment has now become.
Discover more from Watts Up With That?
Subscribe to get the latest posts sent to your email.


barry says
1) based on your early work of a handful of weather stations.
2) You would more easily find quotes from people like Roy Spencer, Roger Pielke Snr and Anthony Watts stating that increased levels of CO2 should warm the world in the long-term.
3) But if you think any old time period is good enough, it has been warming again since 2008.
4) UAH, the skeptics’ choice, shows a trend of 0.3C/decade!
henry says
1) A handful? 47 weather stations is just a handful?
http://blogs.24.com/henryp/2013/02/21/henrys-pool-tables-on-global-warmingcooling/
2) As far as I know I have never personally argued with them about more CO2 causing more warming but if I had, then they must show me the balance sheet same as I asked you, here:
http://blogs.24.com/henryp/2011/08/11/the-greenhouse-effect-and-the-principle-of-re-radiation-11-aug-2011/
that would prove to me that the net effect of more CO2 in the atmosphere is that of warming rather than cooling.
3) The first thing you should know if you really want to study climate science is that the irradiance varies over one solar cycle; therefore, to chose half a cycle would give you a complete wrong impression…..
This is why I chose not 10 years but 11 years. (2013-2002=11). That is equivalent time of one solar cycle. In addition, you should know that real climate scientists must consider the fact that there are more solar cycles. I stumbled upon one myself, here:
http://blogs.24.com/henryp/2012/10/02/best-sine-wave-fit-for-the-drop-in-global-maximum-temperatures/
[note: It seems to me this 88 year solar/weather cycle was already calculated from COSMOGENIC ISOTOPES as related in this study: Persistence of the Gleissberg 88-year solar cycle over the last ˜12,000 years: Evidence from cosmogenic isotopes Peristykh, Alexei N.; Damon, Paul E. Journal of Geophysical Research (Space Physics), Volume 108, Issue A1, pp. SSH 1-1, CiteID 1003, DOI 10.1029/2002JA009390]
Now look back at the global record and you will find that it generally started warming from around 1950, ignoring short spells of cooling in between. Hence, we know that from 2000 to 2040 it will be generally cooling.
Before 1940 the global temp record is murky, to say the least, because of various reasons.
(no calibrated thermometers, poor global representation, missing data when workers went on leave, etc.)
4) I believe UAH has issues with calibration; it does not agree with my own data set from 2000 but neither does it agree with any other data set. I also think that UAH only measures between plus 30 and minus 30 latitude, which could give a wrong impression about the global cooling taking place, especially if you are looking at average temps.
Predictably, global cooling would cause a small (?) shift of cloud formation and precipitation, more towards the equator, on average.When water condenses, large amounts of warmth are released……
If a data set is not globally representative you have to be careful….
ask me.
Keep your eyes on maximum temperatures and soon they will open up. And you feel liberated.
I can drive a big truck now and take my dogs with me without having to feel guilty.
Isn’t that great? I think that is worth something. It was worth all my work and all the trouble finding out things for myself.
If I were you, I would check out my final report on this:
http://blogs.24.com/henryp/2013/04/29/the-climate-is-changing/
Henry,
UAH temp record is derived from meausring radiance in the atmosphere between 85S and 85N. RSS have the same coverage – they use the same data from the same satellites, but the process algorithms are different.
Not 30S to 30N. If you think coverage is a problem for UAH, then you must discount RSS for the same reason. I think that the coverage is outstanding compared to the surface records, but, as you note, satellites have problems with drift and orbital decay, and the data set has to be woven through different satellites as they came on and offline. Also, the vertical radiance band is ~4 kilometers high, so they are measuring the temperature of thre lower troposphere rather than the surface (over land – they measure skin temperature of the oceans), and they have to estimate calcs to winnow out the full measured column, which includes radiance from higher up.
Another skeptic I converse with compiles a blog on the Australian temperature record, and says the national standard station count (112) is not enough. I disagree with him. If I remember correctly, 49 is more than you had when you were posting about it in early 2011. I think it is possible to get a fairly useful reflection of global temp trends with so few stations, but they would have to be well spaced out over the globe.
What is the spatial distribution of your data set?
Of the 11 year cycle, the total output of the sun has changed little since the 1950s, but the globe has warmed. The sun has had little influence during that period, so I would not expect it to have a strong influence for the next 30 years, unless it’s behaviour becomes different. I note from the paper you cited at your blog that the researchers don’t see a strong influece on global temperatures regarind the purported 88 year cycle.
The earlier surface temperature record is corroborated by proxy evidence (boreholes, tree-rings, sediment, glacial retreat). If you think proxy evidence is good enough to reconstruct an 88-year solar cycle for 12,000 years from one proxy type, wouldn’t you consider a range of proxy types to be an even stronger corroboration of the early temperature record?
Also, they have much more than 49 stations for the early temperature record, so by your own standard, this should be sufficient.
My point being that if there is an 88-year solar cycle, it is not running the global climate, as the globe is warmer now than it was 100 years ago.
The 11-year solar cycle can influence trends on short time scales. You need at least two full cycles to balance out the fluctuations, not just one, partly because the 11-year cycle is not perfectly regular. The shortest duration in the 20th century was 9.5 years, longest was 12.5, and short cycles usually follow long ones. The last cycle, with minimum at 2008, was 12.5 years duration.
And other non-climatic factors influence surface trends on short time-scales – el Nino/la Nina quasi-periodic fluctuations being one of the most infuential. The standard length for climate trend analysis is 30 years to suppress the effects of short-term variation, but you can get statistically trends for climate from surface records as little as 20 years in length. Slightly more for satellite records, owing to the greater variability (el Nino la Nina years tend to be larger temp changes in the satellite data).
20 years for a climate signal to emerge is a good minimum, no shorter. From a purely statistical point of view, no ten-year trend in the instrumental record is statistically significant. The time period is too short. As an example, the trend of HadCRUt4 from 2002 through 2012 is -0.046C/decade, and the confidence interval is +/- 0.19C, which means the trend may be anywhere between -0.236C/decade and 0.144C/decade. The time period is too short to get a better estimate. The sampled trend is not statistically significant. This is the case for all data sets for this time period – and any other 10-year time period in any of the records.
Ten years is too short to get useful trend estimates, for many reasons.
There must be something wrong with UAH that happened recently, because over the longest period, ie 30 years, I get the same result as UAH, ie 0.13 degrees C warming per decade (looking at my table for means)
112 weather stations in the same continent will not give a global result. At the beginning here I give the sampling procedure to follow to get a globally representative sample:
http://blogs.24.com/henryp/2013/02/21/henrys-pool-tables-on-global-warmingcooling/
(e.g. longitude does not matter if we are looking at the change from the annual average in degrees/annum)
As I said, means you can look at, but it confuses, maxima you must study and you will easily find the pattern. I stopped at 47 stations because of finding very high correlation on the pattern for the drop in the speed of maximum temperatures..
Hale-Nicholson proposed the 22 year sunspot cycle and they claimed that over 220 years there have been 10 cycles. Basically they say it is wrong to chart the 11 year cycles as “direct current” cycles in which all the amplitudes march in the same direction.
Within a sine wave of 88 years there are 4 quadrants of each 22 years. Indeed, it is not the total irradiance from the sun that varies much. It is the variation within TSI that causes different chain reactions TOA that changes the amount of energy coming in, as observed from maxima.
Did you see the sine wave I plotted for Anchorage? (it is below the global curve)
Of course, remember earth also produces its own energy (core heat/volcanic/lunar/etc) .
The proposal of a 90-100 year weather cycle is therefore the most probable position and history backs it up. Gleisberg was not stupid. I wonder how he figured it out?
People were looking at the planets to explain this behavior until the IPCC arrived.
http://www.cyclesresearchinstitute.org/cycles-astronomy/arnold_theory_order.pdf
Barry says
My point being that if there is an 88-year solar cycle, it is not running the global climate, as the globe is warmer now than it was 100 years ago.
Henry says
1) how would you prove this to me if nobody can even show me a re-calibration certificate of a thermometer from before 1940? They did not re-calibrate until after 1950. How would you know the weather is not about the same now as it was 100 years ago>?
2) even if there is slight upturn during the past century, how would you know if there are not more longer term cycles, e.g. those that seem to depend on the positions of the planets?
Anyway, it seems this Gleissberg cycle is more dominant now and for the time being it will be cooling until 2040 and no you or the ipcc can change this.You can keep fooling them by adjusting the figures, but in the end, people having to shove snow in late spring will make up their own minds about global warming.
Agreed, but that was not what I said. My skeptic friend, who has done a lot of work on the Australian temp record, says that 112 stations is not sufficient for a single continent – I can imagne what he would say about using 47 stations for the entire global surface record. But, as I said, I disagree with him.
It is very difficult to find coherent scientific premises amongst skeptics. They contradict each other.
How much coverage/weighting do you give to the Arctic region, the fastest warming region on the planet? Longitude definitely does matter, because rates of climate change are different in different parts of the globe. Internal variation and multidecadal cycles have an effect on certain locations but not on others.. A truly representative sample would have a god lat/long spread. Do you have a chart of the weather stations overlayed on a global map? That would be the clearest visual way to check your coverage.
Your table did not have enough information. How do you weight closely clustered stations with others that are more sparse? Ie, how do you prevent weight given to one particular region? Too many stations clustered in one area can skew results.
I note that over the long term your results are not far from the official records, and the attempts made by skeptical bloggers and others.
UAH isn’t ‘wrong’, any more than the others are ‘wrong’. You should not omit information and then post-justify the decision by assuming the data must be wrong simply because it doesn’t agree with your conclusions. They are all even wronger over short time scales. That’s the problem with only using 10 years of data. Here’s another 10 years of data.
http://www.woodfortrees.org/plot/gistemp/from:1987/plot/hadcrut4gl/from:1987/plot/uah/from:1987/plot/rss/from:1987/plot/gistemp/from:1998/to:2008/trend/plot/hadcrut4gl/from:1998/to:2008/trend/plot/rss/from:1998/to:2008/trend/plot/uah/from:1998/to:2008/trend
This time it’s RSS that has a declining trend when the others are warming. By your reckoning, we should discount RSS on this basis – lack of agreement.
But this lack of agreement is not because RSS is ‘wrong’, just that on short timescales interannual variability dominates trends, and this makes a difference when different data sets reflect interannual variation differently. RSS has the greatest temperature changes in la Nina/el Nino years, and as the trend starts with the super el Nino of 1998, RSS starts with a higher anomaly than the other data sets. This doesn’t matter over longer time scales, because the effect of ENSO evens out after 25 years or so.
Are you familiar with statistical analysis and what ‘statistical significance’ means? 10 year trends of global surface temperature/lower troposphere always fail statistical significance, by a large margin. You can’t tell anything about trend with only 10 years worth of data.
For the period 1998 through 2007, the decadal trends with confidence intervals are:
GISS | 0.151 +/- 0.269
Had4 | 0.121 +/- 0.264
RSS | -0.053 +/- 0.425
UAH | 0.074 +/- 0.442
For a trend to be statistically signficant, it has to be greater than the confidence interval, whether positive or negative. Note that the confidence interval is even larger for the satellite records than the surface records, an expected result owing to the greater interannual variability in the data.
You can choose any ten-year period from any of the records and come up with the same result. 10 year global temperature trends have no meaning. We have no confidence about trends from 2002.
barry:
I agree with much that you say in your post at October 2, 2013 at 11:40 pm. However, this conclusion misleads.
Oh, but “10 year global temperature trends” do have “meaning”.
They mean that global warming or cooling is not discernible over a decade.
Now (i.e. the present) is the only valid start date when considering how long there has been no discernible change in global temperature. And one then considers back in time from now. Any other date is a cherry-pick.
Smoothing the data is processing it. We are interested in whether the data shows a discernible trend: we are not interested in whether processed data shows anything.
Any model of change could be used but climate science uses linear trends, so that is the appropriate model.
The discernment has to be a trend different from zero at 95% confidence because that is the confidence level used by ‘climate science’.
So, to assess whether there has been discernible global warming recently one needs to take one of the data sets of global temperature time series (e.g. HadCRUTn, GISS, RSS, etc.) and assess past time periods from now to determine the shortest time period with linear trend which differs from zero with 95% confidence.
1.
All the available time series of global temperature show no discernible global warming or global cooling at 95% confidence for at least the most recent 17 years. RSS shows no discernible global warming or global cooling at 95% confidence for 22 years.
2.
Importantly, all the available time series of global temperature DO show discernible global warming or global cooling at 95% confidence for the previous 17 years (i.e. between 34 and 17 years ago.
3.
Hence, discernible global warming stopped at least 17 years ago.
Any other statement (except to query the validity of global temperature data compilations) is spin.
Richard
Richard,
You can only say that the long-term temperature trend has changed if the difference between the previous and the following trend is statistically significant. IOW, there has to be no overlap in the trends accounting for confidence intervals.
The most recent full-year 17 year period is 1997 to 2012 inclusive. Let’s look at the actual numbers. Here are the decadal trend results, with confidence intervals.
GISS | 0.082 +/- 0.131
Had4 | 0.049 +/- 0.126
RSS | -0.009 +/- 0.225
UAH | 0.093 +/- 0.230
The confidence intervals give us our range of possibile trends per decade for each data set. The ranges are:
GISS | -0.049 to 0.213
Had4 | -0.077 to 0.175
RSS | -0.234 to 0.216
UAH | -0.137 to 0.323
If there is overlap with the previous 17 years (or longer), then the trend change is not statistically significant.
Plot 1979 through 1996, and then 1997 thru 2012 (here)
The trends look quite different, but is the difference statistically significant?
Here are the 1979 through 1996 decadal trends for the four temp records, including confidence intervals.
GISS | 0.108 +/- 0.121
Had4 | 0.108 +/- 0.107
RSS | 0.071 +/- 0.170
UAH | 0.034 +/- 0.178
Not only do the confidence intervals overlap for the two periods, the intervals overlap with the central estimate for each data set. The difference (deceleration) is not statistically significant, at least not with linear regression.
17 years is not quite long enough, either. Of the four data sets, only one trend achives statistical significance over such a period – Had4 from 1979 to 1996. For a better comparison, you’d want to start with a statistically significant trend as your ‘null’ hypothesis, otherwise none of it is meaningful. For that you’d want to run a base trend line at least 25 years long for the surface records to compare with.
I think it is appropriate to speak of an ‘apparent’ slowdown in global warming since 1997/98, but in purely statistical terms, there is not enough data yet to make that determination to 95% confidence limits.
Any other conclusion is ignoring the uncertainty in the trend estimates.
A more straightforward way of doing this would be to run a second degree polynomial trend from 1979 to 2012, and see if that curve is statistically significant. I don’t know how to work out the uncertainty interval for that, unfortunately. Anyone else capable of doing it? I would guess that the result is likewise not statistically significant, but it would be good to know for sure. And anyone with the patience could test that until they find a long enough period with statistical significance and see the direction of acceleration by the end of the time series.
I took my own advice and plotted a base trend of 25 years for GISS and Had4, 1972 through 1996. This has the effect of better reflecting the long-term trend, and also reducing the uncertainty, giving a better chance for the last 17 years with confidence intervals to show statistically significant trend change.
Here’s the chart.
Here are the decadal trends with confidence intervals from 1972 through 1996:
GISS | 0.147 +/- 0.076
Had4 | 0.150 +/- 0.071
For the last 17 years:
GISS | 0.082 +/- 0.131
Had4 | 0.049 +/- 0.126
The confidence intervals for the last 17 years overlap with the central estimates for the statistically significant previous period of 25 years.
Thus, we cannot say with linear regression that the long-term trend has changed. Not to 95% confidence limits. 17 years of global surface data leaves too much uncertainty to make a definitive determination.
Barry says
UAH isn’t ‘wrong’, any more than the others are ‘wrong’.
Henry says
they admitted to me that they do have calibration problems, especially related to the actual zero point of temperature in Kelvin
I am also waiting for someone (other than yourself) to confirm or deny to me that the UAH is mainly representative of the tropics
barry says
How much coverage/weighting do you give to the Arctic region, the fastest warming region on the planet?
henry says
I do appreciate this question as I see now that you and all have not grasped the very basic of my sampling technique to see if there is a change of energy coming in. Obviously,when I looked at earth, standing on my table, I decided from the beginning that the amount of samples taken from the NH must equal the amount of samples taken from the SH. However, I never clarified this to anyone because I thought it was so obvious….
That was the other reason why I stopped sampling: I could not balance here….
I have 24 samples from the SH and 22 from the NH but I am +100 on latitude balance.There is simply a shortage of samples from the SH at high latitude….
I have subsequently rewritten this part of my post:
1)
I took a random sample of weather stations that had daily data
In this respect random means any place on earth, with a weather station with complete or almost complete daily data, subject to the given sampling procedure decided upon and given in 2) below.
2)
I made sure the sample was globally representative (most data sets aren’t!!!) ……
that means
a) The amount of weather stations taken from the NH must be equal to the amount weather stations taken from the SH
b) The sample must balance by latitude (longitude does not matter, as in the end we are looking at average yearly temps. which includes the effect of seasonal shifts and irradiation + earth rotates once every 24 hours). The sample must also balance 70/30 in or at sea/ inland
c) all continents included (unfortunately I could not get reliable daily data going back 38 years from Antarctica,so there always is this question mark about that, knowing that you never can get a “perfect” sample)
d) I made a special provision for months with missing data (not to put in a long term average, as usual in stats but to rather take the average of that particular month’s preceding year and year after). As an example here you can see the annual average temperatures for New York JFK:
http://www.tutiempo.net/clima/New_York_Kennedy_International_Airport/744860.htm
You can copy and paste the results of the first 4 columns in excel.
Note that in this particular case you will have to go into the months of the years 2002 and 2005 to see in which months data are missing and from there apply the correction as indicated by me + determine the average temperature for 2002 and 2005 from all twelve months of the year.
e) I did not look only at means (average daily temp.) like all the other data sets, but also at maxima and minima… …
3)
I determined at all stations the average change in temp. per annum from the average temperature recorded, over the period indicated (least square fits)
4)
the end results on the bottom of the first table (on maximum temperatures),
http://blogs.24.com/henryp/2013/02/21/henrys-pool-tables-on-global-warmingcooling/
clearly showed a drop in the speed of warming that started around 38 years ago, and continued to drop every other period I looked//…
5)
I did a linear fit, on those 4 results for the drop in the speed of global maximum temps,
ended up with y=0.0018x -0.0314, with r2=0.96
At that stage I was sure to know that I had hooked a fish:
I was at least 95% sure (max) temperatures were falling. I had wanted to take at least 50 samples but decided this would not be necessary which such high correlation.
6)
On same maxima data, a polynomial fit, of 2nd order, i.e. parabolic, gave me
y= -0.000049×2 + 0.004267x – 0.056745
r2=0.995
That is very high, showing a natural relationship, like the trajectory of somebody throwing a ball…
7)
projection on the above parabolic fit backward, ( 5 years) showed a curve:
happening around 40 years ago. You always have to be careful with forward and backward projection, but you can do so with such high correlation (0.995)
8)
ergo: the final curve must be a sine wave fit, with another curve happening, somewhere on the bottom…
http://blogs.24.com/henryp/2012/10/02/best-sine-wave-fit-for-the-drop-in-global-maximum-temperatures/
Now, I simply cannot be clearer about this. The only bias might have been that I selected stations with complete or near complete daily data. But even that in itself would not affect randomness in my understanding of probability theory.
Either way, you could also compare my results (in the means table) with that of Dr. Spencers, or even that reported by others and you will find same 0.14 /decade since 1990 or 0.13/decade since 1980.
In addition, you can put the speed of temperature change in means and minima in binomials with more than 0.95 correlation. So, I do not have just 4 data for a curve fit, I have 3 data sets with 4 data each.They each confirm that it is cooling. And my final proposed fit for the drop in maximum temps. shows it will not stop cooling until at least 2039.
barry:
In this one post I am answering your three posts at October 3, 2013 at 3:22 am, October 3, 2013 at 3:41 am and October 3, 2013 at 4:00 am.
Your post addressed to me at October 3, 2013 at 3:22 am begins saying
NO! That is plain wrong.
1.
If there is a trend discernibly different from zero at 95% confidence over a period then there was a discernible trend.
2.
If there is not a trend discernibly different from zero at 95% confidence over a period then there was not a discernible trend.
3.
If a 17 year period had a discernible trend but the subsequent 17 year period does not have a trend then the discernible trend ended.
This is not to say it is known that there is no trend in the latter period. It means that there is no longer a trend that is discernible although there was in the former period.
The remainder of your post at October 3, 2013 at 3:22 am is based on your error so does not warrant comment.
Your post at October 3, 2013 at 3:41 am begins saying
No. You are trying to move the goal posts. As I said in my post at October 3, 2013 at 1:56 am
http://wattsupwiththat.com/2013/09/30/to-the-ipcc-forget-about-30-years/#comment-1434425
My having written that up front may imply to some that I presciently understood you might try to move the goal posts 🙂
Your post at October 3, 2013 at 4:00 am repeats your mistake of trying to pretend the ending of a discernible trend is not important. I have already refuted that in this post.
As I explained to you in my post at at October 3, 2013 at 1:56 am which induced your series of posts making desperate excuses:
discernible global warming stopped at least 17 years ago.
Live with it.
Richard
Barry says
to the Arctic region, the fastest warming region on the planet?
henry says
I hope you realize (now) that that is also a (directed) false opinion
Temperatures in Anchorage have dropped by as much as 2 degrees C since 2000
and NOBODY noticed?
Henry,
Thanks for going into detail.
RSS and UAH have the same calibration problems, as I said. They are analogous to difficulties with the surface data (but not the same kind, obviously). All data have problems, as you are aware from your own work (missing data). I do not know how TiuTempo deal with quality control of their data. If you discount UAH, you must also discount RSS – they use the same data from the same sources. They have differences in how they process them.
If you think that the Arctic has been cooling since 2000 based on one data set, then there is a problem with your coverage. Every data set, including the satellite data sets, shows greater warming in the Arctic. UAH shows 0.66C/decade for the Arctic ocean. Satellite measure radiance from the ocean skin, which is a much more accurate measurement than the troposphere over land. Even with the uncertainty from callibration problems, this result is still greater than the global record. Every data set shows greater warming in the Arctic. HADCrut3 had fewer Arctic weather stations than HADCrut4, and the increased data revealed an even greater amount of warming than was already apparent.
I said it’s possible to construct a fairly good temp record from so few stations, but you are running with a bare minimum, and unless you check against other stations/records, you could easily end up with a skewed result. Long data streams from weather stations is not by itself a guarantee. Anchorage is an example.
I did a short bit of research on TiuTempo. The findings are not salutary. What gives you confidence in their data quality?
We are agreed that there has been an ‘apparent’ slowdown in global temperatures for the last 15 years or so. We probably disagree on the climatic significance of this. However…
You ran a second order polynomial for the last 40 years and found deceleration with statistical significance? From 47 weather stations? Do I have that right?
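[Editor’s note] The quadratic-fit test being queried here can be sketched as follows. This is a minimal illustration using synthetic data: the anomaly series, coefficients and noise level are all invented, not Henry’s actual 47-station record.

```python
import numpy as np

# Hypothetical 40-year annual anomaly series (synthetic, for illustration
# only): a warming trend that gradually levels off, plus noise.
rng = np.random.default_rng(0)
years = np.arange(1973, 2013)
t = years - years.mean()                    # centre time to reduce collinearity
anoms = 0.015 * t - 0.0004 * t**2 + rng.normal(0.0, 0.1, t.size)

# Fit T = a*t^2 + b*t + c; cov=True also returns the coefficient covariance.
coef, cov = np.polyfit(t, anoms, 2, cov=True)
a, a_se = coef[0], np.sqrt(cov[0, 0])

# "Statistically significant deceleration" would mean a < 0 with |a|
# exceeding roughly two standard errors.
print(f"quadratic term: {a:.5f} +/- {2 * a_se:.5f} (2-sigma)")
```

Whether such a fit is meaningful for 47 weather stations is exactly the question being raised above; the sketch only shows the mechanics of testing the quadratic term.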
Richard,
What you have said amounts to:
The temperature trend from the last 17 years is not discernible.
I agree.
The goal is discerning temperature trends/changes in trend. I didn’t shift the goal posts, just tried to score from a different (better) angle. If you think a second degree polynomial is inappropriate and prefer to stick to linear trends, that’s ok with me.
If we stick with linear regression and restrict ourselves to surface temps and satellite lower tropospheric data, we can’t say much about the last 17 years. The trends are not discernible. We cannot discern that the trend has increased, decreased or flattened. Not with statistical confidence. We cannot say that global warming has ‘stopped’. Nor can we say that it has continued. We do not yet have enough data to say so to 95% confidence.
Basically, “we do not know with much confidence”.
Regarding surface and tropospheric temp trends for the last 17 years, that is all we can say.
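[Editor’s note] The “discernible trend” criterion Barry describes can be sketched like this, again with synthetic monthly anomalies rather than real HadCRUT/RSS/UAH data (the trend, noise level and window length are assumptions for illustration):

```python
import numpy as np

# Synthetic 17-year monthly anomaly series: a small underlying trend
# buried in noise (values are invented, not a real data set).
rng = np.random.default_rng(42)
years = np.arange(17 * 12) / 12.0           # time in years, monthly steps
anoms = 0.002 * years + rng.normal(0.0, 0.25, years.size)

# Ordinary least squares: slope in deg C per year, with its covariance.
coef, cov = np.polyfit(years, anoms, 1, cov=True)
slope = coef[0]
ci95 = 1.96 * np.sqrt(cov[0, 0])            # approx. 95% interval half-width

# The trend is "discernible" at ~95% confidence only if the interval
# around the slope excludes zero, i.e. |slope| > ci95; otherwise we
# cannot distinguish warming, cooling or a flat trend from this window.
print(f"trend: {slope * 10:+.3f} +/- {ci95 * 10:.3f} deg C/decade")
```

Note that this simple version treats the monthly residuals as independent; real temperature series are autocorrelated, which widens the interval further and makes short-window trends even harder to discern.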
Henry,
forgot to add – UAH coverage is 85S to 85N. You can see that in their notes at the bottom of the data page here,
http://www.nsstc.uah.edu/public/msu/t2lt/uahncdc_lt_5.6.txt
Arctic and Antarctic (which they call ‘NoPol’ and ‘SoPol’) is latitudes 60 to 85, North and South.
The data page also has columns for Northern Hemisphere/Southern Hemisphere land, ocean and combined, tropics, and USA and Australian temperatures. It’s a good data reference for UAH regional anomalies.
barry, if we cannot discern whether the trend has increased, decreased or flattened, could we say with 100% confidence it has not gotten any warmer in the last 17 years 🙂
barry says
Every data set shows greater warming in the Arctic.
henry says
well I am sorry. If that is true, then every data set is wrong. I have two stations in Anchorage at 61 latitude that show that average temperatures dropped by ca. -0.15 degrees per annum on average over the past 12 years. That is 1.8 degrees C in total… almost 2 degrees C cooler.
And nobody noticed????
As the people in Alaska have noted,
http://www.adn.com/2012/07/13/2541345/its-the-coldest-july-on-record.html
http://www.alaskadispatch.com/article/20130520/97-year-old-nenana-ice-classic-sets-record-latest-breakup-river-1
the cold weather in 2012 was so bad there that they did not get much of a harvest. And it seems NOBODY is telling the farmers there that it is not going to get any better.
Obviously, you can see from the data at the military base in Anchorage that the average temperatures dropped because the maxima dropped sharply, following the pattern of the Gleissberg solar/weather cycle. See my second graph here:
http://blogs.24.com/henryp/2012/10/02/best-sine-wave-fit-for-the-drop-in-global-maximum-temperatures/
(I had good maxima data going back to 1940 – by then the thermometers simply got stuck at the maximum; there is not much anyone can mess up)
I am also warning those people that are currently exploring the Arctic that they do not know what will come against them. They will lose their investments…
Counting back 88 years, i.e. 2013 - 88 = 1925, we are in 1925.
Now look at some eyewitness reports of the Arctic ice back then. Read the particulars in the actual news report.
http://wattsupwiththat.com/2008/03/16/you-ask-i-provide-november-2nd-1922-arctic-ocean-getting-warm-seals-vanish-and-icebergs-melt/
Sound familiar? Back then, in 1922, they had seen that the Arctic ice melt was due to the warmer Gulf Stream waters. However, by 1950 all that same ‘lost’ ice had frozen back.
I therefore predict that all the lost Arctic ice will also come back, from 2020-2035, as also happened from 1935-1950. Antarctic ice is already increasing.
@bit chilly
it will get a bit chilly
As I was just saying
Climate change=global cooling
http://wattsupwiththat.com/2013/10/05/norways-wheat-production-impacted-by-climate-change/
Danger from global cooling is documented and provable. It looks like we have only ca. 7 “fat” years left…
http://blogs.24.com/henryp/2013/04/29/the-climate-is-changing/
the silence from barry after my comments on “arctic warming” is rather deafening, don’t you think?
no doubt he has moved on to “greener” pastures