… or 19 years, according to a key statistical paper.
By Christopher Monckton of Brenchley |
The Great Pause has now persisted for 17 years 11 months. Indeed, to three decimal places on a per-decade basis, there has been no global warming for 18 full years. Professor Ross McKitrick, however, has upped the ante with a new statistical paper to say there has been no global warming for 19 years.
Whichever value one adopts, it is becoming harder and harder to maintain that we face a “climate crisis” caused by our past and present sins of emission.
Taking the least-squares linear-regression trend on Remote Sensing Systems’ satellite-based monthly global mean lower-troposphere temperature dataset, there has been no global warming – none at all – for at least 215 months.
This is the longest continuous period without any warming in the global instrumental temperature record since the satellites first watched in 1979. It has endured for half the satellite temperature record. Yet the Great Pause coincides with a continuing, rapid increase in atmospheric CO2 concentration.
Figure 1. RSS monthly global mean lower-troposphere temperature anomalies (dark blue) and trend (thick bright blue line), October 1996 to August 2014, showing no trend for 17 years 11 months.
The hiatus period of 17 years 11 months, or 215 months, is the farthest back one can go in the RSS satellite temperature record and still show a sub-zero trend.
Yet the length of the Great Pause in global warming, significant though it now is, is of less importance than the ever-growing discrepancy between the temperature trends predicted by models and the far less exciting real-world temperature change that has been observed.
The First Assessment Report predicted that global temperature would rise by 1.0 [0.7, 1.5] Cº to 2025, equivalent to 2.8 [1.9, 4.2] Cº per century. The executive summary asked, “How much confidence do we have in our predictions?” IPCC pointed out some uncertainties (clouds, oceans, etc.), but concluded:
“Nevertheless, … we have substantial confidence that models can predict at least the broad-scale features of climate change. … There are similarities between results from the coupled models using simple representations of the ocean and those using more sophisticated descriptions, and our understanding of such differences as do occur gives us some confidence in the results.”
That “substantial confidence” was substantial over-confidence. A quarter-century after 1990, the outturn to date – expressed as the least-squares linear-regression trend on the mean of the RSS and UAH monthly global mean lower-troposphere temperature anomalies – is 0.34 Cº, equivalent to just 1.4 Cº/century, or exactly half of the central estimate in IPCC (1990) and well below even the least estimate (Fig. 2).
Figure 2. Near-term projections of warming at a rate equivalent to 2.8 [1.9, 4.2] K/century, made with “substantial confidence” in IPCC (1990), January 1990 to August 2014 (orange region and red trend line), vs. observed anomalies (dark blue) and trend (bright blue) at less than 1.4 K/century equivalent, taken as the mean of the RSS and UAH satellite monthly mean lower-troposphere temperature anomalies.
The Great Pause is a growing embarrassment to those who had told us with “substantial confidence” that the science was settled and the debate over. Nature had other ideas. Though more than two dozen more or less implausible excuses for the Pause are appearing in nervous peer-reviewed journals, the possibility that the Pause is occurring because the computer models are simply wrong about the sensitivity of temperature to manmade greenhouse gases can no longer be dismissed.
Remarkably, even the IPCC’s latest and much reduced near-term global-warming projections are also excessive (Fig. 3).
Figure 3. Predicted temperature change, January 2005 to August 2014, at a rate equivalent to 1.7 [1.0, 2.3] Cº/century (orange zone with thick red best-estimate trend line), compared with the observed anomalies (dark blue) and zero real-world trend (bright blue), taken as the average of the RSS and UAH satellite lower-troposphere temperature anomalies.
In 1990, the IPCC’s central estimate of near-term warming was higher by two-thirds than it is today. Then it was 2.8 Cº/century equivalent. Now it is just 1.7 Cº/century equivalent – and, as Fig. 3 shows, even that is proving to be a substantial exaggeration.
On the RSS satellite data, there has been no global warming statistically distinguishable from zero for more than 26 years. None of the models predicted that, in effect, there would be no global warming for a quarter of a century.
The Great Pause may well come to an end by this winter. An El Niño event is underway and would normally peak during the northern-hemisphere winter. There is too little information to say how much temporary warming it will cause, but a new wave of warm water has emerged in recent days, so one should not yet write off this El Niño as a non-event. The temperature spikes caused by the El Niños of 1998, 2007, and 2010 are clearly visible in Figs. 1-3.
El Niños occur about every three or four years, though no one is entirely sure what triggers them. They cause a temporary spike in temperature, often followed by a sharp drop during the La Niña phase, as can be seen in 1999, 2008, and 2011-2012, where there was a “double-dip” La Niña that is one of the excuses for the Pause.
The ratio of El Niños to La Niñas tends to fall during the 30-year negative or cooling phases of the Pacific Decadal Oscillation, the latest of which began in late 2001. So, though the Pause may falter or even shorten for a few months at the turn of the year, it may well resume late in 2015. Either way, it is ever clearer that global warming has not been happening at anything like the rate predicted by the climate models, and is not at all likely to occur even at the much-reduced rate now predicted. There could be as little as 1 Cº of global warming this century, not the 3-4 Cº predicted by the IPCC.
Key facts about global temperature
- The RSS satellite dataset shows no global warming at all for 215 months from October 1996 to August 2014. That is more than half the 428-month satellite record.
- The fastest measured centennial warming rate was in Central England from 1663-1762, at 0.9 Cº/century – before the industrial revolution. It was not our fault.
- The global warming trend since 1900 is equivalent to 0.8 Cº per century. This is well within natural variability and may not have much to do with us.
- The fastest measured warming trend lasting ten years or more occurred over the 40 years from 1694-1733 in Central England. It was equivalent to 4.3 Cº per century.
- Since 1950, when a human influence on global temperature first became theoretically possible, the global warming trend has been equivalent to below 1.2 Cº per century.
- The fastest warming rate lasting ten years or more since 1950 occurred over the 33 years from 1974 to 2006. It was equivalent to 2.0 Cº per century.
- In 1990, the IPCC’s mid-range prediction of near-term warming was equivalent to 2.8 Cº per century, higher by two-thirds than its current prediction of 1.7 Cº/century.
- The global warming trend since 1990, when the IPCC wrote its first report, is equivalent to below 1.4 Cº per century – half of what the IPCC had then predicted.
- Though the IPCC has cut its near-term warming prediction, it has not cut its high-end business as usual centennial warming prediction of 4.8 Cº warming to 2100.
- The IPCC’s predicted 4.8 Cº warming by 2100 is well over twice the greatest rate of warming lasting more than ten years that has been measured since 1950.
- The IPCC’s 4.8 Cº-by-2100 prediction is almost four times the observed real-world warming trend since we might in theory have begun influencing it in 1950.
- From 1 April 2001 to 1 July 2014, the warming trend on the mean of the 5 global-temperature datasets is nil. No warming for 13 years 4 months.
- Recent extreme weather cannot be blamed on global warming, because there has not been any global warming. It is as simple as that.
Technical note
Our latest topical graph shows the RSS dataset for the 215 months October 1996 to August 2014 – just over half the 428-month satellite record.
Terrestrial temperatures are measured by thermometers. Thermometers correctly sited in rural areas away from manmade heat sources show warming rates appreciably below those that are published. The satellite datasets are based on measurements made by the most accurate thermometers available – platinum resistance thermometers, which measure temperature at various altitudes above the Earth’s surface via microwave sounding units. They also constantly calibrate themselves by measuring, via spaceward mirrors, the known temperature of the cosmic background radiation, which is 1% of the freezing point of water, or just 2.73 degrees above absolute zero. It was by measuring minuscule variations in the cosmic background radiation that the NASA anisotropy probe determined the age of the Universe: 13.82 billion years.
The graph is accurate. The data are lifted monthly straight from the RSS website. A computer algorithm reads them from the text file, takes their mean and plots them automatically, using an advanced routine that adjusts the aspect ratio of the data window on both axes so as to show the data at maximum scale, for clarity.
The latest monthly data point is visually inspected to ensure that it has been correctly positioned. The light blue trend line plotted across the dark blue spline-curve that shows the actual data is determined by the method of least-squares linear regression, which calculates the y-intercept and slope of the line via two well-established and functionally identical equations that are compared with one another to ensure no discrepancy between them. The IPCC and most other agencies use linear regression to determine global temperature trends. Professor Phil Jones of the University of East Anglia recommends it in one of the Climategate emails. The method is appropriate because global temperature records exhibit little auto-regression.
Dr Stephen Farish, Professor of Epidemiological Statistics at the University of Melbourne, kindly verified the reliability of the algorithm that determines the trend on the graph and the correlation coefficient, which is very low because, though the data are highly variable, the trend is flat.
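For readers who wish to reproduce the trend figure, here is a minimal sketch of the least-squares calculation the note describes. The anomaly values below are placeholders, not the RSS series, and the column layout of the RSS text file should be checked against the link given later in this thread before loading real data.

```python
# Minimal sketch of the least-squares linear-regression trend described in
# the technical note. The anomaly series here is a placeholder; real use
# would load the year/month/anomaly columns from the RSS monthly file.
import numpy as np

def linear_trend(decimal_years, anomalies):
    """Return (slope per year, intercept) from an ordinary least-squares fit."""
    slope, intercept = np.polyfit(decimal_years, anomalies, 1)
    return slope, intercept

# Placeholder series purely to show the units involved (Oct 1996 .. Aug 2014):
years = np.arange(1996.75, 2014.667, 1.0 / 12.0)
anoms = 0.25 + 0.1 * np.random.randn(years.size)   # stand-in anomalies, not RSS data
slope_per_year, _ = linear_trend(years, anoms)
print(f"Trend: {slope_per_year * 10:+.3f} K/decade "
      f"({slope_per_year * 100:+.2f} K/century equivalent)")
```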
Other statistical methods might be used. A paper by Professor Ross McKitrick of the University of Guelph, Canada, published at the end of August 2014, estimated that at that date there had been 19 years without any global warming.
At the present state of the Earth’s evolution, the atmosphere self-controls to make the average warming from all well-mixed GHGs exactly zero. As for the IPCC’s claims about radiative and IR physics, it is easy to show that they are wrong from the operation of night-vision equipment.
A detector at the same temperature as its surroundings shows an image that shimmers, alternating light and dark at any position. What is really being detected is the thermal incoherence about zero mean flux, thus proving that net energy flux is the vector sum of opposing emittances.
‘Back radiation’ does not exist. There is no ‘positive feedback’. The effect of GHGs on OLR is exactly compensated by lower-atmosphere processes. IPCC science is absurdly wrong.
http://woodfortrees.org/graph/hadcrut4gl/from:1996.5/to:2014.5/plot/hadcrut4gl/from:1996.5/to:2014.5/trend/plot/gistemp/from:1996.5/to:2014.5/plot/gistemp/from:1996.5/to:2014.5/trend/plot/rss/from:1996.5/to:2014.5/plot/rss/from:1996.5/to:2014.5/trend/plot/uah/from:1996.5/to:2014.5/plot/uah/from:1996.5/to:2014.5/trend/
This analysis is based on the clear outlier, RSS. Adding RSS to UAH in Figures 2 and 3 is adding sour milk to fresh.
Note that WoodForTrees uses the UAH v5.5 dataset; at the full page you can click on “Raw data” and see the source. For the post above, while the RSS source is given on Figure 1 and discussed in the text, no mention is made of the UAH version and source.
It has to be 5.6 since it goes to August and 5.5 is not out yet for August. (If you know the August anomaly for version 5.5, please let me know.)
http://vortex.nsstc.uah.edu/public/msu/t2lt/uahncdc_lt_5.6.txt
Year Mo Globe Land Ocean NH Land Ocean SH Land Ocean Trpcs Land Ocean NoExt Land Ocean SoExt Land Ocean NoPol Land Ocean SoPol Land Ocean USA48 USA49 AUST
2014 6 0.31 0.19 0.37 0.32 0.23 0.40 0.29 0.12 0.35 0.51 0.53 0.50 0.18 0.13 0.25 0.21 -0.22 0.33 0.31 0.36 0.24 -1.21 -1.32 -1.12 -0.11 -0.02 0.44
2014 7 0.30 0.13 0.40 0.29 0.20 0.38 0.31 -0.01 0.42 0.46 0.42 0.49 0.17 0.10 0.25 0.27 -0.27 0.42 0.21 0.41 -0.11 -0.30 -1.09 0.33 -0.28 -0.09 0.42
August v5.6 isn’t here either.
http://vortex.nsstc.uah.edu/public/msu/t2lt/?C=M;O=D
Latest “Last Modified” date is 14-Aug-2014. If he used August data, I don’t know where he got it.
The “Technical note” says a “computer algorithm” scrapes the RSS data monthly and does the graphing. Perhaps there is likewise another that gathers the UAH data, and the two were averaged by another routine or two that produced the other graphs.
http://data.remss.com/msu/monthly_time_series/RSS_Monthly_MSU_AMSU_Channel_TLT_Anomalies_Land_and_Ocean_v03_3.txt
The August RSS anomaly is 0.193. Zoom in on Figure 2, “RSS + UAH” is just under 0.2, could be about 0.19. Same in Figure 3.
July RSS was 0.351. July UAH was 0.221. It is unlikely August UAH v5.6 would be close enough to August RSS to allow the average to be just about the same as August RSS alone.
If August UAH was not available, a smart “computer algorithm” for averaging might simply average the available data instead of returning a null or “N/A” value, as spreadsheets do. A smarter one, though, should not give an average without all the data. It is possible that whatever was done to generate the graphs did not flag August UAH as missing, but labelled them as extending to August because August RSS data were available.
Conclusion: August UAH was not used in Figures 2 and 3.
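For what it is worth, the two averaging behaviours described above can be shown in a few lines. The values are the RSS and UAH anomalies quoted in this thread, with August UAH deliberately treated as not yet published; the layout is illustrative only.

```python
# Sketch of the two averaging behaviours discussed above: a spreadsheet-style
# average that quietly skips missing values versus a strict average that
# returns nothing until every dataset has reported.
import numpy as np

rss = {"2014-07": 0.351, "2014-08": 0.193}
uah = {"2014-07": 0.221, "2014-08": np.nan}   # assumed missing at plot time

for month in ("2014-07", "2014-08"):
    pair = np.array([rss[month], uah[month]])
    lenient = np.nanmean(pair)    # averages whatever is present
    strict = np.mean(pair)        # NaN until both values are present
    print(month, f"lenient={lenient:.3f}", f"strict={strict}")
```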
The August RSS anomaly is 0.193. Zoom in on Figure 2, “RSS + UAH” is just under 0.2, could be about 0.19. Same in Figure 3.
Exactly right! It was 0.199 according to:
http://www.drroyspencer.com/2014/09/uah-global-temperature-update-for-august-2014-0-20-deg-c/
From Werner Brozek on September 5, 2014 at 8:28 am:
Ah, that’s right! Thank you, my friend. I had forgotten about the UAH update.
It does seem strange to make such a big issue in the “Technical note” section of how the RSS numbers are accurate due to using a “computer algorithm”, and then conveniently not mention that the last UAH number used in 2/3 of your graphs was added by hand; but it does make it possible for August UAH to have been included.
As the possibility is there, and the value is close enough to average with August RSS as shown in the two graphs, I admit my conclusion may be in error and withdraw it.
But the use of RSS still reeks of cherrypicking, when it’s clearly at variance with even the other satellite dataset. Mixing it with UAH is like mixing a rotten banana with fresh strawberries then offering the “freshly made” smoothie for sale.
With everything else showing the reality of the “hiatus”, do we really need to promote the outlier to also gloat over a few extra months?
it does make it possible for August UAH to have been included
Of course this is possible with everything but GISS. GISS can have a latest value that would make you think the plateau will go up or down, but then the opposite happens due to all other adjustments.
I also wish UAH and RSS were closer. We were told that would be the case with version 6. But who knows when that will come out?
You talk about “mixing a rotten banana with fresh strawberries”. If you were to categorize GISS as one of these, which one would it be?
Ah, GISS for this century is not bad, look at recent trend lines, too many eyes watching for them to get away with much. It’s the past they keep screwing with, before the satellites were watching, those numbers you can’t trust. For now they’re fresh strawberries, although somewhat tart with a residual taste of fertilizer.
But there is still a rather large difference between GISS and Hadcrut4. With GISS, the slope is flat for 9 years and 11 months. With Hadcrut4, the slope is flat for 13 years and 6 months.
http://www.skepticalscience.com/trend.php
GISS, 12mo moving average, 1995.92 to 2014.5 yields 0.110 +/- 0.110 °C/Decade (2 sigma). No statistically significant warming from November 1995 to July 2014, 18 yrs and 8 mo. Isn’t that better than saying there’s a flat slope for 9 yrs and 11 mo?
HadCRUT4, 1994.67 to 2014.5, 0.097 +/- 0.097 °C/Decade. September 1994 to July 2014, 19 yrs and 10 mo. Not that much difference between HadCRUT4 and GISS that way, eh?
HadCRUT4 is practically two decades without statistically significant warming? Why aren’t we mentioning that instead of mucking around with outlier RSS and flat slopes?
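For anyone wanting to reproduce numbers of that kind, here is a rough sketch of a trend with a 2-sigma interval. Note that the SkS calculator corrects its error bars for autocorrelation, which this plain ordinary-least-squares version does not, so it illustrates the idea rather than the exact figures quoted above.

```python
# Sketch of a "statistically significant warming" check: fit an OLS trend and
# report slope +/- 2 standard errors in °C/decade. No autocorrelation
# correction is applied, unlike the SkS trend calculator quoted above.
import numpy as np

def trend_with_2sigma(decimal_years, anomalies):
    t = np.asarray(decimal_years, dtype=float)
    y = np.asarray(anomalies, dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    se = np.sqrt(np.sum(resid**2) / (t.size - 2) / np.sum((t - t.mean())**2))
    return slope * 10, 2 * se * 10          # °C/decade and 2-sigma, °C/decade

# Placeholder data: a trend is "not significant" when the 2-sigma interval
# straddles zero, e.g. 0.110 +/- 0.110 °C/decade.
t = np.arange(1995.92, 2014.5, 1.0 / 12.0)
y = 0.005 * (t - t[0]) + 0.1 * np.random.randn(t.size)
slope_dec, ci_dec = trend_with_2sigma(t, y)
print(f"{slope_dec:+.3f} +/- {ci_dec:.3f} °C/decade (2 sigma, no AC correction)")
```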
The site that you mention has not been updated since January. There is no difference if you put in 2014.5 or 2014.08. Besides, I cover statistically significant warming as well in my own posts. See my next one with July data, either today or tomorrow. In section 1, I give the time for no warming, and in section 2, I give the time for no statistically significant warming. They are really two different things.
HadCRUT is bullshit: you cannot take the “average” of the temperature of two media that have heat capacities that are orders of magnitude different.
HadSST is water, CRUTem is air.
Even if they want to argue that SST is a ‘proxy’ for near surface marine air temperature, land based air temps have a variability that is roughly twice that of SST. It’s trying to average apples and oranges.
I agree that there is no reason to prefer RSS to UAH, so that is a cherry pick. It would be better to do the same processing on both. The result is not much different. IIRC it’s >15 yr for UAH.
In response to “Greg”, there are several reasons to prefer RSS to UAH. It usually reports first; I have now been providing the RSS trends for several years, giving an interesting insight into the ever-lengthening Pause; and at present the compilers of the UAH data have realized their dataset is running hot and are about to bring it more closely into line with the mean of the three terrestrial datasets in a significant revision.
And – though I am open to correction on this – the CRUTem data do not measure the temperature of the land surface: they measure the temperature of the air immediately above the land surface. By the same token, the HadSST data do not measure the temperature of the sea surface: they measure the temperature of the air immediately above the sea surface.
Though it is true that temperatures above the land are more volatile than those above the sea, there is no particular reason why one should not take the mean of the temperature readings from both datasets as the basis for providing some indication of whether the Earth as a whole is warming or cooling. Whether over the land or over the sea, it is air temperatures that are being averaged – so, apples compared with apples, not apples with oranges.
The UAH trend from January 1999 to August 2014 is 0.143 degrees C / decade, a bit higher than the overall trend and a 27% increase from where the overall trend stood in 1999. The mean temperature in 1999 (i.e. overall mean from 1/1979 – 1/1999) was -0.0971 °C. This has increased to 0.148 °C as of August 2014 (1/1979 – 8/2014). So during the 15-year hiatus, the UAH record shows: 1) the 15-year linear trend increased, 2) average temperatures increased over the previous 15 years, 3) the overall linear trend increased and 4) the overall average temperatures increased.
That’s one heck of a hiatus.
UAH 5.6 data was posted by Dr. Spencer. He said it will take a few days to show up in the database.
Thank you, my dear. I have noted my mistake above.
And now we have additional SO2 finding its way around northern reaches …
https://earthdata.nasa.gov/labs/worldview/?switch=geographic&products=baselayers,!MODIS_Aqua_SurfaceReflectance_Bands143,!MODIS_Terra_SurfaceReflectance_Bands121,!MODIS_Terra_CorrectedReflectance_TrueColor,!MODIS_Aqua_CorrectedReflectance_TrueColor~overlays,!Aura_Orbit_Dsc,!Aura_Orbit_Asc,!OMI_SO2_Upper_Troposphere_and_Stratosphere,!OMI_SO2_Middle_Troposphere,OMI_SO2_Lower_Troposphere,!AIRS_Prata_SO2_Index_Night,!AIRS_Prata_SO2_Index_Day,Reference_Labels,Reference_Features,Coastlines&time=2014-09-04&map=-129.476562,-30.345591,140.523438,118.998159
That could get interesting.
“The satellite datasets are based on measurements made by the most accurate thermometers available – platinum resistance thermometers, which not only measure temperature at various altitudes above the Earth’s surface via microwave sounding units but also constantly calibrate themselves by measuring via spaceward mirrors the known temperature of the cosmic background radiation.” Please can you explain this method of temperature measurement in more detail or give some reference? I don’t understand how you measure the radiation temperature with a Pt thermometer. It seems to act as a bolometer, and there must be some optical filters between the source and the thermometer.
There is a good explanation of how the satellites measure temperatures at Dr Roy Spencer’s website.
Mr Knoebel says that an r² of 0.000 indicates that the linear trend is a poor fit to the data. However, it is in the nature of the algorithm that determines r² that if the trend is zero then even small departures from the trend line will produce an r² of zero. It is self-evident from looking at the data curve that the data are stochastic: nevertheless, it is also self-evident that the rate of warming is zero, or as near zero as makes little difference, and the r² of 0.000 is one indication of this.
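A small numerical illustration of that point, using made-up anomalies rather than the RSS series: because r² measures only the share of variance explained by the fitted slope, a flat trend forces r² towards zero however large or small the scatter happens to be.

```python
# Illustration with synthetic data (not the RSS series): when the underlying
# trend is flat, the least-squares r² is near zero regardless of the size of
# the scatter about the trend line.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(215) / 12.0                    # 215 months, in years

for noise in (0.05, 0.5):                    # small and large scatter
    y = 0.0 * t + noise * rng.standard_normal(t.size)   # flat "trend" plus noise
    slope, intercept = np.polyfit(t, y, 1)
    fitted = slope * t + intercept
    r2 = 1 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)
    print(f"scatter={noise:4.2f}  slope={slope:+.4f} K/yr  r^2={r2:.3f}")
```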
Mr Knoebel says one should not talk of “no trend”. He prefers “neutral trend”. If the trend is neither a positive nor a negative trend but is a zero trend, it is often referred to as “no trend”.
Mr Knoebel says adding the RSS and UAH satellite datasets and taking their mean is inappropriate. However, this exercise produces a trend very close to the trend if one takes the mean of the three terrestrial datasets. RSS tends to run cold: UAH (in its current version, at any rate) tends to run hot. Averaging the two happens to cancel out the cold and hot running. UAH, however, is soon to move to version 6, which will bring it more closely into line with RSS.
Mr Knoebel says the August data for UAH were not used. They were used, for the monthly anomaly is published here by Roy Spencer. However, the data table available on the internet tends not to be updated till the middle of the month.
Mr Knoebel says I have cherry-picked the RSS dataset. However, the zero trend shown on that dataset is within the error margins of all the other datasets.
Bottom line: not one of the models predicted as its central estimate that there would be no global warming for approaching two decades notwithstanding the considerable growth in CO2 concentration over the period. No amount of statistical nit-picking will alter that fact. Likewise, there is a growing discrepancy between the predicted and actual warming rates. This, too, is undeniable. IPCC has all but halved its short-term predictions of global warming, demonstrating that it has – albeit with reluctance – accepted the fact of the Great Pause. It is time for Mr Knoebel to do the same.
“RSS tends to run Cold”
Looked at the answers in the back of the book, Sir Christopher? You seem to have access to divine knowledge of the true temperature.
“UAH, however, is soon to move to version 6” More ‘adjustments’ eh?
When are you going to start comparing like with like? IPCC’s scenarios are for surface temperature. Comparison with an average of the surface temperature data sets, for example, would be far more meaningful. Though, of course, it wouldn’t serve your agenda.
Village Idiot
You correctly say that doing as you suggest would not “serve” the “agenda” of Lord Monckton whose “agenda” is to proclaim the truth.
Please say why you think anybody would prefer the unstated agenda of an anonymous internet troll who admits to being an Idiot.
Richard
In answer to “Village Idiot”, comparison of the IPCC’s exaggerated predictions with the mean of the three principal terrestrial datasets would look very similar to comparison of its predictions with the mean of the two satellite datasets: for the terrestrial and satellite means are very close to one another. The sole advantage of using the satellite datasets is that they report sooner each month than the terrestrial datasets. However, from time to time I provide additional reports examining all of the principal datasets, terrestrial as well as satellite.
My Lord Monckton,
Agreement between the various temperature data sets isn’t as good as you suppose. UAH, GISS, HADCRUT and NOAA all show trends in excess of 0.05 °C per decade (by least-squares fitting). RSS is very much the outlier. For more details I suggest you have a look at http://moyhu.blogspot.co.uk/2014/09/fragility-of-pause.html.
A whiff of cherries still hangs in the air.
We could just as easily show no global warming for 11,000 years by showing no trend since the start of the Holocene.
Which is a good way of putting all this crap into context.
What’s your point? Should we count by day?
“Lies, Damn Lies and Statistics”
Looking at Figure 1, the temperature record for the past 12 years suggests a slight cooling.
Mr McCulloch is correct. Since January 2001, for instance, the RSS dataset shows global temperature falling at a rate equivalent to 0.5 K/century. However, most datasets show little or no trend (and if anything a minuscule uptrend) since that date.
1940s-70s: 30 years of cooling.
1970s-90s: 20 years of warming.
1990s-2014: 20 years of neither cooling nor warming.
Natural cycles, then. When do the fraud trials begin?
If there is anything that we have learned from 135 years of ocean SST history, it is that global climate seems to be driven by the cycles of our oceans. These 60-70-year pole-to-pole Pacific and Atlantic ocean cycles point to a cooling climate for the next 20-30 years. This cooler cycle may not trough until 2035/2045. So this so-called pause will not end anytime soon. The coldest period could be 2030-2050. This is similar to what happened 1880-1910, when the coldest period was 1900-1920.
“Herkimer” is right that the ocean oscillations [particularly the Pacific decadal oscillation] have a strong influence on temperatures. However, as India, China and eventually even poor Africa industrialize and provide universal electricity (chiefly from fossil fuels) as the fastest, surest way to lift their peoples out of poverty and hence to stabilize their populations, CO2 emissions and concentrations are bound to increase.
All other things being equal, therefore, I should expect some warming to occur between now and 2050. However, the Sun is likely to be less active over the next 40 years than over the past 40. That small but persistent reduction in solar activity may perhaps hold temperatures down. If so, the predictions of the useless models will look even sillier than they already do.
Sigh. I just posted this link to a wonderfully illuminating William Briggs comment on the McKitrick thread, but it is almost certainly a good idea to post it here as well: http://wmbriggs.com/blog/?p=5107, How To Cheat, Or Fool Yourself, With Time Series: Climate Example.
To summarize here:
(Emphasis his.)
The point being that it is bad to fit linear trends to global temperature, not good, because it is almost always a bad idea to fit a linear trend to a hand-picked segment of a timeseries and enormously risky and misleading to fit a trend even to the entire data set. Phil Jones is simply mistaken, either through ignorance (not unlikely) or because he wishes to convince himself that particular linear trends in some of the many, many data chords one can select in the many, many climate timeseries are meaningful. I strongly suggest that you read Briggs’ article (if you haven’t already) and take it to heart, because this is one of the most abused notions in the history of misapplied statistical reasoning.
As Briggs (and I independently, in other threads, because I’ve co-founded two companies based on predictive modelling for money so far and have a bit of expertise here and in AI and pattern recognition) have often pointed out — there is only one good reason to build a linear model of a timeseries — or a logistic model — or a Fourier model — or a quadratic model — or an exponential model — or a neural network based model (my personal favorite for high dimensional problems) — or a model based on the textual writings of Nostradamus. That is to predict the future. Not the present. Not even (really) the past, although see below.
This use is admirable, even though it is only marginally science — perhaps a first step towards science, because fitting an unmotivated or poorly motivated linear (or whatever) model to data is in fact a logical and statistical fallacy, little better than using Nostradamus unless and until it is backed up by a functional model and works!
Let me state that last bit once again, even more strongly: and works! A predictive model of any kind is useful precisely to the extent that it shows skill in prediction. Period. For as long as this desirable state of affairs lasts, which is regrettably often not very long when one fits a linear model, especially to a manifestly non-stationary time series of data! That is, it is particularly dumb to fit straight lines to data that ain’t on a straight line the minute one gets outside of one’s fit interval and where one expects the underlying (invisible) causal parameters that influence the data to be doing lots of wild and exciting and non-stationary things.
This is where Jones (and you, by inheritance) makes another capital mistake. Why in the world would anybody fit a linear trend to climate data when a glance at any of the extant series on pretty much any quantity suffices to demonstrate that almost none of the data can be fit by a linear trend for a time longer than ten to thirty years? Take a peek at HADCRUT4: http://www.woodfortrees.org/plot/hadcrut4gl/from:1800/to:2013/plot/hadcrut4gl/trend . A linear trend (drawn) sucks at fitting the data, which is nonlinear, non-stationary and incredibly poorly fit by a linear model. Yes, one can look at this and think “Hey, maybe I can fit this with a linear trend plus a fourier component.” Or, perhaps with an offset exponential trend or a logarithmic trend plus a fourier component. Or I look at it and think “Damn, I could easily build a NN to fit that timeseries”. But, if we did any of these things (and we could make all of them work to at least substantially improve on the linear trend) would that fit have any predictive value whatsoever?
Doubtful. If you look back 2000 years: http://upload.wikimedia.org/wikipedia/commons/c/c1/2000_Year_Temperature_Comparison.png you see that your model utterly fails to hindcast past temperatures. If you use common sense to think about the future, you realize that your model implies that 2000 years from now the temperature will be well above the boiling point of water. It might work for the HADCRUT4 data, it might even extrapolate for some ways into the future (exhibit some skill at prediction) but you know that the model — any of these models — will fail in the future, most of them quite rapidly because they didn’t even work in the past outside of the interval you happened to fit.
Here is where unmotivated or poorly motivated function fitting is a most dangerous approach to predictive modelling. You could quite possibly fit HADCRUT4 pretty well with some of the models I’ve proposed. I’ve tinkered a bit with linear plus fourier, and it was a vast improvement on linear, and I’ll bet I could do even better with either an exponential or a logarithm plus a fourier term (or two or three) to better catch the slight gain over the long term linear trend near the end. But do I expect any of those models to have any predictive skill? No, of course not. Why would they? They would laughably fail if I went back a mere 50 more years, and they would fail so badly on the 2000 year data even if I re-fit them to the 2000 year data that one wouldn’t even try in the first place. It is obviously an accident that the last 150 years can be fit in this way. The term “accident” here doesn’t mean that there may not be reasons for it — it means that those reasons themselves are accidents in the grand scheme of climate dynamics; they are not stationary.
Here’s a radical idea, so dumb and yet so functional. Perhaps the best fit to the data is to fit chorded linear trends like the ones Briggs describes. Or one can spline the data, which basically means fitting e.g. cubics to segments to get an interpolating line. The former is a sort of “punctuated trend” model, where a “punctuated equilibrium” model would insist on fitting flat segments wherever possible joined by comparatively sudden steps. The latter would actually work decently well on HADCRUT4 if the steps were perhaps 10 years to 40 years (varying) wide. Obviously, punctuated trend would work even better. And a spline, like fitting it with a full Legendre polynomial series or Fourier series or any other complete functional basis on a finite line segment series, would obviously do as well as you want it to — you can fit it all the way down to the noise if you like.
The point of this is hopefully obvious. After we fit all of those trend segments together in some way that pleases us, what do those segments actually mean? Not a damn thing. After all, sometimes they trend up, sometimes down, sometimes flat. Sometimes the up trend lasts 20 years. Sometimes only 5 or 10. Sometimes a flat segment lasts 30 years. Sometimes a down trend lasts 20 years. We have no idea why a single one of these lines has the parameters that best fit the data. We haven’t a clue as to why the climate changes to a different trend wherever we with our vastly experienced eye and the seat of our pants decided that a trend had changed and started to fit a different linear trend to the following data up to another equally arbitrary point. And God help you if you think this sort of constructive process is extrapolable — really, any of these constructions. Wall Street is paved with the bones of brokers who thought they’d detected a reliable trend in market timeseries — bones driven into the pavement when the broker in question eventually jumped out of a high window onto it. And trust me, one day “climate science” is going to have its own boneyard.
In the meantime, we continue to live in “statistics hell”. McKitrick’s paper did not demonstrate that over 19 years of data the trend is indistinguishable from zero. It demonstrated that 19 years is the longest stretch over which one cannot reject the null hypothesis of no trend at the 95% confidence level. Those are not, actually, the same thing. Over those 19 years, the data has a very definite trend. It’s just that, by cleverly applying an abstruse and complex model, he was able to find a way that the actual data had a probability of 0.05 (subject to a raft of assumptions about the nature of excursions of the data from a truly neutral trend, all Bayesian priors and none of them capable of surviving a posterior probability correction) given an “actual” trend line (whatever that means — I think nothing at all, what do you think?) with no slope. The incredibly silly threshold of p = 0.05, which after all is only 1 in 20. What he’s really saying is that it is 95% likely that the data has a positive trend, but 95% isn’t likely enough to reject the possibility of no slope — and see what Briggs has to say about “confidence intervals” in linear trends fit to selected data chords. It’s a confidence game.
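For readers who want to see the shape of that test, here is a schematic version. It uses plain OLS standard errors where McKitrick's paper uses robust, autocorrelation-consistent ones, so it captures the logic of "longest span over which zero trend cannot be rejected" rather than his actual numbers.

```python
# Schematic version of the significance test discussed above: find the
# earliest start month from which the trend to the end of the record is not
# statistically distinguishable from zero. Plain OLS errors are used here;
# McKitrick's paper uses autocorrelation-robust errors instead.
import numpy as np

def significant_at_95(t, y):
    """True if the OLS trend's ~95% interval (2 standard errors) excludes zero."""
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    se = np.sqrt(np.sum(resid**2) / (t.size - 2) / np.sum((t - t.mean())**2))
    return abs(slope) > 2 * se

def longest_insignificant_span(t, y, min_months=120):
    """Earliest start index from which warming to the end is not significant."""
    for start in range(t.size - min_months):
        if not significant_at_95(t[start:], y[start:]):
            return start          # everything from here onward: "no significant trend"
    return None
```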
I repeat: please read Briggs’ post very, very carefully and try to actually learn from it. It makes the point that I think you wish to make, but it makes it in a statistically defensible way. Forget this trend, or that trend. Don’t draw trend lines at all — the data speaks for itself, and drawing a trend line through it is part of a complex lie, or rather an attractive fantasy that after doing so you can keep the trend line and ignore the data thereafter, because the trend line you so laboriously extract between carefully chosen endpoints means something. Keep trend lines only to the extent that you (the creator of the trend line) are willing to wager your professional reputation and all hope of future financial support for your work on the gamble that the trend line is extrapolable, that is, will exhibit actual predictive skill into the future. And for the love of God, Montresor, if you take nothing else from Briggs, take this:
Again, emphasis his. But mine as well. The point is Christopher (if I may take the liberty of calling you Christopher, as Mr. Monckton sounds dreadfully formal and while you are no doubt a Lord, I’m an American and you aren’t my Lord;-) all of the confidence intervals you are asserting in these periodic postings of yours are sheer piffle, as are the trend lines themselves. So are the trend lines fit by many a well-intentioned climate scientist and less-well-intentioned politician, but just because they are statistical idiots isn’t any good reason to emulate them. Confidence in what, exactly? The assertion that the climate will continue to evolve following the fit trend line? Don’t make me laugh.
I agree completely that it is worthwhile pointing out that the predictive models in CMIP5 are actively, dynamically failing — as history suggests that any monotonic model will fail to predict the climate for nearly all of the time because the climate is non-stationary, no set of predictive parameters in a “fit” are likely to persist for as long as 30 years, depending on a whole raft of Bayesian assumptions that I, like Briggs, will quietly ignore for the purposes of this discussion and that are, probably, not true. The climate models that are being judged were deemed worthy of consideration in the first place based solely on their success at fitting the reference interval, which is cosmically stupid in predictive modelling — anybody can fit the training data, especially when it is nearly monotonic. Skilled modellers would hold out a trial set and only train on part of the data, and very skilled modellers would insist on their model being able to track key non-monotonic features in any model intended to predict nonlinear phenomena, data that goes up and down. A single glance at figure 9.8a in AR5 is sufficient to give any competent modeller the willies — none of the models in CMIP5 come anywhere near the hindcast data (which will have to do as a trial set past the monotonic training set) and of course, as you point out, there is substantial and increasing deviation of the models singly and collectively from reality for the “future” of the training set up to the present. The models, in other words, have no skill.
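The hold-out check described there is easy to sketch. The series below is synthetic and the cubic polynomial is just a stand-in for any candidate curve; the point is only that in-sample fit says nothing about skill on data the fit never saw.

```python
# Sketch of a hold-out (train/test) skill check, as described above: fit a
# candidate model on the training window only, then score it on later data.
# The series is synthetic; with a real record the same split would apply.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(160.0)                               # 160 "years"
y = 0.2 * np.sin(2 * np.pi * t / 65) + 0.003 * t + 0.05 * rng.standard_normal(t.size)

split = 120                                        # train on the first 120 points
train_t, train_y = t[:split], y[:split]
test_t, test_y = t[split:], y[split:]

coeffs = np.polyfit(train_t, train_y, 3)           # cubic stands in for any curve fit

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

print("in-sample RMSE :", round(rmse(np.polyval(coeffs, train_t), train_y), 3))
print("hold-out RMSE  :", round(rmse(np.polyval(coeffs, test_t), test_y), 3))
```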
That’s really the only thing one has to point out. Whatever the skill of the modellers, the models they have built have no skill. When assessed individually they’d be failing hypothesis tests with p less than 0.05 right and left, especially if those tests were extended to include hindcasts of the data. If individual runs are compared structurally to the actual climate, the failure would be even worse as they would generally fail to reproduce any of the gross statistical features of the actual climate — the right temperature variance, autocorrelation times, drought/flood, violence of storms, predictions of the distribution of atmospheric warming. Failing on any of these would be worrisome — failing on all of them can only be fixed one of two ways.
First, the simplest thing to do would be to just acknowledge that the models are not working well, that they have no skill, and should not be relied on. This would be the simplest thing because it is the truth, and because frankly, if they did work, it would actually be surprising given what we know about the computational complexity and difficulty of solving highly nonlinear PDEs for very long times into the future at a spatiotemporal granularity that is well over 20 orders of magnitude larger than what we might expect to need to do a good job of simulating the climate. Not to mention our incredible ignorance of the actual initial state and sensitivity of the dynamics to initial state and the approximated physics and the possibly unknown physics, but I’ll stop there.
The second way that would work just as well would be to say, well, maybe these models are actually working to some approximation, but when we examine the spread of results they predict, their probable error is so large that they are still useless as a predictive tool. This is (sure) another way of saying they have no skill, but it avoids the humiliation of failing a p-test. It isn’t that they fail a p-test, it is that the standard error in their predictions is so large that we can’t take them seriously in the first place, no physically reasonable time evolution of the climate is excluded or out of bounds of the envelope of their perturbed parameter predictions.
Note that either way, the conclusion is the same. The more complex models, like the simpler linear trends that are a lot easier for humans to digest and a lot easier to turn into statistical lies for political and economic gain, have no real skill, and the human species should view them with the same jaded eye we would view a racetrack tout who promised us a perfect “system” for predicting the outcome of horse races.
rgb
I agree – and took a pounding for saying so recently from the bobble heads who lap this stuff up. It is interesting in the way that tossing a deck of playing cards in the air is interesting if all the kings land face up or that clouds can look like sheep or godzilla or both.
It is a better political tool than a scientific finding and I’m ok with that. I do wonder what it would look like if a person were to plot as a time series the average temperature from each of Monckton of Brenchley’s plots over the time he’s been creating them. That would produce a trend of something and I don’t know what it would tell us except that no two plots produce the same average temperature over the series.
I am of course delighted that “dp” joins Professor Brown in questioning the use of statistical trends by the IPCC and by the climatological community. However, since linear trends are used with great frequency by climatologists, I use them here to provide a highly visible and very clear comparison between what was predicted and what has occurred.
Perhaps “dp” would like to produce a plot of the temperature anomaly for each of my successive monthly trend-lines. Since the Pause is lengthening, I should not expect to find a significant change. However, “dp”, in doing any such analysis, would of course be applying a statistical technique to a statistical technique with which it disagrees – hardly a valuable exercise.
@rgb: Were you absent the day they taught descriptive statistics?
Sigh! It ought to be self-evident by now to such regular readers of this column as Professor Brown that a) I have not stated or implied that a least-squares linear-regression trend has any predictive skill whatsoever, and have frequently made it explicit that it has none; b) I have not asserted or implied that a least-squares regression is the best method of determining a trend – merely that it is what the IPCC and most climatologists use; c) I do not assert that any statistical process is the best method of determining a trend.
Whether Professor Brown likes it or not, the IPCC and the world of climatology usually use least-squares linear-regression trends. So I use them too: for I am trained in logic, and am content to argue on ground of my opponents’ choosing. If Professor Brown does not like this or any other of the statistical processes applied to data series in climatology, then there is really not the slightest point in addressing that complaint to me: he should instead address it to the Secretariat of the IPCC, or to Professor Jones, or to James Hansen, or to all the numerous climatologists who regularly use least-squares trends. While they use such trends, I shall use them too, for the sake of determining the extent to which the trends they had predicted are not in fact occurring.
One virtue of displaying the trend line is that it provides the very clearest visual indication that global warming has not been happening over the past decade or two. For this reason, my graphs often appear on national television programs, where they are effective because ordinary people can understand that a horizontal line that represents the data over a chosen period indicates that there has been no warming (or, for that matter, cooling) over the period in question. That fact runs counter to what they are being told daily in the news media (about which Professor Brown seems to make no complaint at all).
So the question is this. Are the news media and the scallywags driving the climate scare correct in saying that global warming is continuing at the predicted rate? If they are correct, then Professor Brown’s argument against my graphs should not be that he does not like me using this or any statistical process to discern the rate at which the temperature is or is not changing: it should be that my conclusion that there has been no warming for a decade or two is simply incorrect.
If, on the other hand, the media are not correct, then why, o why, is Professor Brown whining at me for providing a quite widely circulated visual demonstration that they are not correct, instead of whining at them for being incorrect? He is aiming, somewhat futilely, at the wrong target.
The Professor also takes me to task for daring to show in graphical form the IPCC’s prediction intervals, rather than simply its central estimates. He says those intervals are meaningless. I know that: but they are, for better or worse, the IPCC’s intervals, and it is legitimate for me to draw those intervals and also to draw the trends on the real-world, observed data, thereby showing that the trend lines do not fall on – or even particularly close to – the predicted intervals. Once again, there is really no point in his whining at me when he should be addressing his complaint about the meaninglessness of the IPCC’s intervals to the IPCC secretariat.
One of the difficulties in being a layman and having no piece of paper to say that I have received the appropriate Socialist training in these matters is that, with unbecoming frequency, I am somewhat arrogantly lectured because I use the methodologies that the IPCC and the world of climatology uses. I use these methodologies not because I approve of them but because they are the language that the IPCC and the climatologists are familiar with.
Why, for instance, is it all right for the IPCC to select four very carefully chosen least-squares linear-regression trend lines, apply them simultaneously to a single dataset, apply a fraudulent statistical dodge to pretend, quite falsely, that the rate of global warming is accelerating and that we are to blame, and yet to attract not a murmur of dissent from Professor Brown or from the numerous me-too trolls on this thread who whine at my perfectly reasonable use of linear-regression trends? Why does Professor Brown not do as I have done, and write to the IPCC to make it clear that their wilful misconduct in resorting to flagrant and mendacious abuses of statistical process such as this one is not acceptable? That would be a far more constructive use of his time. If only I were not a lone complainer, the IPCC might start having to do proper science.
In fact, Professor Brown is coming quite close to saying that there is no value in applying any form of statistical trend to any dataset. That is a perfectly respectable point of view. However, we can either sit idly by and watch the media and the IPCC falsely claiming that global warming is occurring at the predicted and accelerating rate, or find some visually clear and academically precedented method of indicating that global warming is not occurring at the predicted rate. I prefer not to sit on my hands and whine, but to do something about this and several other of the falsehoods being perpetrated for political expediency and financial profit, at great cost not only in treasure but in lives. I invite Professor Brown to raise his game, and address to the IPCC the complaints he has pointlessly addressed to me.
“One virtue of displaying the trend line is that it provides the very clearest visual indication that global warming has not been happening over the past decade or two.”
..
Likewise, the trend line also shows that global cooling has not been happening over the past decade or two.
As an impartial observer with the greatest respect for both you and Dr. Brown, it seems that you may have taken offense where none was intended.
I always enjoy reading your posts and ripostes.
Only one quibble with this otherwise great article:
“the possibility that the Pause is occurring because the computer models are simply wrong about the sensitivity of . . . ”
Sounds like the models are actually the cause of the pause . . . even this proudly skeptical non-scientist wouldn’t go that far. . .
Cause of the Pause, though – has a nice ring to it : )
Note today’s “news”….. Hillary Clinton is all in that CLIMATE CHANGE is the NUMBER ONE PRIORITY for the USA…… SEE: http://www.msnbc.com/msnbc/hillary-clinton-calls-out-climate-deniers
Christopher Monckton,
Consider the intellectually dishonest terminology used to describe the behavior in recent decades of GASTA time series data and the RSS / UAH satellite time series data. Please see my comment just posted at another WUWT thread { http://wattsupwiththat.com/2014/09/05/friday-funny-what-to-do-when-youve-been-hiatused/#comment-1728458 } .
Here is that comment.
Also, on a different thought, I was recently very interested in and convinced by a recent comment by rgbatduke here on this thread and on another recent WUWT thread about the fundamental intellectual error committed by those who apply linear trends to either GASTA time series data or the RSS / UAH satellite time series data for the purpose of explaining a portion of a time series.
John
I agree with Mr Whitman that those who apply linear trends to surface or lower-troposphere global temperature trends for the purpose of explaining any part of a time series commit a fundamental intellectual error: for such trend-lines do not “explain” anything (still less do they predict anything): they merely provide one method of visualizing what has actually occurred.
Finding the reasons why there has been no global warming for a decade or two is one of the current hot topics in climate research, which is why more than 50 mutually inconsistent explanations have been conjured into being. Very nearly all of these explanations suffer from a fundamental intellectual error: they are untestable guesswork and are not, stricto sensu, science at all.
However, I disagree with his implication that it is better to describe the recent Great Pause as not “statistically significant” rather than as non-existent. For statistical significance is a notoriously slippery concept, whereas my finding the longest period each month during which there has been no global warming at all, using the statistical method that is more common than any other in the analysis of temperature change, has the merit of specificity, clarity, and self-consistency. It has been interesting to note how the Great Pause has been inexorably lengthening, and how this behavior is in ever starker contrast to the behavior that had been predicted with “substantial” – but misconceived “confidence”.
To those who complain that my temperature graph is somehow unfair, I reply that the rules of the game are clearly enough stated. I simply determine the earliest month in the recent record since when the temperature data show a zero least-squares linear-regression trend. The IPCC had originally predicted global warming of about 0.3 K/decade over the near term. The fact that the trend has been zero for approaching two decades indicates clearly that the IPCC was wrong. It is as simple as that: and no amount of wriggling will alter the fact that the predictions of the models are manifestly and flagrantly wrong, as my graphs reveal with a clarity that is, no doubt, painful to some.
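Stated as code, that rule is a short loop. The sketch below assumes the monthly anomalies are already held in an array ending at the latest month, and it tests for a zero-or-negative trend, which, as noted elsewhere in this thread, is a different question from statistical significance.

```python
# The rule stated in the reply above, written out: find the earliest month
# from which the least-squares trend to the present is zero or negative.
# Assumes `anoms` already holds the monthly anomalies up to the latest month.
import numpy as np

def pause_start(anoms, min_months=120):
    """Index of the earliest start month giving a non-positive OLS trend."""
    anoms = np.asarray(anoms, dtype=float)
    months = np.arange(anoms.size, dtype=float)
    for start in range(anoms.size - min_months):
        slope = np.polyfit(months[start:], anoms[start:], 1)[0]
        if slope <= 0.0:
            return start          # trend from here to the end is flat or falling
    return None                   # no qualifying span of at least min_months

# The Pause length in months would then be the series length minus pause_start().
```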
Expand your horizons
..
Use more data sets besides RSS
Try UAH for one
Mr Tracton’s response to my reply to Professor Brown is that a zero trend-line indicates not only lack of warming but also lack of cooling. But I had already made that point explicit in my reply to the Professor. Now he says I should use the UAH as well as RSS datasets. It may be that he has not read the head posting, in which two of the three graphs feature the UAH as well as the RSS dataset.
Also, I provide detailed updates based on all five principal datasets every few months. But the monthly report on the RSS dataset, which usually provides its monthly value before any of the others, has become a regular feature here, and – to answer a point by an earlier commenter – successive updates show that the Great Pause has been gradually lengthening, though I expect that to change as the current El Niño begins to bite.
Besides, UAH is currently undergoing a revision that will remove its hot running over recent decades and bring it closer to the mean of the three terrestrial datasets, which show no global warming for well over 13 years.
Whichever way one slices and dices the datasets, the result is the same: the world has not been warming since 1990 at anything like the rate that was then predicted, and has not been warming recently at all.
Since the greenhouse theory indicates that, even with strongly negative temperature feedbacks, there should have been some global warming over recent decades, and since the climate, being a chaotic object, is deterministic, there must be reasons for the Great Pause that are manifestly under-represented in current climate models, raising legitimate questions about whether some of the comparatively rapid warming from 1976-2001 may have been attributable not to Man but to Nature, and about whether climate sensitivity – in the short to medium term, at any rate – is anything like as high as the IPCC profits by persuading the feeble-minded to believe.
Mr Ross asks whether I have taken offense at Professor Brown’s comment. Not in the least. I much admire and enjoy his vigorous comments and occasional rants here (admire the former and enjoy the latter). I should very much like the Professor to try to influence the IPCC to stop using fraudulent statistical techniques as a way to sex up the global warming dossier. And if, at the same time, he wants to point out that the IPCC is using silly global-warming intervals and is leaning too heavily on statistical methods rather than simply eyeballing the data and coming to the common-sense conclusion that the predicted rate of warming is not occurring, so much the better.
Lord Monckton,
Re:
“The fastest measured centennial warming rate was in Central England from 1663-1762, at 0.9 Cº/century – before the industrial revolution. It was not our fault.”
_________________________________________
You may wish to check this. According to CET annual data, the period 1663-1762 warmed at 0.86C/century. The periods 1908-2007 and 1909-2008 both warmed at 0.87C/century.
Although all three periods warmed at 0.9C/century to one decimal place, the fastest measured centennial warming rate using the annual averages was, by a tiny fraction, that of the period 1909-2008.
Further to the above, should CET for 2014 average 9.8C or greater, then the period 1915-2014 will set a new fastest measured centennial warming rate for the series.
I have checked the points raised by DavidR and can report as follows:
1. One should use monthly data, not annual data, wherever possible. It is the monthly records that we analyze in these columns. The monthly data for the Central England Temperature Record show warming of 0.90 K over the century 1663-1762. The monthly data for 1909-2008, taken as the mean of the HadCRUT4, GISS, and NCDC terrestrial datasets, show warming of 0.79 K over the century. However, the monthly data on the Central England Temperature Record for 1909-2008 show warming of 0.91 K.
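A sketch of the centennial-rate calculation being compared here, assuming the monthly CET means have already been read into year, month and temperature arrays (the download and parsing of the Met Office file are not shown):

```python
# Sketch of a centennial warming rate from monthly Central England Temperature
# data: least-squares trend over a 100-year window, expressed as K per century.
# Assumes `years`, `months`, `temps` arrays have already been loaded.
import numpy as np

def century_warming(years, months, temps, start_year):
    """OLS warming (K per 100 yr) over start_year..start_year+99, monthly data."""
    t = years + (months - 1) / 12.0
    mask = (years >= start_year) & (years <= start_year + 99)
    slope = np.polyfit(t[mask], temps[mask], 1)[0]   # K per year
    return slope * 100.0

# e.g. compare century_warming(years, months, temps, 1663)
#      with century_warming(years, months, temps, 1909)
```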
Lord Monckton,
Thank you for the above. Using the monthly data I find that the periods 1908-2007 and 1909-2008 both show warming of 0.91 K over the century. Also, the period 1907-2006, although at 0.90 K for the century, warmed fractionally faster than the period 1663-1762, which I have in 4th place.
Should Central England Temperature (CET) values for the remaining four months of 2014 (Sept-Dec) remain anywhere close to their respective 1961-90 averages then the warming over the century 1915 to 2014 should easily set a new fastest centennial warming rate record.
The central point remains, however, that the rate of warming during a period with high CO2 emissions is just about the same as the rate during a period with low CO2 concentrations. One should not lose sight of the main point.
“On the RSS satellite data, there has been no global warming statistically distinguishable from zero for more than 26 years.”
How bizarre! Every global temperature series, and even CET, shows that most of the warming was from 1988 onwards.
“The ratio of el Niños to la Niñas tends to fall during the 30-year negative or cooling phases of the Pacific Decadal Oscillation, the latest of which began in late 2001.”
El Niño frequency and intensity increase sharply through the coldest parts of solar minima; with that happening soon, there is no chance of a 30-year negative PDO mode.
The statement that there has been no significant trend in RSS data over the past 26 years is from the recently published paper by McKitrick. There is no reason I can see for believing it is incorrect. If you find it ‘bizarre’, your problem is with standard statistical hypothesis testing. As rgbatduke points out, it just means the probability that the past 26 years of temperatures are consistent with no trend exceeds 0.05 (just).
Ulric Lyons,
What, exactly, is “bizarre”? No one disagrees with the fact that there has been global warming. It is the planet’s natural recovery from the LIA. But it stopped around 1997. The UAH satellite data shows the same thing: no global warming after 1997.
Global warming has stopped, Ulric. For many years now. It may resume at some future time. Or not. Or, the planet may begin to cool. At this point, no one knows. All we know is that global warming stopped a long time ago.
Bizarre to claim statistically zero warming for the last 26 years, when most of the warming was from 1988 onwards. The recovery from the LIA is illusory, CET from 1730 to 1930 shows no warming trend: http://snag.gy/2q2kT.jpg
Mr. Stealey,
You attempted further up this thread to justify your claim that “global warming stopped at least 18 years ago” by using a graph of oceanic heat content covering 8 years. Yes, only 8 years. Why?? You can find graphs of OHC covering more than 18 years on WUWT, let alone elsewhere on the Web, as I pointed out to you in response to your claim. These show no sign of such a halt, yet you keep on with your dogmatic assertion.
At least Eschenbach has the intelligence to recognise the problem that the rise in OHC is causing to AGW-gainsayers, though his attempt to show that this rise is insignificant was, frankly, risible.
Ulric Lyons,
“No one disagrees with the fact that there has been global warming. It is the planet’s natural recovery from the LIA.”
________________
What physical process has driven “the planet’s natural recovery from the LIA”?
That is not my comment DavidR, it was from dbstealey.
David R says:
What physical process has driven “the planet’s natural recovery from the LIA?
Good question. What physical process caused the LIA in the first place?
============================
Bill H says:
…only 8 years. Why??
Because that is an ARGO chart, and that’s when ARGO started.
Here are some charts of ocean heat content [OHC] and SST:
chart 1 [10 year chart]
chart 2 Notice the “adjustments”. They constantly do this, when the data does not show what they want: global warming. That is dishonest, unless they show a step-by-step methodology, from the raw data to the ‘adjusted’ [non]data. Since they don’t show how or why they did the adjustments, the final result should be disregarded.
chart 3 Another “adjustment”, made without any methodological explanation. The funding for these agencies is dependent upon keeping the global warming scare alive. They have a vested self-interest in making adjustments that show more warming than there really is. A true scientific skeptic questions all but raw data.
You make assertions, Bill H, but that is all they are. I post data. If you disagree, do the same.
++++++++++++++++++++++++
Ulric Lyons, can you provide a provenance for this chart you posted? Thanks. It looks like an anomaly chart, which would not show a trend.
Also, you still say:
Bizarre to claim statistically zero warming for the last 26 years, when most of the warming was from 1988 onwards.
As I stated above, I do not disagree that there has been warming since 1988. But that warming stopped around 1997. If you don’t like RSS data, then global warming still stopped many years ago. Every major organization including the IPCC admits that now. Please argue with them if you disagree.
It is annual CET from 1730 to 1930: http://climexp.knmi.nl/data/tcet.dat
Thanks, Ulric. As I thought, anomaly data.
Not anomaly data, and either would show the same trend.
Gentlemen, it is interesting to peruse your discussion. I am an engineer who has always doubted anthropogenic global warming. I am also a politician (State Representative). My suspicion is that water vapor, by virtue of its concentration, plays a much greater role than CO2 in the greenhouse effect, and would therefore have a generally stabilizing effect. The sun would have a much greater effect on deviations in water vapor. The way I explain it to my colleagues is: sun very very big; earth big; mankind very very small.
I also do a fair amount of computational fluid dynamics modeling of furnaces. Same types of models, different application. Since we model the furnace, a chaotic system, as though it were at steady state, and we calibrate the model results to averages taken over extended time periods (hours, in my case), I always maintain that the one sure thing we know about our modeling results is that they are wrong, because the conditions modeled never actually exist in fact.
Daniel,
You are 100% correct. I also am an engineer and, while I am more familiar with solid-mechanics computer models (FEA), I have had a lot of exposure to the results of CFD studies for systems that are orders of magnitude smaller and less chaotic than climate models. CFD is useful for comparing configurations; however, its divergence from real-world data gives rise to skepticism as to its use without real-world data.
No experienced engineer would trust CFD results alone for an important design of complex systems.