A new paper published today by the Global Warming Policy Foundation explains how statistical forecasting methods can provide an important contrast to climate model-based predictions of future global warming. The repeated failures of economic models to generate accurate predictions have taught many economists a healthy scepticism about the ability of their own models, regardless of how complex, to provide reliable forecasts. Statistical forecasting has proven in many cases to be a superior alternative. Like the economy, the climate is a deeply complex system that defies simple representation. Climate modelling thus faces similar problems. —Global Warming Policy Foundation, 23 February 2016

The global average temperature is likely to remain unchanged by the end of the century, contrary to predictions by climate scientists that it could rise by more than 4C, according to a leading statistician. British winters will be slightly warmer but there will be no change in summer, Terence Mills, Professor of Applied Statistics at Loughborough University, said in a paper published by the Global Warming Policy Foundation. He found that the average temperature had fluctuated over the past 160 years, with long periods of cooling after decades of warming. Dr Mills said scientists who argued that global warming was an acute risk to the planet tended to focus on the period from 1975-98, when the temperature rose by about 0.5C. He said that his analysis, unlike computer models used by the IPCC to forecast climate change, did not include assumptions about the rate of warming caused by rising emissions. “It’s extremely difficult to isolate a relationship between temperatures and carbon dioxide emissions,” he said. –Ben Webster, The Times, 23 February 2016

Bishop Hill reports:

GWPF have released a very interesting report about stochastic modelling by Terence Mills, professor of applied statistics and econometrics at Loughborough University. This is a bit of a new venture for Benny and the team because it’s written with a technical audience in mind and there is lots of maths to wade through. But even from the introduction, you can see that Mills is making a very interesting point:

The analysis and interpretation of temperature data is clearly of central importance to debates about anthropogenic global warming (AGW). Climatologists currently rely on large-scale general circulation models to project temperature trends over the coming years and decades. Economists used to rely on large-scale macroeconomic models for forecasting, but in the 1970s an increasing divergence between models and reality led practitioners to move away from such macro modelling in favour of relatively simple statistical time-series forecasting tools, which were proving to be more accurate. In a possible parallel, recent years have seen growing interest in the application of statistical and econometric methods to climatology. This report provides an explanation of the fundamental building blocks of so-called ‘ARIMA’ models, which are widely used for forecasting economic and financial time series. It then shows how they, and various extensions, can be applied to climatological data. An emphasis throughout is that many different forms of a model might be fitted to the same data set, with each one implying different forecasts or uncertainty levels, so readers should understand the intuition behind the modelling methods. Model selection by the researcher needs to be based on objective grounds.
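The report’s central caution – that the choice of statistical model largely determines the forecast – is easy to illustrate. Below is a minimal sketch with synthetic data (all numbers invented, not Mills’ actual HADCRUT4 fits): the same series forecast with a fitted linear trend versus a driftless random walk gives very different answers 100 steps ahead.

```python
# Two simple models fitted to the same synthetic series imply
# very different long-horizon forecasts.
import random

random.seed(1)
n = 200
# Synthetic "temperature anomaly": weak trend plus noise (invented).
y = [0.005 * t + random.gauss(0, 0.2) for t in range(n)]

# Model A: linear trend by ordinary least squares -> forecast keeps rising.
t_mean = (n - 1) / 2
y_mean = sum(y) / n
slope = sum((t - t_mean) * (yt - y_mean) for t, yt in enumerate(y)) / sum(
    (t - t_mean) ** 2 for t in range(n)
)
forecast_trend = y_mean + slope * ((n + 100) - t_mean)  # 100 steps ahead

# Model B: a random walk with the drift term dropped (the kind of choice
# made in the Mills report) -> forecast is flat at the last observation.
forecast_rw = y[-1]

print(f"OLS slope            : {slope:.4f} per step")
print(f"Trend forecast, +100 : {forecast_trend:.2f}")
print(f"Driftless RW,   +100 : {forecast_rw:.2f}")
```

The data are identical in both cases; only the modelling assumption differs, which is exactly the point about needing objective grounds for model selection.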

There is an article (£) in the Times about the paper.

I think it’s fair to say that the climatological community is not going to take kindly to these ideas. Even the normally mild-mannered Richard Betts seems to have got a bit hot under the collar.

My response to “clickbait”:

More numerical origami!

g

I used to tell people “I have a Black belt in origami and if you upset me I will fold you into funny shapes”.

Isn’t it strange how sceptical models are considered rubbish but theirs are wonderful and true?

James Bull

If this “prediction” is equivalent to an “expert” system operating on real-world temperatures, rather than deriving from a model operating on how one thinks CO2 affects temperature, I do wonder how it will work out. Pure pragmatism should give better results than inadequate theory.

Take a look at the prepublication draft of the update to the US Global Change Research Strategic Plan document released today. http://www.nap.edu/catalog/23396/review-of-the-us-global-change-research-programs-update-to-the-strategic-plan-document?utm_source=NAP+Newsletter&utm_campaign=fff4a33bd4-NAP_mail_new_2016_02_23&utm_medium=email&utm_term=0_96101de015-fff4a33bd4-102113877&goal=0_96101de015-fff4a33bd4-102113877&mc_cid=fff4a33bd4&mc_eid=5e5b43f3ea

It says it is incorporating ‘relevant social science research’ into modeling and also how the results of the modeling are communicated.

Hocus pocus at taxpayer expense.

And if the taxpayers would only demand to stop funding it, it would go away. Vote accordingly.

Why does anyone bother. Both sides of the debate are absolutely polarised, doesn’t matter who publishes what. This has nothing to do with science.

Hah! No wrong answers! Just like the school system.

Everyone gets a trophy.

When the sides are so polarised that the nature of the evidence doesn’t matter, it becomes a belief system. Don’t you see that?

Around 2004, the International Institute of Forecasting covered the modelling. Its finding was that the models broke nearly every forecasting principle for accuracy and were therefore worthless.

An interesting part is that Professor Mills could not correlate temperature and CO2 rising.

Here are other spurious correlations: http://www.tylervigen.com/spurious-correlations.

Starts with the number of people who drowned by falling into a pool versus the number of films Nicolas Cage appeared in.

“An interesting part is that Professor Mills could not correlate temperature and CO2 rising.”

There is no consideration of CO2 at all in the paper. No CO2 data is used. It is just statistical model fitting for temperature series (HADCRUT4, RSS and CET).

Yes, but he’s quoted above saying “It’s extremely difficult to isolate a relationship between temperatures and carbon dioxide emissions,”.

I know that you know that between 1958 and about 1975 or so, CO2 was increasing while temperature was not.

I know that you know that from 1975 or so, CO2 was increasing and so was temperature; therefore a cause-effect relationship can be investigated as a possibility.

I know that you know that between [2000] or so and now, CO2 continued to increase, but not so for temperature.

Enough said.

‘Yes, but he’s quoted above saying “It’s extremely difficult to correlate…”‘

What he is saying, in polite language, is that there ain’t any correlation.

“What he is saying in polite language, is that there aint any correlation”

Well, in scientific language you are supposed to demonstrate it with data. That isn’t done in this paper.

In any case, you’ve again missed a distinction. He said “to isolate a relationship”. Correlation isn’t right. CO2 concentration results from accumulated emissions, and that then is expected to produce a radiative heat flux. Temperature would rise as that heat accumulates. That is all nothing like expecting a linear correlation between emissions and temperature.

Nicky, all the models say warming is supposed to be ACCELERATING, and we’re clearly not seeing that in global measurement data. This ACCELERATED warming occurs only in the imaginary/pseudo-data generated by climate models.

The fitted models show no significant trend term.

The lack of a drift term has been shown in the statistical analysis, and since as you point out, rising CO2 concentration is to first approximation a linear increase (ignoring seasonality), the statistical analysis suggests that if there is an effect it is masked by offsetting effects.

“insignificantly different from zero. Omitting this from the model…”

The observed trend was 0.6°C/cen, making a rise of about 1°C since 1850. That’s about what most people get. Whether the difference from zero is significant depends on the model. In most, it is highly significant. If you pile on the parameters (he has 4) then you provide other ways in which the obs could be fitted, with extra variation of the trend coefficient possible. In fact the analysis here is amateurish. The parameters have a joint distribution, with a covariance matrix. There isn’t a single standard error for any one of them. My guess is that he’s just taken the diagonal.

But even if the trend is not significantly different from zero, that doesn’t mean you can say that it is zero. It’s equally likely to be 1.2, and more likely to be 0.6, as observed. The effect of omitting it is that it is replaced with 0, with no uncertainty. The forecast should be repeated with an assumed fixed trend of 1.2. Just as valid, and it gives an idea of how much uncertainty is disguised by this totally unwarranted assumption.

Wrong. That’s what all the numbers in brackets below the coefficient estimates are (0.0008 in the case of the trend term). So the trend estimate is 0.0005 ± 0.0016 at the 1.96σ level – that is, the confidence range is between -0.0011 and +0.0021, and the t statistic is 5/8, or just 0.625. Dropping the trend term reduces the variance of the model error (the difference between the model-estimated values and the data) marginally, providing more justification for excluding it from the final model. Your suggestion of doubling the coefficient and including it as a forced term would increase the standard error of the model.
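For readers following the arithmetic, the figures quoted in this comment – a trend coefficient of 0.0005 with a bracketed standard error of 0.0008 – work out as follows:

```python
# The arithmetic from the comment above: coefficient 0.0005, standard
# error 0.0008 (the bracketed number), 95% (1.96 sigma) confidence interval.
coef, se = 0.0005, 0.0008
t_stat = coef / se                         # 0.625, i.e. 5/8
halfwidth = 1.96 * se                      # ~0.0016
ci = (coef - halfwidth, coef + halfwidth)  # ~(-0.0011, +0.0021)
print(f"t = {t_stat}, 95% CI = ({ci[0]:.4f}, {ci[1]:.4f})")
```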

Elsewhere in this thread you point out that including annual anthropogenic CO2 emissions as an explicit variable would not form a useful addition to a model. The principle here is no different, and is demonstrated via statistical analysis. You should note that in the analysis that is broken down into different time periods, significant trend terms are identified for the sub-periods. However, once again the most recent period (since 2002) shows no significant trend term – a pause or hiatus, in fact. An ARIMA model over the whole period since 1850 shows the limitations of estimating trends using simple linear regression, with its assumptions about the distribution of errors. Unsurprisingly, the ARIMA model demonstrates that there is a degree of correlation in the lagged errors (when we’re in an El Niño, for example, we expect nearby months to be above trend). That is not consistent with the assumptions required for linear regression, namely that errors should be Normal and not autocorrelated.
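The autocorrelation point can be demonstrated with a toy example: fit an OLS trend to a series whose noise has AR(1) persistence (a crude stand-in for ENSO-like behaviour; all parameters invented), and the residuals are strongly lag-1 correlated, violating the independence assumption.

```python
# Residuals from an OLS trend fit to a series with persistent (AR(1))
# noise remain strongly autocorrelated at lag 1.
import random

random.seed(2)
n = 500
phi = 0.7                 # persistence of the noise (invented)
eps, e = [], 0.0
for _ in range(n):
    e = phi * e + random.gauss(0, 1)
    eps.append(e)
y = [0.01 * t + et for t, et in enumerate(eps)]

# OLS detrend
tm = (n - 1) / 2
ym = sum(y) / n
b = sum((t - tm) * (yt - ym) for t, yt in enumerate(y)) / sum(
    (t - tm) ** 2 for t in range(n)
)
a = ym - b * tm
r = [yt - (a + b * t) for t, yt in enumerate(y)]

# Lag-1 autocorrelation of the residuals: close to phi, far from zero.
acf = sum(r[i] * r[i + 1] for i in range(n - 1)) / sum(ri * ri for ri in r)
print(f"lag-1 autocorrelation of residuals: {acf:.2f}")
```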

“Wrong. That’s what all the numbers in brackets are below the coefficient estimates (0.0008 in the case of the trend term).”

It’s not wrong. All you have said is where he has written down the supposed number. You haven’t shown how he derives it. I repeat – you have a four-parameter model; when you fit, the parameters have a joint distribution, with a covariance matrix. You don’t have individual variances for the coefficients.

But I’ve realised this is not the main error. It’s back where he goes from a linear trend model

y_t = b*x_t+a+ε_t

where the ε_t are the (possibly autoregressive) random terms, to a differenced form

Δy_t = b + Δε_t

and then proceeds as if Δε_t had a similar distribution. But they don’t. You can see this by supposing the ε were iid, so that the original model is an OLS regression. Then if you assume the differenced form is OLS, you’d get the estimate of b as mean(Δy_t), which is just the difference between the end values divided by the number of steps. And that has a much greater SE than the OLS regression coefficient – that’s why people prefer OLS to just drawing a line between the ends. And in fact the Mills σ corresponds to this; for HADCRUT it is about 1°C/century, which is indeed the σ of the endpoint-difference estimate.

Instead, you have to take account of the dependence of the Δε_t differences. If you do, you recover the original OLS σ for trend, and the 0.6°C/Cen is highly significant, as it should be. R gives me a t-value of 51.
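This claim – that the endpoint-difference (mean-of-differences) estimator is far noisier than OLS – can be checked by simulation. A Monte Carlo sketch under iid errors (all parameters invented):

```python
# Compare the sampling spread of two slope estimators on the same
# synthetic data: OLS versus the endpoint difference (= mean of Δy).
import random

random.seed(3)
n, b, sigma, trials = 150, 0.01, 1.0, 2000
tm = (n - 1) / 2
sxx = sum((t - tm) ** 2 for t in range(n))
ols_ests, diff_ests = [], []
for _ in range(trials):
    y = [b * t + random.gauss(0, sigma) for t in range(n)]
    ym = sum(y) / n
    ols_ests.append(sum((t - tm) * (yt - ym) for t, yt in enumerate(y)) / sxx)
    diff_ests.append((y[-1] - y[0]) / (n - 1))  # = mean of the differences

def sd(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

se_ols, se_diff = sd(ols_ests), sd(diff_ests)
print(f"SE of OLS slope      : {se_ols:.4f}")
print(f"SE of endpoint slope : {se_diff:.4f}")  # roughly sqrt(n/6) times larger
```

With iid errors the ratio of the two standard errors is about sqrt(n/6), which for n = 150 is a factor of five.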

I’ll write a blog post about this tomorrow (it’s late here).

2000 instead of 200! Sorry

Statistical forecasting is obviously deficient compared to computer modelling. It is unable to incorporate politically correct dogma as to what is supposed to happen, but limits itself to a shortsighted preoccupation with what actually does happen. It fails to capture all the emotion and guilt of human-caused climate devastation and will be of little help to climate modellers when grant time comes around. Left to its own devices this perversion of climate science runs the risk of becoming widely accepted in the science community due to its peculiar ability to actually predict future trends, and, Gaia forbid, might even end up part of the high school science curriculum. Where will society be if our children don’t learn to properly fear their own exhaled breath?

Nice.

How do you do statistical forecasting on something that can only ever happen once, never to be repeated?

Statistics is an analysis of finite data sets of exactly KNOWN real numbers. It generates arbitrarily defined resultant real numbers that are a property of the origami algorithm, and are unique to that one data set.

It contains ZERO information, about ANY other numbers, past present or future, that are not already members of that data set.

Can’t do statistics on variables or unknown numbers.

G

ITIA’s Stochastic Forecasting

D. Koutsoyiannis and the ITIA have been developing and advocating stochastic forecasting models:

DEUCALION – Assessment of flood flows in Greece under conditions of hydroclimatic variability: Development of physically-established conceptual-probabilistic framework and computational tools

No Worries

ITIA arrives at similar conclusions to Mills above. It summarizes in Welcome to HK climate.

Thanks for the link.

If it were always remembered that a computer, whatever it is used for, is just a tool and not a prophet, then many of the real-life problems that have arisen because that was forgotten would never have arisen in the first place.

His discussion begins:

“The central aim of this report is to emphasise that, while statistical forecasting appears highly applicable to climate data, the choice of which stochastic model to fit to an observed time series largely determines the properties of forecasts of future observations and of measures of the associated forecast uncertainty, particularly as the forecast horizon increases.”

And that is clearly true. It is the problem of statistical forecasting. In this case, for HADCRUT, for example, he fits an ARIMA(0,1,3) model to the differences. That has three parameters for the moving-average part, and one representing the constant, which is the differenced trend. Now more parameters give a better fit, but generally mean that you are less certain of each parameter, since different combinations of parameters can give a good fit. Here he gets, working with HADCRUT4, from 1850 I presume, a trend of 0.0005°C/month. That is about 0.6°C/century, a reasonably common value for that time period. But he says that the uncertainty is higher (with 4 parameters), so he then leaves it out of the model. IOW, he assumes the trend is zero before forecasting. But the trend wasn’t zero. And following his observation above, if you build in zero trend as part of your model, then of course you will get limited growth in the future.

Please refer to my post in response to your earlier comment. The choice of the ARIMA(0,1,3) model is not arbitrary. The trend coefficient is shown not to be statistically significant. If you look at Table 1 and Figure 1, where the model is estimated for different periods, it produces a zigzag of lines of different slope, the last of which is statistically not different from zero and represents the current trend. It might be more salient to ask why the history is divided into 1850-1919, 1920-1944, 1945-1975, 1976-2001 and 2002 onwards (including the forecast to 2020), and then what the process(es) driving the different trends and turning points might be.
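The zigzag of sub-period trends described above can be illustrated with synthetic data using the same break years (the regime slopes and noise level below are invented, not those in Mills’ Table 1):

```python
# Fit a separate OLS slope to each sub-period of a synthetic series
# built from piecewise trends, reproducing the "zigzag" picture.
import random

random.seed(4)
breaks = [1850, 1920, 1945, 1976, 2002, 2016]
slopes_true = [0.2, 1.4, -0.1, 1.7, 0.0]   # degC/century per regime, invented
years, y, level = [], [], 0.0
for (start, stop), s in zip(zip(breaks, breaks[1:]), slopes_true):
    for yr in range(start, stop):
        level += s / 100.0                 # annual increment of the trend
        years.append(yr)
        y.append(level + random.gauss(0, 0.1))

def ols_slope(ts, ys):
    tm = sum(ts) / len(ts)
    ym = sum(ys) / len(ys)
    return sum((t - tm) * (v - ym) for t, v in zip(ts, ys)) / sum(
        (t - tm) ** 2 for t in ts
    )

slopes_fit = {}
for start, stop in zip(breaks, breaks[1:]):
    ts = [t for t in years if start <= t < stop]
    ys = [v for t, v in zip(years, y) if start <= t < stop]
    slopes_fit[start] = 100 * ols_slope(ts, ys)
    print(f"{start}-{stop - 1}: {slopes_fit[start]:+.2f} degC/century")
```

Note how the short final segment recovers a slope near zero even though the full series clearly rises: trend estimates over short sub-periods are very noisy, which is part of the pause/hiatus argument.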

GIGO: forecast or model, it is only as good as the data.

https://youtu.be/QowL2BiGK7o?t=24m44s

The climate alarm community will have no great problem with explaining the failure of their complex models to accurately predict future trends.

They have 1. one big assumption/hypothesis which dominates their thinking and excludes all other possible thoughts, 2. modelling of complex systems based on that big assumption, 3. measurements, 4. adjusted measurements, with adjustments based on further modelling of complex systems.

Whatever the results from 4. look like, they can mix and match the output of 2. in order to duplicate the supposed “empirical evidence”, which they imagine to be 4. but which is probably more like 3.

AND – even if there is no evidence of warming, the output of models of a climate system controlled by anthropogenic warming CAN be mixed and matched to reproduce the non-event that is, in reality, witnessed.

If you don’t believe that such nonsense could ever occur, then read the abstract at the link below.

So, nothing happening can be explained by the hand-picked selection of net gains and losses from specific model outputs and the manipulation of those models.

So despite the increasing anthropogenic forcing, nothing happens, which is perfectly well explained by the theory of global warming!!!

I’m not kidding – read the first sentence of this abstract:

“The reconstructions account for the observation that the rate of GMSLR was not much larger during the last 50 years than during the twentieth century as a whole, despite the increasing anthropogenic forcing. Semiempirical methods for projecting GMSLR depend on the existence of a relationship between global climate change and the rate of GMSLR, but the implication of the authors’ closure of the budget is that such a relationship is weak or absent during the twentieth century.”

Full paper here: http://marzeion.com/sites/default/files/gregory_etal_13.pdf

Professor Mills’ forecasting skills aren’t looking too good…

https://andthentheresphysics.files.wordpress.com/2016/02/cb45ttrxiaaw1z__large.png

Nice try, Jim, but that thin blue line is not the actual measured 2015 temperature data.

PP,

Indeed it is. Prof Mills forecasts HADCRUT 4, and that is a plot of HADCRUT 4 for 2015.

Nick….the blue line between 2015 and 2016 is NOT presented by Mills.

What are you talking about?

The thick blue lines are those presented by Mills – Monthly HADCRUT4 data. The black and green lines are the upper and lower limits of the forecast. The thin blue line is what actually happened to HADCRUT4. Obviously this was not presented by Mills, as it would not have been a forecast. The figures presented have the thick blue line and limits, but not what actually happened.

It is usual to see how what happens compares to the prediction. I was recently asking for examples of predictions from the skeptical community – they are now coming thick and fast. This one is not looking too good at the moment, but time will tell.

Nick, can you do the same for CET temperatures? The data show a very significant seasonal variation, but it should be possible to see if it is staying within the limits so far. This data series is little “manipulated” or adjusted, so should be a good measure of the prediction method.

That should have been addressed to Jim, not Nick.

All the good professor has to do is make his prediction and decide beforehand what validates it using the real world.

Say 15 years of good fit.

Then wait and see if the prediction is right.

If it’s not, then he has to formulate a new hypothesis and start again.

It’s been tried on the CO2 greenhouse gas model and invalidated the GCMs used.

I sit and wait for new models, new null hypotheses and new testing with subsequent validation/invalidation.

90% of a statistician’s analysis will deal with the accuracy of the recorded data on which models are based. It is probably the best discipline to clear the muddy waters. Go for it, statos. Start in 1850. An awful lot of historical research is required.

The fact that the data are “adjusted” before they are used should tell us most of what we need to know about data accuracy. However, we can take comfort in the fact that the reports of the results are always more precise than the data are accurate. (sarc off)

Chapter 12 of ‘Climate Change – The Facts’ by Green and Armstrong, ‘Forecasting global climate change’, has a pretty definitive exposition of this subject and shows clearly that warmist/alarmist forecasting is unprofessional.

A Warmer world would be a far better world than the one we have now. If mankind is actually capable of warming the planet we should do it. If we are contributing to the warming of the planet through CO2 emissions, we should keep it going. In the long run, CO2 will improve the quality of Life for everyone concerned. Life is a product of Warmth, and CO2 is a vital component of the Circle of Life. Those who stand against Warmth ultimately stand against Life.

I am not trolling; this is my actual position. I live in one of the hottest places on Earth, the American Southwest, and I am not afraid. I have felt the power of 122°F press against my skin, and it is NOTHING compared to the stabbing fangs of sub-zero temperatures. I, a naked ape, would rather take my chances in the Sahara than in the Antarctic. We are hairless for a reason; we were formed in the throes of a Hot House – we are Children of Heat.

The whole debate about whether or not the Earth is warming is Meaningless without first addressing the issue of whether or not it’s actually a Problem.

No Idea is more valid than its Premise – if the Premise is Bogus the Idea is Bogus.

If the Premise that a Warmer Earth is a Problem, is Bogus, the Idea that we should try to stop it is Bogus.

Saying that UK winters will get warmer while we are in a solar minimum is not useful or well researched. And English summers (though not June alone) do show a warming trend:

http://www.metoffice.gov.uk/pub/data/weather/uk/climate/actualmonthly/14/Tmean/England.gif

Great, a battle between two useless things.