Empirical Model Of The Global Mean Surface Temperature

Cooling of the Multidecadal Cyclic GMST until about the 2030s suggests La Nina conditions will dominate over the next twenty years.

Guest post by Girma Orssengo, PhD

The IPCC’s climate-model prediction of global warming of about 0.2 deg C per decade for the next two decades is contrary to the observed climate pattern.

In the following graphs, which show the results of the climate-data analysis, the Observed Global Mean Surface Temperature (GMST) shown in Graph “a” has an oscillating Residual GMST of +/- 0.2 deg C, shown in Graph “b”, and a Multidecadal Cyclic GMST of +/- 0.1 deg C, shown in Graph “e”.

Because of these two oscillating components of the Observed GMST, it is incorrect for the IPCC to claim a constant warming rate of 0.2 deg C per decade sustained for two decades.

Note that for the parameters of the model given in Equation 1, the Residual GMST from 1885 to 2011 shown in Graph “b” has zero mean and zero trend. The result shown in Graph “e” indicates cooling of the Multidecadal Cyclic GMST until about the 2030s. This result suggests La Nina conditions will dominate in the next twenty years. Finally, Graph “f” demonstrates there was no change in the climate pattern before and after the mid-20th century, contrary to the IPCC’s claim.

[Graphs “a” through “f” referenced in the text: images not reproduced]

Observed GMST (Graph a) = Residual GMST (Graph b) + Model Smoothed GMST (Graph c)

Model Smoothed GMST = a*Cos[2*Pi*(Year-1910)/60] + b*(Year-1910)^2 + c*(Year-1910) + d

Where a = -0.1050, b = 3.598*10^(-5), c = 3.27*10^(-3), d = -0.345 (Equation 1)

Secular GMST = b*(Year-1910)^2 + c*(Year-1910) + d (Equation 2)

Multidecadal Cyclic GMST = a*Cos[2*Pi*(Year-1910)/60] (Equation 3)
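Because the 60-year period and the 1910 origin in Equation 1 are held fixed, the model is linear in the parameters (a, b, c, d), so ordinary least squares recovers them without any nonlinear solver. A minimal sketch (numpy only; the synthetic series below stands in for the observed GMST data, which is not reproduced here):

```python
import numpy as np

def model_smoothed_gmst(year, a=-0.1050, b=3.598e-5, c=3.27e-3, d=-0.345):
    """Equation 1: 60-year cosine plus quadratic secular trend, centred on 1910."""
    t = year - 1910.0
    return a * np.cos(2 * np.pi * t / 60.0) + b * t**2 + c * t + d

def fit_parameters(years, gmst):
    """With the period and phase origin fixed, Equation 1 is linear in
    (a, b, c, d); build the design matrix and solve by least squares."""
    t = years - 1910.0
    X = np.column_stack([np.cos(2 * np.pi * t / 60.0), t**2, t, np.ones_like(t)])
    coeffs, *_ = np.linalg.lstsq(X, gmst, rcond=None)
    return coeffs  # (a, b, c, d)

# Noiseless synthetic data generated from the stated parameters,
# then recovered by the fit.
years = np.arange(1880, 2012, dtype=float)
synthetic = model_smoothed_gmst(years)
a, b, c, d = fit_parameters(years, synthetic)
```

On real data the same call returns the least-squares estimates directly; Equations 2 and 3 are then just the quadratic and cosine parts of the fitted model.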


75 thoughts on “Empirical Model Of The Global Mean Surface Temperature”

  1. I’d call this “yet another regression”. What I’m missing is an explanation of the physical relevance of the chosen regression components, particularly the 60-year cycle and the quadratic baseline.

  2. Girma: Here we have again the empirical statistics of the 60-year trisynodic Scafetta cycle, which we discussed in January/February, when our Willis got mad about “statistical curve fitting”, demanding that its astronomical climate-forcing background be put on the table….
    I will do it this coming spring, because this cycle is an important decadal Holocene climate-forcing cycle…. JS

  3. This doesn’t allow for the sudden and deep solar slowdown which has begun, and is likely to reach a nadir around 2035. It is a non-linear non-sinusoidal interregnum which occurs on a complex cycle. For those who don’t think solar variation affects climate much – keep watching.

  4. It’s always been a problem for me that the period 1910-1940 showed the same or a faster temperature increase than the 1979-1998 period. Fully half the temperature increase of the 20th century happened before 1950. Since we all know that the CO2 output of the world greatly accelerated after 1950, with the great industrialization of the world, it makes no sense that the temperature increase prior to 1950 should be the same as the increase after 1950. That would imply that CO2 is not the cause of the warming, but something else is. Or that CO2 took over after the mysterious, unexplained warming of 1910-1940 stopped. Since we now know of a cycle in which temperatures peak roughly every 1000 years, it seems that maybe the warming we are getting now, and got prior to 1950, is mostly or entirely related to that phenomenon.

  5. Now you need to plot CO2 against Graph “d”; then you will get something like what Dr. Vaughan Pratt did on JC’s Climate Etc. some time last year. He claims that Graph “d” is the CO2 contribution.
    Do you have a different attribution?
    Dr. Pratt’s problem is the same one you encounter: analyzing, in climatic terms, too short a data set.
    Now if you turn to the CET and consider a 350 rather than 110 year long data set, the ambiguity disappears:

    http://www.vukcevic.talktalk.net/CET-NV.htm

    Here the oscillating curves have true observational (empirical) properties; they are actual spectral components derived from the actual data set.
    I do not see anything noteworthy in there that can be attributed to the recent CO2 increase that didn’t also occur in the low-CO2 era.

  6. More La Niña conditions mean more droughts in North America. A return to the Dust Bowl.
    That’s bad news. Very bad news if this is true.

  7. Looking at the graphs, it would appear that the rate of warming in raw data Graph “a” from 1910 to 1940 is steeper than the rate from 1960 to 2010. In Graph “c,” however, after mucking about with residuals and model formation, the latter period’s slope is slightly steeper than the former’s.

    Also, I note that, according to the graph titles, Graph “c” is derived from Graph “a” minus Graph “b,” and Graph “b” is derived from Graph “a” minus Graph “c.” A labeling error somewhere. Otherwise interesting, possibly significant. It would be nice if we had a longer set of observations that Hansen hasn’t diddled with.

    • It’s interesting you mention that the temperature rise looks slightly lower between 1910-1940 than after 1950, because the temperature charts I’ve seen for years showed that 1910-1940 went up faster. However, GISS and the other data sources keep adjusting past temperatures. So now it seems that 1910-1940 was slower than after 1950, and also that the decline in temperature between 1945-1975 has become more of a flat period than a period of decline. Why they would feel the need to muck with past temperatures, or how they justify it, is beyond me. It is interesting that the modifications of past temperatures reduce those temperatures 100% of the time. How amazingly coincident with the theory that CO2 is the cause of all warming.

      We’ve seen over and over again in other sciences and disciplines that scientists of all types are consciously or subconsciously prone to experimental bias that confirms their theory. I have seen numerous times that when there are errors in the temperature data that make it look like temperatures decreased for any period of time, intense scrutiny is applied to figure out a way of discarding that data; yet when temperatures come out higher they receive no scrutiny at all. This has been so egregious that at times it’s been hard to believe. For instance, a few years ago the temperature of the world was reported as hitting a new peak. When someone glanced at the data, they found that the Soviet Union’s July temperatures had inadvertently been copied into August and September. Since the temperature can be 10-20 degrees colder a few months later, and since Russia is a very large land mass, this had the effect of massively raising temperatures for the whole planet for that year, until someone noticed that Russia was incredibly hotter than it seemed it should be and pointed this out. The “scientists” at NASA apparently didn’t notice that the whole country of Russia was 10 degrees warmer than it should be in their data.

      Talk about extreme evidence of experimenter bias. Normally the argument is that scientists cross-check the work of other scientists; that there is a “competition” of scientists that weeds out errors of this type. So then why did it take a non-academic lay person to find this large error? Again, proof that the scientific community is not policing itself. This kind of thing happens regularly, and it is almost never found by other scientists but by lay people. The temperature record would not have to be manipulated much for a trend to become significant or insignificant, so the need for accuracy in this data is paramount.

  8. TonyB
    I think a little more explanation might be helpful….

    I agree. I believe there is an unwritten law that links the level of explanation with the level of comprehension. In other words, if you can’t explain your results, then you probably don’t understand them.

  9. Girma, you and I have discussed this before on Climate Etc. It follows from your analysis that there is no CO2 signal in the temperature/time graph that is detectable above the natural noise. Therefore, by definition, the total climate sensitivity for CO2 is indistinguishable from zero, since no signal is detectable.

  10. I get similar results but use an x^2 factor instead of a logarithmic one. Cooling until 2030±5, depending on the dataset. The wavelength changes quite a bit depending on whether you are looking at global, regional, or local areas. Mine iterates until it reaches minimum error, to about 9 digits, not that it is that precise.

    Check out the one on sea level too – declining until 2019! But this was using January data, before the University of Colorado decided such a result was just a little too much to bear. With Envisat out of the way, it makes their job a lot easier.

    http://naturalclimate.wordpress.com/

  11. Girma,
    Place realistic error bands on the data and all your graphs disappear in the haze.
    Even the recording accuracy (+/- 0.5 deg C for most of the data) swamps any trends.

  12. Robbie says:
    September 3, 2012 at 1:12 pm
    More La Niña conditions mean more droughts in North America. A return to the Dust Bowl.
    That’s bad news. Very bad news if this is true.

    Well, I hate to be the bearer of bad tidings… but if you look at the Unisys SST anomalies map http://weather.unisys.com/surface/sst_anom_new.gif you will see a plume of upwelling cold water from the coast of Peru out into the Pacific. That looks very much like a La Nina. I think the Nino 3.4 metrics have been fooled by the huge pool of cold water to the north of the Nino boxes. It certainly does not look like an El Nino.

  13. Steven Mosher says: September 3, 2012 at 2:15 pm

    http://xkcd.com/687/

    ……………………
    Hi Steven
    Looks very familiar

    CET = (pi) f (a, b, c)
    a = algorithm for width of English Channel due to the continental drift
    b = baroclinic pressure at the Earth core (by proxy of Earth’s magnetic field)
    c = ciclo solare (solar cycle)

  14. Girma,
    extrapolate graph c into a cycle and predict “something else”; predict something that has already happened (tick), then apply it to more data (do it again).

    “Make something else come out right, in addition” -Feynman

    Otherwise it’s just cargo cult crap (like the hockey stick). Time to do something more useful with a PhD.

  15. Caption check:
    Graph d = Graph c – Graph e
    Graph e = Graph c – Graph d

    Therefore:
    Graph e = Graph c – (Graph c – Graph e)
    Graph e = Graph e

    So Graph e has no relation to Graph c, nor to Graph d, only to itself.

    Is that what you were trying to say?

  16. This article, looking at the effects of Hadley “corrections” to SST

    http://judithcurry.com/2012/03/15/on-the-adjustments-to-the-hadsst3-data-set-2/

    contains this graph:
    http://curryja.files.wordpress.com/2012/03/icoads_monthly_adj0_40-triple1.png which shows a similar fit, but using several periods, not just a 60-year period.

    The periods and starting years are determined by non-linear regression, and are thus determined by the data.

    Exponential increase in CO2 would only produce a linear (or likely less) increase in temp. I see no reason to fit a quadratic.

    Looking at the rate of change (dT/dt) is most informative, since it removes the constant baseline temperature, which is arbitrary in the case of “anomalies”. The “constant” 0.42 K/century in dT/dt corresponds to a linear rise in temperature.

    tallbloke says:
    September 3, 2012 at 12:52 pm

    This doesn’t allow for the sudden and deep solar slowdown which has begun, and is likely to reach a nadir around 2035. It is a non-linear non-sinusoidal interregnum which occurs on a complex cycle. For those who don’t think solar variation affects climate much – keep watching.

    Well, Girma’s plot doesn’t, but the three terms in the dT/dt plot may well catch something like it. The system probably is “non-linear, non-sinusoidal” in reality, but 164 + 64 + 21 year cycles seem to give something near to what may be expected from very low cycles 24 and 25.

    The d2T/dt2 plot is also interesting but requires more commentary, which I’ll omit for brevity here.

  17. Looks like a lame curve fitting exercise without any physical basis.

    I could come up with a dozen function forms, fit the function to the data and come up with tiny residuals. It would all be meaningless.

    It is only meaningful if there is a physical basis for the functional form.

  18. So, another computer model says cooling for the next 15 years. Somehow, I think “Mother Nature” will throw us a curve ball.

  19. From LazyTeenager on September 3, 2012 at 4:16 pm:

    I could come up with a dozen function forms, fit the function to the data and come up with tiny residuals.

    Good idea, let’s see what the possibilities are. Post your work when done.

  20. “””””…..Empirical Model Of The Global Mean Surface Temperature…..”””””

    So we have an “empirical” (made up) model of the GMST, something we have no believable idea of. Or alternatively, of which, we have no believable idea.

    Back in the 1960s, there was a famous paper on an “empirical” model of the fine structure constant alpha, or more strictly of 1/alpha, known to be around 137.
    The “empirical” model was 1/alpha = ((pi)^a * b^c * d^e * f^g)^(1/4), where a through g are small integers, not necessarily different. The paper gave the actual values of a through g.
    The “empirical” model’s predicted (excuse me, projected) value agreed with the best peer-reviewed experimentally measured value of 1/alpha to within two-thirds of the standard deviation of that measurement, which happens to be known to a few parts in 10^8. I would say that’s a pretty “empirical” agreement with reality.

    Like Dr Orssengo’s “empirical” model, this fine-structure model had no known connection to the physical universe; but it obviously was correct, because it was accurate to a few parts in 10^8, so it was widely embraced, although no-one could discern how it connected to reality.

    A month later, a computer geek published a list of several other sets of values (small integers) for a, b, c, d, e, f, g which also led to 1/alpha within the standard deviation, currently about 4.5E-8.
    One of those was twice as accurate as the earlier result.
    A month after that, a more theoretical geek published a model of an N-dimensional sphere whose radius was 1/alpha; the solutions were the lattice points in this N-space, for different a through g, lying within a thin shell whose inner and outer radii differ from 1/alpha by the standard-deviation increment; and he derived a complete list of at least 8 value sets that fit the “empirical” model.
    So if you don’t think you can get a believable “empirical” result by simply f*****ing around with numbers, think again; you CAN!
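The commenter’s point is easy to reproduce in miniature. The toy search below is a loose illustration only: it uses three factors instead of the original four, and a far coarser tolerance (1%, not parts in 10^8), since the exact published formula is not given here. Even so, a crude scan over small-integer exponents turns up more than one “fit” to 1/alpha:

```python
import itertools
import math

INV_ALPHA = 137.035999  # 1/alpha, approximately

def small_integer_fits(tol=0.01, max_int=6):
    """Scan (pi^a * b^c * d^e)^(1/4) over small integers and collect every
    combination within `tol` (relative) of 1/alpha. The moral: multiple
    'hits' exist, so agreement of such a formula with the measured value
    proves nothing by itself."""
    hits = []
    rng = range(1, max_int + 1)
    for a, b, c, d, e in itertools.product(rng, repeat=5):
        value = (math.pi ** a * b ** c * d ** e) ** 0.25
        if abs(value - INV_ALPHA) / INV_ALPHA < tol:
            hits.append((a, b, c, d, e, value))
    return hits

hits = small_integer_fits()
```

Tightening `tol` thins the hits out, but widening the integer range brings them back: the number of candidate formulae grows much faster than the precision requirement shrinks the window.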

  21. I’ll mention to Dr Orssengo these two graphs, where HadCRUT3 since 1850 is detrended by the quadratic y = 0.000028*(x-1850)^2 - 0.41.

    The next logical step is to incorporate the correlation of previous solar cycle length to temperature. After combining this and the cycle shown in Dr Orssengo’s graph (e) the residual appears to fit well with Lindzen & Choi’s value for 2xCO2.

  22. LazyTeenager: “Looks like a lame curve fitting exercise without any physical basis. ”

    Yep. Just like Newton’s gravity, Planck’s quanta, Einstein’s relativity, the Higgs boson, Physics, Astronomy, Psychology, Sociology, Climate Science, MMTS adjustments, TOBS adjustments, treemometers and practical proxy practices. Physics may seem particularly queer in that list. But physics is based on empirical modelling taken to Platonic abstraction, for better or worse.

    Now, if you’re asking for materialist explanans, then that’s an entirely different affair and important only to Philosophers and Metaphysicians (they have tonics). For otherwise it’s a case of ‘Eppur si muove’. Though the work in the OP is still behind the curve of the resident Vukcevic when it comes to Fourier analysis of climatological cycles.

  23. See Klyashtorin and Lyubushin, 2007. While modern “Atmospheric CO2 increase as a lagging effect of ocean warming” may have foundered on the shoals of statistics lacking physical basis, my sense is this one will not. It was from fish, after all, that we learned of PDO.

  24. This posting is handwaving without even bothering to make clear how the results were derived. Since the author has a Ph.D., the author should know that there is published literature, based on unit-root time-series analysis, used to determine whether the global temperature anomaly (GTA) time series is stationary and whether there is an upward trend in the series. The author should know the following peer-reviewed papers on this topic:

    http://www.uoguelph.ca/~rmckitri/research/warming.pdf

    http://www.buseco.monash.edu.au/ebs/pubs/wpapers/2011/wp4-11.pdf

    http://www.earth-syst-dynam-discuss.net/3/561/2012/esdd-3-561-2012.html

  25. Looks like a lame curve fitting exercise without any physical basis.

    No supporter of GCMs should object to curve fitting.

    While the writers of GCMs give explanations for their epicycles, that doesn’t make those explanations true. Occam’s Razor and all that.

  26. I’d call this “yet another regression”. What I’m missing is explanation of physical relevance of chosen regression components, particularly the 60-year cycle and quadratic baseline.

    The purpose of an empirical model is to represent the observed data as well as possible. It is clear that in Graph “f” 100% of the observed data is bounded by the model. Representation of the data, rather than explanation of the physics, is the purpose of an empirical model.

  27. Tallbloke says
    This doesn’t allow for the sudden and deep solar slowdown which has begun, and is likely to reach a nadir around 2035. It is a non-linear non-sinusoidal interregnum which occurs on a complex cycle. For those who don’t think solar variation affects climate much – keep watching.

    Henry@Tallbloke or anyone who can help
    On the deceleration of maximum temperatures: I discovered it is an AC wave, but I don’t know how to do the plot (best fit). See:

    http://wattsupwiththat.com/2012/08/23/agu-link-found-between-cold-european-winters-and-solar-activity/#comment-1067753

    Anybody here who can help, please?

  28. A model is only useful if it allows predicting future behaviour. I see no predictions by the author that could be later falsified by the real outcome.

  29. Nylo
    A model is only useful if it allows predicting future behaviour. I see no predictions by the author that could be later falsified by the real outcome.

    The model established a pattern, as shown in Graph “f”. From this graph it is easy to predict the climate, if the pattern continues: little warming in the next 15 years.

  30. Maus says: September 3, 2012 at 7:38 pm
    …….the resident Vukcevic when speaking of….

    the resident Vukcevic here on WUWT is well behind the curve of himself, but there is always hope that he may catch up soon with his own private research.

    Nylo says:
    September 3, 2012 at 11:58 pm
    A model is only useful if it allows predicting future behavior
    Doing predictions is froth with danger.
    I do extrapolation. When it fails, it is fault of the used data set limitations such as length, resolution, compilation and many other factors I can’t be held accountable for.
    :)

  31. HenryP says:
    Henry@Tallbloke or anyone who can help
    On the deceleration of maximum temperatures I discovered it is an ac wave, but I don’t know how to do the plot (best fit). See:

    http://wattsupwiththat.com/2012/08/23/agu-link-found-between-cold-european-winters-and-solar-activity/#comment-1067753

    Anybody here who can help, please?

    Try gnuplot. It has a “fit” command for non-linear least-squares fitting of your desired function, and very flexible ways to plot it all. It takes a bit of reading to learn how to get the best out of it, but the effort is well worth the time. It goes well beyond just plotting once you master it.
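If gnuplot is not to hand, the same kind of fit can be sketched with numpy alone. One simple approach (an assumption of this sketch, not what gnuplot does internally): scan trial periods, and at each period solve the linear amplitude/phase/offset sub-problem by least squares, keeping the period with the smallest residual. The series below is synthetic; substitute the actual maximum-temperature data:

```python
import numpy as np

def fit_ac_wave(t, y, periods):
    """For each trial period P, fit y ~ A*sin(2*pi*t/P) + B*cos(2*pi*t/P) + C
    by linear least squares and keep the best P. The only nonlinear unknown
    (the period) is handled by the grid scan."""
    best = None
    for p in periods:
        w = 2 * np.pi / p
        X = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = np.sum((y - X @ coef) ** 2)
        if best is None or resid < best[0]:
            best = (resid, p, coef)
    resid, period, (A, B, C) = best
    amplitude = np.hypot(A, B)   # combined sin/cos amplitude
    return period, amplitude, C

# Synthetic check: a 60-year wave of amplitude 0.3 on a 0.1 offset.
t = np.arange(0, 200, dtype=float)
y = 0.3 * np.sin(2 * np.pi * t / 60.0) + 0.1
period, amp, offset = fit_ac_wave(t, y, periods=np.arange(20.0, 101.0, 1.0))
```

The grid can then be refined around the best period, and the fitted curve plotted against the data with any plotting tool.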

  32. vukcevic says:
    Doing predictions is froth with danger.
    I do extrapolation. When it fails, it is fault of the used data set limitations such as length, resolution, compilation and many other factors I can’t be held accountable for.
    :)

    Data does not extrapolate. Only fitting a model can do that. Assumptions about the suitability of the model, the method of determining the fitted parameters and the assumption that the model will still be valid beyond the range of the source data are also key factors.

    All of which can be confounded by poor quality or manipulated data sets. So the whole exercise is indeed fraught with froth. ;)
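P. Solar’s caveat, that a model adequate in-sample can fail badly beyond the range of the source data, is easy to demonstrate with a toy example that assumes nothing about climate data: fit a quadratic to the first cycle of a sine wave, then extend it.

```python
import numpy as np

# A quadratic fitted to one cycle of a sine wave describes it tolerably
# well in-sample, but the extrapolated parabola runs away from the
# bounded signal it was fitted to.
t = np.linspace(0.0, 6.0 * np.pi, 400)
y = np.sin(t)

n = len(t) // 3                        # train on the first cycle only
coeffs = np.polyfit(t[:n], y[:n], deg=2)
yhat = np.polyval(coeffs, t)

in_rmse = np.sqrt(np.mean((yhat[:n] - y[:n]) ** 2))    # modest
out_rmse = np.sqrt(np.mean((yhat[n:] - y[n:]) ** 2))   # several times larger
```

The out-of-sample error is several times the in-sample error, for exactly the reasons listed above: the model form, the fitted parameters, and the assumption of continued validity all matter once you leave the data.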

  33. Girma says: The model established a pattern as shown in Graph “f”. From this graph, It is easy to predict the climate if the pattern continues => Little warming in the next 15 years.

    Your residuals are about twice the amplitude of your 60y cycle. So either noise is twice as big as your signal or there are other more significant factors you are not accounting for.

    Why do you assume the modelled part will dominate the larger residuals you do not capture in the model?

    Why did you choose to fit a parabola, and what could cause such a variation?

    BTW, I agree that what you suggest is likely but don’t see that it follows from what you show here.

  34. LazyTeenager says:
    “It is only meaningful if there is a physical basis for the functional form.”

    So empirical tide predictions are “meaningless” then. Nonetheless, they have proved incredibly reliable the world over for more than a century.

    I suppose being LazyTeenager means you don’t need to think before posting.

  35. P. Solar says: September 4, 2012 at 3:22 am
    Data does not extrapolate
    Agreed; perhaps I should have been less circumspect. The first part of my post relates to Maus’ reference to the Fourier analysis, so I continued with the ‘extrapolation’ comment; extrapolation is often used, once the spectral content of the data is found, to show what projecting either forward or back in time may reveal. Using MS Word’s auto spellchecker is also ‘fraught with froth’.

  36. A model is only useful if it allows predicting future behaviour. I see no predictions by the author that could be later falsified by the real outcome.

    Spoken like a chartist, and completely wrong. The major benefit of good models is to help one understand how systems work.

  37. P. Solar

    Your residuals are about twice the amplitude of your 60y cycle. So either noise is twice as big as your signal or there are other more significant factors you are not accounting for.

    Good question.

    Though the magnitude of the Multidecadal Cyclic GMST is only half of the Residual GMST, it appears that the Multidecadal Cyclic GMST drives the Residual GMST. For example, in Graph “f”, in the 1910s, the Multidecadal Cyclic GMST was below the secular trend curve and the Residual GMST was also near the bottom of the GMST band. In contrast, in the 1940s, the Multidecadal Cyclic GMST was above the secular trend curve and the Residual GMST was also near the top of the GMST band. In the 1970s, the Multidecadal Cyclic GMST was below the secular trend curve and the Residual GMST was also near the bottom of the GMST band.

    Therefore, it appears that the Multidecadal Cyclic GMST drives the Residual GMST.

  38. Check out my paper on Central UK Max Temp vs Sunshine Hours at NothingSettledNothingCertain.com (also at Tallbloke’s Talkshop): by comparing Bright Sunshine Hours to Max Temperatures from 1930, I derived very much the same thing, except that instead of the curvilinear trend I got a linear trend (which doesn’t look unreasonable here, either). The AMO/PDO cyclicity, loaded onto a sunshine-hours linear rise, accounted for all but 0.1C/century, which could easily be UHIE or land-use related.

    I don’t think there is a global maximum-sunshine-hours record going back to 1920. Too bad: any increase in bright sunshine is a decrease in cloud cover. Ipso facto, Lord Monford might say.

    Of course you have to explain why there has been an historical reduction in cloud cover, but perhaps the warmists could say that more moisture causes more rain, which causes fewer clouds at some point in the day, which leads to more bright-sunshine hours in the records: there’s a political spin for everything.

  39. “””””…..Eli Rabett says:

    September 4, 2012 at 5:41 am

    A model is only useful if it allows predicting future behaviour. I see no predictions by the author that could be later falsified by the real outcome.

    Spoken like a chartist and completely wrong. The major benefits of good models is to help one understand how systems work……”””””

    Well george e. smith believes that the major benefit of good models is to help one understand how the models work. He thinks we should be so lucky as to have real systems behave the same as our models. In the case of the GCMs, he believes the climate system does not behave the same as the models; or else we wouldn’t need 13 of them, or whatever the count is now up to.

  40. “””””…..John Marshall says:

    September 4, 2012 at 2:04 am

    Really? But the output of the sun is still falling and the sun is the ONLY source of heat we have to drive climate……”””””

    Well, actually, no. One thing we do NOT get from the sun is “heat”; well, not any measurable amount, other than the microscopic amount of convection due to the arrival of solar energetic particles on earth.

    We do get a lot of electromagnetic radiation energy from the sun; but that is NOT “heat”.

    We make all our heat right here on earth, mostly by wasting the solar energy.

  41. Girma says: Therefore, it appears that the Multidecadal Cyclic GMST drives the Residual GMST.

    No, with respect, I think that’s too simplistic, otherwise you would have just found a larger 60y magnitude, but that would not fit later in the century.

    I think there probably are other shorter-period terms that are constructively interfering at the points you note and destructively interfering elsewhere. For example, a circa-20y cycle aligning with your 60y cycle in 1940 and the 2000s.

    You could try a similar thing with dT/dt (just the difference of each successive pair of data points) and also the second difference. If your 60y cycle is robust it should come out about the same (with a suitable phase shift of pi/2 and a reduced amplitude). The parabola will become a linear increase in the rate of change.

    Your parabola will be a constant offset in the second difference. How big? Does it match your Graph “d”?

    Be warned, the second difference will be *very* noisy and will need filtering. Please don’t be tempted to use a “runny mean” as a poor man’s filter; it distorts so much that you’d be wasting your time looking for cycles.

    Higher order derivatives will attenuate longer periods (centennial or more) so if your fit results produce similar values there’s more chance that they represent a true signal and not just an arbitrary, coincidental fit to the data.
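The differencing suggested above is a one-liner in numpy, and a repeated binomial smooth is one example (an assumption of this sketch, not the commenter’s prescription) of a filter that avoids the running mean’s distortion. Annual steps are assumed, so dt = 1 year, and Equation 1’s cosine-plus-parabola stands in for the data:

```python
import numpy as np

def binomial_filter(y, passes=4):
    """Repeated [1, 2, 1]/4 smoothing: a cheap near-Gaussian kernel whose
    frequency response stays non-negative, so (unlike a running mean) it
    never inverts the phase of a cycle it attenuates."""
    kernel = np.array([0.25, 0.5, 0.25])
    for _ in range(passes):
        y = np.convolve(y, kernel, mode="same")
    return y

years = np.arange(1880.0, 2012.0)
t = years - 1910.0
temp = -0.1050 * np.cos(2 * np.pi * t / 60.0) + 3.598e-5 * t ** 2

d1 = np.diff(temp)        # ~dT/dt: cosine -> sine (pi/2 shift), parabola -> line
d2 = np.diff(temp, n=2)   # ~d2T/dt2: parabola -> constant offset of 2b per year^2
d2_smooth = binomial_filter(d2)
```

The second difference of the quadratic term b*t^2 at unit step is exactly 2b, which is the “constant offset” the comment asks about; on real data only the smoothed `d2_smooth` would be worth inspecting.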

  42. P. Solar

    Look at the following reference on the relationship between PDO with La Nina and El Nino.

    …phase changes in the PDO have a propensity to coincide with changes in the relative frequency of ENSO events, where the positive phase of the PDO is associated with an enhanced frequency of El Niño events, while the negative phase is shown to be more favourable for the development of La Niña events.

    Verdon, D. C. and S. W. Franks (2006), Long-term behaviour of ENSO: Interactions with the PDO over the past 400 years inferred from paleoclimate records, Geophys. Res. Lett., 33, L06712, doi:10.1029/2005GL025052.

    http://bit.ly/OgPUTT

  43. Dr. Orssengo’s argument is based, in part, upon conflation of the idea that is referenced by the word “projection” with the idea that is referenced by the word “prediction.” Hence, for the umpteenth time in this blog, I have to point out that the two ideas differ. A “prediction” is an extrapolation from an observed to an unobserved but observable state of an event. IPCC models reference no events; hence they make no “predictions.”

    • Terry: Your word, please, about PREVISION. This word is employed in the Spanish weather report, “previsión del tiempo” (“forecast of the weather”); and projection of the tiempo is like prediction of the tiempo is like prevision of the tiempo…. The narrator stands in front of the weather map and his narration is all the same: prevision, prediction and projection of tomorrow’s weather…..

      • Joachim Seifert:

        It is possible for the audience for a debate to reach a false conclusion through the ambiguity of reference by terms in which this debate is conducted to the associated ideas. To reach a false conclusion is a possibility when the terms “projection” and “prediction” are used as synonyms for the two terms reference different ideas. To use the two terms as synonyms is a commonplace in discourse on climatology.

        For a person of honest intent, there is not a downside to disambiguation of the language of the debate that maintains a distinction between the idea that is referenced by “projection” and the idea that is referenced by “prediction.” For a person of dishonest intent, there is the downside of losing the opportunity for illicit profit.

  44. Terry Oldberg,

    I like your comments, but I don’t think you’re going to change very many minds. Everyone, including the IPCC, takes their projections as predictions. Alarmists want to scare money out of taxpayers, and skeptics want to show how wrong the IPCC is:

    click1
    click2

    There are more, but you get the point.

  45. Terry Oldberg says:
    September 4, 2012 at 9:19 pm
    In the simplest of terms, a projection is the forward extrapolation of a trend; a bit like a slide-presentation projector, projecting the current image forward while following the current or most recent trend (be it up or down).
    A prediction is similar, except that it is based on assumed future properties and future conditions, generally not the same as current ones.

    In this (climatology) instance, the distinction can perhaps be easily explained with the slide-projector analogy as:
    projection: the current graph is projected forward (usually enlarging it) at the current trend (e.g. temps going upwards!).
    prediction: the current trend is assumed to have been modified by other ‘things’ (usually modelled!), so the graph is also ‘projected’ forward, but in addition the light waves are being bent by some assumed force (e.g. positive/negative feedbacks), or the white screen is being moved forwards or backwards to affect the final ‘image’.

    I can probably think of a better analogy, but no time this morning!

    Whilst I see that many use them ambiguously, there is, or should be, a distinction – at least in the scientific sense of their use.
    regards
    Kev

  46. Terry Oldberg:

    At September 4, 2012 at 9:19 pm you say:

    It is possible for the audience for a debate to reach a false conclusion through the ambiguity of reference by terms in which this debate is conducted to the associated ideas. To reach a false conclusion is a possibility when the terms “projection” and “prediction” are used as synonyms for the two terms reference different ideas. To use the two terms as synonyms is a commonplace in discourse on climatology.

    Of course you are right. “Prediction” and “projection” are very different.

    A prediction is a forecast of an anticipated event provided by a conjecture, hypothesis or theory. Comparison of the actual event with the forecast indicates whether the conjecture, hypothesis or theory requires amendment or rejection. This comparison of prediction with outcome is called experiment and is essential to the method called science used in several disciplines; e.g. physics, cosmology, biology, etc.

    A projection is a forecast of an anticipated event which is – or is based on – a guess or set of guesses. Comparison of the actual event with the forecast indicates a need to explain how the forecast was misunderstood. This ‘adjustment’ to obtain agreement of projection with outcome is called excuse and is essential to the method called pseudoscience used in several disciplines; e.g. astrology, climate science, palmistry, etc.

    Richard

  47. Kev-in-Uk:

    With respect, your post at September 4, 2012 at 11:55 pm makes a false distinction between “prediction” and “projection”.

    As I state in my post at September 5, 2012 at 12:56 am, in science a prediction is a forecast of an anticipated event provided by a conjecture, hypothesis or theory.

    You say “projection” is “the forward extrapolation of a trend”. But that says a projection is a prediction provided by the conjecture that the trend will not alter. Indeed, if ‘projection’ is merely ‘extrapolation’ then there is no need for the word ‘projection’: extrapolation has clear meaning.

    The true difference between “prediction” and “projection” is as I state in my post at September 5, 2012 at 12:56 am.

    Richard

  48. Kev-in-Uk:

    In retrospect, the simplification for clarity in my response to your post could be thought to be a misrepresentation of your post (which was at September 5, 2012 at 2:14 am). That was not my intention, so I provide this addendum.

    If a ‘projection’ is based on an assumed or conjectured alteration to a trend then that does not alter my rebuttal of your distinction in any way.

    Richard

  49. richardscourtney says:
    September 5, 2012 at 2:50 am

    I was perhaps oversimplifying, but my thoughts remain the same – in that if you have observed a trend (e.g. rising temps) and make an assumption that the trend will continue, you are in effect ‘projecting’ that trend into the future, without any additional effects.
    If you want to consider that maintaining the observed trend is a ‘theory’ then that’s fine of course!
    And yes, I suppose it is still also a prediction in the sense that you are making an assumption that nothing changes, but it is still based on extrapolation of the ‘current’ data, yes?
    My thoughts on ‘prediction’ are that it is based on the further assumption that ‘other’ (usually several?) effects will take place, which, as you suggest, are either guesses or probabilistic tendencies and indeed can be pure conjecture!
    So, to my mind, when the climate boys use the term ‘prediction’ it should only be when they have ‘added in’ additional conjecture/guesswork, math, etc. (so logically, all models must be predictive?), and when they use the term ‘projection’ it should be only when current observations/trends are being extrapolated forward.
    Not sure if that makes sense to others, but it does to me!
    In respect of ‘extrapolation’ – that can be extrapolating between two points, not just projecting a line forwards of the last point, yes? In which case, the extrapolation becomes a projection?
    Either way it makes my head hurt! gotta go – busy today!
    cheers
    Kev

  50. Kev-in-Uk:

    Thank you for your reply to me at September 5, 2012 at 4:05 am.

    You say

    In respect of ‘extrapolation’ – that can be extrapolating between two points, not just projecting a line forwards of the last point, yes? In which case, the extrapolation becomes a projection?

    Sorry, but I don’t get it.

    I fail to understand any difference between extrapolation and your explanation of ‘projection’.

    The inference of values between two points is interpolation (n.b. not extrapolation). And not all trends are linear.

    I stand by my understandings of prediction and projection as I explain them in my post at September 5, 2012 at 12:56 am. As Terry Oldberg rightly says, these understandings go to the heart of all the modeling principles intended to indicate future climate, including the principles of the empirical model provided by Orssengo.

    Richard

  51. Terry Oldberg

    IPCC models reference no events hence they make no “predictions.”

    For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios.

    That is a prediction, as all the scenarios give the same estimate of a warming of 0.2 deg C per decade.

    • I agree, because over 2 decades 0.4 C is projected, i.e. foreseen. And if you
      foresee 0.4 C and then TALK about it, you predict 0.4 C. In all cases, if you
      foresee the future and then talk about it or make graphs of the future, you make
      a prediction. Both cases have an underlying forecast, which, once stated,
      automatically produces a prediction of the future.

  52. richardscourtney says:
    September 5, 2012 at 6:22 am

    correct – in my haste, I forgot the correct term (i.e. interpolation) – but my observation still stands, in that an extrapolation is an inferred ‘projection’ from known data using currently observed trends. Or vice versa, a projection is inferred from an extrapolation of the current data. So, they are essentially identical in this context?
    A projection cannot be considered to be based on ‘made up’ or ‘altered’ (by models, etc.) future data – to my way of thinking, that then becomes a prediction, and is substantially different in its make-up.
    If we disagree, then I guess we will just have to disagree!
    regards
    Kev

  53. Girma:

    As you say in your post at September 5, 2012 at 6:45 am, the IPCC does make predictions. However, you illustrate this with reference to the SRES scenarios.

    IPCC AR4 Chapter 10.7 provides a more clear example of direct predictions involving (a) no scenarios and (b) SRES scenarios. It can be read at

    http://www.ipcc.ch/publications_and_data/ar4/wg1/en/ch10s10-7.html#10-7-1

    The IPCC says there

    The multi-model average warming for all radiative forcing agents held constant at year 2000 (reported earlier for several of the models by Meehl et al., 2005c), is about 0.6°C for the period 2090 to 2099 relative to the 1980 to 1999 reference period. This is roughly the magnitude of warming simulated in the 20th century. Applying the same uncertainty assessment as for the SRES scenarios in Fig. 10.29 (–40 to +60%), the likely uncertainty range is 0.3°C to 0.9°C. Hansen et al. (2005a) calculate the current energy imbalance of the Earth to be 0.85 W m–2, implying that the unrealised global warming is about 0.6°C without any further increase in radiative forcing. The committed warming trend values show a rate of warming averaged over the first two decades of the 21st century of about 0.1°C per decade, due mainly to the slow response of the oceans. About twice as much warming (0.2°C per decade) would be expected if emissions are within the range of the SRES scenarios.

    So, the unequivocal predictions are
    “The committed warming trend values show a rate of warming averaged over the first two decades of the 21st century of about 0.1°C per decade, due mainly to the slow response of the oceans.
    Please note that this is “committed warming” “of about 0.1°C per decade” which must happen because of effects prior to “the first two decades of the 21st century”.
    and
    “About twice as much warming” (0.2°C per decade) would be expected “if emissions are within the range of the SRES scenarios.” And the emissions have been in that range.
    The IPCC says all these predictions have estimated uncertainty of –40 to +60%.

    But there has been no warming since the start of the 21st century. Therefore, for the prediction of solely “committed warming” to be correct there must be a continuous linear rise of more than 0.4°C before the end of this decade or, alternatively, there must now be an instantaneous rise of more than 0.2°C that is sustained for the remainder of this decade. Even allowing for the minimum estimate resulting from the estimated uncertainty of -40%, the needed linear rise over the next 8 years is more than 0.38°C and the required instantaneous rise is more than 0.18°C to be sustained for the next 8 years.

    And for the prediction of “committed warming” together with emissions “within the range of the SRES scenarios” to be correct then there must be continuous linear rise of more than 0.8°C before the end of this decade or, alternatively, there must now be an instantaneous rise of more than 0.4°C that is sustained for the remainder of this decade. Even allowing for the minimum estimate resulting from the estimated uncertainty of -40%, the needed linear rise over the next 8 years is more than 0.7°C and the required instantaneous rise is more than 0.38°C to be sustained for the next 8 years.
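    The catch-up arithmetic in the two paragraphs above can be sketched in a few lines. An assumption on my part, not stated in the comment: the sustained-step and linear-ramp scenarios are being equated by their time-mean anomaly over the remaining eight years, so a step of S is equivalent to a ramp ending at 2S. On that reading the headline figures fall out directly; the -40% variants are not reproduced here.

```python
# Sketch of the catch-up arithmetic (an assumed reading, not the commenter's
# own calculation): with no warming so far this century, a predicted total
# rise must be made up either as an instantaneous step sustained to 2020, or
# as a linear ramp with the same time-mean anomaly, which must end twice as
# high.

def catch_up(total_rise):
    """total_rise: predicted warming (deg C) by 2020 not yet observed.
    Returns (sustained_step, linear_ramp_endpoint) for the remaining years."""
    step = total_rise        # a step of S held flat has time-mean S
    ramp = 2.0 * total_rise  # a ramp from 0 to 2S also has time-mean S
    return step, ramp

# "Committed warming" alone: 0.1 C/decade x 2 decades = 0.2 C by 2020
print(catch_up(0.2))  # (0.2, 0.4)
# Committed warming plus SRES-range emissions: 0.2 C/decade x 2 decades
print(catch_up(0.4))  # (0.4, 0.8)
```

    Under this reading, the “more than 0.2°C step / 0.4°C linear” and “more than 0.4°C step / 0.8°C linear” figures quoted in the comment correspond to the two calls above.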

    Technically, the IPCC predictions could come true. But for that to be so, the rises I mention must occur over the next 8 years. And these rises are extremely improbable: the total rise over the last century was less than 0.8°C.

    So, the IPCC makes specific predictions and those predictions are wrong.

    Richard

  54. Eli Rabett says:
    September 4, 2012 at 5:41 am

    “Spoken like a chartist and completely wrong. The major benefits of good models is to help one understand how systems work.”

    There are many types of models: behavioral models, functional models, and physical models of varying fidelity. Some of these are nothing more than guesses at how the system works by the modeler, biases and assumptions included.
    GCMs are somewhere between a functional model and a low-fidelity physical model; they include modeler biases and assumptions, and provide little in the way of understanding of the system.

    • It all comes down to this. The 2007 IPCC report predicted temps by 2012 would be
      about 0.4 deg C warmer than they actually are, and the minimum temperature possible
      had we stopped CO2 production years ago is still higher than the actual temperature.
      There is simply no way to explain this other than that the theory is wrong. Even the
      scientists who built the models said that if the temperature showed a 15-year trend
      of zero then the models were wrong. The next models, I hope, will show the effect of
      the PDO/NAO/ENSO and therefore, I believe, a much reduced overall impact of CO2 –
      probably less than 50% of current models. I don’t see how they can produce a report
      and not look like idiots unless they adjust the sensitivity to CO2. Also, I will be
      extremely disappointed if they don’t consider longer cyclical phenomena and
      reinstate the MWP and LIA. Numerous peer-reviewed studies have shown that there are
      these longer cycles and they must be accounted for in the analysis. I just don’t see
      how the next report can be honest at all and not admit that the impact of CO2 has to
      be recalibrated.

  55. Sorry

    From the above IPCC chart, the trend FOR ALL THE SCENARIOS is for a warming of 0.2 deg C per decade until about [2025].

  56. Girma:

    Thank you for your responses to me that provide support to our mutual presentations of the fact that the IPCC does make predictions.

    I now write to provide a confident prediction; viz
    The IPCC predictions will be morphed into projections but the Orssengo predictions will not.
    I explain these predictions as follows.

    1.
    My post at September 5, 2012 at 8:11 am concluded saying;

    Technically, the IPCC predictions could come true. But for that to be so, the rises I mention must occur over the next 8 years. And these rises are extremely improbable: the total rise over the last century was less than 0.8°C.
    So, the IPCC makes specific predictions and those predictions are wrong.

    2.
    My earlier post at September 5, 2012 at 12:56 am defines scientific “prediction” and “projection”. I remind readers that those definitions are:

    A prediction is a forecast of an anticipated event provided by a conjecture, hypothesis or theory. Comparison of the actual event with the forecast indicates whether the conjecture, hypothesis or theory requires amendment or rejection. This comparison of prediction with outcome is called experiment and is essential to the method called science used in several disciplines; e.g. physics, cosmology, biology, etc.

    A projection is a forecast of an anticipated event which is – or is based on – a guess or set of guesses. Comparison of the actual event with the forecast indicates a need to explain how the forecast was misunderstood. This ‘adjustment’ to obtain agreement of projection with outcome is called excuse and is essential to the method called pseudoscience used in several disciplines; e.g. astrology, climate science, palmistry, etc.

    MY PREDICTIONS ARE

    As 2020 draws near, whenever the failure of the IPCC predictions is mentioned, the excuse will be made that the predictions were actually projections, and the use of that excuse will be direct proof that they have become projections.

    As 2020 draws near, the Orssengo predictions will be compared to empirical data, and this experiment will be direct proof that they are predictions.

    Richard

Comments are closed.