The Met Office responds to Doug Keenan’s statistical significance issue

Bishop Hill reports that Doug Keenan’s article about statistical significance in the temperature records seems to have had a response from the Met Office.

WUWT readers may recall our story here: Uh oh, the Met Office has set the cat amongst the pigeons:

===========================================

The Parliamentary Question that started this was put by Lord Donoughue on 8 November 2012. The Question is as follows.

To ask Her Majesty’s Government … whether they consider a rise in global temperature of 0.8 degrees Celsius since 1880 to be significant. [HL3050]

The Answer claimed that “the temperature rise since about 1880 is statistically significant”. This means that the temperature rise could not be reasonably attributed to natural random variation — i.e. global warming is real. 

The issue here is the claim that “the temperature rise since about 1880 is statistically significant”, which was made by the Met Office in response to the original Question (HL3050). The basis for that claim has now been effectively acknowledged to be untenable. Possibly there is some other basis for the claim, but that seems extremely implausible: the claim does not seem to have any valid basis.

=============================================

The Met Office website text is here and there is a blog post here.


133 thoughts on “The Met Office responds to Doug Keenan’s statistical significance issue”

  1. All the response says is that there are other reasons, statistically, for the temperature rise than that proposed for CAGW CO2. It does NOT invalidate the CAGW hypothesis. Since the IPCC et al claim there are real-world theories and observations for the CO2-as-demon narrative, CAGW lives still: while multiple causes could be responsible, investigation has narrowed the suspect down to one.

    It is as if a murder case were underway in court, and the defense attorney has asked the detective if Colonel Mustard could have done the crime and not the Professor in the box. The detective says yes, the Professor could have done it (motive, opportunity, fingerprints on the gun), but the Colonel not only had all those things but was seen by three policemen and a nun pulling the trigger and kicking the body.

    The admission means unprecedented and unique cannot stand, but the villain charged still looks best for the crime.

  2. I read the introduction. It does not say what I expected it to say. Their arguments are lame. The best model they could come up with, to show that Keenan’s choice of model is also not perfect, does not seem to produce results any better (it even seems worse) than Keenan’s – by their own numbers! I think that they are counting on people not having the slightest idea of what they are talking about.

  3. From the response:

    “…A wide range of observed climate indicators continue to show changes that are consistent with a globally warming world, and our understanding of how the climate system responds to rising greenhouse gas levels….”

    The Met Office are arguing that one statistical approach is much like another, and no statistic actually proves anything. But that the rising temperatures since 1880 (which everyone accepts) are ‘consistent’ with the theory that we’re all going to fry.

    When I leave my house in the morning and get in my car, those actions are ‘consistent’ with the theory that I’m going to rob a bank downtown. So I wonder why I don’t get arrested…?

  4. “Statistical significance” is a trap for the unwary, since many think that statistical significance means significant change, per se. That’s not at all true. Statistical significance is normally calculated to test the null hypothesis, which in this case signifies zero or no “true” warming.
    Given enough data points, even minuscule, totally insignificant warming can be declared statistically significant. I see this misunderstanding constantly. If one claims a 0.5 degree increase, for example, then the proper statistical test is whether there has been that much change, with results at the .01 and .05 significance levels provided.
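The sample-size effect described in this comment can be sketched numerically. The sketch below is a minimal illustration with invented numbers (a trivial trend of 0.001 units per step buried in unit-variance noise), not an analysis of the temperature record:

```python
import math
import random

def slope_t_stat(y):
    """t-statistic of the OLS slope of y against x = 0, 1, ..., n-1."""
    n = len(y)
    x = list(range(n))
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    a = ybar - b * xbar
    resid_var = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y)) / (n - 2)
    return b / math.sqrt(resid_var / sxx)

random.seed(42)
trend = 0.001   # a physically trivial warming per step

def series(n):
    return [trend * i + random.gauss(0, 1.0) for i in range(n)]

t_small = slope_t_stat(series(100))      # short record: trend lost in noise
t_big = slope_t_stat(series(100_000))    # long record: the SAME tiny trend
print(abs(t_small), abs(t_big))          # only the long record is "significant"
```

The same negligible trend is statistically invisible in the short record but overwhelmingly "significant" in the long one, which is exactly the distinction between statistical significance and practical significance.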

  5. Wow. The Met Office’s blog post was painful to read, not for its crushing arguments, rather for its use of arm-waving and logically fallacious arguments. As a professional meteorologist myself, I would expect far better from a national meteorological organization. If they are going to lie, at least lie well, not like a bloviating bloke.

  6. I thought that they were going to address the reason why they chose a statistical model that fits the data 1000 times worse than others that are available. That is the real question.

  7. Prof Slingo’s paper, although well researched, does not at any point prove that CO2 is the reason why the climate has warmed. She and everyone else involved in it have not put forward a satisfactory explanation as to why there has been no GW for the last 16 years despite the headlining 400 ppm atmospheric CO2.

  8. Maybe there is an answer as to why they chose the statistical model that they did. Maybe it is better to choose first order models than third order models, for example – I don’t know. That is what I wanted from them…an explanation, not arm waving.

  9. ‘Thus, the Met Office does not use one of these statistical models to assess global temperature change in relation to natural variability. In fact, work undertaken at the Met Office on the detection of climate change in observational data is predominantly based on the application of formal detection and attribution methods. These methods combine observational evidence with physical knowledge of the climate (in the form of general circulation models) and its response to external forcing agents, and have a solid foundation in statistics. These methods allow physical knowledge to be taken into account when assessing a changing climate and are discussed at length in Chapter 9 of the Contribution of Working Group I to IPCC AR4.’

    Er…so they do use statistical models, just different ones, combined with a limited understanding of forcing agents.

  10. When clear and concise questions and accusations are answered with “word blizzards”, even the fly on the wall knows who’s blowing smoke.

  11. whether they consider a rise in global temperature of 0.8 degrees Celsius since 1880 to be significant.
    ======
    ..and this is where everyone has lost the argument

    rise? from what……..NORMAL

    You’ve let the crooks define normal…………..

  12. It is still carp… when you take a sine wave and add 1, then say everything above 0 is significant and blame it on CO2 instead of where you added +1…. sigh. Totally a biased model.

  13. “Those are my principles, and if you don’t like them… well, I have others.”

    Groucho Marx.

  14. Keenan’s statistical model is physically wrong.

    When you analyze data you choose a model. Picking a model that is physically wrong (for example, a random walk for temperature) can get you a better fit, but it’s a mistake.

    A good example would be people who look at ice melt in September and fit that data with a linear trend. Well, before we even start we know this model is physically wrong.

    How do we know that? Well, at some time in the future your model will predict negative ice area. So a linear model might be useful for communicating the loss rate, but you know that it’s physically wrong, so you should not hang anything too heavy on it. That is, if that choice of models leads to stupid conclusions, that’s a good hint the model is misleading, regardless of how well it “fits” the data.

    Put another way: Keenan chose a model that fit the data better. That model says there is no warming. But looking at the data we know it has warmed. Looking at the Thames we know it isn’t frozen. Looking at the sea level we know it has gone up. We know the LIA was cooler. Plants know it. Animals know it. Ice knows it. What this means is that Keenan has chosen the wrong model. There are an infinite number of models that fit the data as well or better than his model. Fitting the data “better” is not the acid test of a good model. First and foremost the model has to be physically realistic. Keenan’s is not.
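The extrapolation argument in this comment can be illustrated with a toy example. The ice-extent numbers below are invented purely for illustration:

```python
def fit_line(xs, ys):
    """Ordinary least-squares intercept and slope."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    return ybar - b * xbar, b

# Invented September ice extents (million km^2), declining ~0.3 per year.
years = list(range(2000, 2013))
extent = [7.0 - 0.3 * (y - 2000) for y in years]

a, b = fit_line(years, extent)
predicted_2030 = a + b * 2030
print(predicted_2030)   # negative ice area: physically impossible, so the
                        # linear model is only a local description of the data
```

The fit describes the sample period well, yet extrapolated far enough it predicts negative ice area, which is the sense in which a well-fitting model can still be physically wrong.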

  15. In Slingo’s defense (God, did I just say that?), or more correctly in defense of the 5 or 6 persons she credits who probably wrote the entire response, she (they) make the point that surface temperature is only one of 11 “indicators” (ice coverage, specific humidity, tropospheric and stratospheric temperature, etc.) that the Met Office uses to study climate change. In fact, the pdf refers to a rather nice collection of 50 or so datasets on all these indicators that were made available in 2010. I for one found the collection to be quite useful, although the datasets need to be updated to 2013. It is also possible (likely?) that these datasets are cherry-picked, leaving out inconvenient ones. So buyer beware. Here are the datasets:

    http://www.metoffice.gov.uk/hadobs/indicators/11keyindicators.html

  16. The important information is that Slingo replied at all. Keenan’s post obviously stung, and there must be additional powerful politics at work behind the scenes.
    The UK gov is very far out on a limb. The WSJ article about clear-cutting North Carolina to feed wood pellets to Drax at a subsidized cost increase of £600 million is not sitting well with the Sierra Club and WWF. The Met already acknowledged the pause, and revised its interim forecast to no change until near the end of the decade. And it blew it by asserting the just-past miserable winter/spring was due to global warming. Clear loss of credibility.
    Now if only we could begin to see equivalent climb down in the US, as opposed to OBumer tweets about Cook’s nonsense, proving not only poor judgement about quality of information but detachment from the real world’s current state of play. Keystone XL being exhibit 1.

  17. They say that they don’t only depend on stats, they depend on ‘a deep understanding of the climate system, and ‘complex models’.

    The trouble is that the hypothesis that CO2 drives everything is just a hypothesis, and the models that they use have obviously failed, as can be seen from their outputs. When you ask about these, they justify the CO2 hypothesis and the models by referring to the stats – saying that the models and hypothesis MUST be right, because there is statistically significant warming going on.

    This is a common bureaucratic circular argument trick. It needs to be exposed for what it is…

  18. Steven Mosher says: @ May 31, 2013 at 11:20 am
    There are an infinite number of models that fit the data as well or better than [Keenan’s] model. fitting the data “better” is not the acid test of a good model. First and foremost the model has to be physically realistic. Keenan’s is not.

    ==================================================

    Hmmm; interesting that Mosher requires this rigor of Keenan’s model, but not the Met’s (or IPCC’s…).

  19. How to avoid addressing the issue in one easy lesson. This silly response from the MET Office shows their true colors. They decided not to address the issue of statistical significance at all, but instead talk around the issue. FAIL.

  20. Pardon me if someone has already said this, but I think we need to clarify the meaning of “statistical significance” in the context of regression analysis. The purpose of doing a regression is to test a hypothesis. In this case, the hypothesis is that warming since 1880 exceeds normal climate fluctuations. It appears that the warming is in fact “not statistically significant”. That means the hypothesis that warming exceeds normal variability must be rejected. There is no proof that, in fact, the warming is “unprecedented”, or even unusual. We need more data. A longer time series would be best.

  21. Arthur4563, spot on. This is a well-known problem with the Fisher procedure. Significance depends on effect size and sample size. For samples of ‘infinite’ size, every null hypothesis will be rejected at every significance level, whatever the result may be. A null hypothesis postulating an exact null is trivially false. Perhaps we have forgotten that Fisher devised his procedure for making simple decisions about experiments. Here is a simple question: if we took the temperature record of the past century and erroneously turned it on its head, would we get the same significance level while testing the null hypothesis of no change?

  22. Steven Mosher: Keenan’s statistical model is physically wrong.

    That’s one possibility.

    The basic problem is that, after observing what seems like a change, there is no longer any way to formulate what would have been the null hypothesis a priori, that is, a reasonable expectation of what would have happened absent the hypothesized cause of the change.

    Thinking back to the Little Ice Age, and hypothesizing before the rise that CO2 might or might not cause an increase in temp, what would the null hypothesis of negligible CO2 effect look like? Stationary independent year-on-year mean changes? Stationary red noise? Non-stationary chaos? All we can say now is that, for some of the possible null hypotheses that might have been chosen, the change in temperature is compatible with no effect of CO2; but for other null hypotheses that might have been chosen, the change in temperature is not compatible with no effect of CO2.

    Looking forward, a reasonable null hypothesis is that from 1950 onward the spectral density of the mean temperature time series is unchanged from what it was before 1950. The problem, as everyone knows, is that there are not enough data for a sufficiently precise estimate of the prior spectral density function.

  23. Steven Mosher says: @ May 31, 2013 at 11:20 am
    There are an infinite number of models that fit the data as well or better than [Keenan’s] model. fitting the data “better” is not the acid test of a good model. First and foremost the model has to be physically realistic. Keenan’s is not.

    I think that is Keenan’s point. It isn’t physically realistic, but it still works better than the Mutt Office’s.

  24. Latitude says:
    May 31, 2013 at 10:58 am
    whether they consider a rise in global temperature of 0.8 degrees Celsius since 1880 to be significant.
    ======
    ..and this is where everyone has lost the argument

    rise? from what……..NORMAL

    You’ve let the crooks define normal…………..

    Start of the MWP? The Roman Warm? The Optimum?

    The temps 137 million years ago before the big plunge into cold in the Cretaceous?

    According to this: http://www.telegraph.co.uk/science/dinosaurs/7624014/Dinosaurs-died-from-sudden-temperature-drop-not-comet-strike-scientists-claim.html

  25. My eyes glaze over when assaulted with management speak. Unlike circumlocution which may have a nugget contained within, the dark arts of management and politics have evolved to have no nuggets. There is nothing, just a confluence of words designed so that when it goes belly up no one can be held responsible as no one actually said anything. Of course we all know this sad state of affairs exists because no one has the balls to admit they released the Kraken and he is behind all these nasty weather events.

  26. Steven Mosher says:
    May 31, 2013 at 11:20 am
    “Keenan’s statistical model is physically wrong.

    There are an infinite number of models that fit the data as well or better than his model. fitting the data “better” is not the acid test of a good model. First and foremost the model has to be physically realistic. Keenan’s is not.”

    Though your appreciation of empirical science has improved recently, you blunder once again here. You contrast “fitting the data better” with “being physically realistic.” Fine, but you seem blithely unaware that there are critical relationships between “the data” and the “physical realism” of the model.

    The most important of those relationships is that the data is the ultimate evidence for the physical model (actually, physical theory). The data is used to select the model. You cannot say that the statistical model fits the data but conflicts with physical reality. That is the same thing as saying that the statistical model fits the evidence for the physical theory but conflicts with the physical theory. Nonsense.

    You continue to make the fundamental mistake of climate modelers. You believe that you can rationally discuss the “physical reality inside the computer model” apart from the physical evidence for the model (physical theory). It cannot be done.

    Well, it cannot be done in science where falsification and observational evidence rule. It is commonly done in metaphysics. Read Charles Sanders Peirce’s “The Fixation of Belief,” (1877). Peirce was a Pragmatist, though a scientific Pragmatist like W. V. Quine, and you might find him a congenial thinker.

  27. What I got from the response was, lots of arm waving and a further admission that none of the statistical models are of much value, but, we KNOW CO2 is responsible for any warming, we just know.

  28. @ Steven Mosher 11:20 am (I guess I’ll pile on…)
    You say a “linear model might be useful for communicating the loss rate” but since it will inevitably lead to negative areas, it is wrong, regardless of how it fits the data. I have no argument with that. But that’s about all.

    Then you say, Keenan chose a model that fit the data better. That model says there is no warming. But looking at the data we know it has warmed.

    This is a contortionist’s Red Herring.
    First off, Keenan’s model isn’t wrong because it leads to physically impossible scenarios and violates boundary conditions. You say it is wrong because by “looking” at the data it doesn’t support the conclusion you prefer, not because it leads to impossible physics. A failure in logic. A failure in simile. A sleight of hand with the predicate.

    Second. Keenan’s model does NOT say “there IS NO warming.” It only says that natural variability is such that what appears to be a warming in the data could be statistical noise from a non-warming system. The Null Hypothesis (temperature change is natural) cannot be rejected with Keenan’s model. You should know the difference. If I flip a coin 10 times with 7 heads and 3 tails, I cannot reject the idea that it is a fair coin.

    Third. All models are “wrong”. Some models are wronger than others. (Asimov: The Relativity of Wrong.) It ill serves a scientific argument to say a model is wrong without offering a model that better fits the data and the boundary conditions, or that is easier to work with at an acceptable increase in error.

    Fourth. Given the resources involved, how do you hold Keenan’s model to a higher standard than the MET’s or IPCC’s?
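The coin-flip analogy in the second point above checks out under an exact binomial test; a minimal sketch:

```python
from math import comb

def binom_pmf(k, n, p=0.5):
    """Probability of exactly k heads in n flips of a coin with P(heads)=p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def two_sided_p(heads, flips):
    """Exact two-sided test of a fair coin: total probability of every
    outcome no more likely than the one observed."""
    obs = binom_pmf(heads, flips)
    return sum(binom_pmf(k, flips) for k in range(flips + 1)
               if binom_pmf(k, flips) <= obs + 1e-12)

pval = two_sided_p(7, 10)
print(pval)   # 0.34375 -- far above 0.05, so "fair coin" cannot be rejected
```

Seven heads in ten flips gives a two-sided p-value of about 0.34, nowhere near conventional significance thresholds, which is exactly the "cannot reject the null" situation the comment describes.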

  29. So basically the MET office is saying “if you don’t like those bananas, it’s OK as we’ve got plenty of other bananas!”

  30. Just read Slingo’s paper. It is illogical.

    It should be noted that the Met Office does not rely solely on statistical models in its detection and attribution of climate change.

    But Panel 5 of fig 2 is entirely based on statistical models. The upshot is ascribed to man’s effect because of models.

    If the effect of heat capture by spectroscopic attributes of CO2 led to that then… well, we would understand all the feedbacks in the oceans, atmosphere, mankind’s economy and all the unknowns.

    It is reasonable to make that assumption but she ought to acknowledge it is an assumption and not put in bold that other data backs it up when the other data does not support that. The observations support the warming of the world. OK. But not the cause of the warming of the world.

    And Keenan’s model says that it could be a random walk that leads to the warming of the world. To say it is the work of man relies on other models.

    This is a circular argument.
    Whoops.

  31. Steven Mosher says:
    May 31, 2013 at 11:20 am

    “Put another way. Keenan chose a model that fit the data better. That model says there is no warming. ”

    Really? As I read Mr. Keenan’s argument, he seems to have taken the observed warming as a given and proceeded to argue that proper statistical analysis shows it to be “not significant”, i.e. not demonstrably outside the bounds of natural random variation. The question then resolves down to what constitutes the “proper” statistical analytic procedure. From my perspective that question is about as “settled” as climate science in general. From observing comment threads here and elsewhere over the years, I’ve seen commenters too numerous to count, with varying degrees of at least claimed statistical expertise, arguing steadfastly that their preferred methodology is the gold standard of statistical mathematics. They have been, almost universally, met by other comments arguing quite contrary positions. I personally lack the level of expertise to fully evaluate these arguments, mostly because I haven’t been inclined to invest my time in exploring a field that I consider to be mostly dubious. I admit statistical analysis is, potentially at least, a valuable and necessary tool for modern science and modern life, but in its current practice it seems to be used less to reveal hidden truths than to obfuscate them. “Statisticians” have become like lawyers, willing to offer analyses that support whatever agenda they or their clients are pushing. I have become like the people in the village in the story of “The Little Boy Who Cried Wolf”. Having been lied to so many times in the past, I have lost the capacity to respond appropriately when something that may actually be true is presented.

    Truthfully, I did not find Mr. Keenan’s analysis to be entirely compelling, but I would say the same for Ms. Slingo’s counter argument and the length and detail included in it at least explains why the Met Office had to wait for the 6th iteration of the question before they were willing to respond.

  32. From Steven Mosher on May 31, 2013 at 11:20 am:

    A good example would be people that look at ice melt in september and fit that data with a linear trend. Well, before we even start we know this model is physically wrong.

    How do we know that? well at some time in the future your model will predict negative ice area.

    Errrrrrrrrr!!!

    As you have not yet fit the data, you should not know a linear fit would have a negative trend. Eyeballing it as having a negative linear trend is preliminary fitting.

    Any negative trend linear line will eventually cross zero and indicate negative something. Likewise any positive trend linear line can indicate there was a negative something in the past. Thus by your reckoning I’d have to say any linear trend except zero, dead flat, must be physically wrong thus shouldn’t be used.

    Except linear models are used all the time, successfully. I’m sorry to explode your worldview like this, but most people can accept there is no negative ice area, once the line crosses the zero it means there is zero ice area, no matter how far the line drops.

    Also, you already are certain the trend is negative. You say we should know “before we even start” the linear fit is wrong because it will be negative. Therefore, you have shown bias, you EXPECT a negative trend, before you have even begun the curve fitting, before examining the data.

    Minus ten points for House Slytherin.

  33. Steven Mosher says:
    May 31, 2013 at 11:20 am
    A good example would be people that look at ice melt in september and fit that data with a linear trend. Well, before we even start we know this model is physically wrong.

    How do we know that? well at some time in the future your model will predict negative ice area. So a linear model might be useful for communicating the loss rate, but you know that its physically wrong, so you should not hang anything too heavy on it. That is, if that choice of models leads to stupid conclusions, thats a good hint the model is misleading, regardless of how well it “fits” the data.

    Put another way. Keenan chose a model that fit the data better. That model says there is no warming. But looking at the data we know it has warmed. Looking at the Thames we know it isnt frozen. Looking at the sea level we know it has gone up. We know the LIA was cooler. plants know it. animals know it. ice knows it. What this means is that Keenan has chosen the wrong model.

    Your post has been answered by many, so I will comment only on parts I see missing in the answers.
    You give a good example of starting to measure the ice melt in September, then you talk about the temperature rise since the LIA?
    You talk about plants and animals knowing of the warming since the LIA but forget about wine grapes growing at higher latitudes than now during the MWP.
    Your logic seems very twisted to me, towards what you may want to achieve, not towards a clear logical conclusion.

  34. “Put another way. Keenan chose a model ………That model says there is no warming.”
    -Not true Prof Mosh :-(

  35. Stephen Rasey says: May 31, 2013 at 12:34 pm
    “First off, Keenan’s model isn’t wrong because it leads to physically impossible scenarios and violates boundary conditions.”

    Yes it is. A random walk is unbounded and has no fixed reference point. If it were to apply, at some stage (not too far away), all life would be extinguished (with probability 1). If it applied in the past, we would not be here. It is physically impossible because there is not the energy available for that unbounded behaviour.

    “Second. Keenan’s model does NOT say “there IS NO Warming.” It only says that natural variability is such as it is that what appears to be a warming in the data could be statistical noise from a non-warming system.”
    I haven’t seen any numbers that actually say that. Do you have any in mind?

    But I wouldn’t be surprised. The model is a random walk with three (3) orders of autocorrelation. A highly autocorrelated random walk does proceed in near straight lines for quite some time. Steps are repeated with little change. The only really random choice is the initial direction. It’s not hard for it to emulate a trend.

    “Fourth. Given the resources involved, how do you hold Keenan’s model to a higher standard than the MET’s or IPCC’s?”
    Being physically possible is a pretty basic standard. The Met’s is, Keenan’s isn’t. That’s not applying a higher standard.
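The "unbounded" property of a random walk that this exchange turns on is easy to demonstrate by simulation. The sketch below uses a plain driftless walk, not Keenan's fitted model:

```python
import random
import statistics

random.seed(0)

def walk_end(n):
    """Endpoint of a driftless random walk of n unit-variance steps."""
    x = 0.0
    for _ in range(n):
        x += random.gauss(0, 1)
    return x

# Endpoint spread grows like sqrt(length): the walk has no reference
# level it is pulled back towards, so its excursions are unbounded.
short_ends = [walk_end(100) for _ in range(300)]
long_ends = [walk_end(10_000) for _ in range(300)]
print(statistics.pstdev(short_ends), statistics.pstdev(long_ends))
```

The standard deviation of the endpoints is roughly 10 for 100-step walks and roughly 100 for 10,000-step walks, so however long you wait, the walk keeps wandering further from its starting point.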

  36. Nick Stokes: A random walk is unbounded and has no fixed reference point.

    Oh Brother. Ever since Einstein, Brownian motion has been successfully used to model lots of processes. The fact that the support for the normal distribution is infinite has never prevented it from being a successful math model for finite measurements.

  37. “If it were to apply, at some stage (not too far away), all life would be extinguished (with probability 1). If it applied in the past, we would not be here.”

    This seems to me to be mere objection for objection’s sake. The model is a local approximation, like fitting a linear trend doesn’t mean you think it will stay linear indefinitely far into past and future, but that you’re implicitly representing a curve with a power series and trying to estimate only the constant and linear terms, because you don’t have enough data to estimate any of the higher-order terms.

    Similarly, the use of an integrated model doesn’t mean that the process is actually unbounded, only that there is a root of the characteristic equation close enough to the unit circle to be effectively indistinguishable from one given the length of data you have, so the integrated model is a good approximation, and the statistics are more reliable if you make that approximation. It’s like taking a short segment of a curve and approximating it as straight, because you don’t have enough data to say otherwise.

    ARIMA models are so useful for approximating physical processes because they can represent the discretely sampled output of differential equations. ARIMA(3,1,0) says that the temperature in each year is the accumulation of the heat that is added or subtracted in each year, and that this latter figure can be well-approximated as a second-order differential equation. Using an integrated model is effectively saying that the forces pushing it back to the equilibrium are smaller than the ‘weather’ noise being added or subtracted from year to year, drowning them out. So over the short term, you can get a good fit by ignoring them.

    And in any case, Doug’s point was not that ARIMA(3,1,0) is the “right” model, but that saying the rise is significant because it doesn’t fit ARIMA(1,0,0), (which is what the Met Office initially did), is logically invalid. There are lots more driftless models that would need to be excluded before you could conclude there was a drift, and the ARIMA(3,1,0) one was simply an example.
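A driftless ARIMA(3,1,0) of the kind described in this comment can be simulated directly: an AR(3) process for the year-to-year increments, cumulated into a level. The coefficients below are assumed purely for illustration and are not Keenan's fitted values:

```python
import random

random.seed(1)

# Illustrative AR(3) coefficients for the increments (chosen to lie in the
# stationary region); NOT the values fitted to the temperature record.
A1, A2, A3 = 0.4, 0.2, 0.1

def arima_310(n, step_sd=0.1):
    """Driftless ARIMA(3,1,0): AR(3) year-to-year increments, cumulated."""
    d = [0.0, 0.0, 0.0]   # recent increments
    level, out = 0.0, []
    for _ in range(n):
        step = A1 * d[-1] + A2 * d[-2] + A3 * d[-3] + random.gauss(0, step_sd)
        d.append(step)
        level += step
        out.append(level)
    return out

# Positively autocorrelated increments make the level wander in long
# trend-like runs even though the model contains no deterministic drift.
series = arima_310(130)   # roughly the length of the instrumental record
print(series[-1])
```

Plotting a few realizations of such a series shows stretches that look very much like sustained warming or cooling trends, which is why excluding only ARIMA(1,0,0) does not establish that an observed rise is a drift.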

  38. Mosher ” First and foremost the model has to be physically realistic.”

    On that basis, the evidence suggests the models which anticipated a tropospheric hotspot are not physically realistic. (I know these are physical simulations and not the statistical models being discussed here.)

    Recall how Santer and a bunch of other researchers/co-authors went looking for it. Can’t blame them for trying – if they had confirmed the “vertical amplification” of temperature change embodied in the hotspot, it would surely have secured their places in history. They couldn’t.

  39. Steven Mosher says:

    “Keenan’s statistical model is physically wrong.

    When you analyze data you choose a model. picking a model that is physically wrong ( for example a random walk for temperature) can get you a better fit, but it’s a mistake.

    A good example would be people that look at ice melt in september and fit that data with a linear trend. Well, before we even start we know this model is physically wrong.”

    I have the same impression; you have a valid point. But if it was that simple to refute, why did it take the Met Office over six months and a probably unprecedented five-time refusal to answer an official parliamentary question?

    Keenan is no fool. Nor do I think he is out to deceive. What he was trying to do (and has succeeded in doing) was to force the Met Office to state that it all depends upon the validity of the model you choose. It seems it is this that they were so steadfastly resisting.

    The corollary is clear: how appropriate or valid is the model that the Met. Office chose to use?

    They know they are one move away from checkmate and that’s why they are stalling.

  40. Hmm, Julia Slingo says, after a discussion of various statistical models and their fit to empirical measures over a portion of the 19th and 20th centuries, that:

    “These results have no bearing on our understanding of the climate system or of its response to human influences such as greenhouse gas emissions and so the Met Office does not base its assessment of climate change over the instrumental record on the use of these statistical models.”

    Why didn’t they just say that models don’t count and that results should be handled wearing nitrile gloves in the first place?

  41. Nullius in Verba says: May 31, 2013 at 1:59 pm
    “The model is a local approximation, like fitting a linear trend doesn’t mean you think it will stay linear indefinitely far into past and future, but that you’re implicitly representing a curve with a power series and trying to estimate only the constant and linear terms, because you don’t have enough data to estimate any of the higher-order terms.”

    The idea of testing for significant trend, or increase, is to see if something has changed. There wasn’t a trend before, now there is. It doesn’t suggest that there has always been a trend, or always will be.

    But the purpose of Keenan’s analysis has been to suggest that nothing has changed. It’s just random variation like we’ve always had.

    But random walk variation can’t have been the regular state of affairs. So if you want to adopt it as a local model, you need an idea of when it became a random walk and why.

  42. Nick Stokes – You addressed Stephen Rasey’s “First off, Keenan’s model isn’t wrong because it leads to physically impossible scenarios and violates boundary conditions” incorrectly. He wasn’t saying that Keenan’s model wasn’t wrong; he was criticising Steven Mosher for using an illogical argument instead of that reason.

  43. “It is physically impossible because there is not the energy available for that unbounded behaviour.” -Nick
    Errr?
    -The oceans contain around 10^24 grams of water
    The average temp is around 4 C whilst the surface temp is around 17 C
    If the oceans slowly became well mixed, over the next 1000 years (~10^10 seconds)
    How many watts of forcing would it take to balance this out?!
    -A speed up or slow down in the downward heat transfer into the oceans would seem quite capable of generating very unbounded behaviour.

  44. Greg Goodman says: May 31, 2013 at 2:30 pm
    “a probably unprecedented five refusals to answer an official parliamentary question?”

    That’s simply untrue. The MO did not refuse to answer any questions. In fact, what is being discussed here is their answer to the second question, which was posed as a follow-up to the first.

    What the Met Office does seem to have been reluctant to do is to undertake a calculation prescribed by Doug Keenan. That is not refusing to divulge facts – it is resisting doing something that they don’t think is appropriate to do. Q’s in the HoL are not usually used as a management tool.

    But it’s not even that Keenan wanted the answer. He wanted the Met to put their name to his calc. And they didn’t want to, probably because they thought it would be misused. As it has been.

  45. Chas says: May 31, 2013 at 3:04 pm
    “-A speed up or slow down in the downward heat transfer into the oceans would seem quite capable of generating very unbounded behaviour.”

    No, unbounded is unbounded. As in boiling. As in white hot.

    As I said above, if you want to say that it’s just natural variation and nothing has changed, then you have to propose something that would work without changing.

  46. To repeat what I said at Bishop Hill:-

    This issue becomes a lot simpler to resolve if you remember that the test of a model is its utility. So you need to state your purpose if you are going to judge one model against another. If goodness-of-fit to a time series is your only interest then you will make one judgement, but if you are (for instance) interested in skill at forecasting you will likely make another.

  47. “The idea of testing for significant trend, or increase, is to see if something has changed.”

    The idea of testing for a significant trend is to see if there is evidence for a trend. Change can occur for other reasons besides trends.

    “There wasn’t a trend before, now there is. It doesn’t suggest that there has always been a trend, or always will be.”

    How do you know there was no trend before? How do you know there is one now?

    “But the purpose of Keenan’s analysis has been to suggest that nothing has changed. It’s just random variation like we’ve always had.”

    No, the purpose of Keenan’s analysis was to show that the Met Office analysis claiming that there was a significant trend, based on the data not fitting a driftless AR(1) model, was invalid. Keenan’s position is, as I understand it, that there is no evidence in either direction. The Met Office’s claim that there is was bogus.

    “But random walk variation can’t have been the regular state of affairs. So if you want to adopt it as a local model, you need an idea of when it became a random walk and why.”

    I just said. The model says that over the short term the behaviour is indistinguishable from random walk because the restoring forces are drowned out by the amplitude of the year-to-year noise. You have weather each year that adds or subtracts a random chunk of energy to the climate system. The temperature this year will be the temperature last year plus some multiple of this random chunk. The distribution of the chunk has to vary with temperature to keep the accumulated temperature within bounds, but this shift in the mean is small compared to the spread, and so, like the curvature of a short-enough segment of curve, can be safely neglected.

    It’s approximately random walk, and the approximation is good enough over periods short enough such that the average of the accumulated noise is smaller than the restoring forces pushing the climate back to the equilibrium.

    That there’s a unit root in the statistics is a thoroughly mainstream result.
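    To put that claim in concrete terms, here is a quick Python sketch (illustrative only – the coefficient 0.98, the noise level and the 130-point window are arbitrary choices, not fitted values). Over a short window, a weakly mean-reverting AR(1) process produces year-to-year steps essentially indistinguishable from those of a pure random walk:

```python
import random
import statistics

random.seed(42)

def simulate_ar1(phi, n, sigma=0.1):
    """x[t] = phi * x[t-1] + noise. phi = 1.0 is a pure random walk;
    phi just below 1 adds a weak restoring force toward zero."""
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + random.gauss(0.0, sigma)
        out.append(x)
    return out

walk = simulate_ar1(1.0, 130)   # driftless random walk, ~130 "years"
near = simulate_ar1(0.98, 130)  # weakly mean-reverting

# Year-to-year steps of the two series are statistically very similar:
steps_walk = [b - a for a, b in zip(walk, walk[1:])]
steps_near = [b - a for a, b in zip(near, near[1:])]
print(round(statistics.stdev(steps_walk), 3),
      round(statistics.stdev(steps_near), 3))
```

    The restoring term (2% of the current anomaly) is drowned out by the noise over this window, which is the sense in which the approximation is “good enough”.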

  48. Nullius in Verba says:

    “Change can occur for other reasons besides trends.”

    This is the first time I have ever seen a trend being claimed as a cause of change. I was under the impression that the trend was the manifestation of that change. When I turn the kettle on, the trend is for the temperature of the water to increase. It tells me nothing about the cause, just that a change is occurring over time.

  49. Nullius in Verba says: May 31, 2013 at 3:28 pm
    “The idea of testing for a significant trend is to see if there is evidence for a trend. Change can occur for other reasons besides trends.”

    The emphasis was meant to be on “significant”. If you’re testing for significance, you’re testing whether you have identified a change from a normal state, which can’t have had that trend indefinitely.

    “How do you know there was no trend before?”
    A fixed trend can’t have been the normal state of affairs, just as with random walk.

    “It’s approximately random walk, and the approximation is good enough over periods short enough”
    But short periods won’t do. If you want to say that the present observation is natural variation and nothing has changed, then you need a model of that variation which doesn’t change. If you have to postulate that random walk just applies during the period, then that needs explaining just as a trend would.

    In fact, a random walk with autocorrelation 1 would be a straight line.

  50. From Nick Stokes on May 31, 2013 at 3:16 pm:

    As I said above, if you want to say that it’s just natural variation and nothing has changed, then you have to propose something that would work without changing.

    This is the point where Smokey would start slapping you around the room, because the Null Hypothesis states the warming is natural variability; the onus is on YOU to prove it is not. The Null Hypothesis does not require proving. It does not require one to “propose something that would work without changing”. Climate skeptics have nothing to prove. If you want to say it’s not natural, then YOU prove otherwise.

    You’re a smart guy, Nick. You know how the Null Hypothesis works. You’re being disingenuous to act like you don’t. Why are you coming here to the World’s Most Viewed Climate Website and trying to deceive others? Go back to RC and impress your pal Gavin with your antics.

    That’s what Smokey would have said. With many links including many links to graphs. You’re really having fun taking advantage of his absence so you can pull out the cheap tricks that he never fell for, I can tell.

  51. Null hypothesis: [statistics] A statement that essentially outlines an expected outcome when there is no pattern, no relationship, and/or no systematic cause or process at work; any observed differences are the result of random chance alone. The null hypothesis for a spatial pattern is typically that the features are randomly distributed across the study area. Significance tests help determine whether the null hypothesis should be accepted or rejected.

    From GIS dictionary

  52. “This is the first time I have ever seen a trend being claimed as a cause of change.”

    You’re right. That’s not clear.

    There are two separate aspects to consider: the objective truth about the physical system, and our subjective estimate of it filtered through the murky lens of observation.

    An objective trend in a physical system is understood to mean that the reality is some deterministic function of time (the first derivative of which is the trend) plus random noise from other unspecified causes.

    But the random noise can cause change too. And if the random noise is such that neighbouring values are correlated, this can give rise to successive values all going up or all going down more than you would expect, given that most of our intuitions are built on results about uncorrelated noise. If you take a short segment of the output, it changes in a way that looks like an objective underlying trend. It isn’t though, it’s a random outcome that could equally well have come out going the other way.

    Take a sequence of random numbers, and then compute a moving average of them, taking blocks of a hundred or so consecutive values. Look at a short section of this moving average series. Does it look like there’s a trend in them? The numbers will start low and gradually rise, or start high and gradually fall. From the point of view of subjective observation, you could call that a ‘trend’. But the underlying objective reality is that there is no difference in how they’re generated over time. The distribution is always the same, and the true average is a constant, neither rising nor falling.

    In the case of the weather, we do actually know that there is an underlying trend (the greenhouse effect is real physics) but we don’t know how big, or whether it is big enough to show up against the background noise. The idea of this sort of test is to show that the observations are feasible as the outcome of a process with no underlying trend, and therefore we cannot say we know for a fact that the rise has shown up. It doesn’t say that there’s no rise, or that the trendless model is the truth. It’s only a null hypothesis that hasn’t been rejected.
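    For anyone who wants to try the experiment, it takes only a few lines of Python (the window size, series length and seed are arbitrary choices):

```python
import random
from statistics import mean

random.seed(1)
raw = [random.gauss(0.0, 1.0) for _ in range(2000)]

# 100-point moving average: smoothing makes neighbouring values correlated
window = 100
smoothed = [mean(raw[i:i + window]) for i in range(len(raw) - window)]

# A short section of the smoothed series typically drifts one way, even
# though the underlying process has a constant true mean of zero.
section = smoothed[300:400]
print(round(section[0], 3), round(section[-1], 3))
```

    Plot a hundred consecutive values of `smoothed` and you will usually see something that looks convincingly like a “trend”, despite the generating process never changing.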

  53. … “Looking at the Thames we know it isnt frozen.” …

    Wow, with such a “heavyweight” argument, what else is there to say… But maybe there are a few “good ones” sourced directly from the warming-consensus-biased Wikipedia:

    Wikipedia(River Thames frost fairs)

    However, the colder climate was not the only factor that allowed the river to freeze over in the city: the Thames was broader and shallower in the Middle Ages – it was yet to be embanked, meaning that it flowed more slowly.[5] Moreover, old London Bridge, which carried a row of shops and houses on each side of its roadway, was supported on many closely spaced piers; these were protected by large timber casings which, over the years, were extended – causing a narrowing of the arches below the bridge, thus concentrating the water into swift-flowing torrents. In winter, large pieces of ice would lodge against these timber casings, gradually blocking the arches and acting like a dam for the river at ebb tide.[6][7]

    “The last frost fair”

    The frost fair of 1814 began on 1 February, and lasted four days. An elephant was led across the river below Blackfriars Bridge. A printer named “Davis” published a book, Frostiana; or a History of the River Thames in a Frozen State. This was to be the last frost fair. The climate was growing milder; also, old London Bridge was demolished in 1831[12][13][14] and replaced with a new bridge with wider arches, allowing the tide to flow more freely;[15] additionally, the river was embanked in stages during the 19th century, which also made the river less likely to freeze.

    January 1814 mean temperature: -2.9 degC [*]

    “The Thames freezes over upstream, beyond the reach of the tide, more often – above the weir near Windsor for example. The last great freeze of the Thames upstream was in 1963”

    January 1963 mean temperature: -2.1 degC [*]

    January 1684 mean temperature: -3 degC (Great Frost) [*]

    AFAIK the Great Freeze of ’63 did not involve London – only Windsor and upstream (beyond the reach of the tide). 1776 is reported to have frozen at London with only a -1.6 degC mean for January.

    Philosophical question directly related to the thread:
    What, or how, do we define “normality”?
    Were those times colder than the present? Were they colder than normal?
    Is the present warmer than the “colder” periods referred to? Is it warmer than normal?

    All this technical babbling about statistics is simply obfuscation of this central question: the definition of normality. Sometimes you can look at a data graph and see trends without doing a damn statistical test of the null hypothesis.

    [*] MANLEY, G. Central England temperatures: monthly means 1659 to 1973

  54. kadaka (KD Knoebel) says: May 31, 2013 at 3:53 pm
    “the Null Hypothesis states the warming is natural variability, the onus is on YOU to prove it is not. The Null Hypothesis does not require proving.”

    But it does require stating. And it has to be plausible enough that rejecting it, which is the objective, is an interesting result.

    The usual null here is stationary process with noise of some sort. Keenan wants to substitute random walk as the model of natural variation. But that isn’t sustainable.

    It also isn’t physical for another reason. Earth’s temperature is determined by a balance between incoming and outgoing radiation. That fixes it because of the S-B law. There can be temporary variations where heat accumulates for a while, and the effect at the surface can change because of GHGs etc. But it can’t just drift. A random walk has no fixed point.
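    The “no fixed point” property is easy to demonstrate numerically (an illustrative Python sketch with unit-variance steps):

```python
import random
import statistics

random.seed(7)

def walk_position(steps):
    """End position of a driftless random walk of unit-variance steps."""
    return sum(random.gauss(0.0, 1.0) for _ in range(steps))

trials = 500
spread_10 = statistics.stdev(walk_position(10) for _ in range(trials))
spread_1000 = statistics.stdev(walk_position(1000) for _ in range(trials))
# The spread grows like sqrt(t) without limit: no fixed point to return to.
print(round(spread_10, 1), round(spread_1000, 1))
```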

  55. ” If you want to say that the present observation is natural variation and nothing has changed, then you need a model of that variation which doesn’t change.”

    That’s what we’ve got. The model doesn’t change over time, but it’s only valid for a short segment. Any short segment.

    It’s the same reasoning by which you would analyse the behaviour of a curve by fitting a straight line to a part of it. Your ‘it’s a straight line’ model doesn’t change, and applies to any short segment. Pick a different segment, and it’s still ‘a straight line’ – just a slightly different one. We don’t have to explain why the curve is ‘straight’ just in that period we’re analysing. It’s ‘straight’ in any sufficiently short segment. Nothing changes in that regard.

    “Null hypothesis: [statistics] A statement that essentially outlines an expected outcome when there is no pattern, no relationship, and/or no systematic cause or process at work; any observed differences are the result of random chance alone.”

    That definition is not quite right. The null hypothesis is the hypothesis that you are trying to demonstrate is false. Usually you are interested in showing the existence of a new or previously unknown deterministic effect, and so the default position you are trying to disprove is generally that there is no such effect. But you can also do it the other way round.

    You could, for example, take a 3C/century trend plus some noise model as your null hypothesis, and then try to reject it. If you succeed, we would know that 3C/century plus that noise model was untenable. If you fail, we would know no more than we did to start with.
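    As a quick illustration of how such a test could be run, here is a Monte Carlo sketch in Python (the noise level and series length are arbitrary illustrative choices, not anything the Met Office used):

```python
import random
import statistics

random.seed(3)

def fitted_slope(y):
    """Ordinary least-squares slope of y against t = 0..n-1."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = statistics.fmean(y)
    num = sum((t - xbar) * (v - ybar) for t, v in enumerate(y))
    den = sum((t - xbar) ** 2 for t in range(n))
    return num / den

def simulate_null(n=130, trend=0.03, sigma=0.15):
    """One realisation of the null: 3 C/century (0.03 C/yr) plus white noise."""
    return [trend * t + random.gauss(0.0, sigma) for t in range(n)]

# Monte Carlo distribution of the fitted slope under this null
slopes = sorted(fitted_slope(simulate_null()) for _ in range(2000))
lo, hi = slopes[50], slopes[-50]  # roughly the central 95% interval
print(f"under the null, 95% of fitted slopes fall in [{lo:.4f}, {hi:.4f}] C/yr")
# An observed slope outside that interval would count against the null.
```

    An observed trend inside the interval would leave the 3 C/century null standing; one outside it would reject it, exactly as described above for the no-trend case in reverse.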

  56. Nullius in Verba says: May 31, 2013 at 4:32 pm
    “That’s what we’ve got. The model doesn’t change over time, but it’s only valid for a short segment.”

    ???
    I’d like to see such a model fully specified.

  57. kadaka (KD Knoebel) says:
    May 31, 2013 at 3:53 pm

    From Nick Stokes on May 31, 2013 at 3:16 pm:
    As I said above, if you want to say that it’s just natural variation and nothing has changed, then you have to propose something that would work without changing.

    (Kadaka) This is the point where Smokey would start slapping you around the room, because the Null Hypothesis states the warming is natural variability, the onus is on YOU to prove it is not. The Null Hypothesis does not require proving. It does not require one to “propose something that would work without changing”. Climate skeptics have nothing to prove. If you want to say it’s not natural, than YOU prove otherwise.

    Ahhhh Grass Hopper, you have learned well the lessons taught by Master Smokey!
    MtK

  58. “I’d like to see such a model fully specified.”

    ???
    It’s just ARIMA(3,1,0). I get the feeling you’re still not understanding my point.
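    For the curious, an ARIMA(3,1,0) process can be simulated in a few lines (the AR coefficients below are illustrative placeholders chosen to be stationary, not Keenan’s fitted values):

```python
import random
from itertools import accumulate

random.seed(11)

def simulate_arima310(n, phi=(0.5, -0.2, 0.1), sigma=0.1):
    """ARIMA(3,1,0): the first differences follow an AR(3) process and the
    series itself is their running sum (the 'I(1)' part)."""
    diffs = []
    buf = [0.0, 0.0, 0.0]          # last three differences, newest first
    for _ in range(n):
        d = sum(p * b for p, b in zip(phi, buf)) + random.gauss(0.0, sigma)
        buf = [d] + buf[:2]
        diffs.append(d)
    return list(accumulate(diffs))

series = simulate_arima310(130)    # ~130 "years"
print(len(series), round(series[-1], 3))
```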

  59. Nick Stokes says:
    May 31, 2013 at 4:26 pm
    kadaka (KD Knoebel) says: May 31, 2013 at 3:53 pm
    “the Null Hypothesis states the warming is natural variability, the onus is on YOU to prove it is not. The Null Hypothesis does not require proving.”
    But it does require stating. And it has to be plausible enough that rejecting it, which is the objective, is an interesting result.

    The usual null here is stationary process with noise of some sort. Keenan wants to substitute random walk as the model of natural variation. But that isn’t sustainable.

    It also isn’t physical for another reason. Earth’s temperature is determined by a balance between incoming and outgoing radiation. That fixes it because of the S-B law. There can be temporary variations where heat accumulates for a while, and the effect at the surface can change because of GHGs etc. But it can’t just drift. A random walk has no fixed point.

    OK, help me out here. I was under the impression that random walks do, in fact, have a centroid, à la the drunk and the lamppost. Am I thinking of another phenomenon?

  60. More accurately, any (and every) proponent of the current CAGW=CO2 Hypothesis MUST be able to explain why the climate – across every earlier period of interest – has varied by as much as, and more than, today’s supposedly random changes: specifically, what caused the Roman Warming Period, the Dark Ages, the Medieval Warming Period, the Little Ice Age, and today’s Modern Warming Period; and then, why such changes no longer occur.

    “OK, help me out here. I was under the impression that random walks do, in fact, have a centroid, à la the drunk and the lamppost.”

    The mathematical model extends infinitely far both forwards and backwards in time. Because it’s hard to get your head round the idea of something with no defined distribution at any point, introductory explanations usually start with the rooted random walk, which is specified to be at position zero at time zero. Then at any subsequent or previous time there is a definite position distribution, with mean zero, that gets broader with time. Once you’ve got your head round that, you take away the coordinate system, and point out that things look exactly the same if you pick any other point on the path as the origin.

  62. @Nick Stokes: “The usual null here is stationary process with noise of some sort. Keenan wants to substitute random walk as the model of natural variation. But that isn’t sustainable.”

    Why not?

  63. Steven Mosher says:
    May 31, 2013 at 11:20 am
    Keenan’s statistical model is physically wrong.

    When you analyze data you choose a model. Picking a model that is physically wrong (for example, a random walk for temperature) can get you a better fit, but it’s a mistake.

    In a causal universe, randomness does not exist, at least at the macro level.

    When we talk about randomness, as in random walk, we mean the causes are unknown.

    If a random model fits the data better than a proposed physical model, this is proof the proposed physical model is wrong.

    Keenan’s statistical model proves your physical model is wrong.

  64. Nullius in Verba says: May 31, 2013 at 4:49 pm
    “It’s just ARIMA(3,1,0). I get the feeling you’re still not understanding my point.”

    You’re right. Where is the short period specified in ARIMA(3,1,0)?

    D.J. Hawkins says: May 31, 2013 at 5:21 pm
    “I was under the impression that random walks do, in fact, have a centroid, à la the drunk and the lamppost. Am I thinking of another phenomenon?”

    I think so. A random walk has no memory. It doesn’t know where to go back to. A gas molecule pretty much follows a random walk.

  65. @Mike Jonas says 2:52 pm and Nick Stokes
    My First Off was poorly phrased. What I said:

    First off, Keenan’s model isn’t wrong because it leads to physically impossible scenarios and violates boundary conditions. You say it is wrong because by “looking” at the data it doesn’t support the conclusion you prefer, not because it leads to impossible physics. A failure in logic. A failure in simile. A sleight of hand with predicate.

    was very poorly phrased.

    A better phrased point:
    Keenan’s model would be wrong if it leads to physically impossible scenarios and violates boundary conditions. Instead, you [Steven Mosher] discard this predicate and say instead it is wrong because by “looking” at the data it doesn’t support the conclusion you prefer, not because it leads to impossible physics. A failure in logic. A failure in simile. A sleight of hand with predicate.

    But the third point is the stronger. Of course Keenan is wrong. All models are wrong. But is it wronger than a linear fit the MET uses? Which one leads toward more impossibilities?

  66. “Where is the short period specified in ARIMA(3,1,0)?”

    When people say that a sufficiently short segment of any smooth curve looks straight, where is “sufficiently short” defined?

    “A gas molecule pretty much follows a random walk.”

    Good example!

    A gas molecule cannot actually follow a random walk because its behaviour is bounded in both space and time. Gases have to be confined to act as gases, and the confinement prevents them moving arbitrarily far. For example, gas molecules in the atmosphere are gravitationally bound to the Earth.

    The caveats are hidden in your words “pretty much”. It’s the same sort of “pretty much” that I’m using. Over short enough periods of time, gas molecules do indeed follow “pretty much” a random walk. Over a long enough time, gravity will pull them back towards the average. But the forces on a molecule from its impacts with its neighbouring molecules are much larger than the gravitational forces keeping it on the Earth, and so the latter can be ignored.
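    To put numbers on the “pretty much” (an illustrative Python sketch, with an AR(1) coefficient crudely standing in for the restoring force – the values are arbitrary):

```python
import random
import statistics

random.seed(5)

def final_value(phi, steps, sigma=1.0):
    """x[t] = phi * x[t-1] + noise. phi = 1 is a pure random walk; phi < 1
    adds a restoring pull toward zero, crudely standing in for the
    gravitational / radiative forces discussed above."""
    x = 0.0
    for _ in range(steps):
        x = phi * x + random.gauss(0.0, sigma)
    return x

trials = 400
walk_spread = statistics.stdev(final_value(1.00, 2000) for _ in range(trials))
bound_spread = statistics.stdev(final_value(0.95, 2000) for _ in range(trials))
# The pure walk's spread keeps growing (~sqrt(t)); the restored one saturates.
print(round(walk_spread, 1), round(bound_spread, 1))
```

    Over short horizons the two are indistinguishable; over long horizons the restoring force dominates and the spread saturates instead of growing without bound.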

  67. Stephen Rasey says: May 31, 2013 at 6:06 pm
    “But is it wronger than a linear fit the MET uses?”

    Well, have you ever heard the Met, or anyone else, spontaneously talk about the trend since 1880 as evidence of anything? They were clearly not keen to do so here, and only did so when asked. Apart from anything else, the further back you go, the lower the trend. But more importantly, and relatedly, the less the notion of a linear rise makes sense. The forcing history is nothing like what would be needed.

    The MO and others talk of the increase since 1970, 1950 or whatever. This is the period clearly influenced by anthrops, and there are no obvious non-linear effects to take into account (well, since 1975 at least). If they have to answer a question like is the rise since 1880 significant, and are pushed to quantify it, then their choice of model is a sensible one to use.

    A question – is the temperature rise over the last 20,000 years statistically significant? What test would you use?

    Nullius in Verba says: May 31, 2013 at 6:18 pm
    “When people say that a sufficiently short segment of any smooth curve looks straight, where is “sufficiently short” defined?”

    I doubt if they do say that. Any examples? I think people look at sections where they think there might be reason to expect a trend. Or possibly limited by what they have. But I can’t imagine shortness on its own being a sought after feature.

    Re gas etc – these processes don’t have time boundaries. Argon atoms have been following a random walk for billions of years. They have physical boundaries, enforced by gravity (via path length) at the upper end. If the model of “natural variability” includes such boundaries, it should identify them.

  68. Mac the Knife says:
    May 31, 2013 at 4:49 pm

    Many times Smokey has put Nick Stokes in his place. But some folks are impervious to reason.

  69. I think this would depend on the parameters of the model: the size of the step in the random walk depends on the forcing for a given year being greater than the previous year’s. Because of the cubic factor in emission, it would seem that the hotter it gets, the lower the probability of a positive step. This physical constraint should keep the result bounded.

    Even accepting the deficiencies, the fact remains that the MET has clearly failed to justify its choice of model. The MET in their haste even suggests there are models which have a better fit probability than the one offered – the blog post fails to say why they are not using THAT model either. Foot, meet mouth.

    Finally, the MET say their view is consistent with the “physics and chemistry”, when the positive feedback loop gain they are assuming is of the order of 0.95 and has NO time component (it is scalar), which is very non-physical – fairy-tale la-la-land stuff to us engineers. Feedbacks have lag, lag causes instability, climate science postulates huge feedbacks without instability – Nutty!

    FOI Question
    Please supply all documentation regarding the relative fit of statistical models to global warming series data.

  70. Margaret Hardman says:

    “Null hypothesis: [statistics] A statement that essentially outlines an expected outcome when there is no pattern, no relationship, and/or no systematic cause or process at work; any observed differences are the result of random chance alone.”

    ==============================================

    That definition is wrong, Margaret. “Random chance” is not involved in the Null Hypothesis. The Null Hypothesis is the statistical hypothesis that states that there are no differences between observed and expected data.

    Thus, what is “expected” is that past climate parameters will not be exceeded. And as a matter of fact, past climate parameters have not been exceeded: the current climate is well within past parameters; there has been no acceleration of global warming, and the planet has been both warmer and colder than in the past.

    Which leads to Occam’s Razor: “One should not increase, beyond what is necessary, the number of entities required to explain anything.” [William of Ockham, 1285-1349]

    The “carbon” entity is not necessary to explain the current climate. CO2 is an unnecessary distraction, which only serves to confuse the issue. As we now know, CO2 has no effect on global temperatures. The demonization of ‘carbon’ is a false alarm.

  71. “Re gas etc – these processes don’t have time boundaries. Argon atoms have been following a random walk for billions of years.”

    Including before the big bang, and after the heat death of the universe?

    Infinity is bigger than you think.

  72. dbstealey says: May 31, 2013 at 6:54 pm
    “Many times Smokey has put Nick Stokes in his place. But some folks are impervious to reason.”

    Smokey and dbstealey together might be able to do it :)

  73. When I arrived at my US Air Force duty station in England in January of 1970, the Labour government of Harold Wilson had almost achieved third-world-nation status for the UK through their socialist economic policies. However, the discovery and exploitation of North Sea gas and oil saved the UK’s economy from going under, clunky nationalized industries and all, long enough to survive until Margaret Thatcher turned things around. When I returned for long visits in 1988, I couldn’t believe the vibrancy of the British economy. Tony Blair was “Margaret Thatcher Lite” and didn’t bugger things up.

    However, this Reuters article “UK climate act limits energy choices: Gerard Wynn” shows that the British really didn’t lose their knack for snatching defeat from the jaws of victory, in this case by hamstringing their energy future. It was just hibernating, waiting for the moment when feckless leadership allowed it to rise and muck everything up again.

    http://www.reuters.com/article/2013/05/31/column-wynn-uk-energy-idUSL5N0EA2XE20130531

    The UK has significant fossil fuel prospects through fracking, just as they once had from North Sea gas and oil. With their genius for mismanagement, I’m sure they will muck this up too.

  74. On a recent thread, someone stated that temperature time series look like 1/f noise (which is naturally occurring in electronic circuits). Just wondering if there is a statistical model for 1/f noise.

  75. “Just wondering if there is a statistical model for 1/f noise.”

    I think ARFIMA(1, 1/2, 1) is an example of 1/f noise, although not the only such process.
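    If you just want something to play with, one simple generator of approximately 1/f (“pink”) noise is the Voss-McCartney trick of summing random sources refreshed at octave-spaced rates – a different route from the ARFIMA process just mentioned (illustrative Python sketch; row count and length are arbitrary):

```python
import random

random.seed(9)

def voss_pink(n, rows=8):
    """Approximate 1/f ('pink') noise via the Voss-McCartney scheme:
    sum several random sources, refreshing row k every 2**k samples,
    so the slower rows contribute the low-frequency power."""
    sources = [random.uniform(-1.0, 1.0) for _ in range(rows)]
    out = []
    for i in range(1, n + 1):
        for k in range(rows):
            if i % (2 ** k) == 0:
                sources[k] = random.uniform(-1.0, 1.0)
        out.append(sum(sources))
    return out

pink = voss_pink(1024)
print(len(pink), round(min(pink), 2), round(max(pink), 2))
```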

  76. Whenever these climate loons are making completely ludicrous predictions, I always respond with this:

    As previous claims are no longer matching the observed reality, the edifice is collapsing. I like to think of this as the theme song.

  77. dbstealey says:
    May 31, 2013 at 7:11 pm
    Margaret Hardman says:
    —————————–

    At least she’s trying to get there.

    Margaret, what you need to do is to take the CONCLUSION that CO2 is a terrible monster and causes global warming, climate change, extreme weather weirding and all that crap and flush it from your brain.

    Start again from scratch, from the supporting DATA (of which there is none, as you will find).

    We’ll go to stage two when you think you’ve found some.

  78. Nick Stokes says:
    May 31, 2013 at 1:32 pm
    Stephen Rasey says: May 31, 2013 at 12:34 pm
    ‘“First off, Keenan’s model isn’t wrong because it leads to physically impossible scenarios and violates boundary conditions.”
    Yes it is. A random walk is unbounded and has no fixed reference point. If it were to apply, at some stage (not too far away), all life would be extinguished (with probability 1). If it applied in the past, we would not be here. It is physically impossible because there is not the energy available for that unbounded behaviour.”

    No statistic describes the world or any part of reality. When statisticians refer to “bell curves” they do not believe that bell curves exist in the world. A statistic is nothing more nor less than a line drawn through points on a graph. It may be formulated as a lovely mathematical equation but it no more describes reality than any other purely mathematical equation, an equation containing no predicates that are descriptive such as “___is spinning.”

    When a statistician describes a statistic as a random walk statistic, he does not imply that there is some actual physical process that is the random walk. Nor does he imply that reality has a structure that could accommodate some physical process that you imagine to be the random walk.

    Your understanding of statistics, at least what you expressed above, goes beyond the naive and approaches the childish.

    Climate scientists must learn that science is a description of the world. Science uses mathematics to assign measures to the world. A statistic is a measure, but not a description, of the world.

    If one treats the physical analogies that are used to explain statistics as describing the world then one is making the pure metaphysician’s fundamental error; that is, he is making inferences from the characteristics of his system of representation to the characteristics of reality. That is the formula for metaphysics and anti-empiricism.

  79. From Nick Stokes on May 31, 2013 at 3:16 pm:
    “As I said above, if you want to say that it’s just natural variation and nothing has changed, then you have to propose something that would work without changing.”

    Nick Stokes’ brain, at least the part that gathers information and makes inferences. But it does have a range of behaviors and various, interesting higher and lower points in that range.

  80. Theo Goodwin says: May 31, 2013 at 9:13 pm
    “A statistic is a measure but not a description of the world”

    Theo, this is topsy-turvy. This is exactly my argument that you are expressing. The Met calculated statistics of the time series – trend and its uncertainty. Trend is just a statistic – a scaled first moment. Their calculation was an appropriate response to the question asked.

    Yet all the argument here is about fitting. About the only number out of Keenan’s analysis is that his ARIMA(3,1,0) model fits better than trend+AR(1) (by AIC). But that’s irrelevant. And yet on that pointless observation (as you’ve expressed well) it is said that the Met has admitted, well, something.

    However, if you really want to argue about fitting, then it’s pointless to fit a non-physical model.
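The AIC machinery referred to above can be made concrete. The following is an illustrative sketch only: it uses synthetic data (not the HadCRUT series) and deliberately simpler candidates (constant mean vs. linear trend) than the trend+AR(1) and ARIMA(3,1,0) models actually being argued about, just to show how maximized Gaussian likelihoods become AIC values and a relative likelihood.

```python
import numpy as np

def gaussian_aic(residuals, n_params):
    """AIC = 2k - 2 ln L-hat for a Gaussian error model (MLE variance)."""
    n = len(residuals)
    sigma2 = np.mean(residuals ** 2)                  # MLE of error variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    return 2 * n_params - 2 * loglik

rng = np.random.default_rng(0)
t = np.arange(100.0)
y = 0.005 * t + rng.normal(0.0, 1.0, 100)             # synthetic series, faint trend

# Candidate A: constant mean (mean + variance = 2 parameters)
aic_const = gaussian_aic(y - y.mean(), 2)

# Candidate B: linear trend (slope, intercept + variance = 3 parameters)
coef = np.polyfit(t, y, 1)
aic_trend = gaussian_aic(y - np.polyval(coef, t), 3)

# Relative likelihood of the worse model: exp((AIC_best - AIC_worse) / 2)
rel_lik = np.exp((min(aic_const, aic_trend) - max(aic_const, aic_trend)) / 2)
print(aic_const, aic_trend, rel_lik)
```

A relative likelihood well below 1 is how statements like “model X is a thousand times more likely” arise; it says nothing by itself about whether either model is physically sensible.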

  81. dbstealey
    May 31, 2013 at 7:11 pm

    “The “carbon” entity is not necessary to explain the current climate. CO2 is an unnecessary distraction, which only serves to confuse the issue. As we now know, CO2 has no effect on global temperatures. The demonization of ‘carbon’ is a false alarm.”

    When I click your link I go to a cherry picked graph. I would welcome a link to a well sourced, peer reviewed and published in a high impact scientific journal rather than to a blog.

    As for the definition of null hypothesis. I lifted it, with acknowledgement. I didn’t write it myself.

  82. Steven Mosher says: May 31, 2013 at 11:20 pm

    “Well, before we even start we know the model is physically wrong.How do we know that? Well, at some time in the future your model will predict negative ice area.”

    Alternatively, a model that predicts a positive trend will increase without limit. This is the other side of the coin of your statement above.

    Your argument about this would probably be that since we can never reach infinite amounts of anything then positive trends can never be shown to be physically wrong.

    To be amusing — in the real world, funding for the negative trend model would reach a point where it must logically terminate. For the model that predicts a positive trend the funding never ends and so the amount of money funding it stretches to infinity.

    Interestingly I can conceive of a type of mathematics that deals only in positive numbers. Then infinity and zero would be perfect opposites of each other and both unreachable. The concept of negative numbers is a mathematical concept — an artifact of mathematics. Negative amounts have no physical existence — only a conceptual one (or more correctly only a mathematical one).

    (I don’t really want to go here but things in the physical world have a physical existence or they don’t exist. Things in the mathematical world have a mathematical existence or they don’t exist. “Concepts” are an artifact of language and can exist without either a physical or mathematical basis. You may not recognize it but that last statement is really just very old Greek philosophy.)

    (Sigh, I can’t help myself here and must make a funny. Applying the above, it is because humans use language, a system neither physically nor mathematically based, that all humans are crazy. Conceptual thinking is crazy thinking. It has no basis in any type of “reality”.

    Wait, wait, have I not just shown that everything I have said above must be crazy thinking? Oh, damn, what have I just done to myself? Perhaps I had better shut my jaw.)

    Eugene WR Gallun

  83. Compare the situation with the following example. You have won the capital prize in a lottery and someone says that this is unfair because the lottery organization is corrupt. The prize did not result from a random (or natural) process. This is possible and he has the duty to prove it. The question is whether this can be done with statistical means. He could demonstrate, for example, that too many of the winners are friends of the lottery organization. However, we are dealing here with just one unique period in history. We have got the prize for one century with rising temperatures. Other centuries got Warm Periods or Little Ice Ages, and we may suppose that these were not the result of human intervention. What about our century?

    For statistical models as proposed, we should look for logical fallacies at a level beyond their content. It compares with analyzing perpetuum-mobile engines, which cannot do what they promise because we cannot obtain energy from nothing. We cannot obtain decisive statistical information from one example. The whole issue of statistical significance is a fallacy. The example Keenan used in his article (see Bishop Hill) is telling. He took the observation of ten heads in a row while tossing a coin (you certainly would get a prize for that). Does that suggest human intervention or unfair play? He forgot to tell us that, on the assumption of independence and a fifty-fifty chance, the probability of every possible sequence of ten tosses equals (1/2)^10 = 1/1024, well below the magic five percent. There is nothing special about ten heads in a row, or one thousand heads in a row, besides our fascination with certain regularities. From a statistical point of view, and with sufficiently many measurements, each temperature development of the past century would be trivially significant whatever null hypothesis we choose or whatever models we employ for comparison.

    If you have won the capital prize in a lottery, and nothing is wrong with the winners so far, is it possible to demonstrate with statistical means that your prize is the result of human intervention? My answer is no.
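The coin-toss arithmetic in the comment above is easy to verify. A minimal sketch using exact fractions (standard library only):

```python
from fractions import Fraction

# Probability of any one specific sequence of 10 independent fair tosses
p_any_sequence = Fraction(1, 2) ** 10
print(p_any_sequence)                 # 1/1024, about 0.098%

# Ten heads is one point in a space of 2**10 equally likely outcomes,
# exactly as likely as any other fixed sequence such as HTHHTTHTHT.
n_outcomes = 2 ** 10
print(float(p_any_sequence) < 0.05)   # True: far below the "magic" 5%
```

The point being illustrated: a tiny probability for one particular outcome is not by itself evidence of intervention, since every particular outcome is equally improbable.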

  84. tonyb says:
    May 31, 2013 at 11:41 pm
    ///////////////////////////////////

    Tony

    You are being more than kind.

    No mathematician would say there was any correlation between CO2 and temperature in the global temperature records. This follows from the twin facts that:

    (i) the rate of warming during the 1970s/1990s warming period is no greater than the rate of warming during the 1920s/1940s warming period, although during the 1970s/1990s there was a significant rise in CO2, hence a supposedly significant driving force, whereas during the 1920s/1940s there was no significant rise in CO2 and hence no significant driving force. This shows the rate of change is the same with or without CO2.

    (ii) there is anti-correlation. The temperatures cooled during the 1940s/early 1970s just when CO2 emissions began to rise significantly. Whilst correlation does not mean causation, anti-correlation is almost invariably fatal to a claim of causation.

    These are not the only examples of a lack of correlation between CO2 and temperature, but they are the most stark. Further examples include (iii) that there is no first order correlation in the satellite data sets. These contain 33 years of data and there is no first order correlation (flat temperatures between, say, 1979 and about 1997, flat between about 1999 to date, with merely a one-off step change around the 1998 super El Niño; and unless that event was caused by rising CO2, there is simply no first order correlation in that data set), and (iv) the present temperature stasis of between about 15 to 22 years depending upon which data set is used.

    The fact that CET shows a temperature fall this century of about 0.5degC, just over half of the warming seen in the last century, and the fact that CET shows a fall in winter temperatures since 2000 of almost 1.5degC, is further evidence that a correlation between rising CO2 levels and temperature is not evident in the data sets.

    The same is so of the paleo record. Whilst there are some similarities between CO2 levels and temperature, there are many examples of anti-correlation. Further, it appears that CO2 lags temperature, not drives it.

  85. Why would anyone expect that, after a step function input of large magnitude(the end of the last ice age) that the surface temperature of the planet would be constant?
    I would expect oscillations in temperature. The trick is to figure out the periods and magnitudes of these. It may not be possible with our present understanding of the system.

  86. Margaret Hardman says:
    May 31, 2013 at 10:44 pm
    //////////////////////////
    Margaret.

    You fall into an often seen trap.

    One cannot cherry pick data to prove a theory. At best, any such cherry picked data is consistent with the theory, and no more than that.

    However, it is quite legitimate to cherry pick data to suggest that there is a problem with a theory. If a theory is sound, it should be able to explain the cherry picked data sample, and if it cannot, then there is a problem with the theory. This may be just slight, requiring further refinement to the theory, or it may be fatal to the theory.

    Most theories are falsified by cherry picked scenarios which the theory is unable to explain. For example, Newtonian mechanics is ‘fine’ in most scenarios and is still being used today in most scenarios. However, we know that in the extremes (if you like, in the cherry picked scenarios) it cannot properly and adequately explain what is observed.

    The theory of CO2 induced global warming must be able to explain (i) why there were ice ages when there were very high levels of CO2, (ii) why in the paleo record there are periods of rising temperatures when CO2 was falling, (iii) why in the paleo record there are warm periods when CO2 levels were low, (iv) why in the paleo record there are periods of falling temperatures as CO2 levels rise, (v) why in the paleo record CO2 levels lag temperature change if CO2 is the driver of temperature changes as opposed to the consequence of a change in temperature, (vi) more recently, the Holocene optimum, the Minoan warm period, the Roman warm period, the MWP, the LIA, (vii) the 1860s to 1880s warming, the 1880s to 1910 cooling, the 1920 to 1940s warming, the 1940s to early 1970s cooling, the temperature stasis of the past 14 or so years, and (viii) the reason why the rate of warming during the 1970s to 1990s warming is no greater than the rate of the 1920s to 1940s warming when the rise in CO2 emissions is significantly higher during the later warming period than it was during the former warming period; in short, why is the rate of change not far greater during the later warming period?

    These are not an exhaustive list (for example, one could ask why there is no first order correlation between CO2 levels and temperature change in the 33 years’ worth of satellite data), but they are examples of issues that the AGW theory must adequately explain, failing which it is falsified.

  87. @Richard Verney

    I said it was a cherry picked graph, not cherry picked data. You might argue that your post covers that point. I would argue that choosing how to present the data and what data to present is part of demonstrating not that your theory is correct or incorrect but that it leads some people to believe one over the other.

  88. If we need to use complex statistical models and processes to spot/define a small trend in data that varies considerably, do we need to ask the question: if this trend is so small that we need to do all of this complex work to try and find it, and if the answer we get depends upon which method is chosen from a range of contested methods, then is it valid to do it?

  89. philincalifornia says:

    Margaret Hardman says:
    —————————–

    At least she’s trying to get there.

    With all due respect, my fellow Kalifornian [and I do respect you], Margaret isn’t trying very hard — man.☺

    Margaret Hardman asserts plenty of vague criticisms, but she has no real, substantive facts that solidly support her belief system. She criticizes this chart as not being “peer reviewed” [as if that means anything in modern climatism]. But the rest of us know that the Wood For Trees databases are widely accepted by both sides as legitimate. And that chart shows clearly that CO2 has no measurable effect on temperature.

    But I like commentators like Margaret. They show that the alarmist crowd lacks any testable, empirical, reproducible data to support their beliefs, which amount to simple — and incorrect — assertions. On this “Best Science” site, Margaret’s assertions are not enough. She needs to post verifiable data to support her argument; something that is missing from her posts.

    ==========================================

    Nick Stokes says:

    “…this is topsy-turvy.”

    Nick is in Australia, so that is to be expected. ☺

  90. Read the Met Office statement, VERY VERY Carefully.

    I hope everyone picks up on the absolutely monumental statement about global warming the MO have issued here.

    Think about what the Met Office haven’t said as much as what they have said…..

    http://metofficenews.wordpress.com/2013/05/31/a-response-on-statistical-models-and-global-temperature/#respond

    “Mr Keenan says that there is “no basis” for the claim that the increase in global temperatures since the late 1800s is too large to be reasonably attributed to natural random variation. He goes on to argue that this is because we haven’t used the right statistical model.”

    The Met Office have not actually challenged Mr Keenan’s argument, and are in effect agreeing with him.

    They haven’t rebutted the argument that, according to statistical modelling, the change in temperatures since 1880 could be nothing other than random fluctuation.

    They make a straw man argument: we know it’s warming, and it’s down to man, because of all these other factors. In effect, this press release admits that the warming since 1880 isn’t statistically significant.

    They haven’t argued against his maths or stats. They have just said: we know it’s warming, we have other indicators besides temperature, shut up and trust us………

  91. In Slingo’s paper…..

    “Our calculations of whether a linear trend with first order autoregressive noise model is more likely to emulate the global temperature timeseries than the driftless third order autoregressive integrated model, show a range of relative likelihood values from 0.001 to 0.32 depending on which dataset is used (HadCRUT4, NASA, NOAA) and the starting date of the timeseries (1850-1900). This means that the driftless third order autoregressive model is more likely to represent the timeseries with these starting dates than the linear trend model.”

    Basically the temperatures could just be random noise………

  92. The real headline, based on the conclusions in the Met Office’s own paper, should be that the changes since 1880 in the UK temperature best fit a random noise pattern of natural variation. Which should be shouted loud and clear.

    Everyone please read the actual conclusions on the MO paper, on their own website.

    They are agreeing with Doug Keenan that the change in UK temperatures is basically random noise……

  93. TonyB has said (twice now, effectively): “A sharp drop in temperatures in Britain over the last decade is not easy to explain to British MP’s so Julia Slingo is under pressure Internationally and locally to explain exactly what is happening.”

    Actually I looked at this the other day, with respect to the CET annual mean maximum. Starting at 2002 (the best cherry picked year :-)), and using independent identically distributed normal errors, the negative trend has a p-value of over 6%, so many statisticians would not yet count it as significant. However, once 2013 is over it probably will be. To remain non-significant, 2013 just needs to be as warm as any of the years 1997 to 2007. But this is looking unlikely.

    Rich.
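Rich's calculation can be sketched in outline. The code below uses made-up annual values (not the actual CET series) and a normal approximation in place of the exact t distribution, so the resulting numbers are purely illustrative of the method.

```python
import numpy as np
from statistics import NormalDist

def trend_p_value(y):
    """Two-sided p-value for a zero-slope null under i.i.d. normal errors.

    Uses a normal approximation to the t distribution; for a decade of
    annual data this understates the p-value slightly.
    """
    n = len(y)
    t_idx = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(t_idx, y, 1)
    resid = y - (slope * t_idx + intercept)
    s2 = resid @ resid / (n - 2)                          # residual variance
    se = np.sqrt(s2 / np.sum((t_idx - t_idx.mean()) ** 2))  # slope std. error
    z = slope / se
    return slope, 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical annual means: a noisy decade with a slight cooling drift
rng = np.random.default_rng(1)
years = np.arange(2002, 2013)
temps = 10.0 - 0.05 * (years - 2002) + rng.normal(0, 0.5, len(years))
slope, p = trend_p_value(temps)
print(slope, p)
```

With real data, whether p crosses a 5% threshold can hinge on the chosen start year, which is exactly the cherry-picking caveat Rich flags.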

  94. @Steven Mosher
    @Nick Stokes
    An ARMA model is a simplified, or truncated, convolution.
    If a system has long term “memory” or “persistence”, which in a linear model would be represented by long time constants, trends in the output under random, zero-mean inputs will occur. The duration and magnitude of the trends will increase with increasing persistence. The questions are:

    a) What is the degree of persistence in a system that will produce the trends observed in temperature?

    b) Is this behaviour represented in ARMA models?

    c) Is the historical data sufficient to calculate the persistence reliably?

    Are random walks bounded? Obviously not for an infinite sequence, but the distribution of the displacement becomes Gaussian, so the probability in the tails becomes very small. If a Gaussian input with a continuous distribution to a system with persistence is considered, the input is not bounded, but the distribution of the output will be much narrower than that of the input.
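The boundedness question in the comment above can be checked by simulation. A sketch with synthetic Gaussian shocks: a pure random walk's endpoint spread grows like sqrt(n), while a stationary AR(1) with |phi| < 1 (persistence short of a unit root) settles to a fixed spread.

```python
import numpy as np

rng = np.random.default_rng(42)
n_paths, n_steps = 1000, 5000
shocks = rng.normal(0.0, 1.0, (n_paths, n_steps))

# Random walk: Var(displacement after n steps) = n, so the spread of the
# endpoints grows like sqrt(n) -- unbounded, though Gaussian-distributed.
walk = shocks.cumsum(axis=1)
print(walk[:, 999].std(), walk[:, -1].std())   # roughly sqrt(1000), sqrt(5000)

# Stationary AR(1): x_t = phi * x_{t-1} + shock. For |phi| < 1 the output
# distribution converges to a fixed width sigma / sqrt(1 - phi**2).
phi = 0.9
x = np.zeros(n_paths)
for t in range(n_steps):
    x = phi * x + shocks[:, t]
print(x.std())                                  # roughly 1/sqrt(1 - 0.81)
```

This is the distinction at the heart of the ARIMA(3,1,0) dispute: the integrated ("I") term makes the model a random-walk-like unit-root process, whereas trend+AR(1) has bounded noise around a deterministic line.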

  95. It seems to me that this peer, Lord Donoughue, was asking rhetorically whether a 0.8C rise in 163 years is “statistically significant”.
    To a layman, it is insignificant at its face. Statisticians will bicker over the semantics and the numbers. What is “normal”, “significant” etc.
    To claim .8 degrees across 163 years is significant, important, dire, scary, ominous is absolutely an untenable position.
    The MET Office is arranging deck chairs as their Titanic sinks.

  96. See owe to Rich

    Whilst most of us would agree that a 0.5C drop in CET over a decade was significant, a drop of 1.5C in the same period during the winter is highly significant, no matter how good you might be at massaging statistics.

    This has contributed to the toxic combination of sharply rising energy costs at a time of sharply falling temperatures.

    Julia might find that very hard to explain away especially as the Met office had forecast warmer wetter winters

    tonyb

  97. dbstealey

    I might be trying to get there but at least I think I know what I said. I said the graph was cherry picked and that I would like to see something from the peer reviewed literature and not from someone’s blog. I don’t know if the graph was peer reviewed or not. It doesn’t say. But I do know better than to take a baldly given graph without context or explanation. Been in the game of understanding data for too long.

  98. Margaret Hardman says:

    “Been in the game of understanding data for too long.”

    Then apparently you’ve already lost the game, because as stated: everyone on both sides of the debate accepts the WFT database, which is routinely used by scientific skeptics and climate alarmists alike.

    You can also claim that any particular chart ever posted was “cherry picked”. But that is a truly lame argument: you don’t dispute the data itself, because you cannot, so you fall back on the vague claim of ‘cherry-picked’. You’ve lost this argument, my friend. ‘Cherry picked’ or not, those scientific facts are empirical evidence, and the evidence clearly shows that CO2 is not causing any measurable global warming.

    If I am wrong, you must at the very least show that the data, and the chart derived from that data, is wrong. Good luck with that.

  99. I attempted to leave a comment on the Met Office blog. Somehow I question whether it will make it through moderation. Here’s the comment:

    Unfortunately, this blog post is full of circular reasoning and unsupported claims.

    In #1, you write: “The basis for this claim is not, and never has been, the sole use of statistical models to emulate a global temperature trend. Instead it is based on hundreds of years of scientific advancement, supported by the development of high-quality observations and computational modeling.”

    You appear to say the statistical models do not support the claim the temperature trend is too large to be natural but the claim is supported by “high-quality observations and computational modeling.” The issue here is that high-quality observations are at odds with computational modeling. There has been a consistent and significant rise in atmospheric CO2 since 1998 but no clear corresponding rise in temperature.

    In #3 you write: “Our judgment that changes in temperature since 1850 are driven by human activity is based on information not just from the global temperature trend, or statistics, but also our knowledge of the way that the climate system works, how it responds to global fossil fuel emissions and observations of a wide range of other indicators, such as sea ice, glacier mass, sea level rise, etc.”

    This is circular reasoning. You are saying “We know the physics of atmospheric CO2 are causing global warming because we know the physics of atmospheric CO2 cause warming.” I’m sorry, but that is circular reasoning. It is not science. You have a hypothesis, and it is a reasonable hypothesis – but you cannot allow the hypothesis to dictate what the data is telling you. Statistical analysis of the data is essential to determine if the hypothesis is true or if something else may be in play, such as a natural negative feedback in the climate system.

    In #4 you write: “Because the Met Office does not make an assessment of global warming solely on statistics – let alone the statistical models referred to in Mr Keenan’s article, this exercise is of very little, if any, scientific use.”

    This is clearly wrong. Mathematics is the language of science. If you cannot make your argument based on statistics, then you cannot make your argument.

  100. dbstealey

    Linking me back to the same graph is not going to change my mind. I link you to this graph:

    http://uknowispeaksense.wordpress.com/2013/05/24/denier-contradictions-csp-contradictory-beyond-belief/ole-humlums-graph/

    I don’t expect it to change your mind either. I am not saying the data or the graph are wrong. My point was that if you change the start and finish points you can make a different story. Whether it is the correct story or not depends on your explanation. If CO2 were the only factor in climate science then the story would be simple and your graph and mine would be easy to interpret. However, as this site constantly informs me, there are plenty of other factors to take into consideration: the Solar cycle, cosmic rays, volcanoes, ocean circulation patterns, aerosols, CFCs, Uncle Tom Cobleigh and all. So the picture, the graphs, show part of the story but not the whole story.

  101. I notice my comments are not being posted at the moment. Am I now persona non grata?

  102. Margaret, it happens to all of us from time to time. I think it is caused by a glitch or congestion. Your missing comment will show up soon.

  103. Obviously not, so here goes. I think I am getting dumped into spam because I was linking to a, shock horror probe, forbidden site, perhaps. Not sure. Anyway, in a spirit of openness I present a link that shows something similar but illustrates my point, equally cherry picked, but this time showing a bit more on the x axis.

    http://systems.broadviewenergy.com/app_cms/includes/downloader.php?f=7b2b8d4d79ce1aa1054d16f3d6733501

    Anyway, if CO2 were the only factor involved in the climate then I would be inclined to give the graph you pointed me at more credence. But it is like watching a film through a letterbox. You get some of the picture but not all of it. Sometimes things are happening off screen and you only get an inkling of them later. End of analogy.

    But CO2 is not the only factor, as anyone who has spent any time considering this point well knows. This site, with its claim at the top, tells me that we need to take into account thunderstorms, ocean currents, volcanoes (or at least some of them), cosmic rays, CFCs, sunspots, and a long list of other things. Fair enough. I take them into consideration and make my own mind up.

    I come to this site seeking answers with, I hope, enough intelligence and background to make sense of them. Others may judge my intelligence and background in ways I might not like. That is their prerogative. However, I am not impressed by some of the answers I find here. I have a lengthy background in skepticism (I am talking skepticism of a wide range of claims) and spent many years unsure of which way to turn on the climate debate. I have an inborn reaction against extreme claims and take politicians’ proclamations with whole plates of salt. My wife often reminds me that she bought Al Gore’s powerpoint presentation for me to watch years ago and I still haven’t done so.

    I go to other sites seeking the same answers to the same questions. I don’t always enjoy or agree with what I see there. I might be unusual in that I don’t stick with what supports my worldview but I like to see if that worldview is correct. Knowing the tricks that people will play to get my attention, buy my vote, make me part with my cash, helps me to make decisions based on evidence in the way that I believe it should be interpreted. That is down to me. It is up to you how you view the evidence and I respect your views. You are totally entitled to them. I suspect we will never agree on this matter even though we may agree on other matters. On another site I was derided for having used evidence, reason and logic to arrive at my current position. I assume you will agree that such a response is not skepticism but close mindedness.

  104. Margaret Hardman,

    Your link to “broadviewenergy” is a simple overlay. It does not show any correlation. For example, this chart shows a clear correlation between ∆T and ∆CO2. We can see that a change in temperature causes a subsequent change in CO2. But your ‘broadview’ chart shows no such correlation; thus, it is meaningless.

    Furthermore, there are no charts showing that ∆CO2 causes ∆T. Therefore, if CO2 has any effect on temperature, it is so minuscule that it cannot be measured. Thus, CO2 can be completely disregarded for all practical purposes; and the “carbon” scare is easily deconstructed.

    Finally, your link to “uknowispeaksense” is a waste of time. Any blog that labels scientific skeptics as “deniers” is always a waste of time, and it has no place being posted on the internet’s “Best Science & Technology” site. If that is the best you can do, you have already lost the debate.

    To make any headway here, you need to show a verifiable cause-and-effect correlation between ∆CO2 and a subsequent ∆T. If you can show such a cause-and-effect correlation, you will be the first to do so.

    But it appears that the best you can do is to show a chart overlay between CO2 and temperature. Overlays tell us nothing whatever about cause and effect. It is pointless hand-waving — which is the only thing the alarmist crowd is good at.

    Finally, you write: “My wife often reminds me that she bought Al Gore’s powerpoint presentation for me to watch years ago and I still haven’t done so.”

    That is a strange comment, “Margaret”. It appears you are trolling this site.

  105. In his article at Bishop Hill ( http://www.bishop-hill.net/blog/2013/5/27/met-office-admits-claims-of-significant-temperature-rise-unt.html), Doug Keenan correctly points out that one’s conclusion regarding the statistical significance of the warming since 1880 is dependent upon the model that one selects in computing the level of significance. In reaching its conclusion, the Met Office selected a “linear trend” model, but Keenan argues that they should have selected a “driftless ARIMA(3,1,0)” model, for the latter makes the observed temperatures one thousand times more likely. The rule which selects for use that model which makes the data most likely is, however, logically unfounded.
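The logical gap this comment points to can be illustrated: maximized likelihood alone cannot arbitrate between nested model families, because it never decreases as the family grows. A hypothetical sketch, fitting polynomials of increasing degree to pure noise:

```python
import numpy as np

def max_gaussian_loglik(residuals):
    """Maximized Gaussian log-likelihood given least-squares residuals."""
    n = len(residuals)
    sigma2 = np.mean(residuals ** 2)        # MLE of error variance
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)

rng = np.random.default_rng(7)
x = np.linspace(0.0, 1.0, 40)
y = rng.normal(0.0, 1.0, 40)                # pure noise: nothing to "find"

logliks = [max_gaussian_loglik(y - np.polyval(np.polyfit(x, y, d), x))
           for d in range(6)]

# The likelihood rises with degree even though the data are structureless,
# so "choose whichever model makes the data most likely" always rewards
# flexibility unless complexity is penalized (AIC, BIC, ...).
print([round(v, 2) for v in logliks])
```

This is why "a thousand times more likely" is not, by itself, a reason to prefer one model over another: a more flexible family will generically win that contest.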

  106. dbstealey says:
    June 1, 2013 at 3:04 am

    Nick Stokes says:

    “…this is topsy-turvy.”

    Nick is in Australia, so that is to be expected. ☺
    +++++++++++++++++++++++++++++++++++++

    Carbon dioxide in the Southern Hemisphere must be upside down too. Otherwise, how can Antarctica be explained ?

    …… you know, sea ice is a proxy for CO2-induced global warming and all that (crap).

  107. Margaret Hardman says:
    June 1, 2013 at 2:32 pm
    Civil partnership, Sir.
    +++++++++++++++++++++
    I knew it. Hardman’s a pseudonym isn’t it – like Seymour Butts?

    Plastic products are still mostly derived from petroleum “Margaret”, so not a very PC “dickprint”.

    You still haven’t figured out that you’re about seven levels of league play below this blog, have you?

    But carry on though – for the viewers.

  108. Nick Stokes says:
    May 31, 2013 at 9:42 pm

    Now we are getting near the same page. I can accept your point. But then I have to add that Keenan’s point seems to be that the MET Office has no reasoned basis for preferring their statistic to his statistic which shows no statistically significant warming. I see nothing in what you have said that conflicts with Keenan’s point. Also, remember that Keenan’s point occurs in the context of a question raised by Parliament.

  109. uff, The admission means unprecedented and unique cannot stand, but the villain charged still looks best for the crime.

    Really? Let’s see: of the 0.8 C rise, 0.5 C occurred before CO_2 got off of the industrial peg at “trivial and irrelevant”, clearly part of the general recovery from the LIA (which actually happened, once one de-Manns the collective proxy data). Of the remaining 0.3 C, we have (splitting the rise with the admittedly stupid assumption of extrapolatable linear trends in climate data, an assumption contradicted by the merest glance at the LONG term record but nevertheless an essential part of your analysis of the “best villain”, which assumes implicitly that we know what the temperature SHOULD have been without CO_2, and without the question-begging predictions based on equally unverified assumptions about feedback):

    1880 – 1970: 0.5 C
    1970 – 2013: 0.3 C

    (in round numbers, because another absurdity in the entire discussion is writing these numbers with two or three significant digits and without error bars, when the error bars even for the satellite era are at least 0.1 C and for the bulk of the period are more like 0.5 C).

    So over roughly 90 years, we had anywhere from 0 to 1 C warming, followed by 40 years where we had anywhere from 0.2 to 0.4 C warming. If one accounts for the probable error, makes the stupid assumption of linearity, ignores the fact that the LIA was the coldest period in the entire nonlinear temperature record of the Holocene, and considers the temperature record as a whole including the last 16 warming-free years, the big question is — do we believe that a crime has been committed at all?

    Perhaps — and I’m just throwing this out there — the victim fell down the stairs of their own accord and cracked their own skull perfectly naturally, and the detective is being paid off to pin it all on Colonel Mustard. Perhaps they are all innocent — Plum, Mustard, Green… — and the real perpetrator is the house itself, with its rickety boards, loose rugs at the tops of stairs, and perhaps the victim’s tendency to tipple a bit too much. Perhaps the victim wasn’t quite dead and Mustard tried to resuscitate her but made the mistake of moving her with the rope and severed her already broken spine — or to leave the silly metaphor behind, perhaps some fraction of the post-industrial rise is anthropogenic — but the net anthropogenic contribution (given that we contribute lots of stuff, some with a warming effect and some with a cooling effect) is likely to be less than the total observed rise.

    The historical record indeed does not falsify Anthropogenic Global Warming as a hypothesis, but it comes pretty close to falsifying Catastrophic AGW as a hypothesis. Hansen’s publicly televised claims of 5 meter SLR, his published assertions of boiling oceans and a Venus-like conversion of the Earth’s climate, his claims of (always “possible”, never “certain”, just to make sure that they cannot really be falsified to prove him wrong) 5+ degree climate sensitivity are quietly failing, one after another. AR5 appears likely to backpedal to 2 to 3 C climate sensitivity, and if you sort the contributing research by date, the sensitivity is in full retreat because the predictions based on the higher sensitivities are all diverging rapidly from the actual climate record. People are looking for “missing heat” in a panic. CAGW has been quietly converted to CACC — Catastrophic Anthropogenic Climate Change — so that any natural climate disaster can now be blamed on greenhouse gases, even though global temperatures are no longer rising. Papers are appearing that predict a rise of 1-2 C by the end of the century — solidly non-Catastrophic — essentially what we might expect from CO_2 alone with net-neutral feedback. AR6 may never occur, because the IPCC might no longer exist in a few years and will not exist if temperatures remain flat or fall a bit in the meantime. Changing CAGW to CACC cannot disguise the growing divergence between fantasy predictions of runaway warming and the reality of flat temperatures in spite of far greater increases in CO_2 concentration than the ones that supposedly kicked off the “hockey stick”.

    SLR may, in the end, prove to be the straw that breaks the camel’s back. The ocean is a rather sensitive thermometer. Sea level from 1880 to the present has risen (for all of the nonsense about “acceleration” at the end) 9 whole inches, roughly eleven inches for every degree Celsius of global temperature rise. Even Trenberth has gone on record recently as predicting (again the silly linear extrapolation) 30 whole centimeters of SLR by the end of the century, which is the absolute upper limit that is justified by the data — so far. That is around 12 inches, so far from “catastrophic” that it isn’t funny — it is a rate that is utterly invisible to nearly everybody, just as the 9 inches over the last 140 years has been. Nearly 100% of the “catastrophic” effects of CACC were to come from Hansen’s — words fail me — “egregious”, no, “ergot-derived”, probably not, “exaggerated”, well certainly, fantasy of 5 meter SLR, and when that is taken off of the table all that is left is, well, weather — trying to pretend that category 1 Sandy was our fault instead of the accidental collision of two storms (which happens, but usually out to sea) in the middle of the longest stretch ever recorded without a category 3 or better storm making landfall in the US. The Oklahoma tornado is turned into proof of CACC, ignoring the clear statistical evidence that we are, if anything, in the middle of a similar stretch of less-than-normal tornado activity.
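    As a sanity check on the arithmetic above (the 9-inch, 0.8 C, and 30 cm/century figures are the ones quoted in the comment itself, not independent data), here is a minimal Python sketch of the unit conversions:

```python
# Back-of-the-envelope check of the sea-level numbers quoted above.
# All input figures come from the comment (9 inches of rise since 1880,
# ~140 years, 0.8 C of warming, a projected 30 cm/century), not a dataset.

MM_PER_INCH = 25.4

rise_mm = 9 * MM_PER_INCH             # ~228.6 mm of rise since 1880
rate_mm_per_yr = rise_mm / 140        # ~1.6 mm/yr averaged over the record

inches_per_degC = 9 / 0.8             # ~11 inches of rise per degree C

projected_in = 30 * 10 / MM_PER_INCH  # 30 cm/century is ~11.8 in/century

print(f"average historical rate: {rate_mm_per_yr:.2f} mm/yr")
print(f"rise per degree C:       {inches_per_degC:.1f} in")
print(f"30 cm/century =          {projected_in:.1f} in/century")
```

    Note that the implied historical average (~1.6 mm/yr) is about half the ~3 mm/yr satellite-era rate discussed further down the thread.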

    Have they no shame? Is no lie too much to tell to maintain the public perception a) that the Earth is warming (it isn’t, not for 16 years) and b) that we are en route to a climate catastrophe (when there isn’t the slightest shred of evidence of any climate-related catastrophe, ever, anywhere, caused by human beings since we turned goatherds loose in the then-green Sahara and Sahel, which was admittedly a big whoops)?

    In the real-world game of global climate Clue, I suspect a frame job. Perhaps it is time to “arrest” the detectives we trusted to be objective in the case and charge them with egregious perjury (tampering with data, omitting error estimates, rewriting the AR reports after the fact), accepting bribes (all the detectives have jobs in the first place only because there is a presumption of the truth of CAGW; otherwise we would never choose to invest anywhere near as much as we are investing in this sort of research), and jury tampering (colluding with the supposedly objective news media to ensure that they report any climate event only from the perspective of “it must be proof of CAGW/CACC”).

    But is this really criminal? Many of those involved truly believe that, whether or not they are cherry-picking convincing data and arguments and ignoring the confounding ones, the conclusion is true, and hence any lie — erm, “bending” of the facts — is (apparently) justified. All I can say is: look at the cost. We have tossed hundreds of billions of dollars down this particular rathole, en route to trillions that it will cost on an ever-accelerating basis. This is real money spent now, an ongoing catastrophe quite aside from any imaginary projected future ones. By pursuing this to the point of insanity, Europe has pushed itself into a financial crisis that threatens the stability of its currency and could trigger the worst depression there in a hundred years, if not a resumption of the once-eternal European wars as people resurrect communism, fascism, or any other “ism” that promises to make things better once again. California has barely recovered from the last financial crisis but continues to crush its citizens under the heel of a repressive and destructive energy policy that easily doubles the cost of energy there compared to fair market value nearly anywhere else. 2/3 of the world’s population is struggling to get to where they have things like clean running water, flushable toilets, washing machines, electric lights — the comforts that we have all had for over 100 years and currently consider indispensable to mere human existence — and we are relentlessly pumping the cost of energy ever higher, to the delight of the entire energy industry, which knows perfectly well that we can’t afford to live without its products and is happy to provide them at any inflated price we choose to set, since it gets its percentage of the gross retail price no matter what.

    This is the real catastrophe, the human catastrophe. While we are all arguing about whether or not Mustard killed the GW victim or she died of natural causes, the detectives are ignoring the riot in the street outside that is causing millions of deaths every year, mostly children, and the continuation of literally untold misery and discomfort, compared to our own energy-rich existences, for well over half the population of the world.

    The climate community, as the priests of the CAGW religion, have turned the entire question of climate change into Pascal’s Wager — sure, CAGW might be wrong, but if it is right the disaster is so great that any cost now to ameliorate it is worth it. This blinds us to the aggregated costs of the amelioration right now, eked out one lost or wasted life at a time. To the extent that it is supported by lies, distortions of the truth, or political rewriting of the AR facts, this is not just criminal; it is evil.

    The only way humans can make good decisions is by basing them on our real best state of knowledge at the time, including the uncertainties. No good purpose is served by claiming that a scientific conclusion as uncertain as CACC, based on dysfunctional GCMs and in increasing disagreement with actual observational data even across the period where they should be most predictive, is established fact. Only when the uncertainty of this assertion and its failure to reasonably agree with the ongoing data is made clear to people can they fairly judge where to risk their money: on alleviating a possible problem in 80 years, or on ending the ongoing slaughter of innocents due to energy poverty now.

    Personally, I grew up in India. I could look out my back window and see the cost in human misery of energy poverty in the form of a mud hut occupied by a family of five that cooked on cow dung or charcoal when they could get it and lit their home at night (if at all) with a terra-cotta ghee/oil lamp with a twist of cotton burning as a wick. The field behind their house was their bathroom, and I imagine that they were given water by our servants, or carried it from elsewhere, as there was no free water to be had in India’s hot, dry climate. To me this is a no-brainer — if there were truly solid evidence of catastrophe in the making, that would be one thing, but based on the actual evidence, diverting the world’s resources away from the plight of world poverty into a boondoggle that actively perpetuates it is one of the saddest things imaginable.

    So think about this, the next time that you are contemplating Hansen’s insane claims. The world isn’t just preventing an imaginary catastrophe in a hundred years that isn’t well supported by the data. It is in the middle of an ongoing catastrophe right now, a quiet catastrophe that claims the lives of millions and damages the lives of billions. We have to make rational economic choices between the two. We cannot do this as long as self-serving groups trumpet the imaginary catastrophe as a proven fact and twist every common occurrence into “evidence” that it is actually happening now.

    rgb

    • rgbatduke:

      To your indictment of climatological researchers, one could add that the $200 billion provided to them by taxpayers has produced models that provide policy makers on CO2 emissions with no information about the outcomes of their policy decisions; thus, these models are incapable of scientifically supporting policy decisions. However, through repeated use of a deceptive argument, climatological researchers have made it sound to policy makers and the lay public as though these models do provide this information. These allegations have been proved within the peer review system and are provable in court. For details, see the peer-reviewed article at http://wmbriggs.com/blog/?p=7923.

  110. rgbatduke

    Well said.

    Meanwhile, in the real world (as opposed to the globally averaged make-believe world of a single-size-fits-all temperature), here is what is happening to temperature AND fuel costs in Britain

    This has been brought to the attention of some British MPs, whose energy policies and pig-headed pursuit of alternative energy at any price have impoverished many of us. Actually, it’s worse than it looks, as the winter temperature in Britain has fallen a staggering 1.5 C over the last decade, throwing millions of people into fuel poverty. Soaring fuel prices and plummeting temperatures should surely make our policy makers think again?

    Here is current CET, giving a perspective on the past:

    http://wattsupwiththat.com/2013/05/08/the-curious-case-of-rising-co2-and-falling-temperatures/

    Incidentally, I have now worked back past my start date of 1538 to 1498, and the temperature then was around as warm as it was prior to the decline of the last ten years.

    tonyb

  111. rgbatduke says: June 2, 2013 at 6:26 am
    “Even Trenberth has gone on record recently as predicting (again the silly linear extrapolation) 30 whole centimeters of SLR by the end of the century”

    No he didn’t, at least if you’re referring to this. He simply said the current rate of rise is 30cm/century.

  112. Nick

    It is quite clear from the context of the link you supply that he endorses this figure.

    You have got to admire this phrase for its sheer chutzpah:

    “My colleagues and I have just published a new analysis showing that in the past decade about 30% of the heat has been dumped at levels below 700m, where most previous analyses stop.”

    When as an ‘expert reviewer’ on the draft of AR5 I asked the IPCC for sight of the research that confirmed abyssal warming they couldn’t (or wouldn’t) supply anything.

    Tonyb

  113. tonyb says: June 3, 2013 at 1:18 am
    “It is quite clear from the context of the link you supply that he endorses this figure.”

    Tony, I can’t see any hidden context. The statement is plainly in the present:

    “Global sea level keeps marching up at a rate of more than 30cm per century since 1992 (when global measurements via altimetry on satellites were made possible), and that is perhaps a better indicator that global warming continues unabated.”

    And that’s straight from the Church and White paper referred to (with minor inaccuracy):
    “From 1993, the rates of rise estimated from tide gauge and altimeter data (after correction for GIA effects [Douglas and Peltier, 2002]) are about 3 mm/yr [Leuliette et al., 2004; Church et al., 2004]”
    It’s true Church and White do make a prediction (“If this acceleration remained constant”) but there’s nothing about it in what Trenberth said.
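    Part of the disagreement here is purely a units question: 3 mm/yr and 30 cm/century are the same rate expressed on different timescales, so quoting one is a restatement of the other rather than a separate prediction. A trivial check (figures as quoted in the thread):

```python
# 3 mm/yr, re-expressed per century in centimetres.
rate_mm_per_yr = 3.0
rise_per_century_cm = rate_mm_per_yr * 100 / 10.0  # 100 years; 10 mm = 1 cm
print(rise_per_century_cm)  # 30.0
```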

  114. Nick Stokes:

    When it is said that “the current rate of rise is 30 cm/century”, it sounds from the sentence structure as though a fact is being expressed, but this is not true. It is a linear theory that is being expressed, and this theory cannot be a scientific one, for, as the sea levels of the past are not observable, it is insusceptible to being tested.

  115. Re: Doug Proctor comment
    “The detective says yes, the Professor could have done it (motive, opportunity, fingerprints on the gun), but the Colonel not only had all those things but was seen by three policemen and a nun pulling the trigger and kicking the body.”

    The defence quickly discredits the three witnesses, Pachauri, Al Gore and Peter Gleick, and AGW is acquitted.
