No significant warming for 17 years 4 months

By Christopher Monckton of Brenchley

As Anthony and others have pointed out, even the New York Times has at last conceded what Dr. Pachauri of the IPCC admitted some months ago: there has been no global warming statistically distinguishable from zero for getting on for two decades.

The NYT says the absence of warming arises because skeptics cherry-pick 1998, the year of the Great El Niño, as their starting point. However, as Anthony explained yesterday, the stasis goes back farther than that. He says we shall soon be approaching Dr. Ben Santer’s 17-year test: if there is no warming for 17 years, the models are wrong.

Usefully, the latest version of the Hadley Centre/Climatic Research Unit monthly global mean surface temperature anomaly series provides not only the anomalies themselves but also the 2 σ uncertainties.

Superimposing the temperature curve and its least-squares linear-regression trend on the region of statistical insignificance defined by the published 2 σ uncertainties since January 1996 demonstrates that there has been no statistically significant warming in 17 years 4 months:

[Graph: HadCRUT4 monthly anomalies and least-squares trend, January 1996 to April 2013, shown against the 2 σ statistical-insignificance region]
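The test described above can be sketched in a few lines. The series below is synthetic (actual HadCRUT4 values are not reproduced here), and the 0.15 Cº half-width is an assumption taken from the uncertainty figure quoted in the post:

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(208)                           # 17 years 4 months of monthly data
anoms = 0.3 + rng.normal(0.0, 0.1, months.size)   # synthetic, trendless anomalies
half_width = 0.15                                 # assumed 2-sigma half-width, in Cº

# Least-squares linear-regression trend through the anomalies
slope, intercept = np.polyfit(months, anoms, 1)
trend = slope * months + intercept

# The trend is statistically indistinguishable from zero if the whole
# trend line stays inside the band (mean - half_width, mean + half_width)
mean = anoms.mean()
inside = np.all(np.abs(trend - mean) < half_width)
print(f"slope: {slope * 120:+.4f} Cº/decade, within insignificance region: {inside}")
```

With a genuinely trendless series, the fitted trend line stays comfortably inside the band; the question the post raises is whether the observed trend does too.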

On Dr. Santer’s 17-year test, then, the models may have failed. A rethink is needed.

The fact that an apparent warming rate equivalent to almost 0.9 Cº/century is statistically insignificant may seem surprising at first sight, but there are two reasons for it. First, the published uncertainties are substantial: approximately 0.15 Cº either side of the central estimate.

Secondly, one weakness of linear regression is that it is unduly influenced by outliers. Visibly, the Great El Niño of 1998 is one such outlier.
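The outlier sensitivity of least-squares fitting is easy to demonstrate on a toy series: a single early spike in an otherwise flat record is enough to tilt the fitted trend.

```python
import numpy as np

x = np.arange(20.0)
flat = np.zeros(20)
spiked = flat.copy()
spiked[2] = 1.0               # one large early positive outlier (a "1998")

slope_flat = np.polyfit(x, flat, 1)[0]
slope_spiked = np.polyfit(x, spiked, 1)[0]
print(slope_flat, slope_spiked)   # the single early spike alone yields a negative slope
```

An early positive outlier drags the fitted line downward toward the right, which is why the choice of start date relative to the 1998 spike matters so much.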

If 1998 were the only outlier, and particularly if it were the largest, going back to 1996 would be much the same as cherry-picking 1998 itself as the start date.

However, the magnitude of the 1998 positive outlier is countervailed by that of the 1996/7 La Niña. Also, there is a still more substantial positive outlier in the shape of the 2007 El Niño, against which the La Niña of 2008 countervails.

In passing, note that the cooling from January 2007 to January 2008 is the fastest January-to-January cooling in the HadCRUT4 record going back to 1850.
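A claim of this kind can in principle be checked by scanning consecutive January anomalies for the largest one-year drop. The values below are invented for illustration, not actual HadCRUT4 data:

```python
# January-mean anomalies by year; invented numbers, not HadCRUT4
januaries = {2005: 0.41, 2006: 0.28, 2007: 0.63, 2008: 0.06, 2009: 0.37}

years = sorted(januaries)
# drops[y] = cooling from the previous January to the January of year y
drops = {y2: januaries[y1] - januaries[y2] for y1, y2 in zip(years, years[1:])}
worst = max(drops, key=drops.get)
print(worst, round(drops[worst], 2))
```

On the real record the same scan would run over every January pair from 1850 onward.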

Bearing these considerations in mind, going back to January 1996 is a fair test for statistical significance. And, as the graph shows, there has been no warming that we can statistically distinguish from zero throughout that period, for even the rightmost endpoint of the regression trend-line falls (albeit barely) within the region of statistical insignificance.

Be that as it may, one should beware of focusing the debate solely on how many years and months have passed without significant global warming. Another strong el Niño could – at least temporarily – bring the long period without warming to an end. If so, the cry-babies will screech that catastrophic global warming has resumed, the models were right all along, etc., etc.

It is better to focus on the ever-widening discrepancy between predicted and observed warming rates. The IPCC’s forthcoming Fifth Assessment Report backcasts the interval of 34 models’ global warming projections to 2005, since when the world should have been warming at a rate equivalent to 2.33 Cº/century. Instead, it has been cooling at a rate equivalent to a statistically-insignificant 0.87 Cº/century:

[Graph: IPCC AR5 model-projected warming versus the observed HadCRUT4 trend, January 2005 to April 2013]

The variance between prediction and observation over the 100 months from January 2005 to April 2013 is thus equivalent to 3.2 Cº/century.
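The arithmetic behind that figure is simply the difference between the predicted and observed rates:

```python
predicted = 2.33     # Cº/century, AR5 model-mean rate quoted above
observed = -0.87     # Cº/century, observed rate (a cooling, hence negative)
discrepancy = predicted - observed
print(round(discrepancy, 1))   # -> 3.2
```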

The correlation coefficient is low, the period of record is short, and I have not yet obtained the monthly projected-anomaly data from the modelers to allow a proper p-value comparison.

Yet it is becoming difficult to suggest with a straight face that the models’ projections are healthily on track.

From now on, I propose to publish a monthly index of the variance between the IPCC’s predicted global warming and the thermometers’ measurements. That variance may well inexorably widen over time.

In any event, the index will limit the scope for false claims that the world continues to warm at an unprecedented and dangerous rate.

UPDATE: Lucia’s Blackboard has a detailed essay analyzing the recent trend, written by SteveF, using an improved index accounting for ENSO, volcanic aerosols, and solar cycles. He concludes the best-estimate rate of warming from 1997 to 2012 is less than 1/3 the rate of warming from 1979 to 1996. Also, the original version of this story incorrectly referred to the Washington Post, when it was actually the New York Times article by Justin Gillis. That reference has been corrected. – Anthony


429 thoughts on “No significant warming for 17 years 4 months”

  1. 1. Time to point out again that when the warmists convinced the world to use anomaly graphs in considering the climate system they more or less won the game. As Essex and McKitrick (and others) point out, temperature, graphed in Kelvins, has been pretty close to flat for the past thousand years or so. The system displays remarkable homeostasis, and almost no lay people are aware of this simple fact.
    2. I would like to make a documentary in which man-on-the-street interviews are conducted where the interviewee gets to draw absolute temps over the last century, last millennium, etc. The exaggerated sense of what has been happening would be hilarious, and kind of sad, to see.
    3. The intellectual knots that the warmists have already tied themselves into explaining away the last decade and a half of global temps have been ugly. And, as most here know, I am betting that the ugliness gets uglier for the next decade and a half — at least.
    4. Don’t sell your coat.

  2. There can be no CO2-GW, A or otherwise. And even if there were, there could be no positive feedback. CO2 is the working fluid in the control system maintaining OLR = SW thermalised.

    This is imposed by irreversible thermodynamics – the increased radiation entropy from converting 5500 K SW to 255 K LW. The clouds adapt to control atmosphere entropy production to a minimum.

    Basic science was forgotten by Hansen when the first GISS modelling paper wrongly assumed CO2 blocked 7 – 14 micron OLR and LR warming was the GHE: 1981_Hansen_etal.pdf from NASA. They got funding and fame for 32 years of a scientific scam.

  3. Very nice post …. I made some similar remarks in comments on a John Abrahams / Dana Nuticelli article in the Guardian yesterday – just asking how climate change effects could be “accelerating” when temperatures have not been going up ….. and had my comments repeatedly censored. I woke up this morning to find I am now banned as a commenter. Simply a very sad indictment of the inability of warmist ‘scientists’ to tolerate any form of critique or basic obvious questioning.

  4. Note that “no warming” and “no statistically significant warming” are not the same thing. The most reasonable interpretation of Santer’s statement is that there has to be no measured warming for 17 years, and as is clear from the diagram there has been warming, only not large enough to be statistically significant. The uncertainty is large enough that the data are also consistent with a trend of 0.2 K/decade, i.e., in line with IPCC predictions.
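The commenter's distinction can be made concrete with a small sketch: a positive central trend whose confidence interval is wide enough to contain both zero and the IPCC rate. The numbers are illustrative assumptions, not measured values:

```python
central = 0.09       # K/decade: an illustrative measured trend (assumed value)
half_width = 0.15    # K/decade: an illustrative 2-sigma uncertainty (assumed value)

lo, hi = central - half_width, central + half_width
consistent_with_zero = lo <= 0.0 <= hi
consistent_with_ipcc = lo <= 0.2 <= hi
print(consistent_with_zero, consistent_with_ipcc)   # -> True True
```

Failing to reject zero is not the same as rejecting the IPCC rate: here the data are consistent with both.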

  5. Yes indeed. A few days ago, the Belgian newspaper ‘Metro’, too, wrote that the temperatures are accelerating dangerously. Well heavens…

  6. I am 100% positive I remember Gavin saying 10 years somewhere on ReallywrongClimate. No warming for 10 years, the models were wrong….

  7. Re Santer et al. (2011). Is it not the case that this paper explicitly refers to lower troposphere (i.e. satellite) data and that it also explicitly refers to the “observational” data, rather than statistical significance levels?

    In other words, all Santer et al. 2011 stated was that we should see a warming trend in the raw satellite data over a period of 17 years. At present that is what we do see in both UAH and RSS (much more so in UAH).

    I don’t immediately see what Santer et al. 2011 has to do with statistical significance in a surface station data set such as HadCRUT4.

  8. I keep seeing these graphs with linear regressions. Seriously. I mean seriously. Since when does weather/climate behave linearly? The equations that attempt to map/predict the magnetic field of the earth are complex Fourier series. Is someone, somewhere suggesting that the magnetic field is more complex than the climate envelope about the earth? I realize this is a short timescale and things may look linear, but they are not. Not even close. Like I said in the beginning, the great climate hoax is nothing more than what I just called it. I am glad someone has the tolerance to deal with these idiots. I certainly don’t.

  9. So how did the climate scientists and the news media including the NYT report the 1998 El Nino? Apocalypse now, I would suggest! So even if the start date was cherry picked, it would be fair game.

  10. No statistically significant warming in 18 years and 5 months:

    http://woodfortrees.org/plot/rss/from:1995/plot/rss/from:1995/trend

    #Time series (rss) from 1979 to 2013.42
    #Selected data from 1995
    #Least squares trend line; slope = 0.00365171 per year

    No warming in 16 years and 5 months:

    http://woodfortrees.org/plot/rss/from:1997/plot/rss/from:1997/trend

    #Time series (rss) from 1979 to 2013.42
    #Selected data from 1997
    #Least squares trend line; slope = -0.000798188 per year

    Oh lord…
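The slopes in the woodfortrees output above are ordinary least-squares fits of anomaly against decimal year. The same calculation can be reproduced on any series; the one below is synthetic with a built-in slope, not actual RSS data:

```python
import numpy as np

# Synthetic monthly series with a built-in trend of 0.003 per year
years = 1997 + np.arange(200) / 12.0
anoms = 0.003 * (years - years[0]) + 0.1

# Least-squares trend line, slope in units per year (as woodfortrees reports)
slope_per_year = np.polyfit(years, anoms, 1)[0]
print(round(slope_per_year, 6))
```

With a perfectly linear input the fit recovers the built-in slope exactly; on noisy data the same call yields the kind of small positive or negative slopes quoted in the comment.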

  11. SteveF wrote Estimating the Underlying Trend in Recent Warming
    (“12 June, 2013 (20:10) Written by: SteveF”, posted at Lucia’s The Blackboard)

    The slope since 1997 is less than 1/6 that from 1979 to 1996. . . .
    Warming has not stopped, but it has slowed considerably. . . .
    the influence of the ENI on global temperatures (as calculated by the global regression analysis) is just slightly more than half the influence found for the tropics alone (30S to 30N): 0.1099+/-0.0118 global versus 0.1959+/-0.016 tropics. . . .
    The analysis indicates that global temperatures were significantly depressed between ~1964 and ~1999 compared to what they would have been in the absence of major volcanoes. . . .
    the model does not consider the influence of (slower) heat transfer between the surface and deeper ocean. In other words, the calculated impact of solar and volcanic forcings would be larger (implying somewhat higher climate sensitivity) if a better model of heat uptake/release to/from the ocean were used.

    It looks like SteveF provides a major improvement in understanding and quantifying the “shorter”-term impacts of solar, volcanoes and ocean oscillations (ENSO) and their related lags. Now I hope he can get it formally published.

  12. This post is preaching to the choir (and, with all due respect for Christopher Monckton’s energy in the climate debates, it is by a scientific dilettante, however well-informed and clearly intelligent, to an audience of laypersons–what the failure of climate science, in the present incompetent consensus, has brought us all to). (And I am not one of the many who has a pet theory, and claims to have all the answers–I merely kept my eyes and mind open for clear, definitive evidence of what is really wrong, and found it, as some portion of the readers here well know. I am a professional scientist, a physicist, in the older academic tradition, that knew how to Verify.)

    ImranCan’s comment above confirms what so many should already know: The Insane Left (my term for them) only dared to alarm the world with this monumental fraud because they fervently want to believe a benevolent universe (not God, heaven forbid, but only a universe in which “you create your own reality”–one of the great lies of the modern world) has put into their hands an authoritative instrument through which their similarly-fixated political ideology could take over… the western world, at least. The “science” has ALWAYS been “settled”, period, because they NEED it to be, to hold together their fundamentally creaky coalition of peoples bitter, for any reason, against “the old order”. They want a revolution, one way or another. And this is war, one way or another. The best hope for mankind, and especially the western world, is that somehow a growing number of those who have been suborned to the Insane Left will come to their senses, let their innate intelligence come out, and declare their independence and opposition to the would-be tyrants.

  13. Perhaps off-topic, but I am having serious thoughts about why we constantly refer to the “greenhouse effect”. To use a greenhouse is to use a pretty poor analogy; the Earth is not surrounded by a hard shell of “greenhouse gasses”, with air movements and other causes of potential cooling inside strictly regulated. It could be that we are not only barking up the wrong tree, but we are in the wrong garden, in the wrong country – and it is not even a tree!

    About 99% of the Earth’s atmosphere (i.e. 20.9% oxygen and 78% nitrogen) is not composed of “greenhouse gasses.” Why not test the idea: find a greenhouse, and remove 99% of the glass, so as to leave a thin web of glass (let us assume this is possible). I doubt you will be able to measure any difference between the “inside” of the greenhouse and outside; however, to “improve” its effectiveness, add 0.05% more glass. Stand back, and watch in amazement as the temperatures soar!

    You don’t think someone is trying to sell us a load of snake oil, do you?

  14. ImranCan says at June 13, 2013 at 4:05 am

    Very nice post …. I made some similar remarks in comments on a John Abrahams / Dana Nuticelli article in the Guardian yesterday – just asking how climate change effects could be “accelerating” when temperatures have not been going up ….. and had my comments repeatedly censored. I woke up this morning to find I am now banned as a commenter. Simply a very sad indictment of the inability of warmist ‘scientists’ to tolerate any form of critique or basic obvious questioning.

    I also linked to the MET office and showed that temperature rises are not accelerating. In addition I pointed out the theoretical basis for the acceleration was challenged empirically by the lack of the Tropical Hotspot (with a link to Jo Nova).

    So I also am now banned from posting at the Guardian. That is, I am subject to “pre-moderation”.

    The worst impact of creating this echo-chamber is the decline in the Guardian’s readership. The number of comments on their environment blogs is declining rapidly.
    It is a shame that a lively, left-wing forum has decided to commit suicide by out-sourcing moderation to alleged scientists who can’t defend their position.

    How long until the advertisers realise?

  15. “So I also am now banned from posting at the Guardian.”

    Welcome to the newspeak Orwellian media complex, Winston.

    Fortunately, we are still free enough in this world to tell the Guardian (and, most importantly, their $ponsor$) to stuff it…

  16. How long before the 17 year test becomes a 25 year test? – just a matter of homogenising!

  17. If memory serves, it seems that the Meteorological community has used the ‘thirty-year’ time frame for standardizing its records, in order to classify climate and climate zones. I suspect that meteorologists might soon suggest that a ‘fifty-year’ or even a ‘sixty-year’ time frame become the standard reference frame.

    That would be one way to get around Gavin’s “… seventeen year …” test.

    Or, we could just adjust the data some more, to make them fit the models … … … ………

  18. At first there were a few looking for the truth. Then there were more. Soon there were many. Next there was an army marching for the truth. Now the truth goes marching on!

    Oh, it’s that army of ones again. They have liberated the truth.

    sorry, but I don’t know how to put musical notes in a blog post ;-)

  19. What I want to know from any Warmists is what would falsify the climate model projections as used by the IPCC? For example, 20 years of no warming?

  20. M Courtney at 5:25 a.m. says:
    “So I also am now banned from posting at the Guardian. That is, I am subject to “pre-moderation”.

    The worst impact of creating this echo-chamber is the decline in the Guardian’s readership. The number of comments on their environment blogs is declining rapidly.
    It is a shame that a lively, left-wing forum has decided to commit suicide by out-sourcing moderation to alleged scientists who can’t defend their position.

    How long until the advertisers realise?”

    Would that these former institutions of the Fourth Estate were subject to the forces of the market. Many would have failed already. However, they are being funded — and their employees (formerly investigative journalists) fully paid and supported — as the mouthpiece of elites who are acting similarly to the Robber Barons of the U.S. 19th Century. At least the Robber Barons through their greed also brought productivity. Not so much these elites. Who are they? Fabulously wealthy Islamists on our oil money; brilliant financial scam artists like financiers whether “left or right” (debt posing as equity); IT corporations who (corps are persons) destroy competition; all those corporations that also hate “the market” (immigration “reform” for cheap labor — that will take care of those independent Americans); and the secular religionists. What a motley group.

    They will eventually fail. We must see that they do not take the rest of us along with them. Thank you Anthony and crew for your valiant and courageous efforts.

  21. It is meaningless to say that there is warming, just not statistically significant warming. Someone who says that does not know what statistical significance is.

  22. The one time a “Cherry-picking” accusation fails is when you use the present day as an anchor & look back into the past.

    The observed temperature differential just doesn’t meet any definition of “catastrophic,” “runaway,” “emergency,” “critical,” or any synonym you can pull out of the (unwarming) air to justify the multitude of draconian measures ALREADY IN PLACE that curtail world economies or subsidize failing alternative energy attempts!!!

  23. I like to use RSS because it is not contaminated with UHI, extrapolation and infilling. As indicated above the trend has been perfectly flat for 16.5 years (Dec. 1996). At some point in the near future, given the current cooling that could be later this year, the starting point could move back to the start of 1995. That would mean around 19 years with a zero trend.

    I like to use the following graph because it demonstrates a change from the warming regime of the PDO to the cooling regime. It also shows how you could have many of the warmest years despite the lack of warming over the entire interval.

    http://www.woodfortrees.org/plot/rss/from:1996.9/to/plot/rss/from:1996.9/to/trend/plot/rss/from:1996.9/to:2005/trend/plot/rss/from:2005/to/trend

  24. How long before the warmists make 1998 go away like they did with the MWP ? Funny how 1998 was the shot across the bow warning when it was on the right side of the graph but an inconvienient truth on the left.

  25. M Courtney says:
    June 13, 2013 at 5:25 am
    “It is a shame that a lively, left-wing forum has decided to commit suicide by out-sourcing moderation to alleged scientists who can’t defend their position.”
    Guardian, Spiegel and NYT are the modern versions of the Pravda for the West. I read them to know what the 5 minute hate of the day is.

  26. Looks like Lucia’s website is overloaded. I can get through on the main page but I can’t open SteveF’s post without getting an error message. I tried to leave him the following comment:

    SteveF: As far as I can tell, your model assumes a linear relationship between your ENSO index and global surface temperatures.

    Trenberth et al (2002)…

    http://www.cgd.ucar.edu/cas/papers/2000JD000298.pdf

    …cautioned against this. They wrote, “Although it is possible to use regression to eliminate the linear portion of the global mean temperature signal associated with ENSO, the processes that contribute regionally to the global mean differ considerably, and the linear approach likely leaves an ENSO residual.”

    Compo and Sardeshmukh (2010)…

    http://journals.ametsoc.org/doi/abs/10.1175/2009JCLI2735.1?journalCode=clim

    …note that it should not be treated as noise that can be removed. Their abstract begins: “An important question in assessing twentieth-century climate change is to what extent have ENSO-related variations contributed to the observed trends. Isolating such contributions is challenging for several reasons, including ambiguities arising from how ENSO itself is defined. In particular, defining ENSO in terms of a single index and ENSO-related variations in terms of regressions on that index, as done in many previous studies, can lead to wrong conclusions. This paper argues that ENSO is best viewed not as a number but as an evolving dynamical process for this purpose…”

    I’ve been illustrating and discussing for a couple of years that the sea surface temperatures of the East Pacific (90S-90N, 180-80W) show that it is the only portion of the global oceans that responds linearly to ENSO, but that the sea surface temperatures there haven’t warmed in 31 years:

    On the other hand, the sea surface temperature anomalies of the Atlantic, Indian and West Pacific (90S-90N, 80W-180) warm in El Niño-induced steps (the result of leftover warm water from the El Niños) that cannot be accounted for with your model:

    A more detailed, but introductory level, explanation of the processes that cause those shifts can be found here [42MB .pdf]:

    http://bobtisdale.files.wordpress.com/2013/01/the-manmade-global-warming-challenge.pdf

    And what fuels the El Niños? Sunlight. Even Trenberth et al (2002), linked above, acknowledges that fact. They write, “The negative feedback between SST and surface fluxes can be interpreted as showing the importance of the discharge of heat during El Niño events and of the recharge of heat during La Niña events. Relatively clear skies in the central and eastern tropical Pacific allow solar radiation to enter the ocean, apparently offsetting the below normal SSTs, but the heat is carried away by Ekman drift, ocean currents, and adjustments through ocean Rossby and Kelvin waves, and the heat is stored in the western Pacific tropics. This is not simply a rearrangement of the ocean heat, but also a restoration of heat in the ocean.”

    In other words, ENSO acts as a chaotic recharge-discharge oscillator, where the discharge events (El Niños) are occasionally capable of raising global temperatures, where they remain relatively stable for periods of a decade or longer.

    In summary, you’re treating ENSO as noise, while data indicate that it is responsible for much of the warming over the past 30 years.

    Regards

  27. I wonder if ACGW advocates feel a little like advocates of the Iraq invasion felt when no WMDs were discovered? Just a random thought.

  28. Rather off-topic, but there are 4 questions that I would like the answer to:
    1. We are told the concentration of CO2 in the atmosphere is 0.039%, but what is the concentration of CO2 at different heights above the earth’s surface? As CO2 is ‘heavier than air’ one would expect it to be at higher percentages near the earth’s surface.
    2. Do the CO2 molecules rise as they absorb heat during the day from the sun? And how far?
    3. Do the CO2 molecules fall at night when they no longer get any heat input from the sun?
    4. When a CO2 molecule is heated, does it re-radiate equally in all directions, assuming the surroundings are cooler, or does it radiate heat in proportion to the difference in temperature in any particular direction?
    Any comments gratefully received.

  29. Human beings caused the largest extinction rate in the planet’s history (the Pleistocene extinctions). These extinctions came at different times and at different rates than the climate changes, and it is clear that wild climate swings over (relatively) short periods of time did pretty much nothing to the earth’s species on any significant scale. It’s exactly the same now. We are still causing extinctions at a record rate, simply by being here, not by “altering” the climate, and even if we did (or are) altering the climate, then this effect on the planet is insignificant compared to the simple fact that we are just “here”… So-called “climate scientists” are often no such thing; they do not understand the basics of pre-historic climate change and the parameters involved. They completely ignore the most important evidence. Large animals in Africa alone survived the Pleistocene extinctions simply by having evolved alongside humans; as soon as humans left Africa, they very rapidly wiped out the megafauna everywhere else…. It is this pattern of human behaviour that is statistically significant, not fractions of a degree Celsius. I wish alarmists would actually study a bit more!

  30. I suppose we could always wait until 2018, by which time the world will be bankrupt and it won’t matter. Alternatively we could start applying the precautionary principle the other way round. How about: a clear lack of correlation between hypothesis and reality should preclude precipitate action beyond that which is prudent and can be shown to have a benefit.

  31. First of all, skeptics didn’t pick 1998, the NOAA did in the 2008 State of the Climate report.

    That report says, “Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability. The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”

    It does not say “The simulations rule out (at the 95% level) zero trends for intervals of 15 years or more, except intervals starting in 1998…”

    Second, I don’t know why anyone is bending over backwards to try to find statistical significance (or lack thereof) in a goalpost changing 17 year trend when we already have an unambiguous test for the models straight from the NOAA. Why bother with ever changing warmist arguments? Just throw the above at them and let them argue with the NOAA over it.

  32. The problem is that models of catastrophic climate change are being used by futurists and tech companies and rent-seekers generally to argue that our written constitutions need to be jettisoned and new governance structures created that rely more on Big Data and supercomputers, to deal with the global warming crisis. I wish I were making this up, but I wrote today about the political and social economy and about using education globally to get there. Based primarily on Marina Gorbis’ April 2013 book The Nature of the Future and Willis Harman’s 1988 Global Mind Change.

    You can’t let actual temps get in the way of such a transformation. Do you have any idea how many well-connected people have decided we are all on the menu? Existing merely to finance their future plans and to do as we are told.

  33. The Guardian is left-wing. That won’t be popular with people who aren’t.
    But it wasn’t dumbed down. It wasn’t anti-democratic. It wasn’t just hate.

    The Guardian was part of the civil society in which develops the political awareness that a democracy needs.
    So was the Telegraph from the other side.

    But the Guardian has abandoned debate. That is the death of the Guardian. A loss which will be a weakening of the UK’s and the entire West’s political life.

  34. Interesting, and by the way:

    On March 13, WUWT announced that Climategate 3.0 had occurred.

    What happened to it?

    Everybody just ignoring it ever happened?

  35. Because of the thermal inertia of the oceans and the fact that we should really be measuring the enthalpy of the system, the best metric for temperature is the SST data, which varies much more closely with enthalpy than land temperatures. The NOAA data at ftp://ftp.ncdc.noaa.gov/pub/data/anomalies/annual.ocean.90S.90N.df_1901-2000mean.
    data show no net warming since 1997, and also show that the warming trend peaked in about 2003 and that the earth has been in a slight cooling trend since then. This trend will likely steepen and last for at least 20 years, and perhaps for hundreds of years beyond that if, as seems likely, the warming peak represents a peak in both the 60- and 1000-year solar cycles.
    For a discussion and detailed forecast see

    http://climatesense-norpag.blogspot.com/2013/04/global-cooling-methods-and-testable.html

  36. StephenP: The CO2 concentration is constant throughout the atmosphere. Winds ensure that the atmosphere is stirred enough that the small density difference doesn’t matter. Nor does absorption or emission of photons cause the molecules to move up or down. CO2 molecules radiate equally in all directions.

    Scott, “It is meaningless to say that there is warming, just not statistically significant warming. Someone who says that does not know what statistical significance is.”

    I’d say that on the contrary, anyone who thinks a measured trend that is larger than zero but does not quite reach statistical significance is the same as no trend does not know enough about statistics. Compare these three measurements: 0.9+-1, 0+-1 and -0.9+-1. None of them is statistically different from zero, but the first one allows values as high as 1.9 while the last one allows values as low as -1.9.
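The three measurements in that example can be written out explicitly as (value, half-width) pairs: none of the implied ranges excludes zero, yet the sets of values they allow differ greatly.

```python
# (value, half-width) pairs for the comment's three measurements
measurements = [(0.9, 1.0), (0.0, 1.0), (-0.9, 1.0)]
ranges = [(v - h, v + h) for v, h in measurements]
excludes_zero = [not (lo <= 0.0 <= hi) for lo, hi in ranges]
print(ranges)
print(excludes_zero)   # -> [False, False, False]
```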

  37. Thomas says:
    June 13, 2013 at 6:56 am
    “I’d say that on the contrary, anyone who thinks a measured trend that is larger than zero but does not quite reach statistical significance is the same as no trend does not know enough about statistics.”

    And without sufficient knowledge as to what the future actually provides (or an accurate model :-)) then drawing any conclusions based on which end of any distribution the values may currently lie is just a glorified guess.

    If you were to draw conclusions about the consistency with which the data has moved towards a limit, you would have a better statistical idea about what the data is really saying.

  38. Saying that we need to wait for a certain interval in order to conclude that “the models are wrong” is dangerous and incorrect for two reasons. First — and this is a point that is stunningly ignored — there are a lot of different models out there, all supposedly built on top of physics, and yet no two of them give anywhere near the same results!

    This is reflected in the graphs Monckton publishes above, where the AR5 trend line is the average over all of these models and in spite of the number of contributors the variance of the models is huge. It is also clearly evident if one publishes a “spaghetti graph” of the individual model projections (as Roy Spencer recently did in another thread) — it looks like the frayed end of a rope, not like a coherent spread around some physics supported result.

    Note the implicit swindle in this graph — by forming a mean and standard deviation over model projections and then using the mean as a “most likely” projection and the variance as representative of the range of the error, one is treating the differences between the models as if they are uncorrelated random variates causing deviation around a true mean!

    Say what?

    This is such a horrendous abuse of statistics that it is difficult to know how to begin to address it. One simply wishes to bitch-slap whoever it was that assembled the graph and ensure that they never work or publish in the field of science or statistics ever again. One cannot generate an ensemble of independent and identically distributed models that have different code. One might, possibly, generate a single model that generates an ensemble of predictions by using uniform deviates (random numbers) to seed “noise” (representing uncertainty) in the inputs.

    What I’m trying to say is that the variance and mean of the “ensemble” of models are completely meaningless statistically, because the inputs do not possess the most basic properties required for a meaningful interpretation. They are not independent, their differences are not based on a random distribution of errors, and there is no reason whatsoever to believe that the errors or differences are unbiased (given that the only way humans can generate unbiased anything is through the use of e.g. dice or other objectively random instruments).
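The shared-bias point can be put in a toy sketch (all numbers below are hypothetical, purely for illustration): if every member of an "ensemble" carries a common systematic bias, the ensemble mean converges on the bias, not on the truth, no matter how many members are averaged.

```python
import random

random.seed(0)
truth = 1.0                       # the (unknown) true value

# Hypothetical "ensemble": each member shares a common systematic bias,
# so the members are neither independent nor unbiased.
shared_bias = 0.5
models = [truth + shared_bias + random.gauss(0, 0.1) for _ in range(20)]

ensemble_mean = sum(models) / len(models)
# The mean converges on truth + bias, not on truth: averaging more
# members cannot remove an error they all have in common.
print(ensemble_mean - truth)      # close to the shared bias, not to zero
```

Averaging only cancels errors that are random and independent across members; a bias common to all members passes straight through to the mean.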

    So why buy into this nonsense by doing linear fits to a function — global temperature — that has never in its entire history been linear, although of course it has always been approximately smooth so one can always do a Taylor series expansion in some sufficiently small interval and get a linear term that — by the nature of Taylor series fits to nonlinear functions — is guaranteed to fail if extrapolated as higher order nonlinear terms kick in and ultimately dominate? Why even pay lip service to the notion that R^2 or p for a linear fit, or for a Kolmogorov-Smirnov comparison of the real temperature record and the extrapolated model prediction, has some meaning? It has none.
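The Taylor-series objection is easy to demonstrate numerically (the quadratic below is just a stand-in for any smooth nonlinear record): a least-squares line fitted to a short window looks reasonable inside the window and fails badly once extrapolated, as the higher-order terms kick in.

```python
# Least-squares line fitted to a short window of a nonlinear function
# (y = x^2 as a stand-in), then extrapolated well outside the window.
xs = [x / 10 for x in range(11)]          # window 0.0 .. 1.0
ys = [x * x for x in xs]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

# Inside the window the line is a tolerable Taylor-like approximation...
err_inside = abs((slope * 0.5 + intercept) - 0.25)
# ...but extrapolated to x = 5 the quadratic term dominates and it fails.
err_outside = abs((slope * 5 + intercept) - 25.0)
print(err_inside, err_outside)
```

The fit is fine as a local linearization and guaranteed to diverge as an extrapolation, which is exactly the distinction the paragraph above draws.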

    Let me repeat this. It has no meaning! It is indefensible within the theory and practice of statistical analysis. You might as well use a ouija board as the basis of claims about the future climate history as the ensemble average of different computational physical models that do not differ by truly random variations and are subject to all sorts of omitted variable, selected variable, implementation, and initialization bias. The board might give you the right answer, might not, but good luck justifying the answer it gives on some sort of rational basis.

    Let’s invert this process and actually apply statistical analysis to the distribution of model results Re: the claim that they all correctly implement well-known physics. For example, if I attempt to do an a priori computation of the quantum structure of, say, a carbon atom, I might begin by solving a single electron model, treating the electron-electron interaction using the probability distribution from the single electron model to generate a spherically symmetric “density” of electrons around the nucleus, and then performing a self-consistent field theory iteration (re-solving the single electron model for the new potential) until it converges. (This is known as the Hartree approximation.)

    Somebody else could say “Wait, this ignores the Pauli exclusion principle” and the requirement that the electron wavefunction be fully antisymmetric. One could then make the (still single electron) model more complicated and construct a Slater determinant to use as a fully antisymmetric representation of the electron wavefunctions, generate the density, and perform the self-consistent field computation to convergence. (This is Hartree-Fock.)

    A third party could then note that this still underestimates what is called the “correlation energy” of the system, because treating the electron cloud as a continuous distribution through which electrons move ignores the fact that individual electrons strongly repel and hence do not like to get near one another. Both of the former approaches underestimate the size of the electron hole, and hence they make the atom “too small” and “too tightly bound”. A variety of schemes have been proposed to overcome this problem — using a semi-empirical local density functional being probably the most successful.

    A fourth party might then observe that the Universe is really relativistic, and that by ignoring relativity theory and doing a classical computation we introduce an error into all of the above (although it might be included in the semi-empirical LDF approach heuristically).

    In the end, one might well have an “ensemble” of models, all of which are based on physics. In fact, the differences are also based on physics — the physics omitted from one try to another, or the means used to approximate and try to include physics we cannot include in a first-principles computation (note how I sneaked a semi-empirical note in with the LDF, although one can derive some density functionals from first principles (e.g. Thomas-Fermi approximation), they usually don’t do particularly well because they aren’t valid across the full range of densities observed in actual atoms). Note well, doing the precise computation is not an option. We cannot solve the many body atomic state problem in quantum theory exactly any more than we can solve the many body problem exactly in classical theory or the set of open, nonlinear, coupled, damped, driven chaotic Navier-Stokes equations in a non-inertial reference frame that represent the climate system.

    Note well that solving for the exact, fully correlated nonlinear many electron wavefunction of the humble carbon atom — or the far more complex Uranium atom — is trivially simple (in computational terms) compared to the climate problem. We can’t compute either one, but we can come a damn sight closer to consistently approximating the solution to the former compared to the latter.

    So, should we take the mean of the ensemble of “physics based” models for the quantum electronic structure of atomic carbon and treat it as the best prediction of carbon’s quantum structure? Only if we are very stupid or insane or want to sell something. If you read what I said carefully (and you may not have — eyes tend to glaze over when one reviews a year or so of graduate quantum theory applied to electronics in a few paragraphs, even though I left out perturbation theory, Feynman diagrams, and ever so much more :-) you will note that I cheated — I slipped in a semi-empirical method.

    Which of these is going to be the winner? LDF, of course. Why? Because the parameters are adjusted to give the best fit to the actual empirical spectrum of Carbon. All of the others are going to underestimate the correlation hole, and their errors will be systematically deviant from the correct spectrum. Their mean will be systematically deviant, and by weighting Hartree (the dumbest reasonable “physics based approach”) the same as LDF in the “ensemble” average, you guarantee that the error in this “mean” will be significant.
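The cost of weighting the crudest model equally with the best can be put in numbers (the "spectrum" values here are purely illustrative, not real Hartree or LDF results): the equal-weight mean inherits roughly half of the crude model's error and is strictly worse than the best model alone.

```python
# Hypothetical "spectra": a crude model far from truth, a tuned one close.
truth = 10.0
hartree_like = 7.0       # badly underestimates the correlation hole
ldf_like = 9.9           # semi-empirical, fit to the measured value

# The equal-weight "ensemble" mean splits the difference...
ensemble = (hartree_like + ldf_like) / 2
print(truth - ensemble)  # ...so its error is about half the crude model's

# ...and is therefore strictly worse than just using the best model.
best_error = abs(truth - ldf_like)
ensemble_error = abs(truth - ensemble)
```

Nothing about averaging can beat the best member here, because the members' errors are systematic and one-sided rather than random and cancelling.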

    Suppose one did not know (as, at one time, we did not know) which of the models gave the best result. Suppose that nobody had actually measured the spectrum of Carbon, so its empirical quantum structure was unknown. Would the ensemble mean be reasonable then? Of course not. I presented the models in the way physics itself predicts improvement — adding back details that ought to be important that are omitted in Hartree. One cannot be certain that adding back these details will actually improve things, by the way, because it is always possible that the corrections are not monotonic (and eventually, at higher orders in perturbation theory, they most certainly are not!) Still, nobody would pretend that the average of a theory with an improved theory is “likely” to be better than the improved theory itself, because that would make no sense. Nor would anyone claim that diagrammatic perturbation theory results (for which there is a clear a priori derived justification) are necessarily going to beat semi-heuristic methods like LDF because in fact they often do not.

    What one would do in the real world is measure the spectrum of Carbon, compare it to the predictions of the models, and then hand out the ribbons to the winners! Not the other way around. And since none of the winners is going to be exact — indeed, for decades and decades of work, none of the winners was even particularly close to observed/measured spectra in spite of using supercomputers (admittedly, supercomputers that were slower than your cell phone is today) to do the computations — one would then return to the drawing board and code entry console to try to do better.

    Can we apply this sort of thoughtful reasoning to the spaghetti snarl of GCMs and their highly divergent results? You bet we can! First of all, we could stop pretending that “ensemble” mean and variance have any meaning whatsoever by not computing them. Why compute a number that has no meaning? Second, we could take the actual climate record from some “epoch starting point” — one that does not matter in the long run, and we’ll have to continue the comparison for the long run because in any short run from any starting point noise of a variety of sorts will obscure systematic errors — and we can just compare reality to the models. We can then sort out the models by putting (say) all but the top five or so into a “failed” bin and stop including them in any sort of analysis or policy decisioning whatsoever unless or until they start to actually agree with reality.
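The "compare reality to the models and bin the failures" step is mechanically just a ranking (the observed series and model runs below are made up solely for illustration):

```python
# Hypothetical scoring: rank model projections by RMSE against an
# observed series and keep only the closest few for further analysis.
observed = [0.1, 0.12, 0.11, 0.15, 0.13]           # made-up anomaly record

projections = {                                    # made-up model runs
    "model_a": [0.1, 0.13, 0.12, 0.14, 0.14],
    "model_b": [0.3, 0.35, 0.4, 0.45, 0.5],
    "model_c": [0.11, 0.1, 0.13, 0.16, 0.12],
    "model_d": [0.5, 0.6, 0.7, 0.8, 0.9],
}

def rmse(pred, obs):
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

ranked = sorted(projections, key=lambda m: rmse(projections[m], observed))
keep, failed = ranked[:2], ranked[2:]              # top-k survive
print(keep, failed)
```

The "failed" bin is not deleted, only excluded from further averaging and policy use until its members start tracking observations again.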

    Then real scientists might contemplate sitting down with those five winners and meditate upon what makes them winners — what makes them come out the closest to reality — and see if they could figure out ways of making them work even better. For example, if they are egregiously high and diverging from the empirical data, one might consider adding previously omitted physics, semi-empirical or heuristic corrections, or adjusting input parameters to improve the fit.

    Then comes the hard part. Waiting. The climate is not as simple as a Carbon atom. The latter’s spectrum never changes, it is a fixed target. The former is never the same. Either one’s dynamical model is never the same and mirrors the variation of reality or one has to conclude that the problem is unsolved and the implementation of the physics is wrong, however “well-known” that physics is. So one has to wait and see if one’s model, adjusted and improved to better fit the past up to the present, actually has any predictive value.

    Worst of all, one cannot easily use statistics to determine when or if one’s predictions are failing, because damn, climate is nonlinear, non-Markovian, chaotic, and is apparently influenced in nontrivial ways by a world-sized bucket of competing, occasionally cancelling, poorly understood factors. Soot. Aerosols. GHGs. Clouds. Ice. Decadal oscillations. Defects spun off from the chaotic process that cause global, persistent changes in atmospheric circulation on a local basis (e.g. blocking highs that sit out on the Atlantic for half a year) that have a huge impact on annual or monthly temperatures and rainfall and so on. Orbital factors. Solar factors. Changes in the composition of the troposphere, the stratosphere, the thermosphere. Volcanoes. Land use changes. Algae blooms.

    And somewhere, that damn butterfly. Somebody needs to squash the damn thing, because trying to ensemble average a small sample from a chaotic system is so stupid that I cannot begin to describe it. Everything works just fine as long as you average over an interval short enough that you are bound to a given attractor, oscillating away, things look predictable and then — damn, you change attractors. Everything changes! All the precious parameters you empirically tuned to balance out this and that for the old attractor suddenly require new values to work.

    This is why it is actually wrong-headed to acquiesce in the notion that any sort of p-value or R^2 derived from an AR5 mean has any meaning. It gives up the high ground (even though one is using it for a good purpose, trying to argue that this “ensemble” fails elementary statistical tests). But statistical testing is a shaky enough theory as it is, open to data dredging and horrendous error alike, and that’s when it really is governed by underlying IID processes (see “Green Jelly Beans Cause Acne”). One cannot naively apply a criterion like rejection if p < 0.05, and all that means under the best of circumstances is that the current observations are improbable given the null hypothesis at 19 to 1. People win and lose bets at this level all the time. One time in 20, in fact. We make a lot of bets!
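The "19 to 1" point can be checked by simulation: when the null hypothesis is true by construction, a nominal p < 0.05 test still "rejects" about one time in twenty, exactly as designed.

```python
import random

random.seed(42)

# Simulate many experiments where the null hypothesis is TRUE and count
# how often a nominal p < 0.05 z-test still "rejects".
trials, hits, n = 2000, 0, 30
for _ in range(trials):
    sample = [random.gauss(0, 1) for _ in range(n)]
    mean = sum(sample) / n
    # z-test against the true mean 0; |z| > 1.96 <=> two-sided p < 0.05
    z = mean / (1 / n ** 0.5)
    if abs(z) > 1.96:
        hits += 1

print(hits / trials)   # hovers around 0.05: one false "win" in twenty
```

A rejection at p < 0.05 is a 19-to-1 bet against the null, nothing more, and that is under ideal IID conditions the climate record does not satisfy.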

    So I would recommend — modestly — that skeptics try very hard not to buy into this and redirect all such discussions to questions such as why the models are in such terrible disagreement with each other, even when applied to identical toy problems that are far simpler than the actual Earth, and why we aren’t using empirical evidence (as it accumulates) to reject failing models and concentrate on the ones that come closest to working, while also not using the models that are obviously not working in any sort of “average” claim for future warming. Maybe they could hire themselves a Bayesian or two and get them to recompute the AR curves, I dunno.

    It would take me, in my comparative ignorance, around five minutes to throw out all but the best 10% of the GCMs (which are still diverging from the empirical data, but arguably are well within the expected fluctuation range on the DATA side), sort the remainder into top-half models that should probably be kept around and possibly improved, and bottom half models whose continued use I would defund as a waste of time. That wouldn’t make them actually disappear, of course, only mothball them. If the future climate ever magically popped back up to agree with them, it is a matter of a few seconds to retrieve them from the archives and put them back into use.

    Of course if one does this, the GCM predicted climate sensitivity plunges from the totally statistically fraudulent 2.5 C/century to a far more plausible and still possibly wrong ~1 C/century, which — surprise — more or less continues the post-LIA warming trend with a small possible anthropogenic contribution. This large a change would bring out pitchforks and torches as people realize just how badly they’ve been used by a small group of scientists and politicians, how much they are the victims of indefensible abuse of statistics to average in the terrible with the merely poor as if they are all equally likely to be true with randomly distributed differences.

    rgb

  39. The NYT says the absence of warming arises because skeptics cherry-pick 1998, the year of the Great el Niño, as their starting point.

    Going back to 1998 is small potatoes. Let’s go back 1000 years, 2000, 5000, even back to the last interglacial. The best data we have show that all of those times were warmer than now.

    17 years? Piffle.

  40. rgbatduke says:
    June 13, 2013 at 7:20 am

    Saying that we need to wait for a certain interval in order to conclude that “the models are wrong” is dangerous and incorrect for two reasons. First — and this is a point that is stunningly ignored — there are a lot of different models out there, all supposedly built on top of physics, and yet no two of them give anywhere near the same results!

    As I understand it, running the same model twice in a row with the same parameters won’t even produce the same results. But somehow averaging the results together is meaningful? Riiiight. About as meaningful as a “global temperature”, which is to say, not at all.

  41. Steven said:
    “Since when is weather/climate a linear behavorist?… I realize this is a short timescale and things may look linear but they are not. Not even close.”

    Absolutely spot-on Steven. Drawing lines all over data that is patently non-linear in its behaviour is a key part of the CAGW hoax.

    RichardLH, the context of the discussion is Monckton’s statement that “On Dr. Santer’s 17-year test, then, the models may have failed. A rethink is needed.” This statement is based on an (IMHO probably intentional) conflation of the measured trend, which is what Santer was talking about, with whether that trend is statistically significant or not. How can a model be falsified by a value of the trend that isn’t significantly different from the expected one?

  43. This whole argument is the most ridiculous thing I’ve ever seen…
    …who in their right mind would argue with these nutters when you start out by letting them define what’s “normal”

    You guys have sat back and let the enemy define where that “normal” line is drawn…
    ….and then you argue with them that it’s above or below “normal”

    Look at any paleo temp record……and realize how stupid this argument is

  44. Thomas says:
    June 13, 2013 at 7:29 am
    RichardLH, the context of the discussion is Monckton’s statement that “On Dr. Santer’s 17-year test, then, the models may have failed. A rethink is needed.”

    I suspect that if you visit the link provided then you might discover that there is indeed some supporting evidence from the satellite record for his observation.

    Do not be forlorn, deniers. The decline from a Solar Cycle peak to the valley typically causes a global temperature reduction of about 0.1 C.

    Unfortunately, we will all suffer if the global temperature decreases. Paradoxically, fewer hurricanes [cooler ocean temperatures] but greater crop damage due to cold temperature swings.

    This is one case where I really wish that I was wrong. Heat bothers me, cold scares me. I’m too old to transition my life style and become an Eskimo.

    Warmers went full stupid on predictions and are paying the price now; skeptics shouldn’t emulate the behavior. Doing so validates the junk nature of the temperature stats as being linked to human CO2 and carbon, which is total speculation and not supported by long-term proxies.

    AGW is an emotional political argument; playing the make-believe “it’s about science” meme only helps continue what should have been dead on arrival in the first place. A hundred-year temp chart, given the tiny scale involved, is fundamentally meaningless from the science view. That the models failed isn’t a surprise and is a cost to advocates, but making claims about CO2 impact or not based on the temp stat validates the canard of AGW at the same time as it tries to be critical of it. The stat has nothing to say about “cause” one way or the other. It’s o.k. to point out warmer failure and manipulation on the topic, but it has nothing to say about “why” things are the way they are in climate.

    I thought the mitigation film support from Monckton suffered the same flaws. Why validate the mythology of your opponent as a tactic? Looks like a rabbit hole.

  47. rgbatduke said in part

    ‘So I would recommend — modestly — that skeptics try very hard not to buy into this and redirect all such discussions to questions such as why the models are in such terrible disagreement with each other, even when applied to identical toy problems that are far simpler than the actual Earth..’

    I live 15 miles from the Met office, who constantly assure us that their 500 year model projections of future climate states are more accurate than their two or three day forecasts. Why this is not challenged more I don’t know, because we see the results of modelling every day in the weather forecasts, and even during a single day of feeding in new information the output (the forecast) has changed considerably and bears no relation to the original.

    We have a met office app, and the weather it gives us for the weekend will have changed twenty times by the time we actually get there. The ‘likely climate’ in 20, 50 or 500 years’ time is infinitely more difficult to know than what is going to happen in two days’ time at a place 15 miles from their head office. The simple answer is that they have no idea of all the components of the climate, and their models are no more able to forecast the climate of future decades than the weather of the coming month.

    tonyb

  48. Latitude is quite right. The models are all structured wrongly, and their average uncertainties take no account of the structural uncertainties. In order to make anthropogenic climate change a factor important enough to justify their own existence and to drive government CO2 policies, the IPCC and its modellers had to perform the following mental gymnastics to produce or support a climate sensitivity to a doubling of CO2 of about 3 degrees.
    a) Make the cause follow the effect, i.e., even though CO2 changes follow temperature changes, they simply assume illogically that CO2 change is the main driver.
    b) The main GHG – water vapour – also follows temperature independently of CO2, yet the effect of water vapour was added on to the CO2 effect as a CO2 feedback for purposes of calculating CO2 sensitivity.
    c) Ignore the very serious questions concerning the reliability of the ice core CO2 data. From the Holocene peak temperature to the Little Ice Age, for example, one might well conclude from the ice core CO2 data that if CO2 was driving temperature it is an Ice House, not a Greenhouse, gas on multi-millennial scales.
    The temperature projections of any models based on these irrational and questionable assumptions have no place in serious discussion. All the innumerable doom-laden papers on impacts in the IPCC reports and elsewhere (e.g. the Stern report) which use these projections as a basis are a complete and serious waste of time and money. Until you know within well defined limits what the natural variability actually is, it is not possible to estimate the sensitivity of global temperatures to anthropogenic CO2 with any accuracy useful for policy.
    Unfortunately the establishment scientists have gambled their scientific reputations and positions on these illogical propositions, and are so far out on the limbs of the tree of knowledge that they will find it hard to climb back before their individual boughs break.

  49. My understanding is that we are slowly rising out of the Little Ice Age, so the ‘natural’ temperature condition should be a slight upwards slope – about 0.5 deg C per century.

    If this rise is subtracted from the record, how long does the ‘flat’ period then become? A quick eyeball using woodfortrees suggests that it starts around 1995 – giving us 18 flat years so far….
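The subtraction dodgy geezer describes is mechanically simple (the series and the 0.5 °C/century figure below are illustrative assumptions, not real HadCRUT data): remove the assumed post-LIA recovery from the anomalies and re-fit the residual trend.

```python
# Hypothetical anomaly series rising at exactly 0.5 C/century (0.005 C/yr)
years = list(range(1980, 2014))
anomalies = [0.005 * (y - 1980) for y in years]

# Subtract the assumed 'natural' post-LIA recovery of 0.5 C/century
residual = [a - 0.005 * (y - years[0]) for y, a in zip(years, anomalies)]

# Least-squares trend of the residual: flat by construction here
n = len(years)
mx = sum(years) / n
my = sum(residual) / n
slope = sum((x - mx) * (r - my) for x, r in zip(years, residual)) \
        / sum((x - mx) ** 2 for x in years)
print(slope)   # 0.0: nothing is left once the assumed recovery is removed
```

With real data the residual trend would not be exactly zero, but the same detrending shows how far back the "flat" period extends once a natural recovery slope is assumed.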

  50. dodgy geezer

    CET has been shown to be a reasonable proxy for global temperatures and here it is from 1538 (my reconstruction) with the Met office instrumental period commencing 1659. It shows a steady rise throughout.

    http://wattsupwiththat.com/2013/05/08/the-curious-case-of-rising-co2-and-falling-temperatures/

    There has been a substantial downturn here over the last decade which presumably will eventually be reflected in the global temperature
    tonyb

  51. Looking at the satellite data sets gives 23 years. Remove the ENSO spikes and there has been a cooling since 1880.

  52. I suspect that the inability of climate science to cross-calibrate the various global estimated temperature data sets (satellite, balloon, thermometer) or reconcile any of them to their models is at the heart of the problem.

    It does not bode well that the trends distribute Satellite – Thermometer – Model.

  53. “From now on, I propose to publish a monthly index of the variance between the IPCC’s predicted global warming and the thermometers’ measurements…In any event, the index will limit the scope for false claims..”

    What a beezer wheeze, Sir Christopher. That’ll defrock the rank amateurs, charlatans and criminals… :)

  54. rgbatduke says:
    June 13, 2013 at 7:20 am

    You…uh, erm…you mean the science ISN’T settled? :0 Nice! Great idea on sorting out the models.

  55. [snip - Greg House under a new fake name. Verified by network path. Mr. House has been shown the door but decided to come back as a fake persona preaching the Slayer/Principia meme. -Anthony]

  56. “But if it’s colder than normal, that’s proof of warming.”
    We know they say that, but just now in this video a UN Climate delegate at Bonn says it so explicitly and idiotically that it almost blows your mind. Here the delegate insists that the freezing German summer weather is proof of warming. Insane:

  57. RGBATDUKE says: “…it looks like the frayed end of a rope,” Ahhh, that’ll be the rope that we give ‘em enough of to hang themselves…

  58. @Dodgy Geezer

    The idea that we are still coming out of the last ice age is a common misperception. The end of the last ice age happened at the beginning of the current Holocene period about 12,000 years ago. Since then temperatures have actually gone down a bit and we have been very stable for the last 6000 years or so.

    unless you live in Greenland, of course. . .

  59. cwon14: I agree totally. CO2 has NO effect whatsoever on temperature, confirmed by Salby et al and many others. To continue to argue with warmists that there is no correlation and put up graphs etc. is, I believe, a waste of time and just pandering to them, which is exactly what they want.

  60. [snip - Greg House under a new fake name. Verified by network path. Mr. House has been shown the door but decided to come back as a fake persona preaching the Slayer/Principia meme. -Anthony]

  61. “””””…..Mark Hladik says:

    June 13, 2013 at 6:02 am

    If memory serves, it seems that the Meteorological community has used the ‘thirty-year’ time frame for standardizing its records, in order to classify climate and climate zones. I suspect that meteorologists might soon suggest that a ‘fifty-year’ or even a ‘sixty-year’ time frame become the standard reference frame.

    That would be one way to get around Gavin’s “… seventeen year …” test.

    Or, we could just adjust the data some more, to make them fit the models … … … “””””

    Well there’s a very good reason for that “thirty year time frame” for climate results to become “real”, and also a good reason it should increase.

    A recent study published in (I believe) Physics Today analyzed the career outcomes for US Physics PhD “graduates”.

    The basic bottom line is that one third of US Physics PhDs eventually land a permanent real job that utilizes their (limited) skill set. About 5% find temporary work. But 2/3 of all of them end up as lifelong post-doc fellows at some institute or other, never ever using their science learning for anything useful.

    By going into the “climate field”, with its 30 year “payoff” time scale, these folks can live off grants for their full career, and really never need to show any believable results, before the next generation of unemployable post-doc fellows, take their place.

    As current socialist programs slowly strangle the American economy, making it increasingly difficult for the “middle class” to ever achieve a viable retirement state, the mean career length, must necessarily increase, so the time base for “meaningful” climate results, will have to increase.

    Recent articles about the fortunes, or lack thereof, of the LLNL NIF, the so-called National Ignition Facility, are hinting that this much-ballyhooed boondoggle will never ever achieve ignition break-even.

    We were told it had a 70% chance of igniting, when the project was approved; now they are saying less than 50%. There is a suggestion that they need to go to a somewhat larger DT fuel pellet.

    Oh but that is going to require about a 5X increase in the size and power of the laser. Well think how many post-doc fellows that can keep busy.

    We already know just how big a Thermo-nuclear energy source has to be to work properly; and also how far away from human habitation it needs to be for safety; about 93 million miles.

  62. Even taking things down to the very simple basics, one cannot dissuade the warmists.
    If you have a theory that rising man-made CO2 is causing global warming, and you go ahead with models to show that this is possible/true, then your figures MUST be in agreement with observations. Global warming is at a standstill, but CO2 levels rise… therefore your theory is WRONG.

  63. @ rgb@duke. I took the liberty of sending an email to Judy Curry asking that she take a look at your comment and consider asking you to write a tightened up version to be used as a discussion topic at ClimateEtc. Please give this some thought and ping her at her home institution to the Southwest of you. (Okay, West Southwest.)

    Thank you,

    RayG

  64. I want to zero in on the most important line stated, “It is better to focus on the ever-widening discrepancy between predicted and observed warming rates.”

    In one sentence Monckton has zeroed in on the total failure of the alarmist group: the models are wrong. They have overestimated the impact of increased CO2.

  65. Jai Mitchell said about a comment from Dodgy Geezer

    ‘The idea that we are still coming out of the last ice age is a common misperception. The end of the last ice age happened at the beginning of the current Holocene period about 12,000 years ago. Since then temperatures have actually gone down a bit and we have been very stable for the last 6000 years or so.’

    DG said nothing about the ‘Ice age’. He specifically referenced the ‘Little Ice Age’, meaning the period of intermittent intense cold that ended with the glacier retreat of 1750/1850. That term is the one you would have been better employed commenting on, if you had felt like being pedantic:

    “The term Little Ice Age was originally coined by F Matthes in 1939 to describe the most recent 4000 year climatic interval (the Late Holocene) associated with a particularly dramatic series of mountain glacier advances and retreats, analogous to, though considerably more moderate than, the Pleistocene glacial fluctuations. This relatively prolonged period has now become known as the Neoglacial period.’ Dr Michael Mann

    http://www.meteo.psu.edu/holocene/public_html/shared/articles/littleiceage.pdf

    tonyb

  66. jai mitchell says:
    June 13, 2013 at 9:43 am

    @Dodgy Geezer

    The idea that we are still coming out of the last ice age is a common misperception. The end of the last ice age happened at the beginning of the current Holocene period about 12,000 years ago. Since then temperatures have actually gone down a bit and we have been very stable for the last 6000 years or so.

    unless you live in Greenland, of course. . .
    ——————————

    Dodgy said Little Ice Age, not the “last ice age”.

    Earth is at present headed toward the next big ice age (alarmists in the ’70s were right about the direction but wrong as to time scale). Global temperatures are headed down, long-term. The trend for at least the past 3000 years, since the Minoan Warm Period, if not 5000, since the Holocene Optimum, is decidedly down. The short-term trend, since the depths of the Little Ice Age about 300 years ago, is slightly up, of course with decadal fluctuations cyclically above & below the trend line.

  67. @rgbatduke says:
    June 13, 2013 at 7:20 am

    “Let me repeat this. It has no meaning! It is indefensible within the theory and practice of statistical analysis. You might as well use a ouija board as the basis of claims about the future climate history as the ensemble average of different computational physical models that do not differ by truly random variations and are subject to all sorts of omitted variable, selected variable, implementation, and initialization bias. The board might give you the right answer, might not, but good luck justifying the answer it gives on some sort of rational basis.”

    ———————————————————————————————————————

    Whilst grasping the basic principles of what you say, I cannot comment on what might pass for “legitimate” contemporary interpretation of principle and methodology as actually practiced and accepted within the wide range of applications across many disciplines by those claiming an expertise and the right to do so.

    I am fairly confident that these are in practice “elastic”, depending on requirements, and that where justification is required, those promoting and defending such “desirable” formulations bring more energy and commitment, and invoke the “particularities” of their endeavors, to which others are not privy, in order to neutralize any queries. This is of course antithetical to the concept of knowledge, let alone a body of it.

    This is pervasive across any field of activity in which an expertise based on specialist understanding is claimed. It cannot be viewed as being isolated from the promotion into classified Disciplines of such things as observation and commentary on civic affairs into political “science”, which is in actuality just a matter of opinion and fluid interaction. Such things actively incorporate the justifying “truth is what you make it”, whilst at the same time elevating this to the level of the immutable, governed by autonomous laws, in order to dignify themselves and as a mechanism to prevail, both as an activity in itself and in the positioning of the proponents. There can be no appeal to first principles that are accepted as defining the limits of interpretation, because they don’t exist.

    “Climate Science” as an orthodoxy, and as a field, as opposed to investigations into particular areas that may have relevance to climate, does not exist as science. What is most obvious and disturbing about AGW is its lack of intellectual underpinning – in fact its defiance of the basic application of intelligence, which you highlight in this abuse of the specific rigor required by statistical methodology.

    You are right to say: “do not engage”. It is essential to refuse to concede the legitimacy of interaction with those who claim it when such people are palpably either not sincere, not competent, or not what they claim to be. To state and restate the fundamental basis of inadequacy is what is obligatory. A lack of acknowledgement, and an unwillingness to rethink a position based on this, tells everyone who is willing and capable of listening everything they need to know about such people and the culture that is their vehicle. You do not cater to the dishonest, the deceptive, or the inadequate seeking to maintain advantage after having insinuated themselves, when it is clear what they are. You exclude them.

    To be frustrated, although initially unavoidable since it derives from the assumption that others actually have a shared base in respect for the non-personal discipline of reality, is not useful. It is only when the realization occurs that what within those parameters is a “mistake” is not, and will not be, seen as a mistake by its proponents – whether through inadequacy or design – that clarity of understanding and purpose can emerge.

    The evidence is constant and overwhelming that “Climate Science” and “Climate Scientists” are not what they claim to be. Whether this is by incompetence or intent is in the first instance irrelevant. They are unfit. What they are; what they represent; what they compel the world to; is degradation.

    The blindingly obvious can be repeatedly pointed out to such people to no effect whatsoever.

    They must be stopped. They can only be stopped by those who will defend and advance the principles which they have subverted and perverted. This demands hostility and scathing condemnation. This is not a time in history for social etiquettes, whether general or academic.

  68. StephenP says:

    June 13, 2013 at 6:28 am

    “Rather off-topic, but there are 4 questions that I would like the answer to:
    1. We are told the concentration of CO2 in the atmosphere is 0.039%, but what is the concentration of CO2 at different heights above the earth’s surface? As CO2 is ‘heavier than air’ one would expect it to be at higher percentages near the earth’s surface.
    2. Do the CO2 molecules rise as they absorb heat during the day from the sun? And how far?
    3. Do the CO2 molecules fall at night when they no longer get any heat input from the sun?
    4. When a CO2 molecule is heated, does it re-radiate equally in all directions, assuming the surroundings are cooler, or does it radiate heat in proportion to the difference in temperature in any particular direction?
    Any comments gratefully received…”

    Stephen; let’s start at #4. That’s a bit of a tricky question. In an atmospheric situation, any time any molecule or atom “radiates” (they all do), there is no preferred direction for the photon to exit. Arguably, the molecule has no knowledge of direction, or of any conditions of its surroundings, including no knowledge of which direction might be the highest or lowest Temperature gradient. So a radiated photon is equally likely to go in any direction.
    As to a CO2 molecule which has captured an LWIR photon, in the 15 micron wavelength region for example, one could argue that the CO2 molecule has NOT been heated by such a capture; rather, its internal energy state has changed, and it is now likely oscillating in its 15 micron “bending mode”, actually one of two identical “degenerate” bending modes.
    In the lower atmosphere, it is most likely that the CO2 molecule will soon collide with an N2 molecule, an O2 molecule, or even an Ar atom. It is most unlikely to collide with another CO2 molecule: at 400 ppm there are 2500 molecules for each CO2, so the nearest CO2 is likely to be 13-14 molecular spacings away; our example molecule doesn’t even know another like it is there.

    When such a collision occurs, our CO2 molecule is likely to forget about doing the elbow bend, and it will exchange some energy with whatever it hit. Maybe the LWIR photon is re-emitted at that point, perhaps with a Doppler shift in frequency, and over a lot of such encounters the atmospheric Temperature will change; probably an increase. The CO2 molecule itself really doesn’t have a Temperature; that is a macro property of a large assemblage of molecules or atoms.

    But the bottom line is that an energy exchange in such an isolated event, is likely to be in any direction whatsoever.
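    Those spacing figures are easy to check; a minimal back-of-envelope sketch, assuming only the 400 ppm mixing ratio quoted above:

```python
# Back-of-envelope check of the figures quoted above: at 400 ppm each
# CO2 molecule shares the air with 1/400e-6 = 2500 molecules, so the
# typical distance to the next CO2 is roughly the cube root of 2500,
# measured in mean molecular spacings.

ppm = 400
molecules_per_co2 = 1 / (ppm * 1e-6)       # 2500 molecules per CO2
spacing = molecules_per_co2 ** (1 / 3)     # in units of the mean molecular spacing

print(f"{molecules_per_co2:.0f} molecules per CO2")
print(f"~{spacing:.1f} molecular spacings to the nearest CO2")  # ~13.6
```

    Which agrees with the 13-14 spacings cited: the cube root of 2500 is about 13.6.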

    We are told that CO2 is “well mixed” in the atmosphere. I have no idea what that means. At Mauna Loa in Hawaii, the CO2 concentration cycles about 6 ppm peak-to-peak each year; at the North Pole the cycle is about 18 ppm, and at the South Pole about 1 ppm in the opposite phase. That’s not my idea of well mixed.

    A well mixed mixture, would have no statistically significant change in composition between samples taken anywhere in the mixture; well in my view anyway.

    I suspect that there is a gradient in CO2 abundance with altitude. With all the atmospheric instabilities, I doubt that it is feasible to measure it.

  69. It’s his Lordship this, his Lordship that, “he’d deny a blackened pot”
    But there for all the world to see, he shows the MET wot’s wot

  70. rgbatduke says (June 13, 2013 at 7:20 am): [snip]

    Wow. I read every word, understood about half, concur with the rest. The part I didn’t understand took me way, way back to college physics, when we solved the Schrödinger equation for the hydrogen atom. That was the closest I ever came to being a physicist. :-) While I enjoyed the trip down memory lane, if you expand this comment into an article, I’d suggest using an example more familiar to most readers than the physics of a carbon atom. :-)

    I looked up the xkcd comic for green jelly beans. During my “biostatistician” period, I was actually involved in a real life situation similar to that–’nuff said.

    I remember a thread on WUWT in which a commenter cherry-picked an IPCC GCM that came closest to the (then) trend of the so-called global average temperature. Other commenters asked why the IPCC chose to use their “ensemble” instead of this model. Apparently the model that got the temperature “almost right” was worse than the other models at predicting regional cloud cover, precipitation, humidity, temperature patterns, etc. Green jelly beans all over again.
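    The green-jelly-bean effect is easy to reproduce; a minimal sketch, assuming 20 independent tests run on pure noise at the usual alpha of 0.05:

```python
# A sketch of the xkcd "green jelly beans" effect: run many independent
# significance tests on pure noise and, at alpha = 0.05, spurious
# "discoveries" turn up at a rate of about n_tests * alpha per batch.
import random

random.seed(0)
alpha = 0.05
n_tests = 20      # twenty jelly bean colors
n_trials = 2000   # repeat the whole 20-test experiment many times

false_positives = 0
for _ in range(n_trials):
    # each "test" on pure noise comes up significant with probability alpha
    hits = sum(1 for _ in range(n_tests) if random.random() < alpha)
    false_positives += hits

# expectation is n_tests * alpha = 1.0 spurious hit per batch of 20
print(f"average spurious 'significant' results per {n_tests} tests: "
      f"{false_positives / n_trials:.2f}")
```

    One cherry-picked "winner" out of twenty models is exactly what chance alone predicts.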

  71. rgbatduke says at June 13, 2013 at 7:20 am
    A lot of very insightful information.

    Of course averaging models ignores what the models are meant to do: each is meant to represent some understanding of the climate. Muddling them up only works if they all have exactly the same understanding.
    That is, either they are all known to be perfect, in which case they would all be identical, as there is only one real climate;
    or they are all known to be completely unrelated to the actual climate, that is, assumed to be 100% wrong in a purely random way. If they were systematically wrong they couldn’t legitimately be mixed up equally.

    So what does the fact that this mixing has been done say about expert opinion on the worth of the climate models?

    My only fault with the comment by rgbatduke is that it was a comment not a main post. It deserves to be a main post.

  72. As I understand it, running the same model twice in a row with the same parameters won’t even produce the same results. But somehow averaging the results together is meaningful? Riiiight. As meaningful as a “global temperature” which is not at all.

    This, actually, is what MIGHT be meaningful. If the models perfectly reasonably do “Monte Carlo Simulation” by adding random noise to their starting parameters and then generate an ensemble of answers, the average is indeed meaningful within the confines of the model, as is the variance of the individual runs. Also, unless the model internally generates this sort of random noise as part of its operation, it will indeed produce the same numbers from the same exact starting point (or else the computer it runs on is broken). Computer code is deterministic even if nature is not. This isn’t what I have a problem with. What I object to is a model that predicts a warming that fails at the 2-3 sigma level for its OWN sigma to predict the current temperatures outside still being taken seriously and averaged in to “cancel” models that actually agree at the 1 sigma level as if they are both somehow equally likely to be right.
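    The distinction is easy to demonstrate with a toy in place of a real GCM; a minimal sketch, using a chaotic logistic map as a hypothetical stand-in for a deterministic model:

```python
# Sketch of the distinction drawn above: a deterministic model rerun from
# the same exact starting point reproduces itself bit-for-bit, while a
# Monte Carlo ensemble over perturbed *initial conditions* yields a mean
# and variance that are meaningful within that one model.
import random

def toy_model(x0, steps=100):
    """A deterministic toy iteration standing in for a single GCM run."""
    x = x0
    for _ in range(steps):
        x = 3.7 * x * (1 - x)   # chaotic logistic map: tiny input changes diverge
    return x

# determinism: identical start -> identical result, every time
assert toy_model(0.3) == toy_model(0.3)

# Monte Carlo over tiny perturbations of the initial condition
random.seed(1)
runs = [toy_model(0.3 + random.gauss(0, 1e-6)) for _ in range(500)]
mean = sum(runs) / len(runs)
var = sum((r - mean) ** 2 for r in runs) / len(runs)
print(f"ensemble mean = {mean:.3f}, variance = {var:.3f}")
```

    The spread of those runs is a property of one model and says nothing about averaging across structurally different models.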

    The models that produce the least average warming in the whole collection that contributes to AR5 are the only ones that have a reasonable chance of being at least approximately correct. Ones that still predict a climate sensitivity from 3 to 5 C have no place even contributing to the discussion. This is the stuff that really has been falsified (IMO).

    Also, global temperature is a meaningful measure that might well be expected to be related to both radiative energy balance and the enthalpy/internal energy content of the Earth. It is not a perfect measure by any means, as temperature distribution is highly inhomogeneous and variable, and it isn’t linearly connected with local internal energy because a lot of that is tied up in latent heat, and a lot more is constantly redistributing among degrees of freedom with vastly different heat capacities, e.g. air, land, ocean, water, ice, water vapor, vegetation.

    This is the basis of the search for the “missing heat” — since temperatures aren’t rising but it is believed that the Earth is in a state of constant radiative imbalance, the heat has to be going somewhere where it doesn’t raise the temperature (much). Whether or not you believe in the imbalance (I’m neutral as I haven’t looked at how they supposedly measure it on anything like a continuous basis if they’ve ever actually measured it accurately enough to get out of the noise) the search itself basically reveals that Trenberth actually agrees with you. Global temperature is not a good metric of global warming because one cannot directly and linearly connect absorbed heat with surface temperature changes — it can disappear into the deep ocean for a century or ten, it can be absorbed by water at the surface of the ocean, be turned into latent heat of vaporization, be lost high in the troposphere via radiation above the bulk of the GHE blanket to produce clouds, and increase local albedo to where it reflects 100x as much heat as was involved in the evaporation in the first place before falling as cooler rain back into the ocean, it can go into tropical land surface temperature and be radiated away at enhanced rates from the T^4 in the SB equation, or it can be uniformly distributed in the atmosphere and carried north to make surface temperatures more uniform. Only this latter process — improved mixing of temperatures — is likely to be “significantly” net warming as far as global temperatures are concerned.

    rgb

  73. ‘The models that produce the least average warmingin the whole collection that contributes to AR5 are the only ones that have a reasonable chance of being at least approximately correct. Ones that still predict a climate sensitivity from 3 to 5 C have no place even contributing to the discussion. This is the stuff that really has been falsified (IMO).”

    The best estimates of ECS come from paleo data and then observational data. For ECS they range from 1C to 6C.

    The climate models range from 2.1C to 4.4C for ECS and much lower for TCR.

    Finally, there is no such thing as falsification. There is confirmation and disconfirmation. Even Popper realized this in the end, as did Feynman.

  74. Remember that it is global temperature, not energy imbalance, that is the factor expected to be responsible for the feedbacks that turn the gradual changes we have barely noticed into a global catastrophe.

    If the energy being absorbed doesn’t cause the global temperature changes then the proposed mechanisms for the feedbacks – like increased water vapour in the atmosphere – don’t work.

    And therefore the priority given to the field of Climatology needs to be reassessed.

  75. rgbatduke says:
    June 13, 2013 at 11:42 am
    “…one cannot directly and linearly connect absorbed heat with surface temperature changes — it can disappear into the deep ocean for a century or ten…”

    Disappear? How? Will someone PLEASE explain the mechanism to me?

    One must always remember the mandate of the IPCC when reviewing information they provide. They are not mandated to study all possible causes of climate change, only human-caused climate change:

    “The Intergovernmental Panel on Climate Change (IPCC) was established by World Meteorological Organization and United Nations Environmental Programme (UNEP) in 1988 to assess scientific, technical, and socioeconomic information that is relevant in understanding human-induced climate change, its potential impacts, and options for mitigation and adaptation.”

    Hence, the whole concept of open science within the IPCC is not relevant since they are working with a stated and clear agenda.

  77. climatereason says:
    June 13, 2013 at 11:29 am

    Luther Wu

    Byron will be turning in his grave

    tonyb
    ________________
    I’m sure you meant Kipling…

  78. The secret to our success, such as it is, is the ability to adapt to changing conditions.
    If conditions were unchanging, what would be the point of random mutations in DNA?

  79. Steven Mosher says (June 13, 2013 at 11:54 am): “Finally, there is no such thing as falsification. There is confirmation and disconfirmation. even Popper realized this in the end as did Feynman.”

    Perhaps you could explain the difference between “falsification” and “disconfirmation”, or link a reference that does. Preferably at kindergarten level. :-)

  80. rgbatduke says: “One cannot generate an ensemble of independent and identically distributed models that have different code.”

    Yep. I guess that must be like having an ‘average car’ and then telling children that that’s what all cars really look like…. Now that would be something to see, an average car. (Bearing in mind, an Edsel might well be in the mix somewhere).

  81. rgbatduke says:
    June 13, 2013 at 7:20 am
    Saying that we need to wait for a certain interval in order to conclude that “the models are wrong” is dangerous and incorrect for two reasons.

    Thank you for your post; it is brilliant and should be elevated to a blog post itself. The idea you present is only logical, and it is a shame this has not already been done.
    It makes no sense to continue to use models which are so far from reality. Only models which have been validated against real data should continue to be used.
    That is what scientists do all the time… in science. They scrap models that have been invalidated and focus on those which give the best results; they do not continue to use an ensemble of models of which 95% go off into Nirvana, and then draw a line somewhere between 95% Nirvana and 5% reality.

    Then real scientists might contemplate sitting down with those five winners and meditate upon what makes them winners — what makes them come out the closest to reality — and see if they could figure out ways of making them work even better. For example, if they are egregiously high and diverging from the empirical data, one might consider adding previously omitted physics, semi-empirical or heuristic corrections, or adjusting input parameters to improve the fit.

    Thank you again!

  82. @rgbatduke
    Brilliant comment (essay). Of course forming an ensemble of model outputs and saying that its mean is “significant” is arrant nonsense: it isn’t a proper sample or a hypothesis test, and it certainly isn’t a prediction. All one can say, given the disparity of results, is that something is wrong with the models, as you point out. The thing that is so depressing is that people who should know better seem to believe it – probably because they don’t understand it.

    On the subject of Monte-Carlo, some non-linear systems can give a very wide range of results which reflect the distribution of inputs that invoke the non-linearity. In my field, cardiac electrophysiology, this is particularly important, and small changes in assumptions in a model will lead to unrealistic behaviour. Even forming simple statistics with these results is wrong for the reasons you so eloquently state. Widely diverging results should force attention on the non-linear behaviour that causes this divergence and a basic questioning of the assumptions.

  83. You can have hours of fun trying to estimate m with statistical significance when given a data set generated by

    y = m*x + c

    where
    (y[n]-y[n-1]) ~ F(0,a)
    x[n] – x[n-1] = L

    where F is a non-stationary non-normal distribution. It is even more fun if you assume that F is normal and stationary even though it is not. But fun does not pay the bills.
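    Even in the easiest case, with F normal and stationary, the naive fit already misleads, because the errors form a random walk rather than independent noise. A minimal sketch of the game:

```python
# Generate y as a pure random walk plus no real trend (so any fitted
# slope m is spurious), then naively fit OLS as if the errors were
# independent normal noise. For random-walk data the naive standard
# error is far too small, so the t statistic is typically inflated.
import random, math

random.seed(42)
n, L = 200, 1.0
x = [i * L for i in range(n)]           # x[n] - x[n-1] = L
y = [0.0]
for _ in range(n - 1):
    y.append(y[-1] + random.gauss(0, 1))   # increments ~ F(0, a), here normal

# ordinary least squares slope and its naive standard error
xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
m = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
resid = [yi - (ybar + m * (xi - xbar)) for xi, yi in zip(x, y)]
se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)

print(f"slope = {m:.3f}, naive t = {m / se:.1f}")
```

    Rerun this with different seeds and watch "statistically significant" trends appear and vanish in data that, by construction, has no trend at all.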

  84. @ rgb@duke. I took the liberty of sending an email to Judy Curry asking that she take a look at your comment and consider asking you to write a tightened up version to be used as a discussion topic at ClimateEtc. Please give this some thought and ping her at her home institution to the Southwest of you. (Okay, West Southwest.)

    Thank you,

    RayG

    Sounds like work. Which is fine, but I’m actually up to my ears in work that I’m getting paid for at the moment. To do a “tightened up version” I would — properly speaking — need to read and understand the basic structure of each GCM as it is distinguished from all of the rest. This is not because I think there is anything in what I wrote above that is incorrect, but because due diligence for an actual publication is different from due diligence for a blog post, especially when one is getting ready to call 40 or 50 GCMs crap and the rest merely not yet correct while not quite making it to the level of being crap. Also, since I’m a computational physicist and moderately expert in Bayesian reasoning, statistics, and hypothesis testing, I’d very likely want to grab the sources for some of the GCMs and run them myself to get a feel for their range of individual variance (likely to increase their crap rating still further).

    That’s not only not a blog post, that’s a full time research job for a couple of years, supported by a grant big enough to fund access to supercomputing resources adequate to do the study properly. Otherwise it is a meta-study (like the blog post above) and a pain in the ass to defend properly, e.g. to the point where it might get past referees. In climate science, anyway — it might actually make it past the referees of a stats journal with only a bit of tweaking as the fundamental point is beyond contention — the average and variance badly violate the axioms of statistics, hence they always call it a “projection” (a meaningless term) instead of a prediction predicated upon sound statistical analysis where the variance could be used as the basis of falsification.

    The amusing thing is just how easy it is to manipulate this snarl of models to obtain any “average” prediction you like. Suppose we have only two models — G and B. G predicts moderate to low warming, gets things like cloud cover and so on crudely right, it is “good” in the sense that it doesn’t obviously fail to agree with empirical data within some reasonable estimate of method error/data error combined. B predicts very high warming, melting of the ice pack in five years, 5 meter SLR in fifty years, and generally fails to come close to agreeing with contemporary observations, it is “bad” in the specific sense that it is already clearly falsified by any reasonable comparison with empirical data.

    I, however, am a nefarious individual who has invested my life savings in carbon futures, wind generation, and banks that help third world countries launder the money they get from carbon taxes on first world countries while ensuring that those countries aren’t permitted to use the money to actually build power plants because the only ones that could meet their needs burn things like coal and oil.

    So, I take model B, and I add a new dynamical term to it, one that averages out close to zero. I now have model B1 — son of B, gives slightly variant predictions (so they aren’t embarrassingly identical) but still, it predicts very high warming. I generate model B2 — brother to B1, it adds a different term, or computes the same general quantities (same physics) on a different grid. Again, different numbers for this “new” model, but nothing has really changed.

    Initially, we had two models, and when we stupidly averaged their predictions we got a prediction that was much worse than G, much better than B, and where G was well within the plausible range, at the absolute edge of plausible. But now there are three bad models, B, B1, and B2, and G. Since all four models are equally weighted, independent of how good a job they do predicting the actual temperature and other climate features, I have successfully shifted the mean over to strongly favor model B so that G is starting to look like an absolute outlier. Obviously, there is no real reason I have to start with only two “original” GCMs, and no reason I have to stop with only 3 irrelevant clones of B.
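    A couple of lines make the arithmetic concrete (the warming numbers are invented purely for illustration):

```python
# Sketch of the cloning trick: start with a "good" model G and a "bad"
# model B, then add near-clones of B and watch the equal-weighted
# ensemble mean migrate toward B with no new information added.
G = 1.5           # hypothetical warming predicted by the good model (C)
B = 4.5           # hypothetical warming predicted by the bad model (C)

ensemble = [G, B]
print(sum(ensemble) / len(ensemble))   # 3.0: already at the edge of plausible

# clone B twice with trivially perturbed "new dynamical terms"
ensemble += [B + 0.1, B - 0.1]         # B1 and B2
print(sum(ensemble) / len(ensemble))   # 3.75: G now looks like the outlier
```

    Nothing about the physics changed between the two averages; only the census of near-identical models did.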

    Because I am truly nefarious and heavily invested in convincing the world that the dire predictions are true so that they buy more carbon futures, subsidize more windmills, and transfer still more money to third world money launderers, all I have to do is sell it. But that is easy! All of the models, G and B+ (and C+ and D+ if needed) are defensible in the sense that they are all based on the equations of physics at some point plus some dynamical (e.g. Markov) process. The simple majority of them favor extreme warming and SLR. There are always extreme weather events happening somewhere, and some of them are always “disastrous”. So I establish it as a well-known “fact” that physics itself — the one science that people generally trust — unambiguously predicts warming because a simple majority of all of these different GCMs agree, and point to any and all anecdotal evidence to support my claim. Since humans live only a pitiful 30 or 40 adult years where they might give a rat’s ass about things like this (and have memories consisting of nothing but anecdotes) it is easy to convince 80% of the population, including a lot of scientists who ought to know better, that it really, truly is going to warm due to our own production of CO_2 unless we all implement a huge number of inconvenient and expensive measures that — not at all coincidentally — line my personal pocket.

    Did I mention that I’m (imaginarily) an oil company executive? Well, turns out that I am. After all, who makes the most money from the CAGW/CACC scare? Anything and everything that makes oil look “scarce” bumps the price of oil. Anything and everything that adds to the cost of oil, including special taxes and things that are supposed to decrease the utilization of oil, makes me my margin on an ever improving price basis in a market that not only isn’t elastic, it is inelastic and growing rapidly as the third world (tries to) develop. I can always sell all of my oil — I have to artificially limit supply as it is to maintain high profits and prolong the expected lifetime of my resources. Greenpeace can burn me in friggin’ effigy for all I care — the more they drive up oil costs the more money I make, which is all that matters. Besides, they all drive SUVs themselves to get out into the wilderness and burn lots of oil flying around lobbying “against” me. I make sure that I donate generously to groups that promote the entire climate research industry and lobby for aggressive action on climate change — after all, who actually gets grants to build biofuel plants, solar foundries, wind farms, and so on? Shell Oil. Exxon. BP. Of course. They/we advertise it on TV so people will know how pious the oil/energy industry is regarding global warming.

    Not that I’m asserting that this is why there are so many GCMs and they are all equally weighted in the AR5 average — that’s the sort of thing that I’d literally have to go into not only the internals of but the lineage of across all the contributing GCMs to get a feel for whether or not it is conceivably true. It seems odd that there are so many — one would think that there is just one set of correct physics, after all, and one sign of a correctly done computation based on correct physics is that one gets the same answer within a meaningful range. I would think that four GCMs would be plenty — if GCMs worked at all. Or five. Not twenty, thirty, fifty (most are run as ensembles themselves, presenting ensemble averages with huge variances in the first place). But then, Anthony just posted a link to a Science article that suggests that four distinct GCMs don’t agree within spitting distance in a toy problem, the sort of thing one would ordinarily do first to validate a new model and ensure that all of the models are indeed incorporating the right physics.

    These four didn’t. Which means that at least three out of four GCMs tested are wrong! Significantly wrong. And who really doubts that the correct count is 4/4?

    I’m actually not a conspiracy theorist. I think it is entirely possible to explain the proliferation of models on the fishtank evolutionary theory of government funded research. The entire science community is effectively a closed fishtank that produces no actual fish food. The government comes along and periodically sprinkles fish food on the surface, food tailored for various specific kinds of fish. One decade they just love guppies, so the tank is chock full of guppies (and the ubiquitous bottom feeders) but neons and swordtails suffer and starve. Another year betas (fighting fish) are favored — there’s a war on and we all need to be patriotic. Then guppies fall out of fashion and neons are fed and coddled while the guppies starve to death and are eaten by the betas and bottom dwellers. Suddenly there is a tankful of neons and even the algae-eaters and sharks are feeling the burn.

    Well, we’ve been sprinkling climate research fish food grants on the tank for just about as long as there has been little to no warming. Generations of grad students have babysat early generation GCMs, gone out and gotten tenured positions and government research positions where in order to get tenure they have had to write their own GCMs. So they started with the GCMs they worked with in grad school (the only ones whose source code they had absolutely handy), looked over the physics, made what I have no doubt was a very sincere attempt to improve the model in some way, renamed it, got funding to run it, and voila — B1 was born of B, every four or five years, and then B1′ born of B1 as the first generation of graduates produced graduate students of their own (who went on to get jobs), etc — compound “interest” growth without any need for conspiracy. And no doubt there is some movement along the G lines as well.

    In a sane universe, this is half of the desired genetic optimization algorithm that leads to ever improving theories and models. The other half is eliminating the culls on some sort of objective basis. This can only happen by fiat — grant officers that defund losers, period — or by limiting the food supply so that the only way to get continued grant support is to actually do better in competition for scarce grant resources.
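    The culling half is the easy part to sketch; a toy selection-plus-mutation loop (all numbers invented for illustration) shows how quickly a model population converges once losers are actually defunded:

```python
# Sketch of the missing half of the "genetic algorithm" analogy: models
# proliferate by mutation, and culling against an objective score (here,
# distance from a hypothetical observed trend) keeps the population honest.
import random

random.seed(7)
observation = 1.5   # hypothetical observed value the models should match
population = [random.uniform(0, 6) for _ in range(20)]   # model "sensitivities"

for generation in range(30):
    # cull: keep only the half that best matches observation
    population.sort(key=lambda s: abs(s - observation))
    survivors = population[:10]
    # proliferate: survivors spawn slightly mutated descendants
    population = survivors + [s + random.gauss(0, 0.2) for s in survivors]

best = min(population, key=lambda s: abs(s - observation))
print(f"best surviving model sensitivity: {best:.2f}")
```

    Proliferation without the culling step, by contrast, just fills the tank.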

    This ecology has many exemplars in all of the sciences, but especially in medical research (the deepest, richest, least critical pockets the world has ever known) and certain branches of physics. In physics you see it when (for a decade) e.g. string theory is favored and graduate programs produce a generation of string theorists, but then string theory fails in its promise (for the moment) and supersymmetry picks up steam, and so on. This isn’t a bad ecology, as long as there is some measure of culling. In climate science, however, there has been anti-culling — the deliberate elimination of those that disagree with the party line of catastrophic warming, the preservation of GCMs that have failed and their inclusion on an equal basis in meaningless mass averages over whole families of tightly linked descendants where whole branches probably need to go away.

    Who has time to mess with this? Who can afford it? I’m writing this instead of grading papers, but that happy time-out has to come to an end because I have to FINISH grading, meet with students for hours, and prepare and administer a final exam in introductory physics all before noon tomorrow. While doing six other things in my copious free moments. Ain’t got no grant money, boss, gotta work for a living…

    rgb

  85. rgbatduke says: June 13, 2013 at 7:20 am
    “One simply wishes to bitch-slap whoever it was that assembled the graph and ensure that they never work or publish in the field of science or statistics ever again.”

    Well, who did assemble it? It says at the top “lordmoncktonfoundation.com”.

    “On the subject of Monte-Carlo, some non-linear systems can give a very wide range of results which reflect the distribution of inputs that invoke the non-linearity. In my field, cardiac electrophysiology, this is particularly important, and small changes in assumptions in a model will lead to unrealistic behaviour. Even forming simple statistics with these results is wrong for the reasons you so eloquently state. Widely diverging results should force attention on the non-linear behaviour that causes this divergence and a basic questioning of the assumptions.”

    Eloquently said right back at you. Computational statistics in nonlinear modeling is a field where angels fear to tread. Indeed, nonlinear regression itself is one of the most difficult of statistical endeavors because there really aren’t any intrinsic limits on the complexity of nonlinear multivariate functions. In the example I gave before, the correct many-electron wavefunction is a function that vanishes when any two electron coordinates (all of which can independently vary over all space) are the same, that vanishes systematically when any single electron coordinate becomes large compared to the size of the atom, that is integrable at the origin in the vicinity of the nucleus (in all coordinates separately or together), that satisfies a nonlinear partial differential equation in the electron-electron and electron-nucleus interactions, that is fully antisymmetric, and that obeys the Pauli exclusion principle. One cannot realize this as the product of single-electron wavefunctions, but that is pretty much all we know how to build or sanely represent as any sort of numerical or analytic function.

    And it is still simple compared to climate science. At least one can prove the solutions exist — which one cannot do in the general case for Navier-Stokes equations.

    Does climate science truly stand alone in failing to recognize unrealistic behavior when it bites it in the ass? Widely diverging results should indeed force attention on the non-linear behavior that causes the divergence and a basic questioning of the assumptions. Which is, still fairly quietly, actually happening, I think. The climate research community is starting to face up to the proposition that no matter how invested they are in GCM predictions, they aren’t working and the fiction that the AR collective reports are somehow “projective” let alone predictive is increasingly untenable.

    Personally, I think that if they want to avoid pitchforks and torches or worse, congressional hearings, the community needs to work a bit harder and faster to fix this in AR5 and needs to swallow their pride and be the ones to announce to the media that perhaps the “catastrophe” they predicted ten years ago was a wee bit exaggerated. Yes, their credibility will take a well-deserved hit! Yes, this will elevate the lukewarmers to the status of well-earned greatness (it’s tough to hold out in the face of extensive peer disapproval and claims that you are a “denier” for doubting a scientific claim and suggesting that public policy is being ill advised by those with a vested interest in the outcome). Tough. But if they wait much longer they won’t even be able to pretend objectivity — it will smack of a cover-up, and given the amount of money that has been pissed away on the predicted/projected catastrophe, there will be hell to pay if congress decides it may have been actually lied to.

    rgb

  87. rgbatduke says:
    June 13, 2013 at 1:17 pm

    rgbatduke, thanks for the good laugh and brilliant additional post!
    I am sure your horoscope looks 5 stars for you today.

  88. Why do we even waste time arguing over the statistical significance of every minor blip in the temperature curves? Another recent peer-reviewed paper assures us once again that the tropical hot spot, that inseparable signature of the models, is nowhere to be found. As Dr. Feynman has taught us, the models have failed the data test and are therefore worthless. It’s as simple as that.

  89. “Well, who did assemble it? It says at the top ‘lordmoncktonfoundation.com’.”

    Aw, c’mon Nick, you can do better than that. Clearly I was referring to the AR5 ensemble average over climate models, which is pulled from the actual publication IIRC. This is hardly the first time it has been presented on WUWT.

    And the spaghetti graph is even worse. Which is why they don’t present it in any sort of summary — even lay people inclined to believe in CAGW would question GCMs if they could see how divergent the predictions are from each other and from the actual climate record over the last 33 years, especially with regard to LTT and SST and SLR. SLR predictions are a joke. SST predictions have people scrabbling after missing heat and magic heat transport processes. The troposphere is a major fail. Everybody in climate science knows that these models are failing, and are already looking to explain the failures but only in ways that don’t lose the original message, the prediction (sorry, “projection”) of catastrophe.

    I’ve communicated with perfectly reasonable climate scientists who take the average over the spaghetti seriously and hence endorse the 2.5C estimate that comes directly from the average. It’s high time that it was pointed out that this average is a completely meaningless quantity, and that 2/3 of the spaghetti needs to go straight into the toilet as failed, not worth the energy spent running the code. But if they did that, 2.5 C would “instantly” turn into 1-1.5C, or even less, and this would be the equivalent of Mount Tambora exploding under the asses of climate scientists everywhere, an oops so big that nobody would ever trust them again.
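    A toy calculation makes the point concrete (the model trends and all numbers below are invented purely for illustration, not taken from AR5):

```python
import numpy as np

rng = np.random.default_rng(0)

# Thirty hypothetical "models", each with its own systematic trend in
# C/decade. The trends differ because the model physics differ -- they are
# NOT independent random draws scattered around one true value.
true_trend = 0.10
model_trends = rng.uniform(0.1, 0.4, size=30)  # most run hot by construction

ensemble_mean = model_trends.mean()
spread = model_trends.std(ddof=1)              # inter-model disagreement
stderr = spread / np.sqrt(model_trends.size)   # "error of the mean" -- a
                                               # quantity valid only for
                                               # iid samples of one thing

print(f"ensemble mean trend: {ensemble_mean:.2f} C/decade")
print(f"inter-model spread:  {spread:.2f} C/decade")
print(f"nominal std. error:  {stderr:.3f} C/decade")
# Quoting mean +/- stderr makes the ensemble look precise, but the models
# disagree with one another by 'spread', and no amount of averaging pulls
# the mean toward the (here, known) true trend of 0.10.
```

    The averaging narrows the apparent uncertainty without bringing the central estimate any closer to reality, which is exactly the objection being made about treating inter-model differences as random variates.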

    Bear in mind that I personally have no opinion. I think if anything all of these computations are unverified and hence unreliable science. We’re decades premature in claiming we have quantitative understanding of the climate. Possible disaster at stake or not, the minute you start lying in science for somebody’s supposed own benefit, you aren’t even on the slippery slope to hell, you’re already in it. Science runs on pure, brutal honesty.

    Do you seriously think that is what the AR’s have produced? Honest reporting of the actual science, including its uncertainties and disagreements?

    Really?

    rgb

  90. Dr. Pachauri said that he would not take notice of these trends unless they continued for 40 years.
    I could not work that out, seeing that Dr. Carter wrote that 30-year spans count as climate, as opposed to the general comment regarding weather. Does the money run out then?!

  91. rgbatduke says: June 13, 2013 at 1:45 pm
    “Aw, c’mon Nick, you can do better than that. Clearly I was referring to the AR5 ensemble average over climate models, which is pulled from the actual publication IIRC.”

    You say exactly what you are referring to:
    “This is reflected in the graphs Monckton publishes above, where the AR5 trend line is the average over all of these models and in spite of the number of contributors the variance of the models is huge. It is also clearly evident if one publishes a “spaghetti graph” of the individual model projections (as Roy Spencer recently did in another thread) — it looks like the frayed end of a rope, not like a coherent spread around some physics supported result.”

    The graphs Monckton publishes above! But these are clearly marked “lordmoncktonfoundation.com” – not a common IPCC adornment. You’ve cited Monckton graphs, Spencer graphs. If there is an AR5 graph with the features you condemn (AR5 trend line etc) where is it?

  92. @climatereason & John Tillman

    –Yes, I misread his statement, but then it only makes one wonder. If you all think that we are actually supposed to be headed into another ice age, then why are we “recovering” from the little ice age?

    And if you are all such big fans of the medieval warm period, why wasn’t the little ice age a “recovery” from that (since we are supposed to be headed into another ice age)?

    It sounds to me like you are really grasping at straws here.

  93. Disappear? How? Will someone PLEASE explain the mechanism to me?

    One proposed mechanism is that e.g. UV light passes into the ocean, bypassing the surface layer where absorbed IR turns straight into latent heat with no actual heating; it warms the water at some moderate depth, and that warmth is then gradually mixed downward to the thermocline.

    The catch is, the water in the deep ocean is stable — denser and colder than the surface layer. It turns over due to variations in surface salinity in the so-called “global conveyor belt” of oceanic heat circulation on a timescale of centuries, and much of this turnover skips the really deep ocean below the thermocline because it is so very stable at a nearly uniform temperature of 4 C. Also, water has a truly enormous specific heat compared to air; even dumping all of the supposed radiative imbalance into the ocean over decades might be expected to produce a truly tiny change in water temperature, especially if the heat makes it all the way down to and through the thermocline.

    So one ends up with deep water that is a fraction of a degree warmer than it might have been otherwise (but nevertheless with a huge amount of heat tied up in that temperature increase) that isn’t going anywhere until the oceanic circulation carries it to the surface decades to centuries from now.
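    The “truly tiny change” is easy to check with round numbers (the imbalance, ocean mass, and specific heat below are assumed textbook values, not figures from the thread):

```python
# Back-of-envelope: warm the entire ocean with the claimed radiative imbalance.
imbalance_w_m2 = 0.6          # assumed TOA radiative imbalance, W/m^2
earth_area_m2 = 5.1e14        # Earth's total surface area, m^2
ocean_mass_kg = 1.4e21        # approximate total ocean mass, kg
c_water = 4000.0              # specific heat of seawater, J/(kg K)
seconds_per_decade = 3.156e8

heat_per_decade_j = imbalance_w_m2 * earth_area_m2 * seconds_per_decade
dT_per_decade = heat_per_decade_j / (ocean_mass_kg * c_water)

print(f"heat absorbed: {heat_per_decade_j:.2e} J per decade")  # ~1e23 J
print(f"ocean warming: {dT_per_decade:.3f} K per decade")      # ~0.02 K
```

    A huge amount of heat, a couple of hundredths of a degree of bulk warming per decade: consistent with the “fraction of a degree warmer than it might have been otherwise” above.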

    To give you some idea of how long it takes to equilibrate some kinds of circulation processes, Jupiter may well be still giving off its heat of formation from four and a half billion years ago! It is radiating away more energy than it is receiving, though there could be other processes contributing to that heat. Brown dwarf stars don’t generate heat from fusion, but nevertheless are expected to radiate heat away for 100 billion years from their heat of formation. The Earth’s oceans won’t take that long, but they are always disequilibrated with the atmosphere and land and act as a vast thermal reservoir, effectively a “capacitor” that can absorb or release heat to moderate more rapid/transient changes in the surface/atmospheric reservoirs, which is why Durham (where I live most of the year) is currently 5-10 F warmer outside than where I am sitting in Beaufort next to the ocean at this minute.

    So if the “missing heat” really is missing, and is going into the ocean, that is great news as the ocean could absorb it all for 100 years and hardly notice, moderating any predicted temperature increase in the air and on land the entire time, and who knows, perhaps release it slowly to delay the advent of the next glacial epoch a few centuries from now. Although truthfully nobody knows what the climate will do next year, ten years from now, or a century from now, because our current climate models and theories do not seem to work to explain the past (at all!), the present outside of a narrow range across which they are effectively fit, or the future beyond the period over which they were fit. Indeed, they often omit variables that appear to be important in the past, but nobody really knows why.

    rgb

  94. rgb@duke says:

    “To give you some idea of how long it takes to equilibrate some kinds of circulation processes, Jupiter may well be still giving off its heat of formation from four and a half billion years ago!

    But since Jupiter’s year is 11.89 years long, it has been radiating for only a mere 379 million years.

    [Just practicing to be a SkS 'science' writer... ☺]

  95. jai Mitchell said

    ‘Yes, I misread his statement but then it only makes one consider. If you all think that we are actually supposed to be headed into another ice age, then why are we “recovering” from the little ice age?

    And if you are all such big fans if the medieval warm period, why wasn’t the little ice age a “recovery” from that, (since we are supposed to be headed into another ice age)’

    So you misread the comment (we all do it) but instead of acknowledging that you then go off at a tangent. The world warms and cools. It cools down before we reach a glacial period and warms up after it. THE Ice Age is the daddy of them all, but there have been numerous lesser glacial periods within the last 4000 years of ‘neo glaciation’, or a number of little ice ages if you like, with ‘our’ LIA being the coldest of them all during the Holocene. I’ve graphed 6 periods of glaciation over the last 3000 years; ‘our’ LIA wasn’t the only one, as Matthes pointed out, just the most recent.
    tonyb

  96. I expect to see further cooling for the rest of the year.
    The current jet stream pattern is what is putting a brake on the warming, but I do think we can expect to see more heavy rain and the risk of floods across the NH during the rest of the year, as Arctic air dives deep to the south.

  97. Jeeze, Nick:

    First of all, note “fig 11.33a” on the graph above. Second, note reproductions from the AR5 report here:

    http://wattsupwiththat.com/2012/12/30/ar5-chapter-11-hiding-the-decline-part-ii/

    Then there is figure 1.4:

    http://wattsupwiththat.com/2012/12/14/the-real-ipcc-ar5-draft-bombshell-plus-a-poll/

    Sure, these are all from the previously released draft, and who knows, somebody may have fixed them. But pretending that they are all Monckton’s idea and not part of the actual content of AR5 at least as of five or six months ago is silly. If you are having difficulty accessing the leaked AR5 report and looking at figure 11.33, let me know (it is reproduced in the WUWT post above, though, so I don’t see how you could be). You might peek at a few other figures where yes, they average over a bunch of GCMs. Is Monckton’s graph a precise reproduction of AR5 11.33a? No, but it comes damn close to 11.33b. And 11.33a reveals the spaghetti snarl in the models themselves and makes it pretty evident that the actual observational data is creeping along the lower edge of the spaghetti from 1998/1999 (La Nina) on.

    So, is there a point to your objection, or were you just trying to suggest that AR5 does not present averages over spaghetti and base its confidence interval on the range it occupies? Because 11.33b looks like it does, I’m just sayin’. So does 11.11a. So does 1.4, which has the additional evil of adding entirely idiotic and obviously hand-drawn “error bars” onto the observational data points.

    But where in AR5 does it say “these models appear to be failing”? Or just “Oops”?

    Mind you, perhaps they’ve completely rewritten it in the meantime. Who would know? Not me.

    rgb

  98. Nick Stokes says:

    You say exactly what you are referring to:

    Yes, he does. And you understand quite well what he said. And yet you lie and pretend otherwise. Why must you lie, Nick?

    The graphs Monckton publishes above! But these are clearly marked “lordmoncktonfoundation.com” – not a common IPCC adornment. You’ve cited Monckton graphs, Spencer graphs. If there is an AR5 graph with the features you condemn (AR5 trend line etc) where is it?

    It is on the graphs that Monckton and Spencer published, of course. But then, you knew that.

    Monckton and Spencer cite and present the AR5 model ensemble graphs in their own insightful critiques of the AR5 work. Duke expands on those critiques in a particularly cogent way. And you lie about it. Everything under heaven has its purpose, it seems.

  99. @ rgbatduke says:
    June 13, 2013 at 1:45 pm

    “…perfectly reasonable climate scientists…”

    ————————————————————————————————————————–

    Should read: “…give the impression of perfectly reasonable…”

    A simulation.

    Reason is not restricted to the capacity to follow one comment or assertion (in any language including mathematical) with another that in itself does not create an obvious disjunction with either the first or with other points of apparent relevance that it is obviously contingent on at that particular point. This is mechanical in nature, and relies on the perception that what can be expressed within those particular confines constitutes all that is both required and possible.

    This is a lawyer’s mode of being, with apparent plausibility of association being in itself the demonstration of the required reality to be established. It is also the mechanism which is used when it is said that someone is “being reasonable”, in that they will accept a situation or proposition on the basis that a resolution is desirable quite regardless of the seen and understood, and incompletely identified or acknowledged, elements or context that would otherwise “complicate” matters. These rely on a circumscribed view, and a self-contained justification. Not fundamental principle.

    Being “reasonable” in the above social or procedural way is not evidence of reason. Reason, or the effective existence and application of intelligence, requires, at the start, not just acceptance of a reality but the desire to be subject to it. At any and all times.

    The world is full of people who are practiced at, by virtue of not appearing hostile, or not observably failing to agree with that which cannot be denied, seeming “reasonable”. This, in itself, is meaningless. To be genuinely reasonable requires a readiness to admit realities that undermine conveniences built on and around a contrary conception.

    Reason and honesty are synonymous.

    In the case of “Climate Scientists” who will not or cannot acknowledge a reality pertaining to this field, they are not “reasonable” in any meaningful way. If, in conjunction with such a position, they can pass this off as “reasonable” it merely illustrates a core aspect of their character.

    I realize that your use of the word reasonable above was both off-hand and likely intended to communicate the socially civilized nature of the exchanges you refer to, with no apparent hostility or reticence that might be characterized as evasion or duplicity.

    But it is very important not to paint a false picture. A pretense of openness and “reasonableness” fails if basic foundational issues of indisputable importance are not acknowledged. And that is the case with these “scientists”.

    A stick is a stick. Two plus two does not equal five.

    There are no excuses.

  100. and the Bonn talks end in failure:

    14 June: Bloomberg: Alessandro Vitelli: UN Climate-Talks Collapse Piles Pressure on November Summit
    United Nations talks on reforms to emissions-market rules stalled this week after members rejected a proposal to reconsider the body’s decision-making rules, putting additional pressure on a climate summit in November.
    The loss of two weeks’ negotiating time means that items that were due to be discussed in Bonn from June 3 through June 14 may now be revisited at the UN’s annual climate conference in Warsaw at the end of the year, adding to an already-packed agenda that may not be fully addressed, according to a project developers’ group…
    The loss of two weeks’ negotiating time may mean that a review of UN offset market rules may not be completed by the end of the year, said Gareth Phillips, chairman of the Project Developers’ Forum, a group representing investors and developers of clean energy projects that generate carbon credits.
    “We’ve lost a massive amount of time,” Phillips said today in an interview in Bonn. “Parties were already in two minds over whether they could complete the review of the CDM in Warsaw, so now it looks very unlikely we can conclude the work by then.”…
    ***“You really can’t expect there to be a negotiation at the seriousness of this one, which is about transforming the whole global energy economy, without there being hurdles and obstacles,” she (Ruth Davis, political director of Greenpeace U.K.) said today in an interview in Bonn…

    http://www.bloomberg.com/news/2013-06-13/un-climate-talks-collapse-piles-pressure-on-november-summit.html

  101. Eustace Cranch says:
    June 13, 2013 at 12:11 pm

    “…one cannot directly and linearly connect absorbed heat with surface temperature changes — it can disappear into the deep ocean for a century or ten…”

    Disappear? How? Will someone PLEASE explain the mechanism to me?

    Disappeared = not currently measured

  102. rgbatduke says: June 13, 2013 at 2:43 pm
    “Jeeze, Nick:
    First of all, note “fig 11.33a” on the graph above.”

    Yes, but the graph is not Fig 11.33a. Nothing like it.

    You said, for example,
    “Note the implicit swindle in this graph — by forming a mean and standard deviation over model projections and then using the mean as a “most likely” projection and the variance as representative of the range of the error, one is treating the differences between the models as if they are uncorrelated random variates causing deviation around a true mean!”

    The AR5 graphs you linked to do not do any of that. No variance or standard deviation is quoted. They do show quantiles of the actual model results, but that is just arithmetic. At most they speak of an “assessed likely range”. There’s nothing anywhere about variance, uncorrelated random deviates etc. That’s all Monckton’s addition.

    JJ says: June 13, 2013 at 2:47 pm
    “And you understand quite well what he said. And yet you lie and pretend otherwise. Why must you lie, Nick?”

    What an absurd charge. Yes, I understand quite well what he said. He said that the graphs that are shown are a swindle, the maker should be bitch-slapped etc. And he clearly thought that he was talking about the IPCC. But he got it wrong, and won’t admit it. The things he’s accusing the IPCC of are actually Monckton alterations of what the IPCC did.

    Now you may think that doesn’t matter. But what does factual accuracy count for anyway, in your world?

  103. M Courtney says:

    My only fault with the comment by rgbatduke is that it was a comment not a main post. It deserves to be a main post.

    I concur!

    With the title “The Average of Bull$#!^ is not Roses”

    :)

  104. Steven Mosher says:
    June 13, 2013 at 11:54 am

    Finally, there is no such thing as falsification. There is confirmation and disconfirmation.
    even Popper realized this in the end as did Feynman.

    At least you recognise that Popper’s philosophy is toxic to AGW, as it is to other anti-science scams such as the linear no-threshold hypothesis of radiation carcinogenesis, politically mandated to strip the west of its nuclear industry.

    However as Popper says, “there are no inductive inferences”. Induction will only take you down the garden path.

    Notice how AGW is being pushed into untestable corners, like longer timescales and the deep ocean. You guys are scared of Popper. You need to be.

  105. Nick Stokes says:
    June 13, 2013 at 3:23 pm
    rgbatduke says: June 13, 2013 at 2:43 pm
    “Jeeze, Nick:
    First of all, note “fig 11.33a” on the graph above.”

    Yes, but the graph is not Fig 11.33a. Nothing like it.

    The great AGW WORM-OUT has begun.

    “Predict global warming? Me?? No – that’s just a Monkton fabrication.
    All we did was project a statistical envelope of warm-cold wet-dry storm-notstorm glacier retreatadvance moreless twisters and peccatogenic day-to-day change in weather which never happened in pre-industrial times.”

    Get used to this, NS is the figurehead (aka frigging in the rigging) of a vast diatribe of AGW denial that is on its way.

  106. I don’t like your temperature graph based on HadCRUT4. There are many things wrong with it, starting with the choice of scale. The time period included is too narrow and should begin where satellite data begin, which is 1979. Bimonthly resolution is too coarse for significant detail – should use at least monthly resolution. And a linear fit through a forest of noise is worthless. The right way to show a temperature record is not to use a running mean or to fit any graph to it, but to outline it with a broad semi-transparent band as wide as the average random fuzz that is part of the record. That random fuzz is not noise but represents cloudiness that varies randomly. This limits its amplitude, and anything that sticks far out is an anthropogenic artifact. You can use a linear fit later, once you can actually see that it is linear. To find the shape of the mean temperature curve in the presence of ENSO oscillations (which are everywhere), you start by putting dots in the middle of each line connecting an El Nino peak with its neighboring La Nina valley and connecting the dots. This is done after the transparent band is laid down. There will be some random deviations, but that is the nearest you will ever get to global mean temperature. These are just general requirements.

    In my opinion only satellite data should be used when available, because ground-based data have been manipulated and secretly computer processed. They do not show the true height of El Nino peaks, and their twenty-first century segments have all been raised up by as much as a tenth of a degree. But their worst imaginary feature has been a non-existent warming in the eighties and nineties. They call it the late twentieth century warming, and it is still part of AR5 previews like the horsetail graphs of CMIP5. In researching my book What Warming? I compared satellite and ground-based temperature curves and found that satellite curves showed an 18-year linear segment from 1979 to 1997. But ground-based curves showed a steady warming in that time slot, which they called “late twentieth century warming.” I considered it fake and put that in the book. Nothing happened. Until last fall, that is, when GISTEMP, HadCRUT, and NCDC temperature repositories decided in unison to get rid of that fake warming and follow the satellite data in the eighties and nineties. Nothing was said about it. I consider this coordinated action an admission that they knew the warming was fake. Their twenty-first century data are likewise screwed up and cannot be trusted.

    I also discovered that all three were secretly computer processed. That was an accident, because they did not know that their software left traces of its work in their database. These consist of sharp, high spikes sticking up from the broad magic marker band at the beginnings of years. They looked like noise, but noise does not know the human calendar. They are in the exact same places in all three data sets and have been there at least as far back as 2008. What connection, if any, they have with that fake warming I do not know.

    But now that we know there is a no-warming zone in the eighties and nineties, and a no-warming zone also in the twenty-first century, we can put it all together. There is only a narrow strip between, enough to accommodate the super El Nino of 1998 and its associated step warming. The step warming was caused by the large amount of warm water the super El Nino carried across the ocean. In four years it raised global temperature by a third of a degree Celsius and then stopped. As a result, all twenty-first century temperatures are higher than the nineties. Hansen noticed this and pointed out that of the ten highest temperatures, nine occurred after 2000. Not surprising, since they all sit on the high warm platform created by the step warming, the only warming during the entire satellite era.

    These years cannot be greenhouse warming years, because the step warming was oceanic, not atmospheric, in origin. There is actually no room left for greenhouse warming during the satellite era, because the two no-warming stretches and the super El Nino use up the entire time available. That means no greenhouse warming for the last 34 years. With this fact in mind, can you believe that any of the warming that preceded the satellite era can be greenhouse warming? I think not.
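    The midpoint procedure described above can be sketched in a few lines (the peak/valley years and anomaly values below are invented purely to illustrate the mechanics, not real data):

```python
# For each El Nino peak paired with a neighboring La Nina valley, put a dot
# at the midpoint of the line connecting them, then connect the dots.
pairs = [  # ((peak year, peak anomaly), (valley year, valley anomaly))
    ((1983.0, 0.30), (1985.0, -0.10)),
    ((1987.5, 0.25), (1989.0, -0.15)),
    ((1998.0, 0.60), (1999.5, -0.05)),
]

midpoints = [((py + vy) / 2, (pa + va) / 2) for (py, pa), (vy, va) in pairs]
for year, anomaly in midpoints:
    print(f"{year:.2f}: {anomaly:+.3f}")
```

    Connecting those midpoints gives the ENSO-neutral backbone of the series that the comment describes, after the semi-transparent band has been laid down.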

  107. ClimateReason,
    (how do you quote somebody on this?)

    You said, “I’ve graphed 6 periods of glaciation over the last 3000 years-’our’ lia wasn’t the only one as Matthes pointed out, just the most recent.”

    Hasn’t that shown that the temperatures have been going down during this period? The LIA is associated with the Maunder Minimum. Saying that we are “recovering” from that implies that the sun itself is “recovering” from it. However, the changes in temperature during the last 5 decades are not based on changes in the sun’s intensity, since that effect is pretty much instantaneous.

    If you look at this chart, and if you think that solar irradiance is the cause of the variation, then we would have a 1.5 C average variation every 6.5 years due to the solar cycle. (The solar cycle does cause some variation, but only very little, since it is only 0.075% of the total sun’s activity, peak to trough.)

  108. jai mitchell says:
    June 13, 2013 at 3:41 pm

    “However, the change in temperatures during the last 5 decades are not based on changes in the sun’s intensity since that effect is pretty much instantaneous.”

    Sigh… Just another guy who does not understand the concept of frequency response.
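    The point about frequency response can be made concrete with a first-order lag (the ocean acting as a thermal low-pass filter; the 5-year time constant below is an arbitrary assumption for illustration):

```python
import math

# Drive a first-order system dT/dt = (F(t) - T)/tau with an 11-year
# sinusoidal forcing cycle and see how much of the cycle survives.
tau = 5.0                  # assumed thermal time constant, years
period = 11.0              # solar-cycle period, years
omega = 2 * math.pi / period

dt = 0.001
T = 0.0
peak = 0.0
t = 0.0
while t < 200.0:           # integrate long enough to forget the transient
    F = math.sin(omega * t)
    T += dt * (F - T) / tau
    t += dt
    if t > 150.0:          # measure amplitude only after settling
        peak = max(peak, abs(T))

# Analytic gain of a first-order low-pass filter at this frequency:
gain = 1.0 / math.sqrt(1.0 + (omega * tau) ** 2)
print(f"simulated cycle amplitude: {peak:.2f}")
print(f"analytic gain:             {gain:.2f}")
# The 11-year cycle is attenuated roughly 3x and lagged, while a slow,
# sustained change in forcing passes through with gain near 1. So the
# absence of a large 6.5-year wiggle does not imply the response to solar
# forcing is "pretty much instantaneous".
```

    A system with thermal inertia filters out fast forcing cycles while still responding fully to slow ones, which is exactly what “frequency response” means in this context.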

  109. phlogiston says: June 13, 2013 at 3:37 pm
    ” No – that’s just a Monkton fabrication.”

    I don’t think it’s a Monckton fabrication. The attribution could be clearer, but it’s properly marked “lordmoncktonfoundation.com”. I don’t even think it’s that bad, but RGB says:
    “One simply wishes to bitch-slap whoever it was that assembled the graph and ensure that they never work or publish in the field of science or statistics ever again.”

    Clearly he thought he was referring to the IPCC, but the graph is labelled “Monckton”, and his diatribe matches the graph in this post. It does not match the AR5 graphs that he later linked to.

  110. Nick Stokes says:
    June 13, 2013 at 3:52 pm
    phlogiston says: June 13, 2013 at 3:37 pm
    ” No – that’s just a Monkton fabrication.”

    Clearly he thought he was referring to the IPCC, but the graph is labelled “Monckton”, and his diatribe matches the graph in this post. It does not match the AR5 graphs that he later linked to.

    I’m sure Monkton himself can clarify the provenance of this figure.

  111. phlogiston says: June 13, 2013 at 4:06 pm
    “I’m sure Monkton himself can clarify the provenance of this figure.”

    Lord M says
    “The correlation coefficient is low, the period of record is short, and I have not yet obtained the monthly projected-anomaly data from the modelers to allow a proper p-value comparison.”

    It sure sounds like he’s doing the stats and graphing himself.

  112. Nick Stokes says:
    June 13, 2013 at 4:27 pm

    An un-vetted person doing statistics, how shocking!

    Do you assert – contrary to Monkton – that the ensemble models are spot-on in predicting the global temperature trend in the last two decades? Or are we still in the cloud of unknowing?

  113. Hmmm, I was never too good in math. But let me give this a try. We are about 5 years through solar cycle 24, and in this cycle, the sun is very quiet. Solar cycle 23 lasted for 12.6 years, and the sun was very quiet during this cycle as well. In fact, there were 821 spotless days for the sun during cycle 23, and that level of spotless days or more was only achieved about 100 years before during solar cycle 11.

    But back to the math part, which I am terrible at doing. However, I can do simple arithmetic. So the total length of years for solar cycles 23 and 24 is 12.6 years + 5 years = 17.6 years. Now, you say that global warming has stopped for 17 years?

    I guess I am too simple to figure these things out. Climate is soooo complicated.

  114. phlogiston says: June 13, 2013 at 4:36 pm
    “Nick Stokes says:
    An un-vetted person doing statistics, how shocking!”

    I am not shocked. It was RGB who spoke harshly of it.

    “Do you assert – contrary to Monkton – that the ensemble models are spot-on in predicting the global temperature trend in the last two decades?”

    No, and they don’t claim to be. Basically GCMs are numerical weather programs that generate weather. But they are not forecasting weather – there is no expectation that the weather will progress just as a model predicts (that’s mostly why they disagree so much on this scale). The expectation is that, as with reality, the weather will average out into an identifiable climate. And as with reality, that takes a while.

  115. Why is it that when we cherry-pick a start date of the year 1000, the warmists suddenly shut up? It’s probably because of the Viking swords in their backs…

  116. Greg Mansion says:
    June 13, 2013 at 9:59 am

    Greg, I’m sure there were many years when Mayan priests threw the virgins into the pit and crop results improved. By chance of course. All the spaghetti graphs in the world wouldn’t improve the “science” of human sacrifice and crop results. Nor would reasoned people start a dissident debate based on graphs produced by the priests at the time, even if they were on the right side of science. Mayans, I’m sure, were more honest and didn’t attribute their beliefs to science at all.

    It seems to me many skeptics make a priority of co-opting the basic warming talking points which are a pure fallacy. Causal assumptions about the temperature stats always peeve me. Lowering the logic bar is essential for the AGW believers and the temp stat graph does exactly that. It doesn’t matter long-term about the short-term changes of the graph, if you accept the talking point you’ve lost an important piece of logic in the farce of AGW debating.

    There should be a lot more qualifying important points about the AGW scam if people comment on the temp stat and the “pause” from the skeptical side. Then again many skeptics live for the weeds like this of the debate which will go on forever if left to them. Monckton is slipping.

  117. Steven Mosher says:
    June 13, 2013 at 11:54 am
    “Finally, there is no such thing as falsification. There is confirmation and disconfirmation.
    even Popper realized this in the end as did Feynman.”

    So, your first statement makes the claim that it is false to claim that there is such a thing as falsification? ;)

    Seriously, I’m pretty sure that you misunderstood Popper (I can’t be sure about Feynman, but it doesn’t sound like him either). I think you must be thinking of naive falsificationism, which is not the same thing.

    Cheers, :)

  118. The racehorse is running to provenance, but it’s in Rhode Island. Seems skeert blue of the devil, so stridefully he avoids the point.
    ==========

  119. “bring out pitchforks and torches as people realize just how badly they’ve been used by a small group of scientists and politicians, how much they are the victims of indefensible abuse of statistics to average in the terrible with the merely poor as if they are all equally likely to be true with randomly distributed differences.

    rgb”

    rgb, this sort of thing is the modus operandi of bad climate science. The adjustments made to the temperature record took the good high-quality rural thermometers and averaged them with the poorly sited ones, and apparently added something additional. The rural sites averaged a 0.155 C/decade trend, the poorly sited ones 0.248 C/decade, and NOAA’s final adjustment resulted in a 0.309 C/decade average in the contiguous 48. How on earth could the best model in the world, based on good physics, ever “hindcast” or “project” this? Assuming the rest of the world’s temps are fiddled in similar fashion, as they most certainly are, this would mean that the “observed” trends are exaggerated and the departure from projections even greater.
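    These numbers can be sanity-checked with trivial arithmetic. A sketch, taking the trend figures quoted above at face value (they are the commenter's numbers, not independently verified):

```python
# Trend figures as quoted above (deg C per decade, contiguous US);
# taken at face value, not independently verified.
rural_trend = 0.155         # well-sited rural stations
poorly_sited_trend = 0.248  # poorly sited stations
final_adjusted = 0.309      # NOAA final adjusted product, as quoted

# Any average of the two raw figures, weighted or not, must lie between
# them, so a final value above the larger figure cannot be a mere average.
simple_mean = (rural_trend + poorly_sited_trend) / 2
print(f"simple mean of the two raw trends: {simple_mean:.4f} C/decade")
print(f"final adjusted value exceeds both raw figures: {final_adjusted > poorly_sited_trend}")
```

    Whatever explains the final figure, it cannot be simple averaging of the two quoted inputs.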

  120. Nick Stokes says:
    June 13, 2013 at 5:02 pm
    “The expectation is that, as with reality, the weather will average out into an identifiable climate. And as with reality, that takes a while.”

    I don’t think that’s right at all. From the IPCC’s Third Assessment report Section 8.5.1.1

    “The model evaluation chapter of the IPCC Second Assessment Report (Gates et al., 1996) found that “large-scale features of the current climate are well simulated on average by current coupled models.””

    From the above we can see that models are averaged together because doing so allows you to “simulate” the large-scale features of the climate. IOW, individual models on their own do not simulate those large-scale features (if some did, there would be no need to average them at all). It has nothing to do with the “time” you let a model run for.

    Cheers, :)

  121. The fact that there has been no warming for 17 years is an anomaly. An anomaly is an observation that cannot be explained by the assumed mechanisms, the assumed hypothesis or hypotheses. There are three standard approaches to addressing anomalies: 1) Ignore them (the most common approach; name-calling is useful if ignorant people persist in bringing up the anomalies, and the use of the word ‘denier’ is the type of imaginative approach that can be used to stifle discussion), 2) Make the anomaly go away by reinterpreting the data (GISS is an example of that approach), or 3) Develop a modified mechanism or a new mechanism to explain them away.

    There is no question that the lack of warming for 17 years is real, not an instrumental error or a misinterpretation of the measurements. The Hadcrut3-to-Hadcrut4 and GISS manipulations are pathetic warmist attempts to raise planetary temperature which only muddy the water and do not remove the anomaly.

    Thermometers have not changed with time. There is no logical reason to propose a change in the laws of physics to explain what is observed. The laws of physics have not changed with time.

    If the CO2 mechanism (William: Big if) does not saturate, increasing CO2 in the atmosphere should result in an increase in forcing, which should result in a gradually increasing planetary temperature that oscillates with the normal ‘chaotic’ planetary mechanisms. What should be observed as atmospheric CO2 increases is a wavy but steadily rising planetary temperature (rising because the CO2 forcing is continually increasing).

    That is not observed.
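    For scale, the "continually increasing forcing" premise can be quantified with the standard simplified CO2 forcing expression of Myhre et al. (1998), dF = 5.35 ln(C/C0) W/m^2. A minimal sketch; the sensitivity parameter lam used to convert forcing into an equilibrium temperature change is an assumed, purely illustrative value, not a result from this thread:

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing (Myhre et al. 1998), in W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Assumed, purely illustrative sensitivity parameter, K per (W/m^2).
lam = 0.5

for c in (280.0, 340.0, 400.0, 560.0):
    dF = co2_forcing(c)
    print(f"{c:5.0f} ppm: forcing {dF:5.2f} W/m^2 -> equilibrium dT ~ {lam * dF:4.2f} K")
```

    The logarithm is why forcing grows steadily but ever more slowly as concentration rises; a doubling from any baseline adds the same increment, about 3.7 W/m^2 under this formula.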

    The warmists have proposed that the additional forcing due to increased atmospheric CO2 is hiding in the ocean. They also tried the hypothesis that increased aerosols due to Chinese coal use inhibited the warming. Some scallywag, however, noted that the majority of the warming was observed in the Northern Hemisphere, where the Chinese aerosol concentration should be highest and should therefore inhibit warming, which is the opposite of observations. The Northern Hemisphere ex-tropics warmed four times more than the tropics and twice as much as the planet as a whole (which, curiously, is also what happens during a Dansgaard-Oeschger cycle).

    The problem with the heat hiding in the ocean hypothesis is there must be a mechanism that would suddenly send the additional energy from the CO2 forcing into the ocean to stall the warming. In addition to the requirement for a new mechanism that would suddenly send heat into the deep ocean, there needs to be heat regulating mechanism that must mysteriously increase to cap the CO2 warming. (i.e. The heat hiding in the ocean must fortuitously increase to cap planetary temperature rise.)

    The warmists if they were interested in solving the scientific puzzle should have summarized the problem situation and possibilities. When that is done it is clear some hypotheses are not valid.

    Summary of the CO2 mechanism in accordance with warmist theory.
    1) Based on theoretical calculations and measurements, increased atmospheric CO2 does not result in a significant increase in planetary temperature in the lower troposphere. That region of the atmosphere is saturated: the absorption spectra of CO2 and water overlap, and there is sufficient CO2 in the lower troposphere because CO2 is a heavier-than-air molecule (its concentration is proportionally greater at lower elevations due to its higher mass than O2 and N2) and there is a greater amount of water vapour, so increased CO2 does not theoretically result in significant warming in the lower troposphere.

    2) At higher elevations in the atmosphere there is less water vapour, so all else being equal (i.e. the conditions at that elevation are as assumed by the models) the additional atmospheric CO2 should theoretically cause increased warming at higher elevations in the troposphere. The warming in the higher regions of the troposphere should then warm the planet’s surface by radiating long-wave radiation downward.

    Logical Option A:
    If heat is not hiding in the ocean and the laws of physics hold, then something is causing the CO2 mechanism to saturate in the upper troposphere such that increased CO2 or other greenhouse gases do not cause warming in that region of the atmosphere. If logical option A is correct, and if the upper troposphere was already saturated such that increased CO2 does not cause significant warming, then something else caused the warming of the last 70 years.

    It is known that planetary temperature has cyclically warmed and cooled in the past (Dansgaard-Oeschger cycles) and it is known that there are solar magnetic cycle changes that correlate with the warming and cooling cycles. An example is the Medieval Warm period that is followed by the Little Ice age.

    The warmists have chosen to ignore the fact that there is cyclic warming and cooling in the paleo record.
    Greenland ice temperature, last 11,000 years determined from ice core analysis, Richard Alley’s paper.

    http://www.climate4you.com/

    So if the CO2 mechanism was saturated at a level of, say, 200 ppm, then additional CO2 has a negligible effect on planetary temperature. A new mechanism is therefore required to explain the 70 years of warming that is observed.

    The above graph shows a new mechanism is not required. The same mechanism that caused the Dansgaard-Oeschger warming and cooling caused the warming in the last 70 years.
    Now, as the solar magnetic cycle has rapidly slowed down, we would expect the planet to cool.
    If the planet cools, we will know that something in the upper troposphere differs from the model assumptions, and that this something inhibits the greenhouse warming mechanism. (Inhibit is the correct term, rather than saturate.)

    Logical Option B:
    The heat is hiding in the oceans. Planetary temperature has not risen in the tropics, where there should be the greatest CO2 forcing on the planet, as the tropical region emits the greatest amount of long-wave radiation off to space and there is ample water to amplify the CO2 warming. The heat-hiding-in-the-ocean hypothesis requires, particularly in the tropics, that there be a step increase in ocean mixing to hide the heat in the deep ocean.
    There is no observational evidence of increased surface winds, and why would there be? Temperatures in the tropics have not increased significantly. There is no driver to force heat into the deep ocean. The question is: why should heat suddenly start to hide in the deep ocean now? There needs to be a physical explanation of what has suddenly changed to force heat, particularly in the tropics, into the deep ocean. Ignoring the fact that there is no explanation of what would turn on heat hiding in the ocean, there is an ignored problem: if there is suddenly intermixing of surface waters with deep ocean waters, atmospheric CO2 levels should drop as CO2 is pulled into the colder deeper waters. That is not observed. Atmospheric CO2 is gradually rising.

  122. jai mitchell says:

    “The LIA is associated with the maurader minimum…”
    +_+_+_+_+_+_+_+_+_+_+_++++++++_+_+_+_+_

    Admit it: you’re just winging it. Anyone who doesn’t understand the context [or how to spell] the Maunder Minimum [which refers to sunspot numbers] is only pretending to understand the subject.

  123. Nick Stokes is here to quibble again. rgbatduke has written some very compelling posts today, so Nick must punish him by quibbling over trivial points, which customarily arise from Nick’s deliberately obtuse misreading of isolated statements while ignoring the most forceful and incisive arguments found in the comments. It’s his modus operandi at ClimateAudit, so I suppose we shouldn’t be surprised to witness it in spades here.

    Thanks to rgb for being generous with his time and for passionately dissecting these issues in depth. Occasionally people convey a deep understanding of the core problems facing climate science, and rgb did it brilliantly today. It’s comforting to know he’s teaching at a well-known university.

  124. @Nick Stokes -> your comment
    “The expectation is that, as with reality, the weather will average out into an identifiable climate. And as with reality, that takes a while.”

    Given your liking for facts, as a man of science myself I ask: can you expand on the factual basis for why the weather must average out? And when you say it “takes a while”, how long is that, and on what basis do you make that statement?

    For the record, I believe that both sides of the climate change argument are about as far from science as you can get, and neither side should be able to use the word science in describing what they are doing … it is about as scientific as astrology and horoscopes based on political agendas.

  125. Thomas says:
    June 13, 2013 at 4:17 am
    as is clear from the diagram there has been warming, only not large enough to be statistically significant.
    =============
    Wrong. The error bars show that there may or may not have been warming. There is no way to know for sure.

    That is the meaning of statistical significance: within certain bounds, you cannot say which way the answer lies. Temperature is within those bounds, so you cannot accurately say “there has been warming”.
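    The point about bounds can be made concrete with a small least-squares sketch on synthetic data. Illustrative numbers only: a weak trend buried in noise roughly the size of the published uncertainties, not the actual HadCRUT4 series:

```python
import random
import statistics

random.seed(42)

# Synthetic monthly anomalies, purely illustrative: a tiny trend buried in
# noise roughly the size of the published surface-record uncertainties.
n = 208  # about 17 years 4 months of monthly data
t = list(range(n))
y = [0.0001 * ti + random.gauss(0.0, 0.15) for ti in t]

# Ordinary least-squares slope and its standard error.
tbar = statistics.fmean(t)
ybar = statistics.fmean(y)
sxx = sum((ti - tbar) ** 2 for ti in t)
slope = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) / sxx
resid = [yi - (ybar + slope * (ti - tbar)) for ti, yi in zip(t, y)]
se = (sum(r * r for r in resid) / (n - 2) / sxx) ** 0.5

# If the 2-sigma interval straddles zero, the fitted trend is
# indistinguishable from no trend at all: the commenter's point.
lo, hi = slope - 2 * se, slope + 2 * se
print(f"slope: {slope:.6f} per month, 2-sigma interval: [{lo:.6f}, {hi:.6f}]")
print("zero inside the interval:", lo < 0.0 < hi)
```

    A positive central estimate with an interval that includes zero is exactly the situation described: there may or may not have been warming, and the data cannot tell you which.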

  126. Nick Stokes says:
    June 13, 2013 at 5:02 pm
    The expectation is that, as with reality, the weather will average out into an identifiable climate. And as with reality, that takes a while.
    ============
    That doesn’t make the expectation correct. The law of large numbers does not hold for chaotic time series. You cannot calculate a meaningful average for chaotic systems over time. The result is spurious nonsense.

  127. @rgbatduke says:
    June 13, 2013 at 7:20 am
    I’ve read thousands of posts on science blogs, and this post of yours stands head and shoulders above any other that I’ve read, and I’ve read many excellent ones.
    I don’t know how you did it but, by God, it really hit the spot for me and I’m sure for many others too.
    Thank you.
    (Mr Watts, I’ve posted rgb’s full text from a H/T from StreetCred on BishopHill. Apologies if I’ve overstepped the mark and please feel free to snip)

  128. jai mitchell says:
    June 13, 2013 at 2:15 pm

    @climatereason & John Tillman

    –Yes, I misread his statement, but then it only makes one wonder. If you all think that we are actually supposed to be headed into another ice age, then why are we “recovering” from the little ice age?

    And if you are all such big fans of the medieval warm period, why wasn’t the little ice age a “recovery” from that (since we are supposed to be headed into another ice age)?

    it sounds to me like you are really grasping at straws here.
    —————————————-

    This has been explained many times to you. Either you somehow missed all the explanations or want to remain willfully obtuse.

    “Recovery” means regression to the mean from excursion above or below a trendline. The world recovered from the Medieval Warm Period by returning back to the trend, then continuing on below it into the LIA. Since about 1700 Earth has been “recovering” from that cold period.

    From the Minoan Warm Period 3000 years ago, the long term temperature trend line has been down, but with (possibly quasi-sine wave) cyclical excursions above & below it, all occurring naturally. The Minoan WP was followed by a cold period, which was followed by the Roman WP, followed by the Dark Ages Cold Period, interrupted by the lesser Sui-Tang WP (the peak of which was lower than the Roman & the subsequent Medieval WPs), followed by more cold, then the Medieval WP, followed by the remarkably frigid LIA, followed by the Modern WP. The trend line connecting the peak of the Minoan, Roman, Medieval & Modern WPs is decidedly down.

    There is no prima facie case for any significant human effect on climate unless & until the Modern WP gets warmer than the Medieval, which hasn’t happened yet. Each recovery from the preceding cycle, whether warm or cold, has peaked or troughed out at a lower temperature, based upon proxy data, such as the Greenland ice cores. This is just one of many inconvenient truths about CACCA.
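    The structure described above, quasi-cyclical excursions riding on a declining long-term baseline, can be sketched with a toy function. Every number below is hypothetical; this is not proxy data:

```python
import math

# Purely illustrative toy function (not proxy data): a slowly declining
# baseline with a ~1000-year quasi-sinusoidal cycle on top, mimicking the
# description of successive warm periods peaking at ever lower levels.
def temp_anomaly(years_before_present):
    baseline = -0.0002 * (3000 - years_before_present)  # hypothetical slow decline
    cycle = 0.5 * math.sin(2 * math.pi * years_before_present / 1000.0)
    return baseline + cycle

# Peaks of successive warm excursions (cycle maxima fall at 2250, 1250, 250 BP here)
for ybp in (2250, 1250, 250):
    print(f"{ybp:4d} BP: anomaly {temp_anomaly(ybp):+.2f}")
```

    Each successive peak is lower than the last even though every warm period is a genuine excursion above the trend, which is the "recovery to a declining trendline" picture in miniature.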

    Had you really tried to study & understand Dr. Akasofu’s graph, you would grasp this simple concept instead of clutching at CAGW straws.

  129. He says we shall soon be approaching Dr. Ben Santer’s 17-year test: if there is no warming for 17 years, the models are wrong.

    There is a propensity to quote one sentence from the Santer paper (in the abstract) as if it were the defining point therein, and to wield it as a benchmark for the statistical significance of the surface data, or for model verification, or to claim that the anthropogenic signal is lost. This is a profound misunderstanding of the paper, which concludes:

    In summary, because of the effects of natural internal climate variability, we do not expect each year to be inexorably warmer than the preceding year, or each decade to be warmer than the last decade, even in the presence of strong anthropogenic forcing of the climate system. The clear message from our signal-to-noise analysis is that multi-decadal records are required for identifying human effects on tropospheric temperature.

    This is not a discrepancy with the abstract, which maintains that you need *at least* 17 years of data from the MSU records, but that may not always be sufficient.

    When trends are computed over 20-year periods, there is a reduction in the amplitude of both the control run noise and the noise superimposed on the externally forced TLT signal in the 20CEN/A1B runs. Because of this noise reduction, the signal component of TLT trends becomes clearer, and the distributions of unforced and forced trends begin to separate (Figure 4B). Separation is virtually complete for 30-year trends

    …On timescales longer than 17 years, the average trends in RSS and UAH near-global TLT data consistently exceed 95% of the unforced trends in the CMIP-3 control runs (Figure 6D), clearly indicating that the observed multi-decadal warming of the lower troposphere is too large to be explained by model estimates of natural internal variability….

    For timescales ranging from roughly 19 to 30 years, the LAD estimator yields systematically higher values of pf – i.e., model forced trends are in closer agreement with observations….

    The 17-year quote is a minimum under one of their testing scenarios. They do not recommend a ‘benchmark’ at all, but point out that the signal-to-noise ratio improves the more data you have.

    It is not enough to cite a quote out of context. Data, too, must be analysed carefully, and not simply stamped with pass/fail based on a quote. Other attempts at finding a benchmark (a sound principle) reach conclusions similar to Santer’s: you need multi-decadal records to get a good grasp of the signal (20, 30, 40 years).
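    The multi-decadal point can be illustrated with the textbook scaling of a least-squares trend's standard error. A sketch under an idealized white-noise assumption (real temperature noise is autocorrelated, which inflates these figures considerably, and sigma is an assumed illustrative value):

```python
# Why multi-decadal records help: for independent monthly noise of standard
# deviation sigma, the standard error of an OLS trend falls roughly as
# n**-1.5 with record length n, so the smallest detectable trend shrinks
# rapidly as the record lengthens.
sigma = 0.15  # deg C, an assumed illustrative monthly noise level

def trend_se(n_months, sigma):
    # OLS slope standard error for unit-spaced points with iid noise:
    # se = sigma / sqrt(Sxx), where Sxx = n (n^2 - 1) / 12.
    sxx = n_months * (n_months ** 2 - 1) / 12.0
    return sigma / sxx ** 0.5

for years in (10, 17, 20, 30):
    n = 12 * years
    per_decade = 2 * trend_se(n, sigma) * 120  # 2-sigma bound, in C/decade
    print(f"{years:2d} years: trends below ~{per_decade:.3f} C/decade are lost in this noise")
```

    Autocorrelation in real data makes the effective sample size much smaller than n, which is precisely why Santer et al. land on multi-decadal records rather than a single 17-year cutoff.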

  130. Ditto on the comments of praise for @rgbatduke’s postings above. Home run after home run. I felt what I was reading was truly inspired. I would like to echo the other comments that these postings be elevated to a blog article. Perhaps just collected “as is” into a posting.

    The logic to me is inescapable. Ask 10 people the answer to a question. If you get 10 different answers, then one can be pretty sure that at least 9 of them are wrong, and 1 of them might be right. You cannot improve the (at most) 1 possibly right answer by averaging it with the other (at least) 9 wrong answers.

    So why, when we have 30 models that all give different answers, do we average them together? Doesn’t this mean that the climate scientists themselves don’t know which one is right? So how can they be so sure that any of them are right?

    If you asked 30 people the answer to a question and they all gave the wrong answer, what are the odds that you can average all the wrong answers and get a right answer? Very likely one of the wrong answers is closer to the right answer than is the average.
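    A toy numerical version of that argument, with hypothetical numbers: if all 30 answers share a common bias, the ensemble average inherits it.

```python
# Toy version of the argument, with hypothetical numbers: suppose all 30
# "models" share a common warm bias of +0.5 plus individual offsets. The
# ensemble average then inherits the shared bias, while the single best
# model sits much closer to the truth than the average does.
truth = 1.0
answers = [truth + 0.5 + 0.1 * i for i in range(30)]  # hypothetical model answers

avg = sum(answers) / len(answers)
avg_error = abs(avg - truth)                       # the shared bias survives averaging
best_error = min(abs(a - truth) for a in answers)  # best individual model

print(f"ensemble-average error: {avg_error:.2f}")   # 1.95
print(f"best single-model error: {best_error:.2f}") # 0.50
```

    Averaging only cancels errors that are independent and symmetric about the truth; any error the models share passes straight through to the mean.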

  131. 10 years minimum, but 15 years practically, 17 years for confirmation, 20 years with padded proof, 30 years would eliminate any natural effects, 60 years would clarify the long-term natural trends and 90 years would definitely answer some important questions… but if we had 120 years of worldwide satellite coverage I couldn’t really predict what we would know… surely we should collect such data and then reconvene.

  132. Thank you, Lord Monckton of Brenchley, for a job well done.

    I especially enjoyed seeing the R2 value of the 17 year 4 month trend……0.11…

    0.11?… 0.11!? Are you frigging kidding me?

    And we still take these grant whor….umm.. bed-wetters seriously?

    It is to laugh.

    If it weren’t for the $TRILLIONS being wasted on this hoax, it would almost be funny… Almost…

    The eventual cost to the scientific community’s credibility and the actual economic and social destruction this silly hoax has inflicted on the world’s economy so far has not been so humorous; tragic comes to mind.

  133. Samurai, I also nearly dropped my uppers when I saw that the R2 value is 0.11.

    It’s almost ZERO! Close enough to almost call it zero. At least it isn’t negative, but then, it could start to be without much of a change.
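    For anyone wanting to see what an R2 that low means in practice, here is a minimal sketch on synthetic data built to give a similarly small value (illustrative numbers only, not the actual series):

```python
import random
import statistics

random.seed(1)

# Illustrative only: synthetic data built so the linear fit explains only a
# small fraction of the variance, roughly like the R2 of 0.11 under discussion.
n = 200
t = list(range(n))
y = [0.001 * ti + random.gauss(0.0, 0.15) for ti in t]

tbar, ybar = statistics.fmean(t), statistics.fmean(y)
sxx = sum((ti - tbar) ** 2 for ti in t)
slope = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) / sxx
fitted = [ybar + slope * (ti - tbar) for ti in t]

ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))
ss_tot = sum((yi - ybar) ** 2 for yi in y)
r2 = 1.0 - ss_res / ss_tot  # fraction of variance the linear trend explains

print(f"R2 of the linear fit: {r2:.2f}")
print(f"variance left unexplained: {1.0 - r2:.0%}")
```

    An R2 of 0.11 says the fitted line accounts for 11% of the variance in the data; the remaining 89% is scatter the trend does not explain.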

  134. Anthony/moderators: You come down hard on others, like the dragons or whatever, and some others. Why not give Nick Stokes his one little chance at puerile nastiness, then cut off all his even more juvenile following posts?

    Mosher: In re “falsification” as used by rgb@duke. I don’t think he used it in the sense that you think he did. I think he used it in the sense of something a tort lawyer would love to sink his claws into; i.e., “climate scientists” lying through their teeth and misappropriating public funds either through sheer venality or total lack of skill. You may want to clarify that with Mr. RGB – who has clearly posted some of the best thinking we’ve seen on this matter of GCMs.

    REPLY: Well, as much as I think Nick has his head up some orifice at times due to his career in government making him wear wonk blinders, he did do one thing that sets him apart from many others who argue against us here. When our beloved friend and moderator Robert Phelan died, Nick Stokes was the only person on the other side of the argument here (that I am aware of) who made a donation to help his family in the drive I set up to help pay for his funeral.

    For that, he deserves some slack, but I will ask him to just keep it cool. – Anthony

  135. Finally, there is no such thing as falsification. There is confirmation and disconfirmation.” — Steven Mosher, June 13, 2013 at 11:54 am

    I agree with that. Just as verification, in the sense of proving a truth, can’t be had in any science, neither can falsification which is the same thing — proving a truth, the truth that something is false. Haha! Doh!

    Even Popper realized this in the end as did Feynman.

    Feynman, of course, is no surprise. Besides, I understand he didn’t have a lot of time for philosophy. You can see why. :)

    But Popper, on the other hand, is a surprise to me. I’ve read some of his stuff and similar but not all of it by a long shot, and on the whole I am very sympathetic to it, except for your point above where I thought he had a big blind spot. He kept banging on about corroboration, for instance, when confirming (putative) truths but seemed a little more adamant when it came to falsification. Dogmatic I’d say.

    In fact, the last I heard on this—and that was at least a couple of decades ago—was that he used to throw a hissy fit if someone brought the symmetry up. Ooh, touchy! :)

    I didn’t know he recanted though. That’s news to me. Good for him.

    The upshot is that he took us all round the houses and back to where we started in the first place — stuck with induction. Haha. Fun if you have nothing better to do.

  136. pottereaton says: June 13, 2013 at 7:02 pm

    Nick Stokes is here to quibble again. rgbatduke has written some very compelling posts today so Nick must punish him by quibbling over trivial points which customarily arise from Nick’s deliberately obtuse misreading

    And if there is anything at which Nick Stokes has proven himself to be the numero uno expert, it is in the art and artifice of “deliberately obtuse misreading” (although, much to my disappointment, there have been times – of which this thread is one – that Steve Mosher has been running neck and neck with Stokes)

    But that aside … having just subjected myself (albeit somewhat fortified by a glass of Shiraz) to watching the performances (courtesy of Bishop Hill) across the pond of so-called experts providing testimony at a hearing of the U.K. House of Commons Environmental Audit Committee, I’ve come to the conclusion that ‘t would have been a far, far better thing had they requested the appearance and testimony of rgbatduke than they have ever done before!

  137. “But Popper, on the other hand, is a surprise to me. I’ve read some of his stuff and similar but not all of it by a long shot, and on the whole I am very sympathetic to it, except for your point above where I thought he had a big blind spot. He kept banging on about corroboration, for instance, when confirming (putative) truths but seemed a little more adamant when it came to falsification. Dogmatic I’d say.”

    In the end, of course, he had to admit that real scientists don’t actually falsify theories. They adapt them. I’m referring to his little fudge around the issue of auxiliary hypotheses.

    “As regards auxiliary hypotheses we propose to lay down the rule that only those are acceptable whose introduction does not diminish the degree of falsifiability or testability of the system in question, but, on the contrary, increases it.”

    That in my mind is an admission that scientists in fact have options when data contradicts a theory: namely the introduction of auxiliary hypotheses. Popper tried to patch this with a “rule” about auxiliary hypotheses, but the rule in fact was disproved. Yup, his philosophical rule was shown to be wrong… pragmatically.

    In Popper’s formulation we are only allowed to introduce auxiliary hypotheses if they are testable and if they don’t “diminish” falsifiability (however you measure that is a mystery). This approach to science was luckily ignored by working scientists. The upshot of Popper’s approach is that one could reject theories that were actually true.

    In the 1920s, physicists noted that in beta decay (a neutron decaying into a proton and an electron) the combined energy of the proton and the electron was less than the energy of the neutron.
    This led some physicists to claim that conservation of energy was falsified.

    Pauli suggested that there was also an invisible particle emitted. Fermi named it neutrino.
    However, at the time there was no way of detecting this. By adding this auxiliary hypothesis, conservation of energy was saved, BUT the auxiliary hypothesis was not testable. Popper’s rule would have said “thou shalt not save the theory”.

    Of course, in 1956 the neutrino was detected and conservation of energy was preserved, but by Popper’s “rulz” the theory would have been tossed. The point being that theories don’t get tossed. They get changed. Improved. And there are no set rules for how this happens. It’s a pragmatic endeavor. So scientists will keep a theory around, even one that has particles that can’t be detected, as long as that theory is better than any other. Skepticism is a tool of science; it’s not science itself.

    If you want an even funnier example see what Feynman said about renormalization.

    “The shell game that we play … is technically called ‘renormalization’. But no matter how clever the word, it is still what I would call a dippy process! Having to resort to such hocus-pocus has prevented us from proving that the theory of quantum electrodynamics is mathematically self-consistent. It’s surprising that the theory still hasn’t been proved self-consistent one way or the other by now; I suspect that renormalization is not mathematically legitimate.”

    So there you go. In order to keep a theory in play, a theory that worked, Feynman used a process that he thought was mathematically suspect. Haha, changing math to fit the theory.

  138. Hilary

    “And if there is anything at which Nick Stokes has proven himself to be the numero uno expert, it is in the art and artifice of “deliberately obtuse misreading” (although, much to my disappointment, there have been times – of which this thread is one – that Steve Mosher has been running neck and neck with Stokes)”

    I find your intolerance of Nick’s contrary opinions and other contrary opinions to be out of line with the praise for this site which the good Lord bestowed just the other day.

    Let’s be clear on a couple of things. Feynman is no authority on how science works. Read his opinion on renormalization and you will understand that he did not practice what he preached.
    Popper was likewise wrong about science. This isn’t a matter of philosophical debate, it’s a matter of historical fact.

    Here is a hint. You can be a sceptic and not rely on either of these guys’ flawed ideas about how science in fact operates. Theories rarely get “falsified”; they get changed, improved, or forgotten when some better theory comes along. Absent a better theory, folks work with the best they have.

  139. Steven Mosher says:
    June 13, 2013 at 11:54 am

    Finally, there is no such thing as falsification. There is confirmation and disconfirmation.
    even Popper realized this in the end as did Feynman.
    ———————————————–

    Please confirm with actual statements Popper & Feynman that they “realized” this. Absent your providing evidence to this effect, I think that you have misunderstood the mature thought of both men.

    The physicists and philosophers of science Alan Sokal and Jean Bricmont, among others, could not have disagreed with you more. In their 1997 (French; English 1998) book “Fashionable Nonsense” they wrote, “When a theory successfully withstands an attempt at falsification, a scientist will, quite naturally, consider the theory to be partially confirmed and will accord it a greater likelihood or a higher subjective probability… But Popper will have none of this: throughout his life he was a stubborn opponent of any idea of ‘confirmation’ of a theory, or even of its ‘probability’…(however) the history of science teaches us that scientific theories come to be accepted above all because of their successes”.

    The history of science is rife with instances of falsification, which neither Popper nor Feynman would I’m sure deny (again, please provide evidence against this view, given their well known support of the theory of falsifiability). There very much indeed is such a thing. Nor would either deny that to be scientific an hypothesis must make falsifiable predictions. If either man did deny this tenet, please show me where.

    For instance, Galileo’s observation of the phases of Venus conclusively falsified the Ptolemaic system, without confirming Copernicus’ versus Tycho’s.

    As you’re probably aware, Popper initially considered the theory of natural selection to be unfalsifiable, but later changed his mind. I have never read anywhere in his work that he changed his mind about falsifiability. The kind of ad hoc backpedaling in which CACCA engages is precisely what Popper criticized as unscientific to the end. If I’m wrong, please show me where & how.

    And that goes double for Feynman.

  140. Samurai says –
    “If it weren’t for the $TRILLIONS being wasted on this hoax, it would almost be funny…Almost…
    The eventual cost to the scientific community’s credibility and the actual economic and social destruction this silly hoax has inflicted on the world’s economy so far has not been so humorous; tragic comes to mind.”

    INVESTORS are really, really concerned about CAGW and the environment!!! nil chance they’ll ever admit it’s a hoax:

    13 June: Reuters: Laura Zuckerman: Native Americans decry eagle deaths tied to wind farms
    A Native American tribe in Oklahoma on Thursday registered its opposition to a U.S. government plan that would allow a wind farm to kill as many as three bald eagles a year despite special federal protections afforded the birds…
    They spoke during an Internet forum arranged by conservationists seeking to draw attention to deaths of protected bald and golden eagles caused when they collide with turbines and other structures at wind farms.
    The project proposed by Wind Capital Group of St. Louis would erect 94 wind turbines on 8,400 acres (3,400 hectares) that the Osage Nation says contains key eagle-nesting habitat and migratory routes.
    The permit application acknowledges that up to three bald eagles a year could be killed by the development over the 40-year life of the project…
    The fight in Oklahoma points to the deepening divide between some conservationists and the Obama administration over its push to clear the way for renewable energy development despite hazards to eagles and other protected species.
    The U.S. Fish and Wildlife Service, the Interior Department agency tasked with protecting eagles and other wildlife to ensure their survival, is not sure how many eagles have been killed each year by wind farms amid rapid expansion of the facilities under the Obama administration.
    UNDERESTIMATED EAGLE DEATHS
    ***Reporting is voluntary by wind companies whose facilities kill eagles, said Alicia King, spokeswoman for the agency’s migratory bird program.
    She estimated wind farms have caused 85 deaths of bald and golden eagles nationwide since 1997, with most occurring in the last three years as wind farms gained ground through federal and state grants and other government incentives…
    ***Some eagle experts say federal officials are drastically underestimating wind farm-related eagle mortality. For example, a single wind turbine array in northern California, the Altamont Pass Wind Resource Area, is known to kill from 50 to 70 golden eagles a year, according to Doug Bell, wildlife program manager with the East Bay Regional Park District.
    Golden eagle numbers in the vicinity are plummeting, with a death rate so high that the local breeding population can no longer replace itself, Bell said.
    The U.S. government has predicted that a 1,000-turbine project planned for south-central Wyoming could kill as many as 64 eagles a year.
    ***It is illegal to kill bald and golden eagles, either deliberately or inadvertently, under protections afforded them by two federal laws, the Migratory Bird Treaty Act and the Bald and Golden Eagle Protection Act…
    In the past, federal permits allowing a limited number of eagle deaths were restricted to narrow activities such as scientific research…
    ***Now the U.S. Fish and Wildlife Service is seeking to lengthen the duration of those permits from five to 30 years to satisfy an emerging industry dependent on investors seeking stable returns…

    http://in.reuters.com/article/2013/06/13/usa-eagles-wind-idINL2N0EP1ZS20130613

    ——————————————————————————–

  141. rgbatduke at 1:17 pm – Oh Yes, follow the money. Corporate America, which of course includes Big Oil, has consistently been the main supplier of money to the Green Movement for decades.

  142. ferdberple 7:30 pm. Impressive cherrypicking of a partial sentence there to make it sound as if I’m wrong.

143. In the past I have defended Nick Stokes for making pertinent points despite their being unpopular here.

    However he has really made a fool of himself here.
The question isn’t who made this particular average of model outputs; it is whether anyone should make an average of model outputs at all. Clearly, Monckton has made this average of model outputs to criticise the average of model outputs in the forthcoming AR5 (read the post).
    Yet, the posts of rgbatduke persuasively argue that making an average of model outputs is a meaningless exercise anyway.

    But criticising Monckton for taking the methodology of AR5 seriously is daft.
    Criticising AR5 for not being serious is the appropriate response.

I look forward to Nick Stokes strongly condemning any averaging of models in AR5. But I fear I may be disappointed.

  144. Why do all these predictions get based on a linear projection? Try putting a cyclic waveform on the noisy one and compare the correlations then. They beat the hell out of any linear ones.
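As a rough numerical illustration of this commenter’s suggestion (all numbers below are invented, and the period is assumed known, which real data would not grant): fitting a sinusoid to synthetic noisy cyclic data explains far more variance than a straight line does.

```python
import numpy as np

# Synthetic "temperature" series: a pure sinusoid plus noise, no linear trend.
rng = np.random.default_rng(0)
t = np.arange(200, dtype=float)
y = np.sin(2 * np.pi * t / 60.0) + 0.3 * rng.standard_normal(t.size)

# Straight-line least-squares fit.
slope, intercept = np.polyfit(t, y, 1)
lin_pred = slope * t + intercept

# Sinusoidal least-squares fit at the (assumed known) period,
# using a sin/cos/constant basis.
X = np.column_stack([np.sin(2 * np.pi * t / 60.0),
                     np.cos(2 * np.pi * t / 60.0),
                     np.ones_like(t)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
cyc_pred = X @ coef

def r2(pred):
    """Fraction of variance explained by a fitted series."""
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

print(r2(lin_pred), r2(cyc_pred))  # the cyclic fit explains far more variance
```

Of course, on real data the period would itself have to be estimated, which weakens the comparison.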

  145. M Courtney says: June 14, 2013 at 12:26 am
    “However he has really made a fool of himself here.
The question isn’t who made this particular average of model outputs; it is whether anyone should make an average of model outputs at all.”

    Model averaging is only a small part of the argument here. Let me just give a few quotes from the original RGB post:

“Note the implicit swindle in this graph — by forming a mean and standard deviation over model projections and then using the mean as a “most likely” projection and the variance as representative of the range of the error, one is treating the differences between the models as if they are uncorrelated random variates causing deviation around a true mean!”

    “What I’m trying to say is that the variance and mean of the “ensemble” of models is completely meaningless, statistically because the inputs do not possess the most basic properties required for a meaningful interpretation. “

“Why even pay lip service to the notion that R^2 or p for a linear fit, or for a Kolmogorov-Smirnov comparison of the real temperature record and the extrapolated model prediction, has some meaning?”

    “This is why it is actually wrong-headed to acquiesce in the notion that any sort of p-value or Rsquared derived from an AR5 mean has any meaning.”

    My simple point is that these are features of Lord Monckton’s graphs, duly signed, in this post. It is statistical analysis that he added. There is no evidence that the IPCC is in any way responsible. Clear?

    As to averaging models, no, I don’t condemn it. It has been the practice since the beginning, and for good reason. As I said above, models generate weather, from which we try to discern climate. In reality, we just have to wait for long-term averages and patterns to emerge. In model world, we can rerun simultaneously to try to get a common signal. It’s true that models form an imperfect population, and fancy population statistics may be hard to justify. But I repeat, the fancy statistics here seem to be Monckton’s. If there is a common signal, averaging across model runs is the way to get it.

  146. David Cage says:
    June 14, 2013 at 12:32 am
    Why do all these predictions get based on a linear projection? Try putting a cyclic waveform on the noisy one and compare the correlations then. They beat the hell out of any linear ones.

    Indeed this analysis (which shows short term cyclic forms in the UAH data) http://s1291.photobucket.com/user/RichardLH/story/70051 supports the non-linear argument.

  147. It’s true that models form an imperfect population, and fancy population statistics may be hard to justify.

    Which is the point.
Monckton can justify it by referring to AR5, which he is commenting on. Whatever fancy statistics he uses are not relevant to the question of whether including different models – that have no proven common physics – is appropriate at all. He is commenting on AR5.

    The point of the original RGB post, as you quote, is the latter idea: The question of whether including different models that have no common physics is appropriate at all.
    So what Monckton did is irrelevant to the original RGB post. Monckton was addressing AR5.

    AR5 is the problem here (assuming the blending of disparate models still occurs in the published version).

148. The following is a summary of the comments concerning the observed and unexplained end of global warming. The comments are interesting as they show a gradual change in attitudes/beliefs concerning the end of global warming.

    Comment:
    If the reasoning in my above comment is correct the planet will now cool which would be an end to global warming as opposed to a pause in global warming.
Source: “No Tricks Zone”

    http://notrickszone.com/2013/06/04/list-of-warmist-scientists-say-global-warming-has-stopped-ed-davey-is-clueless-about-whats-going-on/

    5 July, 2005
    “The scientific community would come down on me in no uncertain terms if I said the world had cooled from 1998. OK it has but it is only 7 years of data and it isn’t statistically significant…,” Dr. Phil Jones – CRU emails.

    7 May, 2009
    “No upward trend…has to continue for a total of 15 years before we get worried,” Dr. Phil Jones – CRU emails.

    15 Aug 2009
    “…This lack of overall warming is analogous to the period from 2002 to 2008 when decreasing solar irradiance also countered much of the anthropogenic warming…,” Dr. Judith L. Lean – Geophysical Research Letters.

    19 November 2009
    “At present, however, the warming is taking a break.[...] There can be no argument about that,” Dr. Mojib Latif – Spiegel.

    19 November 2009
    “It cannot be denied that this is one of the hottest issues in the scientific community. [….] We don’t really know why this stagnation is taking place at this point,” Dr. Jochem Marotzke – Spiegel.

    13 February 2010
    Phil Jones: “I’m a scientist trying to measure temperature. If I registered that the climate has been cooling I’d say so. But it hasn’t until recently – and then barely at all.”
    BBC: “Do you agree that from 1995 to the present there has been no statistically-significant global warming?”
    Phil Jones: “Yes, but only just.”

    2010
    “…The decade of 1999-2008 is still the warmest of the last 30 years, though the global temperature increment is near zero…,” Prof. Shaowu Wang et al – Advances in Climate Change Research.

    2 June 2011
    “…it has been unclear why global surface temperatures did not rise between 1998 and 2008…,” Dr Robert K. Kaufmann – PNAS.

    18 September 2011
    “There have been decades, such as 2000–2009, when the observed globally averaged surface-temperature time series shows little increase or even a slightly negative trend1 (a hiatus period)…,” Dr. Gerald A. Meehl – Nature Climate Change.

    14 October 2012
    “We agree with Mr Rose that there has been only a very small amount of warming in the 21st Century. As stated in our response, this is 0.05 degrees Celsius since 1997 equivalent to 0.03 degrees Celsius per decade.” Source: metofficenews.wordpress.com/, Met Office Blog – Dave Britton (10:48:21) –

    30 March 2013
“…the five-year mean global temperature has been flat for a decade,” Dr. James Hansen – The Economist.

7 April 2013
    “…Despite a sustained production of anthropogenic greenhouse gases, the Earth’s mean near-surface temperature paused its rise during the 2000–2010 period…,” Dr. Virginie Guemas – Nature Climate Change.

    22 February 2013
    “People have to question these things and science only thrives on the basis of questioning,” Dr. Rajendra Pachauri – The Australian.

    27 May 2013
    “I note this last decade or so has been fairly flat,” Lord Stern (economist) – Telegraph.

  149. “13 February 2010
    Phil Jones: “I’m a scientist trying to measure temperature. ….”

    I can read a thermometer AND can use Microsoft Excel. Now where is my grant money?

  150. “Lets be clear on a couple things. Feynman is no authority on how science works. read his opinion on renormalization and you will understand that he did not practice what he preached.
    Popper was likewise wrong about science. This isnt a matter of philosophical debate, its a matter of historical fact.”

Wow, Mosher bashes Feynman and Popper. So wot’s your achievement in science compared with Feynman and Popper, Mosher? Already received a Nobel Prize? Your arrogance is toe-curling…

  151. Steven says: June 13, 2013 at 4:36 am

    “I keep seeing these graphs with linear progressions. Seriously. I mean seriously. Since when is weather/climate a linear behavorist? The equations that attempt to map/predict magnetic fields of the earth are complex Fourier series. Is someone, somewhere suggesting that the magnetic field is more complex than the climate envelope about the earth? I realize this is a short timescale and things may look linear but they are not. Not even close. Like I said in the beginning, the great climate hoax is nothing more than what I just called it. I am glad someone has the tolerance to deal with these idiots. I certainly don’t.”
    ————————————–
    YES, YES, and YES!

    I can’t see how any legitimate scientist would entertain these climate hacks beyond the first mention of a linear projection in their papers. At that statement they prove they don’t know what they are talking about. I agree you can use a line to interpolate data between two actual data points, but to fit a line and then project that into the distant future? Give me a giant break.

    If you don’t know the real function it is wrong to assume a line will work. You might as well assume a Taylor expansion out to twelfth order for that matter. Assume anything; you’ll most likely be wrong. Assuming a line doesn’t get you any closer to being right.

    The most amazing thing to me is that the line doesn’t even fit the data displayed! If they would analyze the residuals they’d see they weren’t normally distributed. The line isn’t even appropriate over the short timescale they plot.

    Dr. Santer’s 17 year plot clearly shows the temperatures have gone up and are now coming back down. It’s not even leveling off, no more than the peak of the voltage on an AC circuit. It smoothly goes up and comes back down.

    Can you imagine these guys as an artillery battery? They’d plot the first few points of the shell as it comes out of the barrel and project it linearly to their target.
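The residual check described above can be sketched like this (the hump-shaped series is synthetic, chosen only to mimic “up then down”): a straight line fitted to it leaves strongly autocorrelated residuals, which is the tell-tale sign that the linear form is wrong.

```python
import numpy as np

# Synthetic series that rises and then falls (a smooth hump) plus noise.
# A straight line is fitted, and the residuals are checked for the lag-1
# autocorrelation that a valid linear model would not leave behind.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 120)
y = np.sin(np.pi * t) + 0.05 * rng.standard_normal(t.size)  # up, then down

slope, intercept = np.polyfit(t, y, 1)
resid = y - (slope * t + intercept)

# Lag-1 autocorrelation of residuals: near 0 for white noise, near 1 when
# the fitted functional form is systematically wrong.
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
print(round(r1, 3))
```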

  152. Blimey – some incredible minds here. Genuinely impressive stuff!
    I shall now sum up my research in this matter using my limited intellect.
It’s June.
    I’m cold.

  153. ‘After summer floods and droughts, freezing winters and even widespread snow in May this year, something is clearly wrong with Britain’s weather.

    ‘Concerns about the extreme conditions the UK consistently suffers have increased to such an extent that the Met Office has called a meeting next week to talk about it.

    ‘Leading meteorologists and scientists will discuss one key issue: is Britain’s often terrible weather down to climate change, or just typical?’

    Read more: http://www.dailymail.co.uk/news/article-2341484/Floods-droughts-snow-May-Britains-weather-got-bad-Met-Office-worried.html#ixzz2WBzcNZIc
    Follow us: @MailOnline on Twitter | DailyMail on Facebook

154. Can I edit Dr. Stokes’ comment to make it clearer?

    If there is a common signal programmed into the code of multiple models, averaging across model runs is the way to get it to show up in the output.

  155. barry says:
    It is not enough to cite a quote out of context. Data, too must be analysed carefully, and not simply stamped with pass/fail based on a quote. Other attempts at finding a benchmark (a sound principle) are similar to Santer’s general conclusions that you need multi-decadal records to get a good grasp of signal (20, 30, 40 years).

I actually agree with this statement. The amount of time is not the biggest factor. The question is related to finding some factors that could come into play (“principle”). That is why the almost perfect fit of global temperatures with the PDO is so significant.

    The current 16.5 years of no warming is actually around 8 years of warming followed by 8+ years of cooling that peaks right at the PDO switch. That is the “sound principle” that demonstrates that we really don’t even need to wait 17 years, we can say with high certainty that the PDO has a stronger influence on temperatures than CO2. And, if that is true then CO2’s effect is very small.

  156. Nick Stokes says:
    As to averaging models, no, I don’t condemn it. It has been the practice since the beginning, and for good reason. As I said above, models generate weather, from which we try to discern climate. In reality, we just have to wait for long-term averages and patterns to emerge. In model world, we can rerun simultaneously to try to get a common signal. It’s true that models form an imperfect population, and fancy population statistics may be hard to justify. But I repeat, the fancy statistics here seem to be Monckton’s. If there is a common signal, averaging across model runs is the way to get it.

Nick, the reason averaging A MODEL makes sense is because you are trying to eliminate the effect of noise. When you average multiple models, what are you doing? In essence you are averaging differing implementations of physics. Please inform me what a normal distribution of different physics provides. And what is the meaning of the mean of a normal distribution of different physics? Dr. Brown made this clear. It is so idiotic I can’t even imagine you supporting this nonsense. You are smarter than that.

  157. Nick Stokes says:
    June 14, 2013 at 1:05 am
    As to averaging models, no, I don’t condemn it. It has been the practice since the beginning, and for good reason. As I said above, models generate weather, from which we try to discern climate. In reality, we just have to wait for long-term averages and patterns to emerge.
    ============
    There is no good reason to average chaos. It is a mathematical nonsense to do so because the law of large numbers does not apply to chaotic time series. There is no mean around which the data can be expected to converge.

    The reason averaging works for some problems is because there is a mean to be discovered. You sample contains noise, and over time the noise will be random. Some positive and some negative. Over time the law of large numbers operates to equal out the positive and negative noise, and the signal will emerge.

    However, as rgbatduke has posted, all this goes out the window when you are dealing with chaos. Chaotic systems are missing a constant mean and constant deviation. There is no convergence, only spurious convergence. False, misleading convergence that is not what it appears.

    In chaotic systems you have attractors, which might be considered local means. When you use standard statistics to analyze them, you appear to get good results while the system is orbiting an attractor, but then it shoots off towards another attractor and makes a nonsense of your results.

So the idea that you can improve your results by taking longer samples of chaotic systems is a nonsense. The longer a chaotic system is sampled, the more likely it will diverge towards another attractor, making your results less certain, not more certain.

    This is the fundamental mistake in the mathematics of climate. The assumption that you can average a chaotic system (weather) over time and the chaos can be evened out as noise. That is mathematical wishful thinking, nothing more. Chaos is not noise. It looks like noise, but it is not noise and cannot be treated as noise if you want to arrive at a meaningful result.
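The sensitive dependence this comment invokes is easy to demonstrate; here is a minimal sketch using the classic Lorenz system with crude Euler stepping (step size and run length chosen only for illustration):

```python
# Euler integration of the Lorenz system: two runs whose initial states
# differ by 1e-8 end up far apart on the attractor -- the sensitive
# dependence on initial conditions that makes individual chaotic
# trajectories unpredictable.
def lorenz_run(x, y, z, steps=5000, dt=0.005,
               sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    return x, y, z

a = lorenz_run(1.0, 1.0, 1.0)
b = lorenz_run(1.0 + 1e-8, 1.0, 1.0)
gap = max(abs(p - q) for p, q in zip(a, b))
print(gap)
```

Note this shows unpredictability of individual trajectories; whether long-run *statistics* of such a system converge is a separate, harder question, which is exactly where the disagreement in this thread lies.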

  158. Why are there so many models? In engineering we have standards and committees to make changes to the standards. Assuming there is only one physics there should be only one model where all the changes that get made must be approved by a standards committee. Sure that takes a little extra effort but the result is one, arguably better, model. Instead we have dozens of which none are of much value (other than to the paychecks of the modelers).

    Of course, this is the difference between researchers and engineers. The former is not too concerned with accuracy.

  159. Richard M says:
    June 14, 2013 at 6:20 am

    “Of course, this is the difference between researchers and engineers. The former is not too concerned with accuracy.”

    Also the difference between discovery and manufacture.

  160. “Richard M says:

    June 14, 2013 at 6:20 am”

In engineering we (I do) know what +/- 2 microns are (+/- 3 microns, bin the job and start again). It is measurable, it is finite. On the other hand, computer-based climate cartoon-ography, sorry, I mean climate modelling, is, in its basic form, just a WAG where nothing is finite nor even measured (other than the monthly pay check).

  161. Richard M says: June 14, 2013 at 6:08 am
    “When you average multiple models what are you doing? In essence you are averaging differing implementations of physics. Please inform me what a normal distribution of different physics provides?”

    There is no expectation of a normal distribution involved in averaging.

    But why do you think different models use different physics?

    ferdberple says: June 14, 2013 at 6:19 am
    “There is no good reason to average chaos.”

    This would mean that you could never speak of any weather average. But we do that all the time, and find it useful.

    Some folks are overly dogmatic about chaos.
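For what it is worth, Stokes’s claim is demonstrable in a toy setting, if one grants the premise the critics dispute, namely that a common signal exists. Averaging noisy runs that share a signal recovers it better than any single run; the signal, noise level, and run count below are all invented.

```python
import numpy as np

# Toy ensemble: many "runs" share one underlying signal but each adds its
# own weather-like noise. The ensemble mean tracks the signal better than
# any single run does -- provided the common signal actually exists.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 100)
signal = 0.5 * t                       # the common signal the runs share
runs = signal + 0.3 * rng.standard_normal((20, t.size))

single_err = np.sqrt(np.mean((runs[0] - signal) ** 2))
mean_err = np.sqrt(np.mean((runs.mean(axis=0) - signal) ** 2))
print(single_err, mean_err)  # the ensemble mean has the smaller error
```

The disputed step is treating structurally different models like these interchangeable noisy runs; this sketch says nothing about that.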

  162. I am most grateful to Professor Brown for having pointed out that taking an ensemble of models that use different code, as the Climate Model Intercomparison Project does, is questionable, and that it is interesting to note the breadth of the interval of projections from models each of which claims to be rooted in physics.

    In answer to Mr. Stokes, the orange region representing the interval of models’ outputs will be found to correspond with the region shown in the spaghetti-graph of models’ projections from 2005-2050 at Fig. 11.33a of the Fifth Assessment Report. The correspondence between my region and that in Fig. 11.33a was explained in detail in an earlier posting. The central projection of 2.33 K/century equivalent that I derived from Fig. 11.33a seems fairly to reflect the models’ output. If Mr. Stokes thinks the models are projecting some warming rate other than that for the 45 years 2005-2050, perhaps he would like to state what he thinks their central projection is.

    Several commenters object to applying linear regression to the temperature data. Yet this standard technique helpfully indicates whether and at what rate stochastic data are trending upward or downward, and allows comparison of temperature trends with projections such as those in the Fifth Assessment Report. A simple linear regression is preferable to higher-order polynomial fits where – as here – the data uncertainties are substantial.
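The standard technique referred to here can be sketched on a synthetic anomaly series (not HadCRUT4; the trend and noise level are made up): the least-squares slope and its 2 σ uncertainty, expressed per century.

```python
import numpy as np

# Synthetic monthly anomaly series, ~17 years 4 months long.
rng = np.random.default_rng(3)
months = np.arange(208)
anoms = 0.0004 * months + 0.1 * rng.standard_normal(months.size)

# Ordinary least-squares trend and the standard error of the slope.
n = months.size
slope, intercept = np.polyfit(months, anoms, 1)
resid = anoms - (slope * months + intercept)
se = np.sqrt(np.sum(resid ** 2) / (n - 2)
             / np.sum((months - months.mean()) ** 2))

per_century = 1200.0  # months per century
print(slope * per_century, 2 * se * per_century)  # trend and 2-sigma, C/century
```

This standard error assumes independent residuals; autocorrelated monthly data widen the true uncertainty, which is one reason short records rarely yield significant trends.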

    Some commenters object to making any comparison at all between what the models predict and what is happening in the real world. However, it is time the models’ projections were regularly benchmarked against reality, and I shall be doing that benchmarking every month from now on. If anyone prefers benchmarking methods other than mine, feel free to do your own thing. One understands that the cry-babies and bed-wetters will not be at all keen to have the variance between prediction and observation regularly and clearly demonstrated: but the monthly Global Warming Prediction Index and comparison graph are already being circulated so widely that it will soon be impossible for anyone to get away with lying to the effect that global warming is occurring at an unprecedented rate, or that it is worse than we ever thought possible, or that the models are doing a splendid job, or that we must defer to the consensus because consensus must be right.

    Finally, Mr. Mansion says that, just as correlation does not imply causation, absence of correlation does not imply absence of causation. In logic he is incorrect. Though correlation indeed does not imply causation, absence of correlation necessarily implies absence of causation. CO2 concentration continues to increase, but temperature is not following it. So, at least at present, the influence of CO2 concentration change on temperature change is not discernible.

  163. “RichardLH says:

    June 14, 2013 at 6:24 am

    Also the difference between discovery and manufacture.”

Based on my previous post, the discovery is that you haven’t read (understood) the science (drawing)! I agree!

  164. barry says:
    June 13, 2013 at 8:22 pm
    similar to Santer’s general conclusions that you need multi-decadal records to get a good grasp of signal (20, 30, 40 years).
    ================
The problem is that we are likely dealing with a strange attractor, a fractal distribution, which implies that regardless of the scale, the variability will appear the same. What this means mathematically is that there is no time scale that will prove satisfactory. There is no time scale at which you can expect the signal to emerge from the noise, because the noise is not noise. It is chaos. The system will continue to diverge, no matter if you collect data for 100, 1000, 1 million, 1 billion years.

    The best that can be hoped for in our current understanding is to look for patterns in how the system orbits its attractors. This behavior may give some degree of cyclical predictability, or not, depending on the motion of the attractors. We use this approach to calculate the ocean tides with a high degree of precision, even though the underlying physics is chaotic.

    Climate science on the other hand has ignored the cyclical behavior of climate and instead attempted to use a linear approximation of a non-linear system. And is now confused because the linear projections are diverging from observation. Yet this divergence is guaranteed as a result of the underlying chaotic time series.

  165. “Nick Stokes says:

    June 14, 2013 at 6:33 am

    This would mean that you could never speak of any weather average. But we do that all the time, and find it useful.

    Some folks are overly dogmatic about chaos.”

    And some folks are overly accepting of “averages”. It’s meaningless to compare an absolute, as is ALWAYS the case in weathercasts, with an average. But it is done everyday, in every weathercast.

  166. Monckton of Brenchley says: June 14, 2013 at 6:36 am
    “The central projection of 2.33 K/century equivalent that I derived from Fig. 11.33a seems fairly to reflect the models’ output. If Mr. Stokes thinks the models are projecting some warming rate other than that for the 45 years 2005-2050…”

    I was far less critical of your graphs than Prof Brown, and I don’t particularly want to argue projections here. I was merely pointing out that they are indeed your estimates and statistics, and the graphs are not IPCC graphs, as they are indeed clearly marked.

  167. Nick Stokes says at June 14, 2013 at 6:33 am

    But why do you think different models use different physics?

    Because they all give different results. Sure they must have some bits in common (I hope they use a round planet) but they don’t all model everything in the same way.

    So what are you bundling?
Not variations in inputs, to see what the model predicts is the most significant component.
    Not variations of a single parameter, to see whether that parameter is modelled correctly.

    You are averaging a load of different concepts about how the climate works. That is the error that rgbatduke skewered at June 13, 2013 at 7:20 am…

    there are a lot of different models out there, all supposedly built on top of physics, and yet no two of them give anywhere near the same results!

    BTW, Nick Stokes: Please don’t think I am criticising you personally. I greatly respect your coming here into the lion’s den. I just have nowhere else to go now I can’t engage at the Guardian (sigh).

  168. The faulty mathematics of the hockey stick and tree ring calibration could well be what led climate science down a dead end. The hockey stick made climate appear linear over large enough time scales to give some assurance of predictability. By minimizing the signal and amplifying the noise, tree ring calibration made temperatures appear stable over very long time periods, leading climate scientists to believe that linear models would prove well behaved. However, they were built on faulty mathematics. The fault is called “selection by the dependent variable”. It results in a circular argument. It is a reasonably well known statistical error and it is hard to believe the scientists involved were not aware of this, because some of them were formally trained in mathematics.

  169. Duncan says:
    June 14, 2013 at 5:11 am

    Blimey – some incredible minds here. Genuinely impressive stuff!
    I shall now sum up my research in this matter using my limited intellect.
It’s June. I’m cold.
    ——————————————————————————————————-
    And I am out of funds, too, because this year’s unnervingly long, cold winter cost me 1000 Euros extra just for heating my home. Out of the window go my summer holidays…

    Global warming? I am all for it! But where is it?

  170. Mr. Stokes vexatiously persists in maintaining that Professor Brown had criticized my graphs, long after the Professor himself has plainly stated he had criticized not my graphs but the IPCC’s graphs, from one of which I had derived the interval of models’ projections displayed in orange and correctly attributed in the second of the two graphs in the head posting.

    Of course it is embarrassing to Mr. Stokes that global warming is not occurring at anything like the predicted rate; and it is still more embarrassing to him that the variance between prediction and reality is now going to be visibly displayed every month. But continuing to lie to the effect that Professor Brown was criticizing my graphs when the Professor has said he was doing no such thing does not impress. Intellectual dishonesty of this kind has become the hallmark of the climate extremists.

  171. Lars said…Only models which have been validated by real data should continue to be used.

I’m a little confused. If you are using real data, then I was taught, and have experienced, that I do not need a model. Right now (8:30 AM MDT), my thermometer outside my south-facing window, in the shade, shows it is +5C. After a little further checking, yup, it is indeed June 14/2013, not November, so it is cool out. I do not need a model to tell me that! The only model I need is the one of the F-104 (in 1/48th scale) I helped my then 12-year-old step sister build, which still hangs in her bedroom. It is no more a reality capable of doing anything other than collecting dust than a climate model is. And the rednekk truck I will use today to get to the lake pulling the boat I will use for fishing (fingers crossed) is a reality, not a model.
    Has everybody forgotten GIGO?

172. The problem is that we are likely dealing with a strange attractor, a fractal distribution, which implies that regardless of the scale, the variability will appear the same.

    Climate, in loose terms, is the average of the variability.

    What this means mathematically is that there is no time scale that will prove satisfactory. There is no time scale at which you can expect the signal to emerge from the noise, because the noise is not noise. It is chaos. The system will continue to diverge, no matter if you collect data for 100, 1000, 1 million, 1 billion years.

    By that reckoning, the seasons should be indistinguishable.

    The best that can be hoped for in our current understanding is to look for patterns in how the system orbits its attractors. This behavior may give some degree of cyclical predictability, or not, depending on the motion of the attractors. We use this approach to calculate the ocean tides with a high degree of precision, even though the underlying physics is chaotic.

There is no reason to presume that, given an ever increasing forcing, climate should be cyclical. On geological time scales stretching to hundreds of millions of years, there is no cyclical behaviour. There is no reason to expect it on every time scale. The cyclical, or oscillating, processes we are sure of (ENSO, the solar cycle on a multi-decadal scale) are the variability within the climate system. You appear to be arguing that the world’s climate has oscillated roughly evenly around a mean for the length of its existence. Surely you know that this is wrong.

Climate science on the other hand has ignored the cyclical behavior of climate and instead attempted to use a linear approximation of a non-linear system. And is now confused because the linear projections are diverging from observation. Yet this divergence is guaranteed as a result of the underlying chaotic time series.

    I'm fairly confident 'climate science', which discusses the four seasons, is aware of cyclical behaviour. Weather is chaos, climate is more predictable. The millennial reconstructions don't have cyclical patterns, but they do have fluctuations. We now have an ever-increasing forcing agent, so the question is not whether the global climate will change, but by how much. That is where the discussion of supposedly diverging trends is centred.

  173. Another strong el Niño could – at least temporarily – bring the long period without warming to an end.

    That is true. It will certainly not be CO2 that will bring the long period without warming to an end. Look at the following graph for RSS.

    http://www.woodfortrees.org/plot/rss/from:1996.9/plot/rss/from:1996.9/trend

    The area on the left that is below the green flat slope line needs a 1998 or 2010 El Nino to counter it. Any El Nino that is less strong will merely move the start time for a flat slope for RSS from December 1996 towards December 1997.

  174. Monckton of Brenchley:
    Though correlation indeed does not imply causation, absence of correlation necessarily implies absence of causation.
    You have me puzzled on this one. If true, then RSA encryption should be impossible. The input causes the output, but, as I understand it (probably incorrectly), it is next to impossible to find a correlation between the two. I don’t immediately see how your statement is a logical necessity.

  175. Juan, it would be the encryption algorithm that causes the output, wouldn’t it? Inspecting these two, you could discover a correlation to the output.

    I think Monckton’s point holds. Consider two kinds of event which are uncorrelated. What would you take as evidence that “in spite of complete lack of correlation, events of type A cause events of type B”? I don’t think anything would count as evidence, do you? I can’t imagine a possible world in which there is such evidence. The meaning of “causation” and “complete lack of correlation” just don’t overlap. So, I would conclude that absence of correlation necessarily implies absence of causation.

  176. juan slayton says:
    June 14, 2013 at 9:15 am

    “…it is next to impossible to find a correlation between the two.”

    You generally do not have “the two”, just the one, the output.

  177. All those little adjustments upwards in recent history have come back to haunt the alarmists. The temperatures keep on failing to rise, so they have to keep on adjusting just to keep the trend flat, hehe

  178. For those with an interest: several months ago, the University of Kentucky hosted a forum on climate change with three excellent speakers who were all self-described conservatives. Liberals in attendance reported that they came to better understand that there are thoughtful conservative perspectives on, and solutions to, climate change, allowing for a broadened public discussion; conservatives in attendance learned the same thing. You can watch the recording of this event at http://bit.ly/135gvNa. The starting time for each speaker is noted on that page, so you can listen to the speakers of greatest interest to you.

  179. ferdberple says: June 14, 2013 at 6:56 am “The faulty mathematics of the hockey stick and tree ring calibration could well be what led climate science down a dead end. The hockey stick made climate appear linear over large enough time scales to give some assurance of predictability. …….”

    The hockey stick is primarily an AGW industry marketing tool created by Michael Mann, Limited Liability Climatologist (LLC), in response to a pressing market need for a scientific-looking analysis product which eliminates the Medieval Warm Period.

    But do the climate modelers take the hockey stick seriously enough to incorporate its purported historical data into their hindcasts and/or their predictions, either directly or indirectly? Perhaps someone with inside knowledge can tell us whether they do or they don’t.

    In any case, what ever happens with the future trend in global mean temperature — up, down, or flat — the climate science community as a whole will never abandon its AGW dogma.

    The great majority of climate scientists — 80%, 90%, 97%, whatever percentage it actually is — will continue with “It’s the CO2, and nothing else but the CO2, so help us God”, regardless of how convoluted the explanations must become to support that narrative.

  180. I am trying, just for fun, as a kind of a game, to imagine how the politicians, public, and warmists would react to global cooling. …. ….It is curiously difficult to imagine the scenario of global cooling after 20 years of nonstop media discussions, scientific papers, the IPCC reports, yearly climate conferences, and books all pushing global warming as a crisis. …. …..
    To imagine global cooling, it seems it is necessary to pretend or try to imagine the warming of the last 70 years had nothing to do with the increase in atmospheric CO2. Try to imagine that the warming was 100% due to solar magnetic cycle changes. (That makes it possible for the warming to be reversible.) Got that picture? Now imagine the onset of significant cooling, back to 1850’s climate. The cooling will be significant and rapid, occurring over roughly 5 years. Can you picture that change?

    Will the public request a scientific explanation for the onset of significant planetary cooling? Will the media start to interview the so called ‘skeptics’? Will the media connect the sudden slowdown of the solar magnetic cycle with the planetary cooling? … …Will the media ask why no one noticed that there are cycles of warming and cooling in the paleo climate record that correlate with solar magnetic cycle changes? The warming and cooling cycles are clearly evident. There are peer reviewed papers that connected past solar magnetic cycles changes with the warming and cooling cycles. How is it possible that this evidence was ignored? When there was 17 years without warming why did no one relook at the theory?

    How long will the public accept massive subsidies for scam green energy if there is unequivocal, significant evidence that the planet is cooling? Add a stock market crash and a currency crisis to the picture.

    Greenland ice temperature, last 11,000 years determined from ice core analysis, Richard Alley’s paper.

    http://www.dailymail.co.uk/news/article-2341484/Floods-droughts-snow-May-Britains-weather-got-bad-Met-Office-worried.html#ixzz2WBzcNZIc

    http://en.wikipedia.org/wiki/Little_Ice_Age

    Little Ice Age
    The Little Ice Age (LIA) was a period of cooling that occurred after the Medieval Warm Period (Medieval Climate Optimum).[1] While it was not a true ice age, the term was introduced into the scientific literature by François E. Matthes in 1939.[2] It has been conventionally defined as a period extending from the 16th to the 19th centuries,[3][4][5] or alternatively, from about 1350 to about 1850,[6]….

    Europe/North America
    ….The population of Iceland fell by half, but this was perhaps caused by fluorosis after the eruption of the volcano Laki in 1783.[20] Iceland also suffered failures of cereal crops, and people moved away from a grain-based diet.[21] The Norse colonies in Greenland starved and vanished (by the early 15th century), as crops failed and livestock …. …. Hubert Lamb said that in many years, “snowfall was much heavier … ….Crop practices throughout Europe had to be altered to adapt to the shortened, less reliable growing season, and there were many years of dearth and famine (such as the Great Famine of 1315–1317, although this may have been before the LIA proper).[25] According to Elizabeth Ewan and Janay Nugent, “Famines in France 1693–94, Norway 1695–96 and Sweden 1696–97 claimed roughly 10% of the population of each country. In Estonia and Finland in 1696–97, losses have been estimated at a fifth and a third of the national populations, respectively.”[26] Viticulture disappeared from some northern regions. Violent storms caused serious flooding and loss of life. Some of these resulted in permanent loss of large areas of land from the Danish, German and Dutch coasts.[24] … ….Historian Wolfgang Behringer has linked intensive witch-hunting episodes in Europe to agricultural failures during the Little Ice Age.[36]

    Comment:
    As the planet has suddenly started to cool, I would assume GCR now again modulates planetary cloud cover. We certainly appear to live in interesting times.

    http://ocean.dmi.dk/arctic/meant80n.uk.php

  181. Accuracy was never the goal of climate models — there’s no money in that. Scientists were forced to “Chicken Little” the results to try and spur action by governments. Seventeen years later and the sky hasn’t fallen. The new “Chicken Little” meme is “Extreme Climate Events”.

  182. Steven says:
    June 13, 2013 at 4:36 am

    I keep seeing these graphs with linear progressions. Seriously. I mean seriously. Since when does weather/climate behave linearly? …. I am glad someone has the tolerance to deal with these idiots. I certainly don’t.
    >>>>>>>>>>>>>>>>>>>
    Monckton and others use the assumptions made by the Warmists, like linear behavior, and use their much-abused/fudged data sets, and STILL win the scientific debate. No wonder the Climastrologists refuse to debate knowledgeable people or even entertain questions about warming from the lay audience. Only by continually moving the goal posts and silencing any and all questions can they keep the Hoax going.

  183. “””””…..ferdberple says:

    June 14, 2013 at 6:19 am

    Nick Stokes says:
    June 14, 2013 at 1:05 am
    As to averaging models, no, I don’t condemn it. It has been the practice since the beginning, and for good reason. As I said above, models generate weather, from which we try to discern climate. In reality, we just have to wait for long-term averages and patterns to emerge.
    ============
    There is no good reason to average chaos. It is a mathematical nonsense to do so because the law of large numbers does not apply to chaotic time series. There is no mean around which the data can be expected to converge………”””””””””

    Averaging is a quite well defined, and quite fictitious process, that we simply made up in our heads; like all mathematics. It’s over half a century, since I last had any formal instruction in mathematics; but I do have a degree in it, so I vaguely recollect how it can sometimes be quite pedantic, in its exact wording.

    But in layperson lingo, it is quite simple. You have a set of numbers; hopefully each of them expressed in the same number system; binary/octal/decimal/whatever.
    You add all of the numbers together, using the rules for addition that apply to whatever branch of arithmetic you are using, and then you divide the total by the number of original input numbers you started with, and the result is called the “average”. Some may use the word mean, as having the same meaning; but I prefer to be cautious, and not assume that “mean” and “average” are exactly the same thing.
    So that is what “average” is. Now notice, I said nothing about the original numbers, other than they all belong to the same number system. There is no assumption that the numbers are anything other than some numbers, and are quite independent of each other.

    No matter, the definition of “average” doesn’t assume any connections, real or imagined, between the numbers. There also is no assumption that the “average” has ANY meaning whatsoever. It simply is the result of applying a well defined algorithm, to a set of numbers.

    So it works for the money amount on your pay check, each pay interval, or for the telephone numbers in your local phone book, or for the number of “animals” (say larger than an ant) per square meter of the earth surface (if you want to bother checking the number in your yard.)
    Or it also works for the number you get if you read the thermometer once per day, or once per hour, outside your back door.

    In all cases, it had NO meaning, other than fitting the defined function “average”, that we made up.

    If you sit down in your back yard, and mark out a square meter, and then count the number of larger-than-ant-sized animals in that area, you are not likely to count a number that equals the world global average value. Likewise, whatever the source of your set of numbers, you aren’t likely to ever find that average number wherever you got your set from. It’s not a “real” number; pure fiction, as distinct from the numbers you read off your back door thermometer, which could be classified as “data”.

    Averages, are not “data”; they are the defined result of applying a made up algorithm to a set of numbers; ANY set of numbers drawn from a single number system, and the only meaning they have, is that they are the “average”.
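    The algorithm described above takes only a few lines of Python (the sample readings are invented):

```python
def average(numbers):
    """Add the numbers together and divide by how many there are --
    exactly the made-up algorithm described above, nothing more."""
    if not numbers:
        raise ValueError("the average of an empty set is undefined")
    return sum(numbers) / len(numbers)

# It applies equally to paycheques, phone numbers, or thermometer readings;
# note that the result need not equal any member of the set.
readings = [12.5, 14.0, 13.1, 15.2]
print(average(readings))  # approximately 13.7, which is none of the readings
```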

  184. @Bart

    Bart says:
    June 13, 2013 at 3:51 pm

    jai mitchell says:
    June 13, 2013 at 3:41 pm

    “However, the change in temperatures during the last 5 decades are not based on changes in the sun’s intensity since that effect is pretty much instantaneous.”

    Sigh… Just another guy who does not understand the concept of frequency response.
    ————-

    Bart,

    your link simply says that the response can be delayed by up to 90 degrees. Since the period of the cycle is 11 years, 90 degrees is 5.5 years.

    the average over the entire cycle has not changed significantly over the last 50 years. It sounds like you don’t understand the question.

    I will restate it.

    If the LIA was caused solely by solar activity (and not also by an abnormal increase in volcanic activity), then the amount of warming since then would cause a significant change in the cycle of temperatures, based on the current solar cycle, every 6 years or so (from trough to maximum).

    Your link only says that this effect is “delayed” not “averaged” over the period of the sine wave.

  185. M Courtney says:June 14, 2013 at 6:54 am
    “That is the error that rgbatduke skewered at June 13, 2013 at 7:20 am…

    ‘there are a lot of different models out there, all supposedly built on top of physics, and yet no two of them give anywhere near the same results!’

    That’s not “skewered”. It’s nonsense. The Earth “uses the same physics” and famously gets different results, day after day. The physics models use is clearly stated, and differs very little. The same model will produce different weather with a small change in initial conditions (butterfly effect).

  186. @rgbatduke
    One thing in your 1st post that I missed and is really important is your reference to Taylor’s theorem, which is a really important point.
    Given small intervals, Taylor’s theorem allows one to linearise a system by ignoring higher derivatives. Normally we do this knowing that it is an approximation, and compute new results as one extends the interval from the initial condition, taking care to achieve stability. As the interval increases, one needs an increasing number of higher-order terms to describe the system. However, in an observed system such as temperature, we have difficulty in extracting the 1st derivative, let alone the higher derivatives. Hence we use linear trends, because we can’t measure the signal sufficiently accurately to do anything else.

    This impinges on averaging model results. If we have several models, their outputs at T+Dt could be identical, and we could say that the models were good. However, the mechanisms could be different, and the higher-order derivatives could be different at T=0. The models have been calibrated over a short period so that they conform to a data set. When one averages the “output”, one is, by implication, also obtaining an average of the initial derivatives, which seems highly questionable. As time increases, the results of the models will depend increasingly on the higher derivatives at the initial conditions, and they will then diverge. One could say that the models’ first-order term is reasonably correct, but by averaging one is also saying that the higher derivatives don’t matter.
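    The calibration-window argument can be sketched with two hypothetical “models” that share the same value and first derivative at T = 0 but differ in a higher-order term (both functions are invented for illustration):

```python
# Both "models" agree at t = 0 in value and slope, so they look identical
# over a short calibration window; the neglected second-order term then
# dominates at long range, exactly the divergence described above.

def model_a(t):
    return 1.0 + 0.5 * t                 # first-order (linear) model

def model_b(t):
    return 1.0 + 0.5 * t + 0.05 * t * t  # same value and slope at t = 0

print(model_b(0.1) - model_a(0.1))    # ~0.0005: indistinguishable early on
print(model_b(20.0) - model_a(20.0))  # ~20: the higher derivative now dominates
```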

  187. Monckton of Brenchley says:
    June 14, 2013 at 7:04 am
    “Mr. Stokes vexatiously persists in maintaining that Professor Brown had criticized my graphs, long after the Professor himself has plainly stated he had criticized not my graphs but the IPCC’s graphs”

    The Professor plainly stated what graphs he was criticising:
    “This is reflected in the graphs Monckton publishes above, where …”
    He seems to have been under the impression that they are IPCC graphs, but they aren’t, are they?

    His criticisms are quite specific. No IPCC graphs have been nominated which have the kind of statistics that he criticises. Your graphs do.

  188. I often read these threads from the bottom up, so I see the recent comments, and can go back up to see what inspired them.

    So I finally got to the original post of rgbatduke, that many had referenced.

    So now I know, that I made a correct decision, when I decided to forgo the pleasures of rigorous quantum mechanics; and launch into a career in industry instead. Even so, starting in electronics with a degree in Physics and Maths, instead of an EE degree, made me already a bit of an oddball.

    But I also remember when I got dissatisfied with accepting that the Voltage gain for a Pentode stage was simply “gm. Rl” and I figured, I should be able to start from the actual electrode geometries inside a triode or pentode, and solve the electrostatic field equations, to figure out where all the electrons would go, so I would have a more accurate model of the circuit behavior. And this was before PCs and Spice. Well I also remember, when I decided on the total idiocy of that venture, and consigned it to the circular file.

    So Prof. Robert demonstrated why sometimes, too much accuracy is less than useless, if you can’t actually use the result to solve real problems. Well I eventually accepted that Vg = gm.Rl is also good enough for both bipolar and MOS transistors too, much of the time. Well, you eventually accept that negative feedback designs are even better, and can make the active devices almost irrelevant.

    It is good if your Physics can be rendered sufficiently real, so you might derive Robert’s carbon spectrum to advance atomic modeling capability, for a better understanding of what we think matter is; but no, it isn’t the way to predict the weather next week.

    A recently acquired PhD physicist friend, who is currently boning up on QM at Stanford; mostly as an anti-Alzheimer’s brain stimulus, told me directly, that QM can only mess things up, more than they currently are; well unless of course, you need it.

    Thanks, Robert.

  189. Nick Stokes says (June 14, 2013 at 11:55 am): “The Earth “uses the same physics” and famously gets different results, day after day.”

    Different from what? AFAIK we only have one Earth.

  190. One thing about the uncertainty in the trend since Jan 1996: the trend could be anywhere from -0.029 C per decade to 0.207 C per decade (so a zero trend lies within the range, and the two extremes are equally probable), but it is most likely around the central estimate of 0.089 C per decade.
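    For anyone wondering how a “central estimate plus range” like that is produced: it is typically an ordinary-least-squares slope with an interval of roughly two standard errors. A sketch on synthetic monthly data (the trend, noise level, and series length below are assumptions for illustration, not the HadCRUT4 values):

```python
# OLS trend and ~2-standard-error interval on a synthetic monthly series.
import random

random.seed(42)
n_months = 208                            # roughly Jan 1996 to Apr 2013
t = [i / 120.0 for i in range(n_months)]  # time in decades (120 months/decade)
y = [0.089 * ti + random.gauss(0.0, 0.1) for ti in t]  # trend plus noise

tbar = sum(t) / n_months
ybar = sum(y) / n_months
sxx = sum((ti - tbar) ** 2 for ti in t)
slope = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) / sxx
resid = [yi - ybar - slope * (ti - tbar) for ti, yi in zip(t, y)]
se = (sum(r * r for r in resid) / ((n_months - 2) * sxx)) ** 0.5

print(f"trend: {slope:.3f} +/- {2 * se:.3f} C/decade")
```

    (For serially correlated data such as real temperature series, the standard error would normally be widened further to account for autocorrelation.)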

  191. “Gary Hladik says:

    June 14, 2013 at 12:55 pm

    Different from what? AFAIK we only have one Earth.”

    Certainly only one Earth in my world; however, some may live in multiple realities.

  192. ferdberple says: at June 14, 2013 at 6:19 am quite a lot about chaos that meant that models couldn’t be aggregated, notably

    There is no good reason to average chaos

    Nick Stokes replied at June 14, 2013 at 6:33 am

    This would mean that you could never speak of any weather average. But we do that all the time, and find it useful. Some folks are overly dogmatic about chaos.

    Which sounded reasonable. I took Nick Stokes at his word, but then he says…

    The Earth “uses the same physics” and famously gets different results, day after day. The physics models use is clearly stated, and differs very little. The same model will produce different weather with a small change in initial conditions (butterfly effect).

    So which is it?

    Nick Stokes, you are not sounding consistent.

  193. Consistency
    Nick Stokes says: at June 14, 2013 at 12:09 pm in reply to Monckton of Brenchley, June 14, 2013 at 7:04 am:

    His criticisms are quite specific. No IPCC graphs have been nominated which have the kind of statistics that he criticises. Your graphs do.

    But Monckton is still responding in kind to the leaked AR5 graphs. He has to compare apples with apples even if we don’t like them apples.
    I agree the leaked AR5 graphs are rubbish.
    But for consistency, will you (Nick Stokes) condemn the IPCC if the published AR5 includes such an average?

  194. jai mitchell says:
    June 14, 2013 at 11:22 am

    No, what it says is gain is distributed over frequency, and you cannot deduce sensitivity to long term excitations based on short term ones without thorough knowledge of the frequency response of the system.

    Please. You are out of your depth.
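    The frequency-response point can be made concrete with the simplest possible system, a first-order lag dT/dt = (F - T)/tau: forcing much faster than the response time is strongly attenuated, while slow or sustained forcing passes through almost in full. The 30-year tau below is a hypothetical value for illustration, not a measured climate response time:

```python
import math

def response_amplitude(period, tau):
    """Steady-state gain of a first-order lag dT/dt = (F - T)/tau
    driven by a sinusoid of the given period (same time units as tau)."""
    omega = 2.0 * math.pi / period
    return 1.0 / math.sqrt(1.0 + (omega * tau) ** 2)

tau = 30.0  # hypothetical response time in years
print(response_amplitude(11.0, tau))    # ~0.06: an 11-year cycle is heavily damped
print(response_amplitude(1000.0, tau))  # ~0.98: slow forcing passes almost whole
```

    So a small visible wiggle at the 11-year period is compatible with a much larger response to a sustained change, which is why short-term and long-term sensitivity cannot be equated without knowing the system’s frequency response.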

  195. Bart,

    The sinusoidal solar cycle, held at a constant average incidence for 50 years, will not produce a long-term warming.

    There is a point when extremist contrarianism becomes disinformation–you have completely crossed that line.

    The solar function has a period of about 11 years. On average, it has been relatively constant for over 50 years. You cannot infer warming from a relatively constant average solar irradiation, even if it does operate on a sinusoidal function. . .that is just voodoo science.

    unless you can prove to me that the earth’s response to increased solar activity isn’t felt for over 40 years. . .I suppose you have a peer reviewed document that states something to that effect?????

  196. @ Patrick

    No, Patrick, the RSS is without adjustments and the HadCRUT is with adjustments, but they fit together almost completely.

  197. DbStealey

    you said,
    True. And the ‘lower tropo’ is cherry-picked. Global surface temps are the relevant metric. See here.
    but your link shows RSS lower troposphere values. . .not global surface. If you wanted to use global surface then you should have looked here

    http://www.woodfortrees.org/plot/gistemp/from:1993/plot/gistemp/from:1993/trend/plot/esrl-co2/from:1993/normalise/offset:0.68/plot/esrl-co2/from:1993/normalise/offset:0.68/trend

    (Note: the original plot was from 1993, not from 1997.9, just before the largest El Niño in recorded history, which you decided to cherry-pick.)

  198. jai mitchell says:
    June 14, 2013 at 2:55 pm
    ————————————–

    As has been commented upon in this blog many times, the UV component of TSI fluctuates by a factor of two, on about the time scale of the observed sine wave above and below the trend line of recovery from the LIA in average temperature, with the appropriate lag to produce the observed PDO & AMO oscillations.

  199. M Courtney says: June 14, 2013 at 2:07 pm
    “So which is it?”

    There is no inconsistency there. Weather outcomes are very sensitive to perturbations; this is reflected in model performance. But long term climate averages make sense and are universally used in the everyday world.

    Fluid mechanics has dealt with this for many years. Turbulence is classic chaotic flow. For over a century it has been dealt with by Reynolds averaging.

    “But Monkton is still responding in kind to the leaked AR5 graphs.”
    That makes no sense, and he didn’t even talk about model averaging in his post. I’m simply dealing with his ridiculous attempt to pretend that RGB was not talking about the graphs published in this post, when he clearly said that he was.
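    The Reynolds-averaging analogy can at least be sanity-checked on a toy chaotic series: individual values of the r = 4 logistic map are unpredictable, yet the long-run mean of a trajectory settles near 0.5, the mean of its known invariant density. Whether climate-model ensembles average as benignly is, of course, exactly the point in dispute:

```python
# Step-by-step values of the chaotic logistic map are unpredictable, but the
# running mean of a long trajectory converges to a stable value (~0.5),
# analogous to extracting a mean flow from turbulent fluctuations.

def running_mean_of_chaos(x0, n):
    x, total = x0, 0.0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        total += x
    return total / n

print(running_mean_of_chaos(0.3, 100_000))  # close to 0.5
```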

  200. Mr. Stokes continues to lie in his habitual fashion. Professor Brown states quite plainly that it was the IPCC’s graph, reproduced as part of one of my graphs, that he was criticizing.

    After you had speculated on who had compiled my graph, which has the words “lordmoncktonfoundation.com” plainly written on it, Professor Brown writes: “Aw, c’mon Nick, you can do better than that. Clearly I was referring to the AR5 ensemble average over climate models, which is pulled from the actual publication IIRC.”

    Professor Brown’s criticism is directed at the compilation of an ensemble from models using different code. That is what the IPCC reproduced in its draft of AR5, and that is what I reproduced from AR5, and labelled it as such.

    I note that Mr. Stokes is entirely unable to refute what my graph demonstrates: that the models are over-predicting global temperatures. It does not matter whether one takes the upper bound or lower bound of the models’ temperature projections or anywhere in between: the models are predicting that global warming should by now be occurring at a rate that is not evident in observed reality. Get used to it.

    The moderators may like to consider whether outright lying on Mr. Stokes’ part is a useful contribution here. It illustrates the intellectual bankruptcy of the paid and unpaid trolls who cling to climate extremism notwithstanding the evidence, but otherwise it is merely vexatious.

  201. jai mitchell says:

    “…a 2.1C increase in 2100 from 1940 levels.”

    And you accuse me of cherry-picking!

    Bart is right, you are way out of your depth. Even the über-alarmist NY Times now admits that global warming has stopped. Go argue with them if you don’t like it.

  202. Nick Stokes says:
    June 14, 2013 at 3:04 pm

    M Courtney says: June 14, 2013 at 2:07 pm
    “So which is it?”
    There is no inconsistency there. Weather outcomes are very sensitive to perturbations; this is reflected in model performance. But long term climate averages make sense and are universally used in the everyday world.

    Fluid mechanics has dealt with this for many years. Turbulence is classic chaotic flow. For over a century it has been dealt with by Reynolds averaging.

    “But Monkton is still responding in kind to the leaked AR5 graphs.”
    That makes no sense, and he didn’t even talk about model averaging in his post. I’m simply dealing with his ridiculous attempt to pretend that RGB was not talking about the graphs published in this post, when he clearly said that he was.

    Nick, you know perfectly well that rgb was addressing the divergence between the models and reality, and between the models themselves. He makes it pretty clear in his post that far too many models which model reality so badly are still being used, models which contradict each other. This is not weather perturbation reflected in model performance; the divergence is growing and growing.
    Yes, long-term climate averages are universally used; however, this is exactly what rgb shows to be wrong. Averaging dirt does not give good results.

    You know perfectly well that the outputs from the IPCC models are exactly as rgb describes them.
    And you know perfectly well that you are just inventing excuses for the divergence.

    You also know that it is not climate variances and turbulences which make the models stray so far from reality. The issue is simply that they do not model the current processes correctly, or that they miss something. rgb’s post makes perfect sense, and he does not criticise Christopher Monckton’s chart, but the majority of the models used by current climate science to produce those averages. You know that what he says makes perfect sense; that is why you try to steer the discussion into a collateral diversion.

  203. Monckton of Brenchley says: June 14, 2013 at 3:48 pm
    Professor Brown states quite plainly that it was the IPCC’s graph, reproduced as part of one of my graphs, that he was criticizing.

    “Reproduced as part of”? Here’s how it is described above

    “In answer to Mr. Stokes, the orange region representing the interval of models’ outputs will be found to correspond with the region shown in the spaghetti-graph of models’ projections from 2005-2050 at Fig. 11.33a of the Fifth Assessment Report. The correspondence between my region and that in Fig. 11.33a was explained in detail in an earlier posting. The central projection of 2.33 K/century equivalent that I derived from Fig. 11.33a seems fairly to reflect the models’ output.”

    And here is Fig 11.33a. Reproduced? “Seems fairly to reflect”?

    But RGB’s criticism was directed at the statistics in Lord Monckton’s graph. Let me quote:
    “Note the implicit swindle in this graph — by forming a mean and standard deviation over model projections and then using the mean as a “most likely” projection and the variance as representative of the range of the error, one is treating the differences between the models as if they are uncorrelated random variates causing deviation around a true mean!”

    Nowhere in the AR5 Fig 11.33 is a mean and standard deviation created, with variance, treating the difference as if they are uncorrelated random variates etc. Those are Lord M’s statistics.

  204. chip, chip…chipping away…

    14 June: Bloomberg: Stefan Nicola & Alessandro Vitelli: Forest Carbon Won’t Be Tradable Commodity, Climate Expert Says
    Emissions reductions created through forest protection never will become a tradable commodity, and private investors are beginning to realize that, a consultant for the Third World Network said.
    Forest carbon can’t be measured as accurately as CO2 discharges from industrial projects, Kate Dooley, who advises the environmental group on climate change issues, said today in Bonn. Under the United Nations’ Reduced Emissions from Deforestation and Forest Degradation program, or REDD, developing nations protect and manage their forests in exchange for funding from developed states to support their efforts.
    “If you think that REDD can be established as a carbon market, if you think that forest carbon can be measured to the level of accuracy to satisfy investors to invest in it as a carbon market, I think that there’s a lot of disappointment in that,” she said in an interview at the UN talks in the German city. “Governments will drive this and the private-sector interest in forest carbon is really falling away.” …

    http://www.bloomberg.com/news/2013-06-14/forest-carbon-won-t-be-tradable-commodity-climate-expert-says.html

  205. Jai, please explain why you consider UAH data “faulty” but are OK with the demonstrably faked data from GISS & UEA. Thanks.

  206. [snip - Greg House under a new fake name. Verified by network path. Mr. House has been shown the door but decided to come back as a fake persona preaching the Slayer/Principia meme. -Anthony]

  207. @jai mitchell, Bart is right: you *ARE* out of your depth. You stepped off in to the deep end the moment you posted the statement a while back in another thread that CO2 stores heat.

  208. Popper and Feynman both had a wonderful ability to clearly articulate difficult and complex things, yet our dear Mosher has managed to obfuscate them both in one comment. Bravo! Well played, sir.

  209. jai mitchell says:

    “…unless you can prove to me…”

    Earth to jai mitchell: scientific skeptics have nothing to prove. The onus is entirely on the purveyors of the global warming conjecture to show that it is valid. But they have failed.

    The failed conjecture that global warming is continuing — and even accelerating [!?] — is owned by the alarmist cult. But real world evidence and empirical observations have falsified the alarmist belief that global warming is continuing.

    Empirical evidence shows that global warming has stopped for the past seventeen+ years, no matter what you mistakenly believe. The über-alarmist NY Times even admits that fact now. And climate alarmist Phil Jones also admits that global warming has stopped. In fact, anyone who pays attention to reality knows that global warming has stopped.

    But if your religion requires you to believe that global warming is continuing, and even accelerating, then who are skeptics to disagree? All we have are facts, which cannot stand up against anyone’s emotion-based True Belief.

  210. Greg Mansion says:
    June 14, 2013 at 6:10 pm
    Therefore your “no warming for X years” argument misses the point and is absolutely worthless.

    The latest we have from NOAA on this topic is:
    ”The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”

    Did we all miss an update on this topic by NOAA? If so, please enlighten us.

  211. In all of this talk about averaging models, selecting the best ones, and discarding the worst, it brings to mind the graphs we see every year (which those of us in Florida, especially, pay very close attention to). I refer, of course, to the projected hurricane tracks from the various models which are out there. I’m sure the models are quite complex, and they do their best to predict the hurricanes’ paths. What we see on the news or the NOAA website is a simple “skinny line”, as they call it, of the projected track.

    Viewing a spaghetti plot of the different models is quite enlightening (as I’m sure most here have done). Sometimes, when the steering forces are very strong, they converge on nearly the same solution (in the short term, anyway), but longer term, or in the absence of such forces, they diverge amazingly. I recall seeing the plots of TS Andrea a few weeks ago while it was still forming in the Gulf of Mexico. The model tracks went *everywhere*: some west, some north, some east… If the forecasters had attempted to average that mess, absolutely nothing useful would have come out of it.

    I don’t know the history of the models, but I assume some must have been more accurate in certain circumstances (and if they were totally bogus they’d have been tossed), so you have a skilled forecaster looking at all of these tracks, using his experience (and I assume knowing which models did a better job under which circumstances) to come up with a projected track. I recall reading a lot of discussion (in the discussion section of the NOAA site) over the years about how this model or that model wanted to shift the track significantly, but the forecaster didn’t buy it (or only shifted the projected track slightly).

    So having the differing models is valuable to give insights to a problem because one model may work better at times than others. But a trained eye needs to make sense of it all. Simple averaging is pointless.

  212. milodonharlani

    I gave you a lengthy explanation but the moderators are holding onto it for now.

    anyone who says that there has been no warming since the 1998 El Nino needs to realize that this has been one of the warmest years in human history and the last 10 years have been the warmest decade in human history.

    unless an annual temperature drops below the 1979 average (which it hasn’t done in over 35 years now) I am not concerned about your pet theories.

  213. [snip - Greg House under a new fake name. Verified by network path. Mr. House has been shown the door but decided to come back as a fake persona preaching the Slayer/Principia meme. -Anthony]

  214. Werner Brozek says: June 14, 2013 at 7:13 pm
    “Did we all miss an update on this topic by NOAA?”

    No. You did leave out that they seem to be talking about ENSO-adjusted trends. But even without that, there hasn’t been 15 years of zero trend. Lord M says that for his 17 year stretch, the trend was 0.89°C/century.

  215. Greg Mansion says:
    June 14, 2013 at 7:46 pm

    What exactly is your point?

    The models have been shown to be wrong by “no warming for X years”.

    From the article:
    “He says we shall soon be approaching Dr. Ben Santer’s 17-year test: if there is no warming for 17 years, the models are wrong.”

    The title said: “No significant warming for 17 years 4 months”

    But did you know that RSS has no warming at all for 16 years and 6 months? The slope is actually slightly negative.

  216. Harold Ambler says:
    June 13, 2013 at 3:39 am
    “1. Time to point out again that when the warmists convinced the world to use anomaly graphs in considering the climate system they more or less won the game. As Essex and McKitrick (and others) point out, temperature, graphed in Kelvins, has been pretty close to flat for the past thousand years or so. The system displays remarkable homeostasis, and almost no lay people are aware of this simple fact.”

    The concept of anomaly was adopted so that trends would be the fundamental evidence of climate science. As everyone knows, trends are not evidence. Serious scientists should stop using anomalies.

  217. jai mitchell says:
    June 14, 2013 at 7:27 pm
    milodonharlani

    I gave you a lengthy explanation but the moderators are holding onto it for now.

    anyone who says that there has been no warming since the 1998 El Nino needs to realize that this has been one of the warmest years in human history and the last 10 years have been the warmest decade in human history.

    unless an annual temperature drops below the 1979 average (which it hasn’t done in over 35 years now) I am not concerned about your pet theories.

    Actually, the 1930s Dust Bowl years were as warm or warmer, before they were adjusted/homogenized/re-imagined and politically downgraded.

    Also, you apparently don’t understand that “warming” and “warm” are two entirely different words.

    And recent Antarctic ice core data from the MWP era discredits your notion that the last decade was the warmest in human history.

  218. Bob Tisdale says:
    June 13, 2013 at 6:22 am

    Another excellent post, Mr. Tisdale. If the Greens could understand you they would champion you as the defender of the natural world against the fantasies of the modelers.

  219. Nick Stokes says:
    June 14, 2013 at 8:11 pm

    No. You did leave out that they seem to be talking about ENSO-adjusted trends.

    We have been through this before and I know you do not agree with me, but for the benefit of new people, the La Ninas immediately following the 1998 El Nino totally cancel the effects of the El Nino.

    But even without that, there hasn’t been 15 years of zero trend.

    It is zero for over 16 years on three data sets. Here are the exact times and slopes:

    HadCRUT3, 16 years, 1 month. slope = -5.84905e-06 per year

    Hadsst2, 16 years, 2 months. slope = -0.000360188 per year

    RSS, 16 years, 6 months. slope = -0.000286453 per year

    To see for yourself, see:

    http://www.woodfortrees.org/plot/hadcrut3gl/from:1997.1/plot/hadcrut3gl/from:1997.1/trend/plot/hadsst2gl/from:1997.1/plot/hadsst2gl/from:1997.1/trend/plot/rss/from:1996.9/plot/rss/from:1996.9/trend
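
    The “slope = … per year” figures quoted above are ordinary least-squares trends of monthly anomalies against time, of the kind woodfortrees computes. A minimal sketch of that calculation, using a synthetic anomaly series (illustrative only, not the actual HadCRUT3/HadSST2/RSS data):

```python
import math

def ols_slope(times, values):
    """Ordinary least-squares slope of values against times
    (same units as values, per unit of time)."""
    n = len(times)
    mt = sum(times) / n
    mv = sum(values) / n
    num = sum((t - mt) * (v - mv) for t, v in zip(times, values))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

# 198 months starting at 1997.1, mirroring the "16 years+" windows quoted above
times = [1997.1 + i / 12 for i in range(198)]
# Synthetic flat series with a small ENSO-like wiggle (illustrative only)
values = [0.3 + 0.1 * math.sin(2 * math.pi * (t - 1997) / 3.7) for t in times]

slope = ols_slope(times, values)
print(f"trend = {slope:+.6f} degC per year")
```

    With a real monthly series substituted for the synthetic one, the same function produces figures of the form quoted in the comment.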

  220. [snip - Greg House under a new fake name. Verified by network path. Mr. House has been shown the door but decided to come back as a fake persona preaching the Slayer/Principia meme. -Anthony]

  221. Nick Stokes says:
    June 14, 2013 at 8:11 pm
    Werner Brozek says: June 14, 2013 at 7:13 pm
    “Did we all miss an update on this topic by NOAA?”

    No. You did leave out that they seem to be talking about ENSO-adjusted trends. But even without that, there hasn’t been 15 years of zero trend. Lord M says that for his 17 year stretch, the trend was 0.89°C/century.

    And 0.89°C/century = 0.0089°C per year, beyond the precision of the instruments recording the data for the last century, and beyond the uncertainty/error bars of the reported results. Not to mention the fact that the temperature data record has been mangled beyond belief by the gatekeepers.

  222. Thomas says:
    June 13, 2013 at 6:56 am

    Statistical significance is not a measure but a test. When we say that a number is not statistically significant, we mean that our statistical calculations show that the number failed the test and therefore cannot be distinguished from zero at the chosen confidence level.
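
    The test in question is, in the usual treatment, a t-test on the regression slope. A minimal sketch with synthetic data, which ignores the autocorrelation that real temperature series exhibit (a real analysis would have to correct for it):

```python
import math
import random

def trend_t_statistic(xs, ys):
    """Return (slope, t) for an ordinary least-squares fit of ys on xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    intercept = my - slope * mx
    # Residual sum of squares, n - 2 degrees of freedom
    sse = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    se = math.sqrt(sse / (n - 2) / sxx)   # standard error of the slope
    return slope, slope / se

random.seed(0)
xs = list(range(120))                        # ten years of months
ys = [random.gauss(0.0, 0.1) for _ in xs]    # flat synthetic series plus noise

slope, t = trend_t_statistic(xs, ys)
# |t| below roughly 1.98 (the two-sided 5% critical value for 118 d.o.f.)
# means the fitted slope fails the test: not distinguishable from zero
print(slope, t)
```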

  223. Jai Mitchell – The last ten years in RECORDED human history (so 200 years divided by 100,000 years equals what per cent?). Every time I travel to the big city and experience that UHI effect, I understand (maybe) where you are coming from. Glad that in the country, we have missed those alleged record setting heat waves.

  224. wbrozek says: June 14, 2013 at 8:47 pm
    “I know you do not agree with me, but for the benefit of new people, the La Ninas immediately following the 1998 El Nino totally cancel the effects of the El Nino.”

    Yes, I remember some of it, but not that. They don’t cancel. A trend is weighted like a seesaw: an El Niño at the start pulls the trend strongly down, and recent La Niñas push it further down. The La Niña immediately following 1998 partly cancels the El Niño, but it carries less weight, as well as being smaller.

    “Here are the exact times and slopes:”
    NOAA specified surface temperature, which rules out two of them. The other is obsolete.
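
    The seesaw point above, that a least-squares trend gives far more leverage to excursions near the ends of the period than to those near the middle, can be sketched numerically with synthetic data:

```python
def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)

n = 201
xs = list(range(n))
flat = [0.0] * n

# The same +1.0 spike, placed near the start versus dead centre
spike_start = flat[:]
spike_start[5] = 1.0
spike_mid = flat[:]
spike_mid[n // 2] = 1.0

s_start = ols_slope(xs, spike_start)
s_mid = ols_slope(xs, spike_mid)
# The early spike drags the whole trend down; the central spike barely moves it
print(s_start, s_mid)
```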

  225. Mr. Stokes, having been caught out in a repeated, barefaced lie, continues to wriggle like a stuck pig. He had falsely said, over and over, that Professor Brown had “criticized” one of my graphs, even though Professor Brown had replied to him when he had first made the point. The Professor had explicitly stated that he had criticized the approach taken by the IPCC in the Fifth Assessment Report.

    In an increasingly desperate attempt to maintain the lie now that it has been exposed, Mr. Stokes says Professor Brown criticized my graph for the “implicit swindle” of having formed a “mean and standard deviation over model projections and then using the mean as a ‘most likely’ projection and the variance as representative of the range of the error”.

    However, I had done no such thing. Nowhere in the head posting, nor in the graphs therein, nor in the previous posting to which Mr. Stokes has already been referred, do I state that I have formed a “mean and standard deviation over model projections”.

    Instead, I have simply displayed the range of those projections as it is displayed in Fig. 11.33a of the IPCC’s Fifth Assessment Report, adding the IPCC’s own central projection from the models, which – if Mr. Stokes had bothered to do some reading instead of lying – is given in the following passage from the second-order draft of the report:

    “The global mean surface air temperature anomaly for the period 2016–2035 relative to the reference period of 1986–2005, will likely be in the range 0.4–1.0°C (medium confidence) …”. Now, let me see, 0.7 °C is in the middle of that range, and multiplying it by 100/30 to give the centennial equivalent works out at, um, 2.33 Cº/century, which is precisely the value stated on my graph.

    AR5 maunders on: “It is consistent with the AR4 Summary for Policymakers statement that ‘For the next few decades a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios’.” And that is equivalent to 2 Cº/century, which is also consistent with the central projection displayed on my graph.

    So Professor Brown was indeed criticizing the IPCC, as he had himself explained to Mr. Stokes he was. For the central projection displayed in my graph is the IPCC’s central projection, not mine. I merely reproduced it and correctly attributed it rather than pretending – as Mr. Stokes has unwisely done – that it was mine.

    As so often with the paid or unpaid trolls who make it their business to try to divert threads such as this with direct lies, Mr. Stokes mendaciously picks nits from the elephant in the room without appreciating that it is an elephant. The measured global temperature trend since 2005, the year to which the models’ projections relied upon by the IPCC in Fig. 11.33a are backcast, is inconsistent with the entire range of those projections. They go up. It goes down. Up and down are different directions.
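
    The centennial-equivalent arithmetic in the comment above can be checked directly. The midpoints of the two AR5 periods (2016–2035 and 1986–2005) are 30 years apart, hence the 100/30 factor:

```python
# AR5 second-order-draft range: 0.4-1.0 degC for 2016-2035 vs. 1986-2005
lo_anom, hi_anom = 0.4, 1.0
central = (lo_anom + hi_anom) / 2                      # 0.7 degC central estimate
midpoint_gap = (2016 + 2035) / 2 - (1986 + 2005) / 2   # 30 years between period midpoints
centennial = central * 100 / midpoint_gap

print(f"{centennial:.2f} C/century")                   # → 2.33 C/century
```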

  226. Greg Mansion says:
    June 14, 2013 at 6:10 pm

    Monckton of Brenchley says:
    June 14, 2013 at 6:36 am
    “absence of correlation necessarily implies absence of causation.”

    ****************************************************************************************

    OK, let me explain it in a very simple way.

    The heating device in your apartment generally warms the air, let us say, in winter, but if you repeatedly open and close the windows, the temperature inside might change in different ways, so that there will be no correlation between heating and temperature. The temperature might even decrease. Cooling trend, you know.

    Applying your logic like “no warming for …”, you must conclude that the heating device won’t heat. But, as you can hopefully see now, it is not necessarily so.

    Besides, and this is something you must know very well, warmists do not say that “global warming” is something steady. They have always said that it is about an overall trend. Just look at their trend graphs, you can find coolings and pauses there. Therefore your “no warming for X years” argument misses the point and is absolutely worthless.

    ===================================================================
    Sometimes we can’t see the forest for the trees.
    The cause. That’s the point. Is it Man? Those who promote CAGW say it is (“Coal Trains of Death” etc.) and we must mortgage our future for the sake of polar bears. What in their hypothesis of the Man-made cause allows for the absence of the predicted effect?
    You’re arguing that the absent effect isn’t absent at all. Why? To keep the Gravy Trains rolling?
    When has “climate” never changed?

  227. Mr. Mansion, like many students of logic in their first weeks, is surprised and perplexed that absence of correlation necessarily implies absence of causation. He uses the example of a room heater (which, if on, warms the room) and a window that is intermittently open and shut, varying the temperature so that there is no correlation between the output of the heater and the temperature of the room.

    Had he thought for a moment rather than being over-hasty to find fault, he would have appreciated that the variability in room temperature caused by the opening and shutting of the window is not correlated with the steady output of the heater. Nor, of course, is it caused by it. Since the temperature in the room varies and the output of the heater is steady, it should be obvious even to Mr. Mansion that the absence of correlation in his own example necessarily implies absence of causation.

    Mr. Mansion also says there is no greenhouse effect. Since the existence of that effect may be deduced theoretically and demonstrated empirically, I beg to differ. Besides, his remark is off topic.

    He says global warming is about an overall trend, but neglects to specify the period he has in mind. An elementary textbook of statistics will tell him that a trend without a period is void for uncertainty of meaning. The mean rate of warming in the entire global instrumental record since 1850 is equivalent to less than 0.5 K/century. The maximum supra-decadal rate of global warming since 1850 is equivalent to 1.7 K/century. It occurred from 1860-1880, 1910-1940, and 1976-2001. The IPCC predicts a mean rate of warming of 3 K/century from now to 2100. How likely is that?

    Finally, he says my “no warming for x years argument misses the point and is absolutely worthless”. Perhaps he should bother to read the head posting before opening his mouth and inserting his foot. I had made it explicit in that posting not only that the “no warming for 17 years” argument was Dr. Santer’s argument, not mine, but also that [as Dr. Santer, like the NOAA and James Hansen before him, has now discovered] it was imprudent to use such arguments and that it was better to concentrate on the inexorably widening discrepancy between the models’ projected global warming and the far lesser rate of warming that is measured in the real world.

  228. Greg Mansion says:
    June 14, 2013 at 8:47 pm

    Besides, saying that CO2 is a “greenhouse gas” on the one hand and implying that it does not cause warming on the other is a contradiction.

    I see no contradiction at all here. I believe CO2 is a greenhouse gas and that it causes some warming. (Or perhaps I should rephrase that and say it slows down the speed at which the surface of the earth loses heat.) However I believe that the amount of warming it causes is way less than the IPCC estimates and that there are negative feedbacks that reduce the warming due to CO2. I also believe that solar effects and ocean cycles can be a greater influence in the opposite direction to that of CO2.

  229. [snip - Greg House under a new fake name. Verified by network path. Mr. House has been shown the door but decided to come back as a fake persona preaching the Slayer/Principia meme. -Anthony]

  230. Monckton of Brenchley says: June 14, 2013 at 9:39 pm
    “In an increasingly desperate attempt to maintain the lie now that it has been exposed, Mr. Stokes says Professor Brown criticized my graph for the “implicit swindle” of having formed a “mean and standard deviation over model projections and then using the mean as a ‘most likely’ projection and the variance as representative of the range of the error”.

    However, I had done no such thing.”

    I don’t endorse RGB’s criticisms. I think they are over the top, and wrong. But it was your graph to which he applied them. He said explicitly that it was, and he referred to no other. True, it seems he thought it was an AR5 graph. AR5 Fig 11.33a has no statistics at all; it’s just a spaghetti plot of model runs and measured temps. And it doesn’t look anything like any of your plots. So he surely can’t have been talking about that.

    “Instead, I have simply displayed the range of those projections as it is displayed in Fig. 11.33a of the IPCC’s Fifth Assessment Report, adding the IPCC’s own central projection from the models…”
    I don’t particularly disagree with the way you put your graph together. The point is, it is your graph, with your statistical calculations, and that is what RGB criticised.

  231. [snip - Greg House under a new fake name. Verified by network path. Mr. House has been shown the door but decided to come back as a fake persona preaching the Slayer/Principia meme. -Anthony]

  232. [snip - Greg House under a new fake name. Verified by network path. Mr. House has been shown the door but decided to come back as a fake persona preaching the Slayer/Principia meme. -Anthony]

  233. [snip - Greg House under a new fake name. Verified by network path. Mr. House has been shown the door but decided to come back as a fake persona preaching the Slayer/Principia meme. -Anthony]

  234. [snip - Greg House under a new fake name. Verified by network path. Mr. House has been shown the door but decided to come back as a fake persona preaching the Slayer/Principia meme. -Anthony]

  235. jai mitchell says:
    June 14, 2013 at 2:55 pm

    “The sinusoidal solar cycle, held at a constant average incidence for 50 years will not produce a long-term.”

    If that were the case, sure. But we are interested in this particular solar system which we inhabit, where there are several quasi-cycles, and no stationary average.

  236. Lil Fella from OZ says:
    June 13, 2013 at 1:58 pm

    Dr. Pachauri said that he would not take notice of these trends unless they continued for 40 years.
    I could not work that out, seeing Dr. Carter wrote that 30-year spans are climate, as opposed to the general comment regarding weather. Does the money run out then!?
    >>>>>>>>>>>>>>>>>>>>>>>>>
    No, Dr. Pachauri will be dead by then. He is now 72, and his chances of living another 40 - 16 = 24 years are pretty slim. So he doesn’t have to worry about taking ‘notice of these trends’ or about the pitchforks and torches…

  237. taxed says:
    June 13, 2013 at 2:32 pm

    ….. but l do think we can expect to see more heavy rain and the risk of floods across the NH during the rest of the year. As Arctic air dives deep to the south.
    >>>>>>>>>>>>>>>>>>>>
    Tell me about it. It is 57F (13C) just south of RGB @ Duke Univ. in NC. Summer? What summer?

  238. Lord Monckton: Whether there is warming or not, it has nothing to do with carbon dioxide, which, if anything, has a net cooling effect of about 0.002 C degree, as I can show. This calculation is based upon the spontaneous evolution toward thermodynamic equilibrium which, as the Second Law of Thermodynamics states, “evolves spontaneously”, and which under gravity establishes an autonomous temperature gradient. That gradient is indeed the state of thermodynamic equilibrium with greatest accessible entropy, just as the Second Law says will evolve spontaneously. Thus there is no need to explain 33 degrees of warming with any greenhouse conjecture: it has already been “done” by gravity on all planets, in their atmospheres, and also in their crusts and mantles if applicable.

  239. The ineluctable retreat of the lying Mr. Stokes and the confused Mansion continues. First the liar. Mr. Stokes, like many of the paid and unpaid trolls who fill their aimless hours with hatred for those of us who question how much of the predicted global warming will happen, merely repeats his lie even after it has been thoroughly dismantled. His latest version of the lie is that Professor Brown explicitly criticized one of my graphs and did not criticize the IPCC’s Fifth Assessment Report (AR5).

    Here is what Professor Brown actually said when Mr. Stokes first uttered his now-failed lie:
    “Aw, c’mon Nick, you can do better than that. Clearly I was referring to the AR5 ensemble average over climate models, which is pulled from the actual publication IIRC. This is hardly the first time it has been presented on WUWT.”

    Indeed it was not the first time the graph had been presented on WUWT. I had myself presented it only a few weeks previously, when explaining how I proposed to use it as the basis for contrast with what is actually happening to global temperatures.

    Mr. Stokes says my graph does not look anything like the graph in AR5 and accuses me of having done “statistics” on it. My graph faithfully reproduces the upper and lower bounds of the range of predictions made by 34 computer models and displayed in the IPCC’s spaghetti graph, though for clarity I have not troubled to show the spaghetti in between. And the only thing I have added to the graph is the central projection – but that, even if Mr. Stokes wants to call it “statistics”, is, as I have already demonstrated, the IPCC’s piece of “statistics”, and not mine. So that was what Professor Brown was criticizing, and that was what Professor Brown explicitly stated he was criticizing.

    Mr. Stokes now abjectly retreats from the field by saying that he does not consider Professor Brown’s criticism justified. In that event, why did he bother to utter his lie in the first place? Could it be that his sole aim in lying was to make up whatever nonsense he could for the sake of trying to discredit me personally rather than my graph, with which he now belatedly concedes he has little quarrel? One hopes he is being paid well to troll here and elsewhere, for he has made a spectacular ass of himself, and not for the first time.

    So to the confused Mr. Mansion. He now realizes that absence of correlation necessarily implies absence of causation, and – to his dismay – that his attempt at a counter-example was a failure. He had said that if a room were warmed by a heater but someone opened and shut the window several times there would be no correlation between the warmth from the heater and the temperature variability in the room. He now realizes – again belatedly – that the heater’s continuous output could not have caused the variability and that, therefore, the absence of correlation between that continuous heat output and the variability of the temperature implies absence of causation between them.

    However, like many trolls, he now shifts his ground rather than admit that his original assertion that absence of correlation does not necessarily imply absence of causation was incorrect in logic. Here is his priceless shift of ground:

    “So, a) the heater heats the air, this is causation. Now, b) the temperature goes up and down or perhaps only down, thus no correlation. Does lack of correlation mean the heater does not heat the air? Get it now?”

    Here, Mr. Mansion makes an error of logic that we must pray was not deliberate, though with trolls such errors usually are deliberate. He now says he was talking about a causative correlation between turning on the heater and the fact that once the heater is turned on it emits heat, making the air in the room warmer than it would otherwise have been. But if that was what he had meant from the outset, why introduce the complication of the opening and shutting window?

    If one adds CO2 to an atmosphere such as ours, one would expect some warming to result. However, since warming is not at present resulting, there is at present no correlation between the steady increase in CO2 concentration and the variability of global temperatures: indeed, for up to 16 years 6 months (on the RSS dataset), there has been no global warming at all. Therefore, the CO2 concentration change cannot be causative of the current temperature fluctuations.

    The implication one should draw from this is not that CO2 does not cause warming (it does: get over it) but that the warming signal from CO2 is so weak that several rather small natural cooling effects have been able to overwhelm it for getting on for two decades. The fact that there have been other periods of up to a dozen years without warming in the instrumental temperature record since 1850 tells us nothing: for during those periods our emissions of CO2 were not substantial enough to make a difference. They are now, but they are not making a difference, and that is interesting.

    Mr. Mansion, for some spiteful reason determined to try to score a blow, says Dr. Santer’s argument that if there had been x years without global warming the models would be shown to be wrong, which I had explicitly addressed in the head posting, was really my argument. No, it wasn’t: the head posting had explicitly stated that such arguments could be imprudent and it was better to concentrate on the growing discrepancy between colorful prediction and unexciting reality.

    Mr. Mansion says I had used the “no warming for x years” argument myself several months previously at the Doha climate conference. Yes, I did, for I had only a few seconds in which to make a point that would register with delegates (which is why, contrary to Mr. Mansion’s assertion that I had made it “repeatedly”, I had made it just once, as the video of my speech clearly shows). Indeed, the point I made is one that has rung around the world. But the head posting was what Mr. Mansion was addressing, and he only shifted his ground to Doha when he realized that his attempt to find fault with the head posting had failed.

    Mr. Mansion – a glutton for punishment – goes on to say he did not consider himself under any obligation to state the period of the “overall trend” of global warming that he had previously mentioned, on the ground that the period was irrelevant to his point. If so, then his point was itself irrelevant. Since 10,000 years ago there has been global cooling. Since 150 years ago there has been global warming. The period of the “overall trend”, therefore, is crucial to any mention of an “overall trend”.

    He then uses the usual troll technique of repeating a bad point that had already been demolished. He says that what he calls my “warming pause” argument (an argument I had explicitly stated I was not making, for it was Dr. Santer’s argument I was addressing) “does not contradict the alleged overall warming trend at all”. But I did not say it contradicted some other, unspecified period during which warming had occurred. Since warming and cooling may both occur and are observed, it ought to be self-evident even to the confused Mr. Mansion that the existence of a period of cooling does not contradict the existence of a previous period of warming.

    Readers will by now have received the impression that Mr. Stokes and Mr. Mansion are somewhat out of their depth and out of their league. Both display an unbecoming intellectual dishonesty that is a discredit to them and to whatever causes they consider they are espousing. The monthly Global Warming Prediction Index will, for the first time, provide a straightforward benchmark to demonstrate whether and to what extent global temperature changes reflect what the models had predicted. At present, and conspicuously, they do not.

  240. Monckton of Brenchley says: June 15, 2013 at 1:25 am
    “His latest version of the lie is that Professor Brown explicitly criticized one of my graphs and did not criticize the IPCC’s Fifth Assessment Report (AR5).”

    It’s hardly my latest version. In my second comment I pointed out RGB’s explicit introduction:
    “This is reflected in the graphs Monckton publishes above, where the AR5 trend line is the average over all of these models and in spite of the number of contributors the variance of the models is huge.”

    And he goes on. There’s no doubt what he is criticising. It is “the graphs Monckton publishes above”. I have never contended that he did not criticise the AR5 elsewhere; he often does. But his remarks here were directed to graphs. He says which ones, and specifies no others.

    “when Mr. Stokes first uttered his now-failed lie:”
    I simply pointed out that your name was on the graph. True or not?

    ” and accuses me of having done “statistics” on it”
    Well, somebody did. It says, for example, *3.20 C/century variance. This was one of the subjects of RGB criticism. Did you not calculate that? And r2=0.04? The variance that caused RGB to expostulate:
    “using the mean as a “most likely” projection and the variance as representative of the range of the error”

    “which he now belatedly concedes he has little quarrel”
    I said way back here
    I don’t even think it’s that bad, but RGB says:
    “One simply wishes to bitch-slap whoever it was that assembled the graph and ensure that they never work or publish in the field of science or statistics ever again.”

    I pointed out early on that RGB had criticised the graph in severe terms, clearly thinking it was from the IPCC. I simply noted that it was yours, not theirs, which I think is a reasonable observation. I had no wish to take the matter further, but since you have insisted with increasing shrillness that these simple facts are lies, told by a habitual liar, I have no option but to patiently repeat the facts.

  241. Clearly he thought he was referring to the IPCC, but the graph is labelled “Monckton”, and his diatribe matches the graph in this post. It does not match the AR5 graphs that he later linked to.

    You mean, except for 11.33b, the graph following 11.33a that does precisely what Monckton claims. Perhaps the problem is the attribution. Perhaps he should have just referenced 11.33. Or 1.4.

    Also, Nick, I’m puzzled. Trenberth et al. talk about the current neutral temperature trend not being rejected by “the models” at the “95% confidence level”, a phrase I seem to recall hearing fairly repeatedly as warmists seek to reassure the world, the media, and the U.S. Congress that warming continues without a rise in temperature. I might have heard it a time or two in the overall AR5 report, as well.

    What, exactly, does this “confidence” refer to? Does it mean that it is still not the case that 95% of the spaghetti snarl of GCM results (where even 11.33a is probably not showing all that are out there, in particular the individual runs within a model that might give us some idea of what the variance of future predictions is within a model and hence how robust they are to trivial perturbations of initial state) have failed miserably? That seems to be what you are suggesting.

    Look, there are two ways one can do things. Either you can use real statistics, following the rules (which are there for a reason!), and reject most of the GCMs one at a time because they are falsified by the evidence, since the concept of an “ensemble of model results” as an entity with predictive force quite literally has no meaning, in spite of the fact that (as figures 1.4, 11.33, and others in AR5 make perfectly clear) the IPCC desperately wants to convince the lay public and policy makers that it does, implying with every single lying figure that a “consensus” of model results should be given weight in a multitrillion-dollar decisioning process; or you can stop using terms like “confidence level” in climate science altogether. As it is, you seem to be suggesting that there is no criterion that you would consider falsification of the models, either individually or collectively.

    Is this the case, or are you just picking nits over which figure is cited, or over the figure captions (which generally do not explain precisely how their grey or light blue “ranges” supposedly representing error/variance are computed)? Personally I think somebody draws those ranges in with a drawing program and the human eye, because they are invariably smooth and appealing, unlike the snarl they usually hide from the public; for example, it is perfectly and laughably obvious that the “error bars” drawn onto the annual data in figure 1.4 were just made up, and have no meaning whatsoever save to make it look like the figure is “science”.

    So let me extend the questions I raise to you personally. Do you think that the deviation of, say, the leftmost/highest models in 11.33a (the ones that are the farthest from reality over the last 15+ years) justifies their summary rejection as being unlikely to be correct? If not, why not?

    If so, how far over and down do you think one should go rejecting models?

    If you think it is reasonable to reject any of the models at all as being falsified, do you think it is “good science” to present them in the AR5 report in spite of the fact that nobody really takes them seriously any more (and believe me, even most climate scientists don’t seem to take Hansen’s ravings of 5+ C warming/century seriously any more, not after 1/7 of a century of neutral temperatures and a third of a century of ~0.1 C/decade temperature rise — e.g. UAH LTT)?

    If it isn’t good science to present them on an equal footing with more successful (or not as glaringly failed) model predictions then why are they there — could it be because of political reasons — because without them the centroid of GCM climate results stops looking so, um “catastrophic”?

    Do you think that the phrase “confidence” and the terminology of hypothesis testing has any purpose whatsoever in climate science (since as far as I can tell, nobody ever rejects a hypothesis such as “this particular GCM can be trusted when it predicts global warming even though it fails to predict rainfall, temperature, the coupling between surface temperatures and LTTs, or the variations in the major decadal oscillations better than a pair of broadly constrained dice”)?

    For example, do you think that the AR4 “Summary for policy makers” should have included the phrases: “strengthening the confidence in near term projections”, or “Advances in climate change modelling now enable best estimates and likely assessed uncertainty ranges to be given for projected warming for different emission scenarios. Results for different emission scenarios are provided explicitly in this report to avoid loss of this policy-relevant information. Projected global average surface warmings for the end of the 21st century (2090–2099) relative to 1980–1999 are shown in Table SPM.3. These illustrate the differences between lower and higher SRES emission scenarios, and the projected warming uncertainty associated with these scenarios. {10.5}”?

    When it presents figure SPM5 (“Multi-model global averages of surface warming”, with shading denoting the ±1 standard deviation of individual model annual averages) and uses phrases such as “likely range” (emphasis theirs, not mine), is this utterly misleading? Do the authors of these documents need to be bitch-slapped?

    Does the fact that graphics, commentary, and abuses of statistical language of almost identical nature appear in the AR5 draft, with the statistics used backwards to suggest that the models have a 95% chance of being correct rather than merely not yet having crossed some arbitrary rejection threshold as they continue to deviate from reality, deserve any comment whatsoever?

    In your opinion, of course. I’m just curious. Do you really think that this is all OK? I occasionally have the temerity to comment on /. on climate reposts, and there is a religious army that will come down on you like a ton of bricks if you assert that global warming or sea level rise will be less than 3 to 5 C or meters respectively, because there are many people who uncritically accept this crap about “95% confidence” completely backwards. (Who understands statistical hypothesis testing? Not even all statisticians…)

    Do you think the end justifies the means?

    Just curious.

    rgb

  242. I bet the odds of Nick responding to all of RGB’s questions are 100 to 1 against. Any takers?

  243. Mr. Stokes continues to lie and lie and lie again. How childish.

    He has been corrected by Professor Brown but persists in his lie, which he now embellishes with further nonsense. For instance, the variance on my graph is that between the IPCC’s own prediction and reality, not, as he has assumed, between the upper and lower bounds of the IPCC’s predictions.

    The correlation coefficient, too, is calculated not on the IPCC’s projections but on the real-world data. And, whether he likes it or not, it is correctly calculated: I invited a Professor of Statistics to verify the correlation coefficient independently by reference to the source data to ensure that I had not made a mistake.

    If only Mr. Stokes were less desperate to find fault, he would not have been led deeper and deeper into his futile lie.

    The position, then, is this. The derivation of the orange prediction region on my graph is explained clearly in an earlier posting by me, to which I have already referred Mr. Stokes. The derivation of the IPCC’s central projection was explained earlier by me. There is, therefore, nothing wrong with my portrayal of the range of projections or of the central projection in AR5.

    On the same graph I have superimposed the HadCRUt4 temperature anomalies since 2005; I have calculated the least-squares trend-line; I have compared the slopes of the IPCC’s central projection and the trend-line on the observed data to establish the variance between prediction and reality; and I have calculated the correlation coefficient on the observed data. And that is that. The entire process is innocent and reasonable, and has been adequately explained to Mr. Stokes many times.
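    In outline, that procedure is only a few lines of arithmetic. The sketch below uses invented stand-in numbers, not the actual HadCRUT4 anomalies or the AR5 central projection, purely to show the shape of the calculation:

```python
import math

def ols_slope_and_r(x, y):
    """Least-squares slope of y on x, plus the Pearson correlation coefficient."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxy = sum((a - xbar) * (b - ybar) for a, b in zip(x, y))
    sxx = sum((a - xbar) ** 2 for a in x)
    syy = sum((b - ybar) ** 2 for b in y)
    return sxy / sxx, sxy / math.sqrt(sxx * syy)

# Invented stand-ins: monthly anomalies since 2005 (a weak rise plus an
# ENSO-like wiggle) and an assumed projection rate in C/decade.
t = list(range(100))
observed = [0.0008 * m + 0.05 * math.sin(m / 7) for m in t]
central_projection_per_decade = 0.23    # assumed central rate, not the AR5 value

slope_per_month, r = ols_slope_and_r(t, observed)
outturn_per_decade = 120 * slope_per_month
gap = central_projection_per_decade - outturn_per_decade
print(f"outturn {outturn_per_decade:.2f} C/decade vs projection "
      f"{central_projection_per_decade:.2f}; gap {gap:.2f}; r = {r:.2f}")
```

    Note that the regression and the correlation coefficient are computed only on the observed series; the projection enters solely through the slope comparison at the end.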

    Therefore, let him stop lying, stop picking nits, and try – just for once – to do something constructive with his time. If this is the best the trolls can do to derail the Global Warming Prediction Index, then it is an effort as feeble as it is mendacious. Mr. Stokes should be ashamed of himself.

  244. nevket240 says:
    June 14, 2013 at 9:36 pm

    Thanks for that link. I know it’s somewhat old news, but it bears repeating since it shows how monumentally wrong the WarmBelievers are. The brief warming period of the 80s and 90s will be looked upon as the halcyon days of modern climate. All of the “carbon” we could possibly pump out won’t stop the cooling, since it never had much warming effect to begin with. Although, it should help with the continued greening of the planet.

  245. rgbatduke says: June 15, 2013 at 3:59 am
    “You mean, except for 11.33b, the graph following 11.33a that does precisely what Monckton claims.”

    What does it do that Monckton claims? It shows only a median and quantiles. It does not have means, variances, r2 etc about which you were so indignant. You said the person who assembled those should be drummed out of science. Who do you think it was?

    Lord M assembled his graph from something. He says (over and over) it was 11.33a, and describes it as a spaghetti plot. It’s written on the graph. I’d assume it is what he used.

    On model matching, as I said above, the models generate artificial weather, which they do not claim as weather predictions. That is one reason why they don’t align. They generate all kinds of realistic patterns, but the phase is uncertain. In real weather, we also have predictable patterns (ENSO etc) with unpredictable timing.

    What Trenberth and others are saying is that from weather observed long enough, you can deduce a climate. That is true for both models and the Earth. Climate modellers are making predictions about the climate, not the weather – even decadal weather.

    So the Earth has quite long hot and cold spells, superimposed now on the radiant forcing effect. Models have them too, but there’s no expectation that they will align in phase. You have to weight for a proper climate average from both before you can decide whether models have succeeded or not.

    So no, I don’t think individual deviant models should be considered wrong after fifteen years. Of course, models produce hugely detailed pictures of the Earth’s weather, and there are many consistency tests that can be applied. Global mean surface temperature is only a small part of the story.

    As to what SPM5 will say, we’ll have to wait and see. Fig 11.33 only shows median and quantiles, and box plots. The latter seem to reflect the varying estimation methods of the individual authors. Your bitchslap diatribe was based on Lord Monckton’s statistics, not AR5.

  246. “Monckton of Brenchley says:

    June 15, 2013 at 4:38 am”

    Add to that list Jai Mitchell.

  247. Nick Stokes says:

    June 15, 2013 at 4:53 am

    So the Earth has quite long hot and cold spells, superimposed now on the radiant forcing effect. Models have them too, but there’s no expectation that they will align in phase. You have to weight for a proper climate average from both before you can decide whether models have succeeded or not.

    Well then, I would suppose you would support that we all wait until the models show that they have succeeded (or not) before you would perform any action on the projection/predictions of the models?

    So far, with them being so out of phase, the prudent thing would be to do nothing based on them, would it not?

  248. Monckton of Brenchley says: June 15, 2013 at 4:38 am
    “If only Mr. Stokes were less desperate to find fault, he would not have been led deeper and deeper into his futile lie.”

    I think you’ve forgotten what the alleged lie is. But I’ve actually done little fault-finding. I haven’t disputed the correctness of your calculations. I’m agnostic on the appropriateness, but I think some summary method should be found, and yours is in that direction.

    The big fault-finder here is RGB. You’ve just confirmed, in your last post, that you are the author of the statistical methods that he so trenchantly condemned, in the graph he clearly pointed to.

    Now it’s not a big issue for me what RGB thinks of Lord M’s statistics. I just object to him trying to pin it on the AR5. They didn’t do it.

  249. JohnWho says: June 15, 2013 at 5:11 am
    “So far, with them being so out of phase, the prudent thing would be to do nothing based on them, would it not?”

    No. The fact is that we have dug up and burned nearly 400 Gtons of carbon. This has increased CO2 in the air by about 40%. There are thousands more Gt that we are likely to burn, unless we can figure out how to avoid it.

    CO2 is a GHG, and its accumulation will make the world hotter. We have a real interest in knowing how much. GCM’s represent our best chance of finding out. We need to get as much information from them as we can. Doing nothing is not a riskfree policy.
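    Those two figures can be roughly sanity-checked against each other with the standard approximate conversion of about 2.13 GtC of atmospheric carbon per ppm of CO2 and an assumed airborne fraction of around 45% (both rounded textbook values, not exact):

```python
# Back-of-envelope check of the "400 GtC burned, ~40% CO2 rise" claim.
# The conversion factor and airborne fraction are rough standard values.
emitted_gtc = 400          # cumulative fossil carbon burned (GtC), as claimed
gtc_per_ppm = 2.13         # ~2.13 GtC of atmospheric carbon per ppm of CO2
airborne_fraction = 0.45   # share of emissions remaining in the air (approx.)
preindustrial_ppm = 280

rise_ppm = emitted_gtc * airborne_fraction / gtc_per_ppm
rise_pct = 100 * rise_ppm / preindustrial_ppm
print(f"implied rise: {rise_ppm:.0f} ppm ({rise_pct:.0f}% of pre-industrial)")
```

    With these assumptions the fossil-only figure comes out to roughly 85 ppm, or about 30% of the pre-industrial level; the observed rise of roughly 40% also reflects land-use emissions on top of fossil carbon, so the two numbers are of the right order but not identical.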

  250. nick

    In the real world (rather than the one of composite temperatures optimistically labelled ‘global’), temperatures are not rising and haven’t been for a decade

    http://wattsupwiththat.com/2013/05/08/the-curious-case-of-rising-co2-and-falling-temperatures/

    it would be more helpful to look at regional/koppen zone temperatures to see what is happening but Mosh doesn’t seem to like that idea.

    In the meantime we have a globally averaged temperature which is as useful as an average global telephone number or the global average economic growth rate, inasmuch as none of these metrics bears any relation to individual circumstances
    tonyb

  251. Nick Stokes says:
    June 15, 2013 at 5:43 am
    CO2 is a GHG, and its accumulation will make the world hotter. We have a real interest in knowing how much.
    That’s the Belief of you Climahysterics, since there is no evidence so far that the additional CO2 (wherever it’s from) has, in fact, warmed the planet. Your Belief system, and that of all Warmists, won’t allow you to see that, though. As for your “real interest in knowing how much”, that’s hilarious.

  252. No. The fact is that we have dug up and burned nearly 400 Gtons of carbon. This has increased CO2 in the air by about 40%. There are thousands more Gt that we are likely to burn, unless we can figure out how to avoid it.

    No.
    Setting aside any discussion about the actual AMOUNT of coal and oil we’ve dug up and burned, CO2 doesn’t just hang around, as much as that would move the narrative forward. CO2 is actively removed at varying and, if required, fantastically rapid rates.

    Again, the continual repeating of this mantra does not make it so. The fact is that we do NOT have continuous and credible records of just what the CO2 level has been for most of this interglacial. There is still compelling evidence that the levels have been in the range they are now during the last few hundred years. And there’s still that little question about which increases first: the CO2 levels or the temperature…

    Myself, I’m actively committed to burning each and every gram I can get my hands on, just to nullify the efforts of the luddites that think they’re saving the planet by avoiding burning any. And after seeing the CROWDS that turned out to protest Alberta Oil Sands Development while our PM was visiting parliament in London this week, I’m even more determined to make sure each and every one of them is disappointed.

  253. @Nick Stokes
    Nick, as many have noticed, you seem to be AC/DC on whether chaos is important or not, and then you try to compare it all to turbulent flow in fluid mechanics. The problem is that it is nothing like turbulent flow: we have two chaotic systems, the sun and the earth/atmosphere, linked together.

    Remember that the completely normal equilibrium situation for a body at Earth’s distance from the sun, without greenhouse gases, is a sun-facing side several hundred degrees above zero and a dark side several hundred degrees below zero; we see that exact behavior on the ISS.

    I dislike the chaos argument not for anything you discussed but because of the above effect: the chaotic behavior has defined limits, and the chaos is synchronized to its driving force, the sun, which is itself a slightly chaotic system.

    Climate therefore belongs to a definite class of physics: synchronized chaotic systems.

    When you synchronize chaos there are some pretty standard problems and most of them come back to haunt climate science (http://en.wikipedia.org/wiki/Synchronization_of_chaos)

    There are robust ways to pick signals out of synchronized chaos it is done a lot in the field of speech recognition and communications especially.

    Put “robust signal detection in synchronized chaotic systems” into google and you will get the theory and a multitude of areas we use it which will be most disciplines. What you will find very few references for is climate change.

    For all you climate astrology types, then: the mathematics and theory exist, and there are even prolific details on how to build models for these systems and the problems they encounter.

    So the question to you, Nick, is why so little of this is used in climate science, when there is simply a massive amount of real and proper science available in this area, thanks largely to speech and image recognition?
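    The textbook demonstration of synchronization of chaos is the Pecora–Carroll scheme: drive a copy of part of a chaotic system with one signal from the original, and the copy locks on despite starting somewhere else entirely. A minimal sketch with the Lorenz equations and crude Euler stepping (all parameters are the standard textbook values; this is an illustration, not a climate model):

```python
# Pecora-Carroll synchronization sketch: a "response" (y, z) Lorenz subsystem
# driven by the x-variable of a "drive" Lorenz system converges to the drive
# despite different initial conditions. Standard Lorenz parameters.
sigma, rho, beta, dt = 10.0, 28.0, 8.0 / 3.0, 0.001

x, y, z = 1.0, 1.0, 1.0   # drive system
yr, zr = -5.0, 20.0       # response subsystem, deliberately different start

for step in range(200000):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    # The response receives the drive's x as its only input signal.
    dyr = x * (rho - zr) - yr
    dzr = x * yr - beta * zr
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    yr, zr = yr + dt * dyr, zr + dt * dzr

err = abs(y - yr) + abs(z - zr)
print(f"sync error after {200000 * dt:.0f} time units: {err:.2e}")
```

    The response locks on because the x-driven (y, z) subsystem has negative conditional Lyapunov exponents, which is exactly the property exploited by the chaos-based signal-detection literature mentioned above.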

  254. There are two scientists whom I trust implicitly for common sense and expertise when reading about the climate: @rgbatduke is one, and Judith Curry over at her site is another. They have superlative credentials and conduct themselves with the kind of dignity and integrity that I always assumed all scientists did. Little did I know that others who are supposed to be scientists never evolved beyond adolescence. Initially agnostic, it didn’t take long for me to see that defensive behavior and ad hominem attacks replaced robust scientific debate. Long live science.

  255. It is about time that the skeptical-realists stopped playing the game on the alarmists’ pitch. The IPCC/Met Office model projections, and all the impact studies which derive from them, are literally useless for discussing future temperatures because they are founded on absurd assumptions. First, that CO2 is the main driver, when CO2 follows temperature: the effect does not follow the cause. Second, piling stupidity on irrationality, the models add water vapour as a feedback to the CO2 in order to get a climate sensitivity of about 3 degrees, when water vapour follows temperature independently of CO2 and is the main GHG.
    Furthermore, apart from the specific problems in the Met/IPCC models, models are inherently useless for predicting temperatures because of the difficulty of setting the initial parameters with sufficient precision. That is why the Met Office gave up on making seasonal and then decadal forecasts. Discussing model outputs is like discussing species of unicorns.
    Realists should put forward their own forecasts. Here is an email which I sent to the Met Office which expands on the above comments.
    E-Mail to Stephen Belcher re Climate Change – Global Cooling
    From Dr Norman Page
    Houston Blog http://climatesense-norpag.blogspot.com.
    Dear Professor Belcher
    There has been no net warming since 1997, with CO2 up over 8%. The warming trend peaked in about 2003 and the earth has been cooling slightly for the last 10 years. This cooling will last for at least 20 years, and perhaps for hundreds of years beyond that. The Met Office and IPCC climate models, and all the impact studies depending on them, are totally useless because they are incorrectly structured. The models are founded on two irrationally absurd assumptions. First, that CO2 is the main driver, when CO2 follows temperature: the effect does not follow the cause. Second, piling stupidity on irrationality, the models add water vapour as a feedback to the CO2 in order to get a climate sensitivity of about 3 degrees, when water vapour follows temperature independently of CO2 and is the main GHG.
    Furthermore, apart from the specific problems in the Met/IPCC models, models are inherently useless for predicting temperatures because of the difficulty of setting the initial parameters with sufficient precision. Why you think you can iterate more than a couple of weeks ahead is beyond my comprehension. After all, you gave up on seasonal forecasts.
    For a discussion of the right way to approach forecasting see

    http://climatesense-norpag.blogspot.com/2013/05/climate-forecasting-basics-for-britains.html

    and several other pertinent posts also on http://climatesense-norpag.blogspot.com.
    Here is a summary of the conclusions.
    “It is not a great stretch of the imagination to propose that the 20th century warming peaked in about 2003, and that that peak was a peak in both the 60-year and 1000-year cycles. On that basis the conclusions of the post referred to above were as follows.
    1 Significant temperature drop at about 2016-17
    2 Possible unusual cold snap 2021-22
    3 Built in cooling trend until at least 2024
    4 Temperature Hadsst3 moving average anomaly 2035 – 0.15
    5 Temperature Hadsst3 moving average anomaly 2100 – 0.5
    6 General Conclusion – by 2100 all the 20th century temperature rise will have been reversed.
    7 By 2650 earth could possibly be back to the depths of the little ice age.
    8 The effect of increasing CO2 emissions will be minor but beneficial – they may slightly ameliorate the forecast cooling and help maintain crop yields .
    9 Warning!! There are some signs in the Livingston and Penn solar data that a sudden drop to Maunder Minimum Little Ice Age temperatures could be imminent, with a much more rapid and economically disruptive cooling than that forecast above, which may turn out to be a best-case scenario.

    For a discussion of the effects of cooling on future weather patterns see the 30-Year Climate Forecast 2-Year Update at

    http://climatesense-norpag.blogspot.com/2012/07/30-year-climate-forecast-2-year-update.html

    How confident should one be in the above predictions? The pattern method doesn’t lend itself easily to statistical measures. However, statistical calculations only provide an apparent rigour for the uninitiated, and in relation to the climate models they are entirely misleading because they make no allowance for the structural uncertainties in the model set-up. This is where scientific judgement comes in – some people are better at pattern recognition than others. A past record of successful forecasting is a useful but not infallible measure. In this case I am reasonably sure – say 65/35 – for about 20 years ahead. Beyond that, inevitably, certainty drops.”
    It is way past time for someone in the British scientific establishment to say forthrightly to the government that the whole CO2 scare is based on a mass delusion, and to try to stop Britain’s lunatic efforts to control the climate by installing windmills.
    As an expat Brit I watch with fascinated horror as y’all head lemming-like over a cliff. I would be very happy to consult for the Met on this matter – you certainly need to hear a forthright skeptic presentation to reconnect with reality.
    Best Regards Norman Page.

  256. Nick Stokes says:

    June 15, 2013 at 5:43 am

    JohnWho says: June 15, 2013 at 5:11 am
    “So far, with them being so out of phase, the prudent thing would be to do nothing based on them, would it not?”

    No. The fact is that we have dug up and burned nearly 400 Gtons of carbon. This has increased CO2 in the air by about 40%. There are thousands more Gt that we are likely to burn, unless we can figure out how to avoid it.

    But, as noted by others above, unless it can be shown that that CO2 that we emit into the atmosphere is actually doing anything of consequence, we should be more like Alfred E. Neuman and not worry.

    CO2 is a GHG, and its accumulation will make the world hotter. We have a real interest in knowing how much. GCM’s represent our best chance of finding out. We need to get as much information from them as we can. Doing nothing is not a riskfree policy.

    I suspect we all may agree with you that “we have a real interest in knowing how much” the effect, if discernible, atmospheric CO2 levels have, just as we have a real interest in understanding the world around us.

    But, c’mon, we aren’t using much of our intellectual ability when we state that “GCM’s represent our best chance of finding out”. Wouldn’t actual observation of the actual warming, if discernible, attributable to the actual increase in CO2 be a much better way of finding out what, if anything, we should actually be concerned over?

    Actually, I am sure it would be.

    Determining this is clearly not “doing nothing” and is the best scientific policy. The best political policy is to let the scientists be scientists and not react hastily to “what if” scenarios.

    Moving forward on the GCM’s, by the way, is not risk-free.

  257. “we shall soon be approaching Dr. Ben Santer’s 17-year test: if there is no warming for 17 years, the models are wrong.”

    Is this really what Santer said? I’m pretty sure he said 17 was a minimum for observation of a gradual warming trend.

  258. Several years ago some skeptics pointed out that there had been no warming for around 10 years. They used the 1998 El Niño as a starting point. The alarmists came unglued. They told us that it was cherry picking and meaningless.

    Fast forward to now. Nick just claimed that models all show periods of non-warming for 10-15 years. What he didn’t tell you was that every one of those periods is pure cherry-picking: they go from a local high (probably an El Niño) to a local low (likely a La Niña or a volcano). So, the fact is, NO models match our current reality. NONE. RSS data says we’ve had zero warming since late 1996, which was ENSO-neutral just as we are now. And, since 1996 was near a solar minimum while we are now at a solar maximum, the zero trend would probably be slightly negative if this were factored in.
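    That endpoint sensitivity is easy to demonstrate on synthetic data: fit a trend from every possible start month of a series with a constant underlying rate, and the fitted values scatter widely depending on whether the start lands on a peak or a trough. All numbers below (rate, wiggle, noise) are invented for the sketch:

```python
import math
import random

random.seed(1)

def slope(y):
    """OLS slope of a series against its month index."""
    m = len(y)
    xbar, ybar = (m - 1) / 2, sum(y) / m
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    return num / sum((i - xbar) ** 2 for i in range(m))

# Synthetic 20-year monthly series: a constant 0.10 C/decade rise plus an
# ENSO-like wiggle and noise. All parameters invented for illustration.
n = 240
series = [0.10 / 120 * t + 0.15 * math.sin(2 * math.pi * t / 45)
          + random.gauss(0, 0.08) for t in range(n)]

# Fit a trend (C/decade) from every possible start month to the end,
# keeping at least 5 years of data in each window.
trends = [120 * slope(series[start:]) for start in range(n - 60)]
print(f"underlying rate 0.10 C/decade; fitted trends span "
      f"{min(trends):.2f} to {max(trends):.2f} depending on the start month")
```

    Even though the generating rate never changes, the fitted trend swings well above and below it as the start month moves across the wiggle, which is exactly why endpoint choice has to be argued, not just picked.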

    One can only wonder why Nick is being dishonest. Why does he want to hide the truth?

  259. Ryan says:
    June 15, 2013 at 7:45 am
    “we shall soon be approaching Dr. Ben Santer’s 17-year test: if there is no warming for 17 years, the models are wrong.”

    Is this really what Santer said? I’m pretty sure he said 17 was a minimum for observation of a gradual warming trend.

    I suspect you are right. The reason is most likely the situation I described above, where you might have a zero trend running from ENSO+ to ENSO- over a period of warming; that would require a longer interval for the programmed warming to overcome. However, that is not the case right now in 2013: the “noise” factors are not affecting the trend to any significant degree. Hence, the 17 years probably overstates the situation. I haven’t seen a single model run with even a 15-year trend that matches our current situation. NONE.

    The fact is ALL the models have been falsified.

  260. CodeTech says:

    June 15, 2013 at 6:17 am

    The fact is that we do NOT have continuous and credible records of just what the CO2 level has been for most of this interglacial. There is still compelling evidence that the levels have been in the range they are now during the last few hundred years.

    ————–
    you are entitled to your own opinions, but not your own facts.

    We have a very significant and credible record based on thousands of ice cores (recent 2,000 years) and hundreds of ice cores (earlier Holocene), as well as plant stomata and tree-ring growth and other ancillary indicators, showing that CO2 has not been anywhere near current atmospheric levels for almost 52 million years.

    ———

    and,

    when you said,

    And there’s still that little question about which increases first: the CO2 levels or the temperature…

    ———

    you, of course, are talking about the glacial–interglacial record and, no, that has never been a question: we have always known that the Milankovitch (orbital) cycles start the thaw from ice ages. BUT we also know that the amount of heat from those cycles is not nearly enough to cause the warming we see. Only the greenhouse effect (later rises in CO2) is enough to warm the planet after an ice age.

    AND

    the lag time is consistent with the amount of time it takes the ocean’s currents to complete one thermohaline loop (warm, salty, CO2-rich water sinks, travels near the bottom of the ocean, and rises again after about 500 years).

    ———————–

    when you say “luddites”, who are you talking about, really?

    If the overwhelming majority of the scientists out there are honest and sincerely believe that CO2 will kill your progeny, why would you want to help kill them faster (by burning all the fossil fuel you can)?

    that is like a smoker disbelieving his own lung cancer.

  261. Nick Stokes says:
    June 14, 2013 at 9:29 pm
    NOAA specified surface temperature, which rules out two of them. The other is obsolete.

    That is interesting! On the other hand, Santer specified satellites. So if Santer was mentioned, then I suppose the satellite data should have been mentioned instead of HadCRUT4. The following are quotes from Santer:

    “We compare global-scale changes in satellite estimates of the temperature of the lower troposphere (TLT) ….Our results show that temperature records of at least 17 years in length are required for identifying human effects on global-mean tropospheric temperature.” See:

    http://www.agu.org/pubs/crossref/2011/2011JD016263.shtml

    This however does not really change much since RSS is 198/204 = 97% of the way to reaching Santer’s mark.
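    The record-length point in Santer’s quote can be illustrated with synthetic red noise: fit trends to trendless AR(1) “weather” and watch how the trend noise shrinks as the record lengthens. The noise amplitude and autocorrelation below are invented for the sketch, not fitted to any real series:

```python
import math
import random

random.seed(2)

def fitted_trend(y):
    """OLS slope of a series against its index (per month)."""
    n = len(y)
    xbar, ybar = (n - 1) / 2, sum(y) / n
    num = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    return num / sum((i - xbar) ** 2 for i in range(n))

def trend_spread(months, runs=300, sd=0.1, phi=0.6):
    """Std dev (C/decade) of trends fitted to pure AR(1) 'weather' noise."""
    trends = []
    for _ in range(runs):
        e, series = 0.0, []
        for _ in range(months):
            e = phi * e + random.gauss(0, sd)   # red noise, no real trend
            series.append(e)
        trends.append(120 * fitted_trend(series))
    mean = sum(trends) / runs
    return math.sqrt(sum((v - mean) ** 2 for v in trends) / runs)

for years in (5, 10, 17, 25):
    print(f"{years:2d} yr record: 2-sigma trend noise ~ "
          f"{2 * trend_spread(12 * years):.2f} C/decade")
```

    With these made-up noise parameters, a short record produces spurious trends of several tenths of a degree per decade, and only at around the 15-20 year mark does the trend noise fall comfortably below a warming rate of order 0.2 C/decade, which is the sense in which 17 years is a minimum for detection rather than a fixed pass/fail line.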

    As for HadCRUT3 being obsolete, do you have confirmation of that? In my report: http://wattsupwiththat.com/2013/06/09/are-we-in-a-pause-or-a-decline-now-includes-at-least-april-data/
    I did mention that “However as of June 8, HadCRUT3 for April is still not up! Could it be because as of the end of March, the slope of 0 lasted 16 years and 1 month and they do not want to add another month or two? What do you think?”

  262. [snip - Greg House under a new fake name. Verified by network path. Mr. House has been shown the door but decided to come back as a fake persona preaching the Slayer/Principia meme. -Anthony]. The same goes for the calculations of “global warming”. OK, the logical issues are apparently the most difficult ones, so I suggest you just memorize it: “absence of correlation does not necessarily imply absence of causation”.

  263. Jai says, “If the overwhelming majority of the scientists out there are honest and sincerely believe that CO2 will kill your progeny, why would you want to help kill them faster (by burning all the fossil fuel you can)

    that is like a smoker disbelieving his own lung cancer.”

    1) The overwhelming majority of scientists does not sincerely believe that CO2 will kill your progeny, but

    2) It wouldn’t matter if a majority did, since science isn’t a democracy.

    3) Comparing natural cyclic climate fluctuations with a smoker who gives himself lung cancer is not only a pointless analogy but, please excuse my saying so, idiotic.

  264. Mr. Stokes and Mr. Mansion (perhaps they are the same) continue to sow what looks increasingly like deliberate (and more than a little petulant) confusion.

    Mr. Stokes has now been told again by Professor Brown that it was the IPCC’s Fifth Assessment Report, not my representation of it, that he was criticizing, and that I had correctly represented Fig. 11.33 from that report; and Mr. Stokes has been assured by me that I did not perform any statistics on the AR5 projections: I merely placed them accurately on the graph. He continues to assert, however, that Professor Brown was criticizing me for having performed “statistics” on the IPCC’s projections, including the determination of a correlation coefficient that I had already explained was determined not on the IPCC’s projections but on the real-world temperatures from HadCRUt4 that were also displayed on the graph. No: I had simply reported the IPCC’s projections on the graph, without performing any statistics whatsoever on them, and I had fairly pointed out the already-substantial variance between the IPCC’s declared central projection, which I also displayed on the graph, and the less dramatic real-world outturn. And the correlation coefficient on the outturn trend was correctly determined, so there would have been no call for Professor Brown to criticize it.

    Mr. Mansion, having had his original argument against the proposition that absence of correlation necessarily implies absence of causation thoroughly dismantled, merely reasserts it (twice) without any argument at all, in a pusillanimous “so-there!” fashion. There is not the space here for me to describe or demonstrate the causal laws, on which entire treatises have been written (though Mr. Mansion has self-evidently not read any of them).

    However, he will be better informed (though not necessarily wiser) if I explain that the proposition that absence of correlation necessarily implies absence of causation (a proposition neatly illustrated by his own failed counterexample of the radiator and the opening and shutting window) is a corollary of the rule of concomitant variations, the classical formulation of which is as follows: “Whatever phenomenon varies in any manner whenever another phenomenon varies in some particular manner is either a cause or an effect of that phenomenon, or is connected with it through some fact of causation.”

    I need not, I think, set out the formal proof of the corollary in terms of propositional calculus, for the corollary self-evidently follows from the rule of concomitant variations, but if Mr. Mansion is genuinely interested in learning rather than shouting he may like to read any sufficiently advanced textbook on logic (the elementary textbooks usually do not cover this topic, but he would need to read them first, for he is plainly unfamiliar even with the elementary principles of logic).
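    For the curious, the contrapositive step runs as follows (a sketch only, not the full propositional-calculus proof):

```latex
% Let C = "A is causally connected with B"
%     V = "A and B exhibit concomitant variation (correlation)".
% Reading the rule of concomitant variations as the conditional
% "causation entails concomitant variation", contraposition gives
% "absence of correlation implies absence of causation":
\[
  (C \rightarrow V) \;\vdash\; (\lnot V \rightarrow \lnot C)
\]
```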

    Mr. Mansion perpetrates another unfortunate logical solecism when he falsely discerns a contradiction between what he says are my statements primo that “the influence of CO2 concentration change on temperature change is not discernible” and secundo that “CO2 causes warming”.

    Mr. Mansion makes a mistake that trolls often make. He carefully quotes both statements incompletely and thereby fabricates the hollow basis for his supposed contradiction. What I had written was that “At least at present, the influence of CO2 concentration change is not discernible”; and that, though CO2 causes warming, its warming signal is sufficiently weak that a combination of small natural cooling factors is at present proving sufficient to mask that signal. Once the statements artfully edited by Mr. Mansion are restored, the imagined contradiction between them is shown to be illusory.

    Mr. Stokes and Mr. Mansion should really go and play in someone else’s sandpit. They are likely to get hurt if they go on trying to play alongside the big boys. They have no idea how silly they are making themselves look, not only now but for all time: for these postings are being archived by the Lord Monckton Foundation so that future generations can discern something of the intellectual feeble-mindedness, dishonesty and petty politicization that led to the now-collapsed “global warming” scare. Of the completeness of that collapse the Global Warming Prediction Index is but one measure.

  265. Chad Jessup says:
    June 13, 2013 at 10:54 pm

    rgbatduke at 1:17 pm – Oh Yes, follow the money. Corporate America, which of course includes Big Oil, has consistently been the main supplier of money to the Green Movement for decades.
    ////////////

    Yes Chad, it is ironic that the green movement is what helps make the general cost of oil (carbon) higher not lower (making “big oil” bigger not smaller). That higher price supports global tyranny (statist control of oil) which is something the left supports as well.

    Try speaking to liberals and socialists about “banking” if you want to see how emotionally and intellectually unstable their core beliefs really are. They’ll condemn the financial system on the one hand (“Occupy Wall-Street”, banksters etc.) but then demand more fake money be printed and added to the system, which demands more banking and leverage to exist at all. In most cases and topics there is no consideration of the longer-term consequences to individual rights. That’s the common denominator. All of it favors large interests and the “wealthy” as well. Green extremes and blathering leftism in the European tradition are based on nostalgia, a Luddite conclusion to life. Anti-progress and anti-science.

    As for corporations co-opting green demagoguery, it’s perfectly logical as long as it helps their transactions and keeps them from dying in their beds.

  266. [snip - Greg House under a new fake name. Verified by network path. Mr. House has been shown the door but decided to come back as a fake persona preaching the Slayer/Principia meme. -Anthony]

  267. Steven Mosher says (June 13, 2013 at 10:23 pm): “Absent a better theory, folks work with the best they have.”

    1) Today’s GCMs demonstrably can’t predict the future, except perhaps by chance.
    2) Today’s GCMs are “the best they have”.

    Ergo, “they” can’t predict the future, except by chance. In that case, there’s no reason to prefer GCMs over some other forecasting method with no predictive skill other than chance, e.g. a ouija board.

    And that’s when I realized that today’s GCMs may not be “the best they have”. I remember seeing a graph somewhere, probably on WUWT, in which a “naive” temperature forecast outperformed a GCM “ensemble mean”. In other words, the GCMs aren’t just competing with each other, they’re also competing with “naive” models which “project” something like

    “for the thirty year interval centered on 2100, the global climate system will look a lot like it did during the thirty years centered on 1998 (or 2002, or 2013, etc.).”

    This is another “model”, just as “legitimate” as the GCMs, which policymakers can use for planning. Anyone claiming the GCMs are “the best they have” needs to show the GCMs outperform “naive” models.

    BTW, I’m no philosopher of science, but isn’t the “naive” set of models just the “null hypothesis” of CAGW?

    rgbatduke says (June 13, 2013 at 7:20 am): ‘We can then sort out the models by putting (say) all but the top five or so into a “failed” bin and stop including them in any sort of analysis or policy decisioning whatsoever unless or until they start to actually agree with reality.’

    The “top five” might include a “naive” model or two, which may then be improved by adding, for example, ENSO, AMO, opposite sea ice change in northern & southern hemispheres, etc.
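    The “naive” baseline described above can be sketched in a few lines. The data here are synthetic flat-with-noise “temperatures”, NOT real observations or actual GCM output; only the benchmarking idea is shown.

```python
import random

# Illustrative "naive" (persistence) forecast vs. a rising-trend forecast,
# scored on synthetic flat-with-noise "temperatures".
# Synthetic data only -- NOT real GCM output or observations.
random.seed(42)
baseline = 0.3
observed = [baseline + random.gauss(0, 0.1) for _ in range(30)]

# Naive model: the next 30 years look like the mean of the last 30.
naive_forecast = [sum(observed) / len(observed)] * 30

# Trend model: warms steadily at 0.02 degC/yr (a stand-in for a GCM projection).
trend_forecast = [baseline + 0.02 * year for year in range(30)]

# The "outturn" stays flat, like the synthetic past.
outturn = [baseline + random.gauss(0, 0.1) for _ in range(30)]

def rmse(pred, obs):
    """Root-mean-square error of a forecast against the outturn."""
    return (sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)) ** 0.5

print("naive RMSE:", round(rmse(naive_forecast, outturn), 3))
print("trend RMSE:", round(rmse(trend_forecast, outturn), 3))
# When the outturn is flat, the naive baseline wins; the point is that any
# candidate model should at least beat this benchmark.
```

    On data that in fact trends upward the comparison would reverse; the sketch illustrates only the benchmarking idea, not any claim about real temperatures.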

  268. Greg Mansion says (June 15, 2013 at 10:18 am): “Or you can keep arguing in your brilliant disgusting style, of course…”

    FYI, not all of us reading this thread find Lord Monckton’s style “disgusting” (“brilliant” maybe), so you might want to use the phrase “disgusting to me, Greg Mansion” in the future.

    “…but it is maybe time you realize that you will only sink deeper and deeper into the BS you yourself created.”

    The irony, it burns! :-)

  269. This is certainly one of the more entertaining – and quite educational and informative – blog discussions in a long time. I would say “Game, Set, Match, and Yer Outta Here!” to Messrs. Stokes and Mansion (and Mitchell, who seems to be the ball boy in this side), who clearly are outmatched in every aspect in this game. Go back to the minor leagues, boys.

  270. Greg Mansion is basically Greg House in a new avatar. He spouts the same bullshit again and again.

    REPLY: Thanks I’ll check into it, these Slayer/Principia folks are worse than Jehovah’s Witnesses when it comes to knocking on your door and demanding we listen to their opinion. – Anthony

  271. [snip - Greg House under a new fake name. Verified by network path. Mr. House has been shown the door but decided to come back as a fake persona preaching the Slayer/Principia meme. -Anthony]

  272. Nick Stokes says:
    June 15, 2013 at 5:43 am
    No. The fact is that we have dug up and burned nearly 400 Gtons of carbon. This has increased CO2 in the air by about 40%
    ======
    What a stupid argument….
    You have $0.28 in your pocket….I increase it 40%

    ….go buy dinner
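    For what it is worth, the 40% figure is consistent with the commonly cited pre-industrial baseline of roughly 280 ppm. A back-of-envelope sketch (the 280 ppm baseline is an assumption of this sketch, not a figure from the comments above):

```python
# Back-of-envelope check of the "about 40%" claim, assuming the commonly
# cited pre-industrial baseline of ~280 ppm CO2 (an assumption of this
# sketch, not a figure from the comment above).
preindustrial_ppm = 280.0
increase_fraction = 0.40

current_ppm = preindustrial_ppm * (1 + increase_fraction)
print(round(current_ppm, 1))  # 392.0 -- close to the ~395 ppm cited downthread

# ...and the $0.28 analogy:
pocket_dollars = 0.28
print(round(pocket_dollars * (1 + increase_fraction), 3))  # 0.392 -- still not dinner money
```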

  273. jai mitchell says:
    June 14, 2013 at 7:27 pm

    “milodonharlani

    I gave you a lengthy explaination but the moderators are holding onto it for now.

    anyone who says that there has been no warming since the 1998 El Nino needs to realize that this has been one of the warmest years in human history and the last 10 years have been the warmest decade in human history.

    unless an annual temperature drops below the 1979 average (which it hasn’t done in over 35 years now) I am not concerned about your pet theories.”
    ————————————————–

    There has been no statistically significant warming since before 1998, which you’d know had you read this blog more attentively. And the period of flat to cooling temperatures is longest in the least “adjusted” data sets.

    Are you aware that a cooling phase comparable to that which Earth currently appears to be in also occurred during the 1960s & ’70s? And in prior phases of the PDO during the recovery from the LIA before CO2 took off post-war? Winters in the ’60s & ’70s were memorably frigid, despite the rise in CO2 from the ’40s & ’50s.

    I don’t have a pet theory. I have a respect for the scientific method, which CACCA violates, so I look for hypotheses that haven’t been falsified (in both senses of the term), as CACCA has been.

  274. Glad to see that jai mitchell now understands that we’re in an interstadial. Previously his position was that the current Ice Age had passed.

    And yes, CO2 is higher than at times in the past. Not that it matters. During geological history, CO2 has been up to twenty times higher than it is now — with no runaway global warming. When CO2 was high, the biosphere teemed with life. More CO2 is better. There is no downside at either current or projected CO2 levels.

  275. Steven Mosher says:
    June 13, 2013 at 10:23 pm

    Here is a hint. You can be a sceptic and not rely on either of these guys flawed ideas about how science in fact operates. Theories rarely get “falsified” they get changed, improved, or forgotten when some better theory comes along. Absent a better theory, folks work with the best they have.
    ——————————————————–

    CACCA is hardly the best that science has to offer. It isn’t scientific at all, but un-scientific & corruptly defended by anti-scientific means. Climatology needs more & better data and fewer GIGO models. Which reminds me of Freeman Dyson, another great physicist; I wonder whether you regard him as poorly as you do Richard Feynman.

    Please quantify “rarely”. One major theory or hypothesis per century? Two? Ten? Or do you have in mind a proportion of theories falsified compared to those simply abandoned by the weight of evidence?

    A few biggies spring readily to mind. The geocentric theory was falsified in the 17th century, as was the theory of perfectly circular orbits (by Tycho’s data & Kepler’s analysis thereof). Phlogiston was falsified in the 18th century & spontaneous generation in the 19th. The steady state theory of the universe was falsified in the 20th century, along with immovable continents. To mention but a few. CACCA was falsified in the 20th century & then again in the 21st, but reality deniers still cling to it, like Flat Earthers.

  276. DB:

    Please excuse my pedantry, but technically we’re in an interglacial, between stadials. Interstadials occur during cold glacial stages. Stadials are cooler phases of a warmer interglacial, like the Little Ice Age stadial.

    I think.

  277. Latitude says:
    June 15, 2013 at 11:42 am

    Nick Stokes says:
    June 15, 2013 at 5:43 am
    No. The fact is that we have dug up and burned nearly 400 Gtons of carbon. This has increased CO2 in the air by about 40%
    ======
    What a stupid argument….
    You have $0.28 in your pocket….I increase it 40%

    ….go buy dinner
    —————————–

    Well said, but it’s worse than that. Only about 4% of atmospheric CO2 (& a possibly somewhat higher share of gain since 1850) is due to human activities such as burning hydrocarbons or making cement for windmills, dams & nuclear power plants.

    Climate science has been so busy spending grant money on worse than worthless GIGO models, that it hasn’t discovered all the sinks for carbon or worked out the details of the C cycle.

  278. jai mitchell says:
    June 14, 2013 at 7:27 pm
    milodonharlani

    I gave you a lengthy explaination but the moderators are holding onto it for now.

    anyone who says that there has been no warming since the 1998 El Nino needs to realize that this has been one of the warmest years in human history and the last 10 years have been the warmest decade in human history.

    unless an annual temperature drops below the 1979 average (which it hasn’t done in over 35 years now) I am not concerned about your pet theories.
    ======================
    Haven’t humans been around for the entire Holocene? Wasn’t it warmer in the Medieval, Roman and Minoan warm periods? Not to mention the HCO?

    One thing I have noticed is that the zealots of the “Alarmist Church” are out in force for this discussion because they absolutely cannot concede this argument. If they do, they realize that it’s all over for their “faith”.

    What are they going to be like when it cools for 20 or 30 years?

    LOL

  280. Tom:

    The adherents of the CACCA cult will do what they’ve done for the past decade, ie assert that global cooling is a result of catastrophic man-made global warming. The models predicted this result, don’t you know?

  281. jai mitchell says:
    June 15, 2013 at 8:15 am
    you are entitled to your own opinions, but not your own facts.
    We have a very significant and credible record based on thousands of ice cores (recent 2,000 years) and hundreds of ice cores (earlier Holocene).
    ===================
    jai mitchell says:
    June 14, 2013 at 7:27 pm
    anyone who says that there has been no warming since the 1998 El Nino needs to realize that this has been one of the warmest years in human history and the last 10 years have been the warmest decade in human history.
    ===================

    jai, are you alright?

  282. TomR,Worc,MA,USA says:
    What are they going to be like when it cools for 20 or thirty years?

    Pockets of them may remain, as members of the “Hot Earth Society”. They will bow and scrape before idols of Al Gore, Hansen, and Mann, and their logo will be a hockey stick.

  283. I just think of thousands to hundreds of thousands of highly-paid and well-respected people who can’t figure out that large chaotic systems aren’t easy to predict, and I want to weep for the future.

    Here’s a thought; what else are they getting wrong?

  284. Latitude (June 15, 2013 at 12:54 pm), technically previous interglacials don’t count as “human history”, even though “modern” humans may have been around; they didn’t leave written records and so aren’t “historical”.

    He’s still wrong, though, because the Minoan, Roman, and Medieval Warm periods occurred during human “history”. With the limitations of ancient records and temperature proxies (and even modern temperature records), it can’t be said with certainty that one or more of these eras wasn’t warmer than today. One of the warmest decades in history? OK. The warmest? No.

  285. Proxy records indicate that the peaks of the Minoan, Roman & Medieval Warm Periods were hotter than the 1980s & 1990s, which weren’t even warmer than the 1930s & 40s.

    CACCA is & always has been an epic FAIL.

  286. jai typed:

    If the overwhelming majority of the scientists out there are honest and sincerely believe that CO2 will kill your progeny, why would you want to help kill them faster (by burning all the fossil fuel you can)

    You probably don’t realize this, but that’s one of the least intelligent comments on this thread. The “overwhelming majority of the scientists” also believed in eugenics, phrenology, terracentric universe, static continents, ether, etc. etc. etc. Then they died, refusing to look at the facts. Like you will.

    And even though I am well aware that you think you’re a pretty clever guy, if you think the ice cores are anything more than a long-term average with absolutely NO capability of documenting spikes or other excursions, then you understand nothing about them. The “52 million years” claim makes you look like a 10-year-old repeating something your teacher told you at school.

    “the last 10 years have been the warmest decade in human history”

    On the other hand, 3 of the last 5 years on RSS are not even in the top 10. With RSS, 2012 ranks 11th, and 2011 ranks 13th, and 2008 is 22nd.

  288. Codetech says,

    And even though I am well aware that you think you’re a pretty clever guy, if you think the ice cores are anything more than a long-term average with absolutely NO capability of documenting spikes or other excursions, then you understand nothing about them. The “52 million years” claim makes you look like a 10-year-old repeating something your teacher told you at school.

    ———-

    The finest resolution on the 540,000 year cycle is several hundred years.

    The finest resolution on the 2000 year cycle is several decades.

    It sounds like you are making stuff up. Well, if you think that the CO2 isn’t from humans’ burning fossil fuels, there is simply no hope for a guy like you.

  289. Latitude says:

    June 15, 2013 at 12:54 pm

    do you understand the term “human history”?

    just how far back do you suppose that goes?

  290. dbstealey says:
    June 15, 2013 at 11:54 am

    Glad to see that jai mitchell now understands that we’re in an interstadial. Previously his position was that the current Ice Age had passed.

    And yes, CO2 is higher than at times in the past. Not that it matters. During geological history, CO2 has been up to twenty times higher than it is now — with no runaway global warming. When CO2 was high, the biosphere teemed with life. More CO2 is better. There is no downside at either current or projected CO2 levels.
    ————————————

    When exactly was CO2 20 times higher than now? And what was the temperature then? And what were the sea levels? What were humans like then? Could our modern civilization survive the temperature effects and sea level changes that this would bring?

    You know, the idea that we are in an interstadial is against everything that has ever been taught or studied in paleoclimate… I mean, they call it the Holocene BECAUSE we ended the last ice age.

    That’s it, I can’t possibly deal with all the Gish Gallop going on here. You guys are absolutely bonkers.

    http://rationalwiki.org/wiki/Gish_Gallop

  291. jai…..tell me again how far back you said we have ice cores

    and what did early humans have to do with temperatures or CO2 levels

    So what if humans were here or not… temps went up and down anyway

    Just exactly like they are now….

    Who’s stupid enough to think a fraction of a degree means anything… and even ‘stupider’ to think that a tiny fraction of a degree (the real one, not adjusted, not an anomaly) can show a trend?

    hell, my butt has a bigger temperature swing when I get up and down……….

  292. More importantly Jai, what was the sun doing then? What were the continents doing then?

    I can’t believe nobody is even bothered that Santer never said the thing this whole argument is based on. He said 17 years, at a minimum. And he wasn’t even talking about global surface temps.

  293. jai mitchell says:

    “When CO2 was 20 times higher than it was now was when exactly?”

    You can easily see the answer to that by clicking on the link I posted. CO2 has been up to twenty times higher in the past. Sorry that disrupts your world view.

    And:

    “…they call it the Holocene BECAUSE we ended the last ice age.”

    I understand that you’re winging it here, and that you’re pretty new regarding this subject. But we are still in an Ice Age.

    Finally, thanx for the laugh with your one link, which has nothing whatever to do with science. It just reflects the consternation you feel when someone runs circles around your arguments.

    I recommend reading the WUWT archives for a few months. You need to get up to speed on the subjects discussed here over the past 5 – 6 years. That way you won’t make juvenile errors, like when you wrote: “The LIA is associated with the maurader minimum…”

    It is easy to tell when someone gets their talking points from alarmist blogs, which don’t know what the Maunder Minimum was. You can’t pretend your way to credibility here.

  294. M. Courtney says:
    “The worst impact of creating this echo-chamber is the decline in the Guardian’s readership.”

    Indeed, what happened to RealClimate? It appears to have just moved over to the Guardian.

  295. Jai & Ryan:

    Please educate yourselves on basic geologic & atmospheric history before you comment on them.

    Just considering the Phanerozoic Eon (the past 543 million years), CO2 was about 20 times higher than now during the Paleozoic Era’s Cambrian & Ordovician Periods. The sun was then only four to five percent weaker than now, yet the latter period experienced not just an icehouse cycle but a glacial epoch. The spread of green plants onto land during the Silurian, Devonian & Carboniferous Periods helped lower CO2 from around 7000 ppm to 1000 during much of the Mesozoic Era.

    During the Paleocene & Eocene Epochs of our present Cenozoic Era, CO2 concentration was 900 to 1100 ppm, although some think that during the Eocene Optimum, levels might have returned to 2000 ppm. That time may be the 52 million years ago interval that has Jai so confused, but CO2 was much higher then than now, as you easily could have discovered by doing the least little bit of actual research.

    As noted by many, the human contribution to current CO2 concentration of 395 ppm of dry air is a small fraction of the total. Most of the gain in this beneficial gas has been from natural causes, chiefly its release with slow warming of the oceans since the depths of the LIA c. 1700.

  296. dbstealey says:
    June 15, 2013 at 4:16 pm

    Thanks for mentioning the current glacial epochs.

    Jai, the present Ice Age, with waxing & waning ice sheets over the NH, began with the Pleistocene Epoch about 2.4 million years ago. But since the Oligocene Epoch, when Antarctica glaciated, Earth has been in an icehouse phase. The Holocene is simply another in the long series of interglacials that interrupt NH ice sheet advances & extensive SH montane glaciers, in addition to the persistent Antarctic ice sheets, now about 35 million years old.

    BTW, soil isotope studies show that the East Antarctic Ice Sheet, with most of the ice on our planet, has been stable or growing for at least 3000 years, ending its retreat begun after the Last Glacial Maximum, c. 20,000 years ago. This fits with other proxy data showing that Earth has been in a cooling phase since the Minoan Warm Period, ie headed back toward the next glacial phase. They last about 100,000 years, while the interglacials, as now, typically endure just 10,000 to 20,000 years. Ours is getting long in the tooth, so enjoy the balmy climate while it lasts.

    Humanity could not stop the next ice age even if over the next few centuries we burnt all the accessible fossil fuel in Earth’s crust. The ice sheets might not return for thousands of years yet, but they will.

    Yes, glaciations became possible when CO2 dropped to somewhere around or under 3,000 ppm. A little bit less sun output makes a huge difference, and so do other factors. These things are not news to most anyone who follows climate science, including the thousands of “warmist” climatologists who think the Earth is still warming in response to CO2. And claiming that we know concentrations hit 8,000 ppm (20 × 400, assuming here that you don’t think Mauna Loa is in on the conspiracy ;D) is a bit out there. It’s possible, but it is not a sure thing.

    Tell me, does it bother you that Santer never said anything about a 17-yr test to disprove warming, as is claimed in this post? (trying to actually discuss the post)

  298. Obviously, jai mitchell doesn’t understand climate science, how it works, what the history is, etc.

    We can go on trying to educate her, but it is usually pointless with true believers. They don’t want to know the facts, the theory, or the truth, or to understand any of the nuances. They just want to continue believing, and every comment she makes is in that frame. Try to ignore her, and especially ignore any fact-type information she thinks she is providing.

  299. Ryan says:
    June 15, 2013 at 4:11 pm
    I can’t believe nobody is even bothered that Santer never said the thing this whole argument is based on. He said 17 years, at a minimum.

    In other words, if this is the correct interpretation, and if we have no change for a million years, his great, great…..great grandchildren can say he was right since he only said 17 years was the minimum needed, but no maximum was ever given by which time you can conclude that the models are wrong. Is that correct?

    Here is what Richard Courtney had to say about Santer’s statement on an earlier article:
    “The Santer statement says that a period of at least 17 years is needed to see an anthropogenic effect. It is a political statement because “at least 17 years” could be any length of time longer than 17 years. It is not a scientific statement because it is not falsifiable.
    However, if the Santer statement is claimed to be a scientific statement then any period longer than 17 years would indicate an anthropogenic effect. So, a 17-year period of no discernible global warming would indicate no anthropogenic global warming.
    In my opinion, Santer made a political statement so it should be answered with a political response: i.e. it should be insisted that he said 17 years of no global warming means no anthropogenic global warming because any anthropogenic effect would have been observed.
    Santer made his petard and he should be hoisted on it.
    Richard”

  300. Ryan says:

    “Tell me, does it bother you that Santer never said anything about a 17-yr test to disprove warming as is claimed in this post?”

    Let’s cut to the chase here: How many years, in your opinion, would global warming have to stop for you to admit that the CO2=CAGW conjecture is falsified?

    Post a specific number, please. How many years?

  301. Ryan says:
    June 15, 2013 at 4:35 pm

    Yes, glaciations became possible when CO2 dropped to somewhere around or under 3,000 ppm. A little bit less sun output makes a huge difference, and so do other factors. These things are not news to most anyone who follows climate science, including the thousands of “warmist” climatologists who think the Earth is still warming in response to CO2. And claiming that we know concentrations hit 8,000 ppm (20 × 400, assuming here that you don’t think Mauna Loa is in on the conspiracy ;D) is a bit out there. It’s possible, but it is not a sure thing.

    ————————————-

    Wrong again. The relatively brief Ordovician glaciation happened with CO2 well above 3000 ppm. One of the proxy data sets has a resolution of ten million years, with about 7000 ppm on the older side & at least 4000 ppm on the newer, so CACCAs are reduced to arguing that CO2 could have dipped below 1000 ppm in between. Typical evidence-free hand-waving by CACCA.

    Besides which, what of all the previous glaciations, some of which either covered the entire planet in ice or most of it? When they were initiated, CO2 levels might have been higher even than in the Cambrian, ie 7000 to 8000 ppm. No one knows for sure how much CO2 there was in Precambrian air. Estimates range from 90,000 ppm to 3200, although the latter figure (& lower) is controversial & from a single paper.

    The “idea” that going from three CO2 molecules per 10,000 of dry air 100 years ago to six 100 years from now will cause runaway catastrophic global warming is nothing short of ridiculous. As you may know, water vapor levels in the tropics reach 400 molecules per 10,000 (40,000 ppm), totally swamping out any possible effect of one, two or three more CO2 molecules.
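    The molecules-per-10,000 framing above is just a unit change from ppm (parts per million), i.e. a division by 100; a trivial sketch:

```python
# ppm ("parts per million") to parts-per-10,000 is just a divide-by-100.
def ppm_to_per_10k(ppm):
    return ppm / 100.0

print(ppm_to_per_10k(300))    # 3.0   -- roughly the level of a century ago
print(ppm_to_per_10k(600))    # 6.0   -- the doubled level mentioned above
print(ppm_to_per_10k(40000))  # 400.0 -- the tropical water-vapour figure cited
```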

  302. Ryan:

    During what period has there been statistically significant global warming, by whichever cooked-book surface data set you want to use? In 2010, Jones of UEA said there had been none since 1995. That is now 18 years ago, more or less, so to equal that period, statistically significant global warming would have had to commence in the famous weather year of 1977, which it may well have done, although most trend analyses find a later start date.

    So in about a year, when the world will have gone without significant warming for longer than the preceding stretch of significant warming lasted, or will have experienced cooling, all the while with CO2 rising at close to the same rate as before, will you consider CACCA scientifically falsified, or not?

    Thanks.

  303. Whether or not you feel like you agree with his statement doesn’t really change the fact that it is being misrepresented here. It isn’t even being compared to the right data…

    This is classic quote-mining. Taking the words of a respected scientist out of context to add some weight to an otherwise nonsensical claim.

  304. “Santer made his petard and he should be hoisted on it.”

    With it, hoisted with it. A petard is a small bomb that was placed on a raisable platform and put next to a castle wall or gate. One cannot be ‘hoist on his own petard’, one must be ‘hoist WITH his own petard’.

    I think that anyone who is splitting hairs over what is meant by Santer’s statement is dissembling and does not want to accept that the models are invalidated.

  305. Awesome, jai – you’ve demonstrated conclusively that you know nothing of Science, let alone “climate science”.

    Then again, comparing Science to “climate science” is a bit like comparing a kindergarten Christmas concert to the Mormon Tabernacle Choir.

    jai, it doesn’t MATTER if thousands of “scientists” are wrong. They’re still wrong. Luckily, tens of thousands are not “climate scientists” and don’t buy what the “climate” industry is selling.

    Now, guess what happens when you try to “prove” that human emitted CO2 is the cause of the documented rise at Mauna Loa? Just as an exercise, you should try it. But, of course, you won’t. Because you’re not here to learn anything, you’re here to “teach us”… right?

  306. Ryan:

    I tried to find recent estimates of CO2 level before the Marinoan or previous Snowball Earth episodes, but haven’t succeeded. This 2013 press release on an LSU study of concentrations needed to melt the ice finds, as had prior papers, extremely high carbon dioxide levels, comparable to those of O2.

    http://www.eurekalert.org/pub_releases/2013-02/lsu-lrf022813.php

    Snowball Earth periods show the remarkable homoeostatic power of Mother Gaia. And speaking of Gaia, I respect James “Billions Will Die” Lovelock for being enough of a true scientist, ie practitioner of the scientific method, to realize he was wrong about the catastrophic component of CACCA, & maybe the anthropogenic part.

    From his Wiki entry, the reliability of which can be checked via the footnotes:

    Of the claims “the science is settled” on global warming he states:[33]

    “One thing that being a scientist has taught me is that you can never be certain about anything. You never know the truth. You can only approach it and hope to get a bit nearer to it each time. You iterate towards the truth. You don’t know it.”[33]

    He criticizes environmentalists for treating global warming like a religion.[33]

    “It just so happens that the green religion is now taking over from the Christian religion,” Lovelock observed

    “I don’t think people have noticed that, but it’s got all the sort of terms that religions use … The greens use guilt. That just shows how religious greens are. You can’t win people round by saying they are guilty for putting (carbon dioxide) in the air.”[33]

    In the MSNBC article Lovelock is quoted as proclaiming:[32]

    “The problem is we don’t know what the climate is doing. We thought we knew 20 years ago. That led to some alarmist books – mine included – because it looked clear-cut, but it hasn’t happened;” he continues

    “The climate is doing its usual tricks. There’s nothing much really happening yet. We were supposed to be halfway toward a frying world now,” he said

    “The world has not warmed up very much since the millennium. Twelve years is a reasonable time … it (the temperature) has stayed almost constant, whereas it should have been rising – carbon dioxide is rising, no question about that”, he added.[32]

    In a follow up interview Lovelock stated his support for natural gas; he now favors fracking as a low-polluting alternative to coal.[13][33] He opposes the concept of “sustainable development”, where modern economies might be powered by wind turbines, calling it meaningless drivel.[33][34] He keeps a poster of a wind turbine to remind himself how much he detests them.[13]

  307. Robert:

    I like Jai & hope he or she stays here, if it is to learn rather than to keep regurgitating the lies he/she has eagerly consumed, garbling them (maybe he/she meant the usual lie of 3 million years, not the ludicrously false 52 million) rather than digesting them.

  308. There are much higher resolution estimates of the Ordovician CO2 levels, John. But they aren’t in the NIPCC so I guess they didn’t get much play around here.

    But back to the topic at hand, Santer was talking about early detection of the anthropogenic forcing in TLT trends. It is being used here to talk about falsification of global surface temperature trends. That is a lie, pure and simple. It doesn’t matter whether you or Richard or Mr. of Brenchley like what he said. It’s still a falsehood, bannered and stickied on the closest thing to a reasonable place of sceptic climate science.

  309. Ryan,

    I didn’t find the ten-million-year-resolution studies here, but at SkS.

    I’d be happy to see your finer resolution proxies for Ordovician CO2 levels. Thanks in advance.

    There is no doubt that the surface temperature data sets have been “adjusted” without justification, making recent numbers warmer & older cooler. I can see it in my own reference materials. The most shocking revelation was that when GISS was finally forced to make public their UHI adjustment algorithm, the public learned that, contrary to all reason, the changes made temperatures higher rather than lower.

    This kind of behavior is as far removed from science as possible. It’s shameless activism, & on the public dime, like Gavin’s blogging on the job & Hansen’s under-reporting outside income.

  310. “I think that anyone who is splitting hairs over what is meant by Santer’s statement is dissembling and does not want to accept that the models are invalidated.”

    It’s not splitting hairs. Santer didn’t claim that a test at 17 years would prove or disprove anything. Chris is using Santer’s name to give the test legitimacy.

  311. Ryan,

    You are avoiding my question:

    How many years, in your opinion, would global warming have to stop for you to admit that the CO2=CAGW conjecture is falsified? Post a specific number, please. How many years?

    So: how many years?

  312. So YOU think temperature adjustments have been made without justification, therefore it is fine if sceptics piggy-back tests they invented out of whole cloth on the name Ben Santer. Seems reasonable.

  313. Probably around 35 years, db. But I am not a climatologist (biology really is the greatest science, and I’ve stuck with it), so I don’t really think that carries much weight anywhere, at any time. That’s the thing about not working in a field, it makes your claims carry no weight. But the difference is that I’m not going to lift a Spencer quote out of context to try and fake some cred.

  314. Ryan says:
    June 15, 2013 at 6:23 pm
    Probably around 35 years..
    =====
    But how in this world would you know?….it’s still only a fraction of a degree

    .. a fraction of a degree will not show a trend

    serious question….I’m not being a butt

  315. Latitude says:
    June 15, 2013 at 6:17 pm

    John Tillman says:
    June 15, 2013 at 6:10 pm
    ====
    John…..U….Hide…..It

    —————————–

    I like that.

    Ryan: I don’t think that GISS, et al, make unjustified adjustments. It’s an objective, observable fact that they have done so. Unless you can point me to an explanation that I’ve missed as to why they should suddenly from 2008 onwards change temperature averages from the 1930s that have been considered valid ever since they were recorded. Please show me their justifications for these “adjustments”. Thanks.

  316. Ryan says:

    “That’s the thing about not working in a field, it makes your claims carry no weight.”

    We have no claims, Ryan. You make the claims. You just did it in your post above.

    See, skeptics have nothing to prove. We only question the alarmist “carbon” claim, because it is a fact-free conjecture.

    That’s how the Scientific Method works.

  317. Latitude:

    Thank God that the late, great Daly archived those data.

    GISS, NOAA & the Had Crew are truly shameless. How dare they claim to be scientists?

    I hope that Ryan can produce their justifications for these “climate scientists'” outrageous, Orwellian misbehavior, which is the antithesis of genuine scientific behavior.

  318. “We have no claims, Ryan. ”

    Chris does. He claims that Santer devised a test to falsify warming at 17 years.

    REPLY: And the claim is true; if you followed the link in the essay to Ben Santer’s 17 year itch you’d know this, Mr. Gainey.

    From: Separating signal and noise in climate warming

    The LLNL-led research shows that climate models can and do simulate short, 10- to 12-year “hiatus periods” with minimal warming, even when the models are run with historical increases in greenhouse gases and sulfate aerosol particles. They find that tropospheric temperature records must be at least 17 years long to discriminate between internal climate noise and the signal of human-caused changes in the chemical composition of the atmosphere.

    “One individual short-term trend doesn’t tell you much about long-term climate change,” Santer said. “A single decade of observational temperature data is inadequate for identifying a slowly evolving human-caused warming signal. In both the satellite observations and in computer models, short, 10-year tropospheric temperature trends are strongly influenced by the large noise of year-to-year climate variability.”

    Source: http://www.llnl.gov/news/newsreleases/2011/Nov/NR-11-11-03.html

    We’ll be happy to accept your mea culpa at any time, we’ll even accept one “made out of whole cloth”.

    – Anthony
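    Santer’s point that short trends are noise-dominated is easy to illustrate with a quick Monte Carlo sketch. The trend and noise levels below are made-up numbers, and white noise stands in for realistic year-to-year variability, so this is a cartoon of the argument rather than a reproduction of it:

```python
import numpy as np

rng = np.random.default_rng(42)

def trend_spread(window_years, true_trend=0.015, noise_sd=0.15, n_runs=2000):
    """Standard deviation of least-squares trend estimates (deg C / yr)
    across many noisy realizations of the same underlying warming.
    Illustrative numbers only -- not fitted to any real data set."""
    months = window_years * 12
    t = np.arange(months) / 12.0          # time in years
    slopes = [np.polyfit(t, true_trend * t + noise_sd * rng.standard_normal(months), 1)[0]
              for _ in range(n_runs)]
    return float(np.std(slopes))

# Short windows give much noisier trend estimates than long ones:
spread_10 = trend_spread(10)   # 10-year windows
spread_17 = trend_spread(17)   # 17-year windows
```

    With these toy numbers the 10-year trend estimates scatter considerably more widely than the 17-year ones, which is the sense in which a longer record is needed to separate a signal from the noise.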

  319. Climate anti-scientists, IMO, started stepping on the historical data because there is a limit to how crispily they can cook the current books. Before 1979, the satellites aren’t keeping them from being as crooked as they want to be.

  320. “And the claim is true, if you followed the link in the essay”

    The para preceding that quote says:

    “In fingerprinting, we analyze longer, multi-decadal temperature records, and we beat down the large year-to-year temperature variability caused by purely natural phenomena (like El Niños and La Niñas). This makes it easier to identify a slowly-emerging signal arising from gradual, human-caused changes in atmospheric levels of greenhouse gases,” Santer said.”

    The test he’s talking about is of signal after allowing for ENSO etc.

    REPLY: doesn’t matter, there’s still been no warming for that period; the models deviate from the tropospheric data sets enough to show clearly that they don’t work. You chaps may as well get used to that fact rather than try to flummox it. – Anthony

  321. As noted in the link above:

    So with Dr. Ben Santer now solidly defining 17 years as the minimum to determine a climate signal, what happens to the argument when we reach 2013-2014 and there’s still no statistically significant upwards trend?

    Good question. We’re almost at 2014. Six months to go.

    Santer devised a test to falsify warming at 17 years. That is a lot closer than Ryan’s “35 years” — which is more than double Santer’s number. Since Santer at least has the credentials [vs Ryan's baseless assertion], I will assume that Ryan is just too scared to post a reasonable number here.

    But the rest of us can see that with no global warming for the past 17+ years, the “carbon” scare is on its last legs.

  322. “They find that tropospheric temperature records must be at least 17 years long…”

    At LEAST 17 years is not the same as “falsified at 17 years”. It’s pretty simple. And notice the TLT reference. Again, not the info Chris is quoting above. Santer didn’t specify any test to falsify warming at 17 years and yet the claim that he did is made over and over and over again. Sounds like Darwin’s famous lifted eye quote to this biologist.

  323. “Santer devised a test to falsify warming at 17 years.”

    No, he didn’t, as you can read from Anthony’s plain-as-day quote above. He specified 17 years as the minimum number of years required to detect a trend AT ALL.

    REPLY: Yes and there’s no statistically significant trend in 17 years. Seems like only a dullard or a true believer wouldn’t get that. – Anthony

  324. For how many years previously was there a statistically significant warming trend in the same data set? From say, 1978 to 1995, for 17 years? How did Santer derive the 17-year minimum?

    For how many years before the onset of significant warming (though slight) in the 1970s or ’80s was there cooling or at least no statistical warming? From 1961 to ’78? If it were significant cooling, I’m sure that that statistical trend has been “adjusted” away.

    Was there also statistically significant warming from 1944 to ’61, or whenever?

  325. Ryan persists in attempting to maintain, contrary to the direct evidence, that Dr. Santer had not adumbrated what Anthony has called a “17-year test”, so that absence of warming for 17 years would indicate that the models were wrong.

    However, that is precisely what Dr. Santer had adumbrated. The models, he wrote, “find that tropospheric temperature records must be at least 17 years long to discriminate between internal climate noise and the signal of human-caused changes in the chemical composition of the atmosphere”.

    Ryan has attempted to wriggle out of this surely clear quotation by saying that Dr. Santer was talking about tropospheric rather than surface temperatures (but the two are in lock-step); that he was talking about the record after ENSO influences had been excluded (but, as SteveF shows, that adjustment merely strengthens the case against CO2 as an influence); and that he was talking about the absence of warming simpliciter rather than the absence of statistically-significant warming (but, as Werner Brozek has pointed out, on the RSS satellite record there has already been no warming at all for 16 years 6 months: and that period is close to Dr. Santer’s test as redefined by Ryan).

    Whichever way Ryan slices and dices the evidence, Dr. Santer’s test has been met. The models were wrong. Get used to it.

    However, the main thrust of the head posting was to point out the widening discrepancy between the rate of warming that the models have been predicting and the less exciting rate that has been measured. Here it is still more obvious that the models were wrong. That is why policy-makers are now beginning to rethink their earlier, over-hasty commitment to shutting down the economies of the West in the name of preventing global warming that has not been occurring at anything like the predicted rate. No amount of wriggling by trolls can avert the ever-more-widespread realization that models cannot reliably predict the climate for more than a week or two in advance. The climate scare is over. Move along.

  326. milodonharlani says: June 15, 2013 at 8:54 pm
    “Was there also statistically significant warming from 1944 to ’61, or whenever?”

    You can see the answers to those questions here. This one is Hadcrut 4, but you can select other datasets. Just look up 1961 on the x axis, 1944 on the y, and check the color. Click there for details.

    The answer is no, the trend was negative, but not significantly below zero.
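    For anyone who wants to reproduce this sort of significance check, here is a minimal numpy sketch on synthetic data. It uses a naive 2 σ interval on the slope with no allowance for autocorrelation, so it understates the uncertainty that the trend viewer and the published Hadcrut series account for:

```python
import numpy as np

def trend_with_2sigma(anomalies, per_year=12):
    """Least-squares trend (deg C / yr) of a monthly anomaly series,
    with a naive 2-sigma interval on the slope.  No autocorrelation
    correction, so the interval is narrower than a proper analysis
    would give."""
    y = np.asarray(anomalies, dtype=float)
    t = np.arange(len(y)) / per_year              # time in years
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    # standard error of the OLS slope
    se = np.sqrt((resid @ resid) / (len(y) - 2) / np.sum((t - t.mean()) ** 2))
    return slope, 2.0 * se

# A 17-year synthetic series with an imposed 0.02 C/yr warming:
rng = np.random.default_rng(0)
t = np.arange(17 * 12) / 12.0
warming = 0.02 * t + 0.05 * rng.standard_normal(t.size)
slope, twosig = trend_with_2sigma(warming)
significant = abs(slope) > twosig                 # True for this series
```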

  327. Bureaucrats seeking to rule the world are already moving along, unfortunately, from CACCA to “sustainability”, just as CACCA replaced Marxism after the Fall of the Wall. Whatever the urgent problem is needing global government to save us, it is surely caused by humans in general & evil capitalists in particular.

  328. Nick:

    Thanks.

    Looks as if the only 17-year period in this “adjusted” data set in which pronouncedly rising CO2 coincided with statistically significant warming was roughly 1978 to 1995. Before that time no significant heating is obvious, indeed possibly significant cooling for a spell, despite rising CO2; then came the period around 1988, when Hansen warned us that the seas would boil, followed by the current phase of again no significant warming.

    For this we have to give up fossil fuels, let plants starve, old age pensioners freeze in the dark & massacre helpful bats & birds with the whirring, slicing & dicing blades of death?

  329. Monckton of Brenchley says: June 15, 2013 at 8:58 pm
    “Whichever way Ryan slices and dices the evidence, Dr. Santer’s test has been met.”

    It isn’t Dr Santer’s test. No such test appears in your quote. Aside from the issues of TLT and 17 year as a minimum, not maximum, he clearly indicated what kind of data should be tested:
    “In fingerprinting, we analyze longer, multi-decadal temperature records, and we beat down the large year-to-year temperature variability caused by purely natural phenomena (like El Ninos and La Ninas). This makes it easier to identify a slowly-emerging signal arising from gradual, human-caused changes in atmospheric levels of greenhouse gases,” Santer said.”

    Your quote follows after that para. Allowing for effects like ENSO makes a big difference.
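    For concreteness, “allowing for effects like ENSO” can be as simple as regressing the temperature series on an ENSO index and examining the residuals. The sketch below is my own illustration on toy data, not Santer’s actual fingerprinting method, and `enso_index` is a stand-in for a real index such as MEI:

```python
import numpy as np

def remove_enso(anomalies, enso_index):
    """Regress the temperature series on a (centred) ENSO index and
    return the residuals -- a crude stand-in for the 'beating down
    ENSO variability' step quoted above.  Published fingerprinting
    methods are considerably more elaborate."""
    y = np.asarray(anomalies, dtype=float)
    x = np.asarray(enso_index, dtype=float)
    x = x - x.mean()
    beta = (x @ (y - y.mean())) / (x @ x)   # OLS coefficient on the index
    return y - beta * x

# Sanity check: a series that is nothing but scaled ENSO plus a
# constant should come back essentially flat.
enso = np.sin(np.arange(120) / 6.0)         # toy oscillation
residual = remove_enso(2.0 * enso + 0.5, enso)
```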

  330. dbstealey says:
    June 15, 2013 at 7:43 pm

    Good question. We’re almost at 2014. Six months to go.

    Take a close look at the three months to the left of the slope line for RSS. Then look where the May anomaly is.

    http://www.woodfortrees.org/plot/rss/from:1996.6/plot/rss/from:1996.9/trend

    The bottom line is this: If the May anomaly for RSS holds for the next three months, then RSS will hit the 17 year mark in only three months since the 0 line would go from September 1996 to August 2013. With ENSO being neutral and with the sun being “dead”, there is a good possibility that will happen.
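    Werner’s “how far back does the zero trend reach” calculation can be sketched as follows (numpy, toy data; the real answer of course depends on the actual RSS series):

```python
import numpy as np

def months_of_zero_trend(anomalies, per_year=12):
    """Length in months of the longest period ending at the last data
    point whose least-squares trend is <= 0 -- the WoodForTrees-style
    'no warming since ...' calculation.  Illustrative only."""
    y = np.asarray(anomalies, dtype=float)
    for start in range(len(y) - 2):           # need >= 3 points for a fit
        window = y[start:]
        t = np.arange(len(window)) / per_year
        if np.polyfit(t, window, 1)[0] <= 0:  # earliest such start wins
            return len(window)
    return 0

# Toy series: 30 months of steady warming, then 30 months of very
# slight cooling -- the zero-trend window reaches a little way back
# into the warming ramp.
toy = np.concatenate([np.linspace(0.0, 1.0, 30), np.linspace(1.0, 0.99, 30)])
```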

  331. Monckton of Brenchley sees the same Nick Stokes that we have to endure every day: someone who is psychologically incapable of ever admitting he is wrong about any of his many assertions.

    Since we are all wrong at one time or another, and since the rest of us admit it on occasion, it is a glaring fault of Nick Stokes that he has never admitted to any errors of any kind. And in Mr Stokes’ case, he makes more errors than commenters such as Lord Monckton and Willis Eschenbach, who have a problem with Mr Stokes’ alarmist version of reality.

  332. It doesn’t even matter what Santer said, or didn’t say. The fact that Climatists are, between bouts of denying that the warming has indeed halted for 17+ years, scrambling to “explain” why means they know their warmist ideology is in trouble. The models have failed, as they were destined to, not being based on reality.

  333. Nick Stokes says, June 15, 2013 at 9:25 pm:

    “Allowing for effects like ENSO makes a big difference.”

    Except, you cannot ‘allow’ for effects like ENSO. The ‘effect’ of ENSO is what makes the whole global temperature graph:

  334. PS: That’s the same team cited in the LSU press release, but with somewhat different CO2 estimates.

    CO2 is a GHG, and its accumulation will make the world hotter. We have a real interest in knowing how much. GCMs represent our best chance of finding out. We need to get as much information from them as we can. Doing nothing is not a risk-free policy.

    I think it is plausible that you are correct that the accumulation of CO_2 will make the world warmer. Doubling it from 300 ppm (by roughly the end of the century) might make it 1.2 C warmer than it was in 1955 or 1960, if all things were equal and we were dealing with a single-variable linear problem. But we’re not. We’re dealing with a highly nonlinear, chaotic system where we cannot even predict the baseline behavior in the past, when it has exhibited variation much larger than observed over the last 40-50 years, which is the entire extent during which we could possibly have influenced it with CO_2. Indeed, temperature increase in the first half of the 20th century bore a strong resemblance to temperature increase in the second half — as did reported behavior of e.g. ice melt in the arctic and more — until second half and first half records were adjusted and adjusted again until a cooling blip in the middle all but disappeared and the second half was artificially warmed compared to the first. Whether or not these adjustments were honest or dishonest is in a sense beside the point — either way, with satellite measurements of LTT it simply isn’t possible to continue to adjust the land record ever warmer than the LTT, and hence in a decade or two the issue will be moot.

    In a decade or two many issues will be moot. I am not a “denier” or a “warmist” — I am simply skeptical that we know enough to model the climate when we don’t even understand how to incorporate the basic physics that has driven the climate into a model that can quantitatively hindcast the last million years or so. And no, I do not buy into the argument that we can linearize around the present — not in a nonlinear chaotic system with a nonlinear past that we cannot explain. If the climate is chaotic — and note well, I did not say the weather — then we probably cannot predict what it will do, or even what it will “probably” do, CO_2 or no CO_2. If it is not chaotic, and is following some sort of predictable behavior that we might be able to linearize around, well — predict it. In the past.

    You point out that doing nothing is not a risk-free policy. Absolutely true. However, doing something isn’t a risk-free action — it is a guaranteed cost! You are simply restating Pascal’s Wager, in the modern religion of sinful human caused global warming. Let me clarify.

    Pascal, of course, said that even though the proposition that God exists was (your choice of) absurd or improbable or at the very least unprovable and lacking proper evidence, the consequences of believing and being wrong were small compared to the consequences of not believing and being right. A slightly wiser man than Pascal — such as the Buddha — might have then examined what the real costs were (tithing the priesthood, giving up enormous amounts of political power to the religion-supported establishment feudal government, wasting countless hours praying and attending sermons where somebody tells you how to think and behave, all of the attendant distortion of human judgment that occurs when one accepts false premises on important matters such as a worldview) and just who it was that wrote the book describing the consequences of disbelief (that same priesthood, hmmm), but Pascal took them at their word that a being capable of creating a Universe and supposedly being perfectly loving would throw humans into a pit of eternal torture for the sin of failing to be convinced of an absurdity by the priesthood. Hence one could “prove” that the costs on one side paid immediately, however great, were less than the expectation value of the future costs on the other side, however unlikely. Infinity/eternity times even a tiny chance exceeds a mere lifetime of tithing etc.

    This is the modern climate science wager as well. You remain unconvinced that the GCMs are wrong in spite of the fact that whether you plot the spaghetti in quantiles or standard deviations (where I note that you have not responded to my observation that AR4 plotted both mean and standard deviation of precisely these spaghetti curves and furthermore made claims of likelihood, and there isn’t the faintest reason to think that AR5 won’t do exactly the same thing, in the guide to policy makers, not the scientific sections that nobody but scientists and not all of them ever read) the current climate has diverged pretty much outside of the envelope of all of them.

    I ask you again to invert your belief system and reorient it back towards the accepted one within the general realm of science. We do not know if the GCMs are trustworthy or correct or implement their physics appropriately. It is a nontrivial question because the climate is chaotic and highly nonlinear, because there are many different types of models, and because the models do not even make predictions in good agreement with each other in toy problems. We do not know the physics and feedback associated with many of the assumptions built into the models — clouds and water vapor being an excellent example — well enough to trust it without some sort of confirmation, some evidence that the models are capable of predicting the future or hindcasting the indefinite past.

    Do you consider the deviation of observed temperatures from the entire range predicted by these inconsistent models to be evidence for the GCMs being trustworthy? If you answer yes, then I have to conclude that you have fully embraced CAGW as a religion, not as science. Note that I didn’t say “proven” or “disproven”. One does not verify or falsify in science (and yes, I’m perfectly happy to tackle anyone on list who is a Popperite) — one increases or decreases one’s degree of belief in propositions based on a mix of consistency and evidence. Negative evidence (including failure to agree with predictions in a reasonable manner) either decreases degree of belief or you have given up reason altogether, just as positive evidence should increase it.
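    The “increase or decrease one’s degree of belief” framing is just Bayes’ rule. A minimal sketch with entirely made-up numbers (none of these probabilities are estimates of anything in the climate debate):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior probability of hypothesis H after observing evidence E,
    via Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1.0 - prior)
    return p_e_given_h * prior / p_e

# Evidence the hypothesis says is unlikely (observations outside the
# predicted envelope, say) lowers the degree of belief in it:
posterior = bayes_update(prior=0.8, p_e_given_h=0.05, p_e_given_not_h=0.5)
# posterior ~= 0.29, down from the 0.8 prior
```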

    If you agree — as I hope that you do — that the lack of agreement is troubling and suggests that perhaps, just maybe, these models are badly wrong since the current climate is diverging from the entire envelope of their predictions (which were themselves averages over ensembles of starting conditions, presuming some sort of ergodicity that of course might be utterly absent in a chaotic system or a system with many non-Markovian contributions with many timescales from the past and with the possibility — nay, the observed certainty — of spin-glass-like “frustration” in its internal dynamics on arbitrarily large timescales) then this should cause you to take a hard look at the entire issue of certain costs versus risks versus benefits because changes in the plausibility/probability of downstream disaster have enormous impact on the expected costs of action versus inaction, and even on which actions are reasonable now and which are not.

    For example, on youtube right now you can watch Hansen’s TED talks video where he tells the entire world that he still thinks that sea level will rise 5 meters by the end of the century. 5 meters! Do you agree with that? Do you think that there are ten actually reputable climate scientists on Earth who would agree with that? Bear in mind that the current rate of SLR — one that has persisted for roughly 140 years with only small variations in rate — is 1.6 mm/year (plus or minus around 1.5 mm). 9 inches since 1870 according to the mix of tide gauge and satellite data. This is the same Hansen who conspired to turn off the air conditioning in the US congress on the day he made a presentation to them to sufficiently convince them that CAGW was a certain danger and that they should fund any measures necessary to prevent it. Trenberth, OTOH — sadly, so committed to CAGW that he can hardly afford to back out now but probably a basically honest person — fairly recently called for 30 cm by the end of the century, a number that is a linear extrapolation of the current rates but at least isn’t radically implausible — a foot in 90 years, or a bit over an inch a decade.

    Five meters is Pascallian — sixteen or seventeen feet, a disaster beyond imagining, morally equivalent to eternal damnation. Trenberth’s assertion, on the other hand, is completely ignorable — just like nobody even noticed the 9 inch rise over the last century plus or is noticing the current (supposed) 3 mm/year, nobody will notice it if it continues decades or longer. Certainly there is reason for alarm and the urgent expenditure of trillions of dollars to ameliorate it.
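    The arithmetic behind that comparison is worth making explicit. Taking the figures quoted above at face value (they are the comment’s numbers, not independently checked) and assuming simple linear extrapolation from 2013 to 2100:

```python
years = 2100 - 2013                     # 87 years remaining

historical_rate_mm = 1.6                # mm/yr, long tide-gauge record cited above
satellite_rate_mm = 3.0                 # mm/yr, recent satellite-era figure cited above

extrap_historical_cm = historical_rate_mm * years / 10.0   # ~14 cm by 2100
extrap_satellite_cm = satellite_rate_mm * years / 10.0     # ~26 cm by 2100

# Rate a 5 m rise by 2100 would require, versus the historical rate:
implied_rate_mm = 5000.0 / years        # ~57 mm/yr
ratio = implied_rate_mm / historical_rate_mm   # ~36x the historical rate
```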

    Let me explain the real cost-benefit of CAGW to you. The money spent on it by Europe so far would have prevented the recent monetary crisis that almost brought down the Euro, which in turn might have triggered a global depression. Why do you think that Europe is backing off of CAGW? Because there has been no warming observed pretty much from when Mann’s infamous hockey stick was first published and used to wipe out the MWP and LIA that “inconveniently” caused people with ordinary common sense to doubt that there was a catastrophe underway, and screw a disaster in 2100 — the monetary crisis that amelioration has helped cause is a disaster right now.

    It’s a disaster right now in the US. We’re all spending a lot more for gasoline, coal, and oil derived products because the energy companies love CAGW, and probably help fund the hysteria. They make a marginal profit on retail cost, demand is almost entirely inelastic, and anything that makes prices rise is good for them. We pay substantially more for electricity than we probably need to, especially in states like California. We can “afford” this only because we are so very very wealthy and because none of the measures we take to ameliorate CAGW will even according to their promoters have any significant impact on it in the future, while measures we are taking for purely selfish and economical reasons (using lots of natural gas, for example) turn out to have a large impact on carbon utilization.

    And of course the same anti-civilization priests that preach the sin of burning carbon preach the even bigger sin of taking any of the measures that might actually work to ameliorate hypothetical CAGW caused by CO_2, such as building lots of nuclear power plants or investing heavily in LFTR. Western North Carolina alone could provide 100% of the energy requirements of the entire world for some 17,000 years, and mining it of course produces lots of the rare earths that are equally valuable for use in magnets and energy storage devices that might make electric cars ultimately feasible (at the moment, thorium is viewed as toxic waste when mining rare earths, which is why the US imports them all, continuing our tradition of simply exporting our pollution).

    But the real disaster, the big disaster, the ongoing catastrophe, the most tragic aspect of the religion of CAGW is that all of the measures we have taken to combat it, with their moderate to severe impact in the first world, have come at the expense of the third world. The third world is suffering from energy poverty above all else. Energy is, of course, the fundamental scarcity. With enough, cheap enough, energy, one can make the desert bloom, build clean water and sewage systems, fuel industry, fuel transportation, fuel communication. Most people living in the first world cannot imagine life without clean running water, flushable toilets, electric lights, air conditioning, cell phones, computers and the Internet, cars, supermarkets, refrigerators, stoves, washing machines, but I grew up in India and I could literally see that life happening outside of my first-world window in the heart of New Delhi. I cited a TED talks of evil featuring Hansen up above — if you want to watch a TED talks of good, google up the one on washing machines. Washing machines are instruments where you put in dirty clothes on one side and take out books on the other side. You take out time, and wealth, and quality of life on the other side. And this cannot begin to compare to India, where there isn’t any water to wash clothes in for the poorest people unless they live near a river or it is the Monsoon.

    Every measure we erect to oppose the development of carbon based energy raises prices, and raising prices has a devastating impact on the development of the third world. Worse, the money we spend in the first world comes out of money we might otherwise spend in useful ways on the economic development of the third world — we have finite resources, and spending more on one thing means spending less on another. If we spent just one of the billions of dollars we spend a year on CAGW on global poverty, how many lives would we save (save from death, save from disease, save from poverty, save from hopelessness), mostly of children? Millions, easily. A year.

    So next time you want to talk about the “risk” of doing nothing, make sure you accompany it with the immediate cost of doing something for a problem that might or might not actually exist, whose impact (if it does indeed exist) could range from ignorable, as in a 30 cm SLR by 2100 to “catastrophic” (let’s say a whole meter of SLR by 2100, since only crazy people who are convinced that they are the religious salvation of humanity think it will be five), but a problem that will largely sort itself out even if we do nothing in a decade and beyond as technologies such as solar cells and (we can hope) LFTR and maybe long-shot thermonuclear fusion make burning carbon for energy as obsolete as TV antennas on top of houses within two decades not to “save the world” but to save money.

    In case you wonder if I think there are measures worth taking to ameliorate the risk of CAGW, I would answer certainly. Here is a list. All of these measures have a guaranteed payout in the long run regardless of whether or not CAGW is eventually borne out not by the current crop of GCMs but by observational science and models that actually work. None of them are horribly expensive. None of them are the equivalent of “carbon trading”, “carbon taxes”, or other measures for separating fools from money, and all of them would be supported as part of the general support of good science.

    * Invest heavily in continuing to develop natural gas as a resource. This is actually real-time profitable and doesn’t need a lot of government interference to make happen.

    * Invest heavily in fission based power plants. I don’t think much of pressurized water Uranium plants, although I think that with modern technology they can be built pretty safely. But however you assess the risk if you really believe in a global calamity if we burn carbon, and do not want to go back to outhouses, washing clothes by hand in a river and going to bed at sundown and living in houses that are hot in the summer and freezing cold in the winter, fission plants are surely better than that.

    * Invest in building LFTR and other possible thorium-burning fission designs. Start mining the thorium we’ve got and extracting our own rare earth metals for use in things like super-magnets (thereby driving down world prices in the process).

    * Continue to invest in fusion and solar cell and storage device research at an aggressive level, without subsidizing their premature adoption.

    * Back off on all measures intended to reduce the burning of carbon for energy and nothing else until there is solid observational evidence of not only warming, but catastrophic warming. Try to actually do real cost-benefit analysis based not on Pascal’s wager and mass public hysteria caused by the likes of Hansen, but on observational data backed by real knowledge of how the climate works, once we have any.

    * And sure, continue to do climate research, but at a vastly reduced level of public funding.

    Hansen succeeded in one thing — he caused the diversion of billions of dollars of public funding into climate science over more than two decades. If you want yet another horrendous cost — funding climate research intended to prove that if temperatures increase 5 C by 2100, it will be bad for tree frogs in the amazon instead of funding research into thorium, funding research on dumping massive amounts of iron into the ocean to supposedly increase its rate of CO_2 uptake instead of increasing the funding of fusion research, or research and development of vaccines, or development of water sanitation projects in third world countries, or the development of global literacy programs, or name almost anything that could have been done with the money pissed away in an overfunded scientific diversion that might yet turn out to be completely incorrect — net feedback from CO_2 increases could be negative to the point where the climate is almost completely insensitive to CO_2 increases (as has been quite seriously and scientifically proposed, and which is rather consistent with the evidence of the last 33 years of high-quality empirical observations of e.g. LTTs and SSTs).

    Personally, I will believe that even the proponents of CAGW, now CACC (since there is no visible warming, the marketing has changed to “climate change” in order to try to perpetuate the Pascalian panic), truly believe the Kool-Aid they would have us all drink on the day that they call for us to build fission plants of one sort or another as fast as we can build them. In the meantime, I will continue to think that this whole public debate has a lot less to do with science, and a lot more to do with money, power, and an unholy alliance between those who want to exploit the supposed risk of disaster for their own direct and personal monetary benefit and those who hate civilization itself: those who think that the world has too many people living in it, who are willing to promote any lie if it perpetuates the cycle of poverty and death that limits third world population growth, and if it has any chance of toppling the civilization that they perceive as being run by the wealthy and powerful at their personal expense.

    rgb

  336. “In my opinion, Santer made a political statement so it should be answered with a political response: i.e. it should be insisted that he said 17 years of no global warming means no anthropogenic global warming because any anthropogenic effect would have been observed.
    Santer made his petard and he should be hoisted on it.
    Richard”

    Well put. He made a political statement because there is no possible equivalent scientific statement that can be made. Why not? Because we have no idea what the climate is “naturally” doing, has done, or will do. In physics, we tend to believe F = ma, so that as long as we can measure a and m, we can infer F. If we have a system with a number of well-known forces acting on a mass, and we observe its acceleration, and it isn’t consistent with the total force given the forces we understand, then we might possibly be forgiven for inferring the existence of a new force (although cautious physicists would work very hard to look for confounding occurrences of known forces in new ways before they went out on a limb and published a paper asserting the definite existence of a new force). This is, in fact, how various new elementary particles were discovered: by looking for missing energy or momentum after tallying up all that we could observe in known channels and inferring the existence of particles such as the neutrino needed to make energy and momentum conservation work out.

    Now, try doing the same thing when we do not have the moral equivalent of F = ma, when we do not know the existing force laws, when we cannot even predict the outcome of a given experiment well enough to observe a deviation from expected behavior because there is no expected behavior. That is what modern climate science attempts to do.

    We have no idea why the world emerged from the Wisconsin glaciation. We are not sure why it re-descended briefly into glaciation in the Younger Dryas. We cannot explain the proxy temperature record of the Holocene: why for some 8000 or 9000 years it was warmer than it is now, then why it cooled in the LIA to the coldest global temperatures observed in the entire Holocene, or why it warmed back up afterwards (to temperatures that are entirely comparable to what they were for most of the Holocene, although still a degree or so cooler than most of it). We are completely clueless about the Pleistocene Ice Age we are still in, and cannot predict or explain the variable periodicity of the interglacials or why the depth of the temperature variation in the interglacial/glacial episodes appears to be growing. We do not know why the Pleistocene began in the first place. We do not understand why most of the last 60 million years post-Cretaceous was warm, except for several stretches of a million years or more where it got damn cold for no apparent reason and quite suddenly, and then warmed up again equally suddenly.

    On shorter time scales, we cannot explain the MWP, the LIA, or the modern warm period. All three were variations that occurred largely independently of any conceivable human influence, and yet were similar in magnitude, scope, and timescale of variation. The only thing CO_2 increase is supposed to be responsible for is the temperature increase observed from roughly 1955 on (before that, anthropogenic CO_2 was pretty ignorable), and temperature didn’t even begin to rise until fifteen to twenty years after the supposedly anthropogenic CO_2 did. It then inconveniently rose sharply for as much as 30 years (more or less sharply depending on whether you use the data as it was accrued before the early 90’s or after it was “adjusted” to show far more late century warming and far less early century warming), but then stopped after one final burst associated with the super El Nino and the following equally super La Nina in 1998-1999. In the meantime, CO_2 has continued to increase but temperatures have not. And they cannot be “adjusted” to make them warmer any more, because whatever you do to the surface record, the lower troposphere temperature cannot be finagled, and surface temperatures have already diverged further from it and from the SSTs than one can reasonably believe over the time the latter two have been reliably recorded.

    We cannot determine the human influence because we do not know what the non-human baseline would have been without it. I do not believe that we can know what it would have been without it, certainly not with existing theory.

    rgb

  337. rgbatduke says:
    June 16, 2013 at 10:50 am

    We have no idea why the world emerged from the Wisconsin glaciation. […] We cannot determine the human influence because we do not know what the non-human baseline would have been without it. I do not believe that we can know what it would have been without it, certainly not with existing theory.

    rgb

    This is a very persuasive statement of a rationalist position on climate. It is to this position that the research community (all the real sciences whose subjects impinge on climate) will increasingly converge as the fallacy and impossibility of the alarmist AGW position becomes clearer.

    To paraphrase Donald Rumsfeld, climate is an “unknown unknown”. We don’t know what the hell is going on.

  338. @rgbatduke

    How could I forget? And model predictions, of course.

    Have your cake or eat it; you can’t have ’em both. But thanks anyways for so thoroughly refuting the hockey stick.

  339. Professor Brown’s heartfelt anger at the senseless waste and cruelty arising from the diversion of hundreds of billions from where they are needed to where they will do no good at all, at 10.18 am and at 10.50 am on 16 June, deserves to be elevated to a new posting in its own right. These two comments, taken together, constitute one of the best summaries of the case against the profiteers of doom that I have seen. I am grateful to him for having contributed so many distinguished, illuminating and passionate comments to this thread.

  340. Nick Stokes says:
    June 15, 2013 at 9:25 pm
    It isn’t Dr Santer’s test. No such test appears in your quote. Aside from the issues of TLT and 17 years as a minimum, not maximum, he clearly indicated what kind of data should be tested:
    “In fingerprinting, we analyze longer, multi-decadal temperature records, and we beat down the large year-to-year temperature variability caused by purely natural phenomena (like El Ninos and La Ninas). This makes it easier to identify a slowly-emerging signal arising from gradual, human-caused changes in atmospheric levels of greenhouse gases,” Santer said.

    Your quote follows after that para. Allowing for effects like ENSO makes a big difference.

    I’ve said this before but I think it bears repeating, as it is key to what Santer meant above. He is trying to explain that you could have a strong El Niño at the beginning of an interval with a strong La Niña 16 or fewer years later. The effect of this positioning might be just enough to create a long flat period. He was saying that at 17 years the effects of those events would not be enough to avoid a positive trend.

    So, the fact that we don’t have either situation but are ENSO-neutral at both ends is actually beyond anything Santer envisioned. I’m sure he thought this would be impossible. I am also sure NOAA thought it would be impossible when they produced their 15-year number.

    The bottom line is we have already gone beyond anything the modelers considered possible. And the planet has been cooling since the PDO flipped. It is not going to get any better for them. It is about time they admitted they were wrong and started to try to model reality, where ocean oscillations are the dominant decadal forcing.
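
The endpoint effect discussed in this exchange is easy to demonstrate numerically. A minimal sketch, with made-up numbers rather than real anomaly data: a warm outlier at the start of a 17-point series and a cool outlier at the end are enough to flatten, or even reverse, an underlying least-squares warming trend.

```python
# Minimal sketch (made-up numbers): how outliers at the ends of a short
# record skew a least-squares trend, per the ENSO caveat discussed above.
def ols_slope(y):
    """Least-squares slope of y regressed against 0..n-1."""
    n = len(y)
    xbar = (n - 1) / 2
    ybar = sum(y) / n
    sxy = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    sxx = sum((i - xbar) ** 2 for i in range(n))
    return sxy / sxx

# 17 "years" of a steady underlying warming of 0.01 per step
base = [0.01 * i for i in range(17)]

# the same series with an El Nino-like spike at the start
# and a La Nina-like dip at the end
skewed = base[:]
skewed[0] += 0.3
skewed[-1] -= 0.3

print(ols_slope(base))    # recovers the underlying 0.01 per step
print(ols_slope(skewed))  # flattened to roughly zero
```

Because the endpoints carry the largest leverage in a least-squares fit, a modest excursion at either end moves the trend far more than the same excursion in mid-record would.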

  341. Wow. To anyone starting this comment thread at the bottom, I recommend:

    If your time is limited, read every comment by rgbatduke first;

    If you have more time, read every comment by Monckton of Brenchley second;

    If you have even more time, read the rest of the comments to harvest the remaining 10% or so of the value in this thread.

  342. Greg Mansion says: @ June 14, 2013 at 6:10 pm

    ….Besides, and this is something you must know very well, warmists do not say that “global warming” is something steady. They have always said that it is about an overall trend….
    >>>>>>>>>>>>>>>>>>>>>
    And as has been shown many times, the overall temperature trend of the latter half of the Holocene is COOLING!

    GRAPH: GISP2 (Greenland) vs CO2

    GRAPH: 10,000 yrs Vostok (present on left)

    And just in case that ice core data doesn’t sink in you can ask other glaciers:

    …The study went after a variety of sediments in the lake bed to determine what sediment was depositing in the lake. By determining the different compositions in the sediment they could find how much glacial activity was taking place over the past 8,000 years.

    Here is the official chart from the study itself….

    Astute readers will notice the brief periods from 1,000 and 2,000 years ago that are commonly referred to as the Medieval and Roman Warming periods. Both are simply interludes in the expanding glacial activity that has steadily been taking place for the past 4,000 years….
    This study is not an anomaly either. Any study of the Northern Hemisphere shows this exact overall behavior. The NH was warmer several thousand years ago, even though the CO2 level was lower. There has been a general cooling trend throughout the NH over the past 4,000 years. It is not steady by any means over a period of a few hundred years, but over the course of thousands of years it is very steady. This is simply one more study that shows the same thing.

    The authors of the study simply state their findings in their abstract.

    A new approach for reconstructing glacier variability based on lake sediments recording input from more than one glacier, Quaternary Research, Volume 77, Issue 1, January 2012, Pages 192–204 (http://www.sciencedirect.com/science/article/pii/S0033589411001256)
    ABSTRACT:
    We explore the possibility of building a continuous glacier reconstruction by analyzing the integrated sedimentary response of a large (440 km2) glacierized catchment in western Norway, as recorded in the downstream lake Nerfloen (N61°56′, E6°52′). A multi-proxy numerical analysis demonstrates that it is possible to distinguish a glacier component in the ~8000-yr-long record, based on distinct changes in grain size, geochemistry, and magnetic composition. Principal Component Analysis (PCA) reveals a strong common signal in the 15 investigated sedimentary parameters, with the first principal component explaining 77% of the total variability. This signal is interpreted to reflect glacier activity in the upstream catchment, an interpretation that is independently tested through a mineral magnetic provenance analysis of catchment samples. Minimum glacier input is indicated between 6700-5700 cal yr BP, probably reflecting a situation when most glaciers in the catchment had melted away, whereas the highest glacier activity is observed around 600 and 200 cal yr BP. During the local Neoglacial interval (~4200 cal yr BP until present), five individual periods of significantly reduced glacier extent are identified at ~3400, 3000-2700, 2100-2000, 1700-1500, and ~900 cal yr BP.

    link
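
For readers unfamiliar with the abstract’s “first principal component explaining 77% of the total variability”, here is a sketch of that computation on synthetic data. The numbers are invented, not the study’s: several noisy “sedimentary parameters” share one hidden driver, and the leading eigenvalue of their covariance matrix (found here by power iteration) captures most of the total variance.

```python
import random

# Sketch on synthetic data: a hidden common signal drives several proxies,
# and PC1's share of total variance measures how strong that signal is.
random.seed(0)
n_samples, n_params = 400, 5
signal = [random.gauss(0, 1) for _ in range(n_samples)]
# each "sedimentary parameter" = common signal + its own noise
data = [[s + random.gauss(0, 0.5) for _ in range(n_params)] for s in signal]

# sample covariance matrix of the parameters (columns)
means = [sum(row[j] for row in data) / n_samples for j in range(n_params)]
cov = [[sum((row[i] - means[i]) * (row[j] - means[j]) for row in data)
        / (n_samples - 1)
        for j in range(n_params)] for i in range(n_params)]

# power iteration for the leading eigenvector (direction of PC1)
v = [1.0] * n_params
for _ in range(200):
    w = [sum(cov[i][j] * v[j] for j in range(n_params)) for i in range(n_params)]
    norm = sum(x * x for x in w) ** 0.5
    v = [x / norm for x in w]

# leading eigenvalue = variance along PC1 (Rayleigh quotient)
lam1 = sum(v[i] * sum(cov[i][j] * v[j] for j in range(n_params))
           for i in range(n_params))
total_var = sum(cov[i][i] for i in range(n_params))
frac = lam1 / total_var
print(f"PC1 explains {frac:.0%} of total variance")
```

With the noise level assumed here, PC1 ends up explaining a large majority of the variance, which is the same qualitative situation the paper reports for its 15 proxies.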

  343. Gail, no doubt all readers who think that CO2 drives temperature will acknowledge that the GISP graph you linked “proves” that CO2 cools the earth and is an Ice House gas, not a Greenhouse gas.

  344. jai mitchell says: @ June 15, 2013 at 8:15 am

    …… you are entitled to your own opinions, but not your own facts.

    We have a very significant and credible record based on thousands of ice cores (recent 2,000 years) and hundreds of ice cores (earlier Holocene).

    as well as plant stomata and tree ring growth as well as other ancillary indicators that

    CO2 has not been anywhere near current atmospheric levels for almost 52 million years…..
    >>>>>>>>>>>>>>>>>>>>>>>>>>>>

    Another untruth.

    The actual data from chemical testing for CO2: GRAPH 1. Note the tendency to select low values for the CO2 concentration in the 19th century atmosphere despite values as high as 550 ppm and above.

    A closer look at the cherry-picked results used by warmists from the above graph: GRAPH 2

    Again note the cherry picking of values as outlined by Mauna Loa Obs.

    4. In keeping with the requirement that CO2 in background air should be steady, we apply a general “outlier rejection” step, in which we fit a curve to the preliminary daily means for each day calculated from the hours surviving step 1 and 2, and not including times with upslope winds. All hourly averages that are further than two standard deviations, calculated for every day, away from the fitted curve (“outliers”) are rejected. This step is iterated until no more rejections occur.
    How we measure background CO2 levels on Mauna Loa.

    CO2 continuous hourly data, all values. GRAPH

    CO2 continuous hourly data after selection process. GRAPH
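
The quoted Mauna Loa step 4 is an iterative two-sigma clipping procedure. A simplified sketch of that logic, reduced to a single day of invented hourly values and using the daily mean as a stand-in for the fitted curve:

```python
import random
import statistics

# Sketch of the iterative 2-sigma "outlier rejection" step quoted above,
# on invented data: fit a baseline (here just the daily mean rather than a
# fitted curve), reject anything more than two standard deviations away,
# and repeat until nothing more is rejected.
random.seed(1)
hourly = [400 + random.gauss(0, 0.2) for _ in range(24)]
hourly[7] += 15.0   # contaminated hour, e.g. upslope wind with local CO2
hourly[18] += 8.0   # a second, smaller contamination

kept = hourly[:]
while True:
    fit = statistics.mean(kept)      # stand-in for the fitted curve
    sd = statistics.pstdev(kept)
    survivors = [v for v in kept if abs(v - fit) <= 2 * sd]
    if len(survivors) == len(kept):  # no more rejections: stop iterating
        break
    kept = survivors

print(len(hourly) - len(kept), "hourly values rejected")
print(round(statistics.mean(kept), 2), "ppm background estimate")
```

Whether one calls this cherry-picking or quality control is exactly the disagreement in this thread: the procedure recovers a clean baseline if the rejected hours really are local contamination, and discards real variability if they are not.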

    EARLY STUDIES of CO2 in snow and ice started with a small glacier in Norway (Coachman et al 1956, 1958a, b), and the studies were continued in Greenland and Antarctica (Table 1). In the first Antarctic study, that of Matsuo and Miyake (1966), an elegant method of 13C isotopic dilution was used for CO2 determinations. The precision of these determinations, with an analytical error of +/- 0.002%, was never matched in later studies, which reported errors usually ranging between +/- 0.2 and 3%.
    TABLE 1

    THE PERIOD OF HIGH CO2 READINGS
    After 1980 most of the studies of CO2 in glaciers were carried out on Greenland and Antarctic ice by Swiss and French research groups; one core was studied in an Australian laboratory. A striking feature of the data published until about 1985 is the high concentrations of CO2 in air extracted from both pre-industrial and ancient ice, often much higher than in the contemporary atmosphere (Table 1).

    Fig. 2. Concentration of CO2 in a 90-cm long section of a Camp Century (Greenland) ice core. The lower curve represents 15 min. “wet” extraction from melted ice and “dry” extraction; the upper curve 7 hours “wet” extraction. Redrawn after Stauffer et al (1981)

    For example, in 11 samples of about 185-year-old ice from Dye 3 (Greenland) an average CO2 concentration of 660 ppm was measured in the air bubbles (using the “dry” extraction method), with a range of 290 – 2450 ppm (Stauffer et al 1985). In a deep ice core from Camp Century (Greenland), covering the last 40,000 years, Neftel et al (1982) found CO2 concentrations in the air bubbles ranging between 273 and 436 ppm (average 327 ppm). They also found that in an ice core of similar age from Byrd Station (Antarctica) these concentrations ranged between 257 and 417 ppm. Both these deep cores were heavily fractured and contaminated with drilling fluid. Neftel et al (1982) arbitrarily assumed that “the lowest CO2 values best represent the CO2 concentrations of the originally trapped air”.

    Using the same dry extraction method, in the same segment of an ice core from a depth of 1616.21m in Dye 3 (Greenland), Neftel et al (1983) found a CO2 concentration of 773 ppm in the air bubbles. Two years later, Stauffer et al (1985) reported only about half of this concentration (410 ppm).

    It appears from Table 1 that the change from high to low CO2 values reported for polar ice occurred in the middle of 1985….

    THE PERIOD OF LOW CO2 READINGS
    Since 1985, low concentrations, near a value of 290 ppm or below, started to dominate the records. They were interpreted as indicating “the CO2 increase during the last 150 years” and “overlapping or adjacent to results from direct measurements on Mauna Loa started in 1958″ (Stauffer and Oeschger 1985)….
    [See SOURCE for a lot more information including a rebuttal of Ferdinand Engelbeen's criticism of Jaworowski.]

    Statement of Prof. Zbigniew Jaworowski

    Do glaciers tell a true atmospheric CO2 story? Z Jaworowski, T V Segalstad, & N Ono, 1992 227-284 Science of Total Environment

    Dr. Zbigniew Jaworowski denied funding and fired

    THE ACQUITTAL OF CARBON DIOXIDE by Jeffrey A. Glassman, PhD

    ON WHY CO2 IS KNOWN NOT TO HAVE ACCUMULATED IN THE ATMOSPHERE & WHAT IS HAPPENING WITH CO2 IN THE MODERN ERA by Jeffrey A. Glassman, PhD

    The Trouble With C12 C13 Ratios

    Bombshell from Bristol: Is the airborne fraction of anthropogenic CO2 emissions increasing? – study says “no” University of Bristol Press release issued 9 November 2009

    The whole hoax is based on the ASSumption that CO2 is uniformly mixed in the atmosphere and then cherry picking the desired results from the shotgun scatter of real life data.

    The Japanese satellite (JAXA) shows CO2 is not ‘well-mixed’. map 1 and map 2

  345. Gail Combs says (June 17, 2013 at 5:54 am): “Again note the cherry picking of values as outlined by Mauna Loa Obs.”

    Cherry picking or good observational science? Measuring the background level of something so easily affected by local sources is no easy task. CO2 is also measured at the South Pole (and elsewhere). Even in such a “pristine” location, precautions must be taken. On the linked page, watch the air sampling video, then compare the two graphs of air samples taken downwind and upwind of the station. I’ve read that background CO2 measurements from around the world are quite comparable, so either there’s a lot of cherry-picking going on, or CO2 measurement is one of the more reliable aspects of climate science.

    “The whole hoax is based on the ASSumption that CO2 is uniformly mixed in the atmosphere…”

    1) The assumption is actually that CO2 is “well-mixed”, which isn’t the same as “uniformly mixed”.
    2) I don’t believe the concept of CAGW or even AGW requires that CO2 be “well-mixed”, though in that case the GCMs might need to take into account persistent geographic variations in so-called “greenhouse gases”.
    3) If I understand the AGW concept correctly, the critical area is the upper atmosphere, i.e. the “effective radiating level” or ERL. Anybody know if CO2 is “weller-mixed” in the upper atmosphere than at ground level?

    “The Japanese satellite (JAXA) shows CO2 is not ‘well-mixed’. map 1 and map 2″

    That depends on what the meaning of is “well-mixed” is. :-) Even on these maps the CO2 levels vary less than 20 ppm (by eyeball) over the (limited) coverage areas, or roughly 5%. Considering the non-uniform CO2 sources and sinks, that’s “well-mixed” to me. Check out a larger set of maps. Note the variation of CO2 on time scales as short as a month. Again, it looks like the CO2 is getting stirred around pretty well.

    BTW, two observations on these maps:

    1) I understand the satellite measures the entire column of atmospheric CO2, so even if CO2 is more uniform in the upper atmosphere, the reading would be skewed by the levels closer to the surface; and vice-versa, although less so because the absolute amount of CO2 decreases with altitude.
    2) Comparing the same month year over year, the maps get redder. Near Hawaii in April 2010 I see an orange square, consistent with the 392.52 ppm CO2 reading from Mauna Loa. In April 2013 there’s a reddish square over Hawaii, consistent with a Mauna Loa measurement of 398.40 ppm. The scale has to be exaggerated to make such a small change noticeable, but as a result a 1.5% change seems to set the maps on fire. :-)

    “The Trouble With C12 C13 Ratios”

    I visited that link before and found Chiefio’s musings quite intriguing. Likewise the carbon isotope section of Murry Salby’s talk covered at WUWT. I’m not sure if the satellite data support or contradict Salby and Chiefio.

  346. rgbatduke says:
    June 13, 2013 at 7:20 am

    “You might as well use a ouija board as the basis of claims about the future climate history as the ensemble average of different computational physical models that do not differ by truly random variations and are subject to all sorts of omitted variable, selected variable, implementation, and initialization bias.”

    That is extremely appropriate on a particular level. The Ouija board is, of course, a bunch of nonsense. The smart kids realize at some point that the way to make it work is to slowly guide it to the answer they want, all the while protesting that they didn’t do anything, and that the answer was provided by the spirits (or, in the case of the climate, “the science”).

  347. 1) I understand the satellite measures the entire column of atmospheric CO2, so even if CO2 is more uniform in the upper atmosphere, the reading would be skewed by the levels closer to the surface; and vice-versa, although less so because the absolute amount of CO2 decreases with altitude.

    Did you notice the other oddity? There is no possible way that the Japanese satellite measurement OF the entire air column supports global CO_2 on the edge of 400 ppm. Eyeballing the graphs the means should be what, 380 ppm, and as you note, it should be skewed high compared to Mauna Loa, not low.

    I’m thus troubled by what appears to be a 5 or 6% error in normalization. That’s actually rather a lot. Mauna Loa is what it is; whether or not you approve of its data reduction methodology, at least it has been consistently applied for a long time, so its own value is apples-to-apples. But it has been used as a sort of gold standard of global atmospheric CO_2, and I’d rather bet that its readings are used as primary input into the GCMs.

    The satellite data suggests many things. First, that the Mauna Loa readings are a lousy proxy for global CO_2 levels. Second, that they are far too high: as you note, if CO_2 concentration on average increases with depth (which makes moderate sense, and is consistent with its not being perfectly homogeneous over the globe), then the mountaintop reading should be lower by some percentage than the mean of the entire air column, not higher. Given that surface readings are frequently in the 400 to 500 ppm range (depending on where you look), Mauna Loa could be off by 10% or 20% on the high side compared to the true top-of-the-troposphere average CO_2 concentration. Since that is where the atmosphere nominally becomes transparent to LWIR emitted from the CO_2, because the atmosphere itself has thinned to the point where it is no longer optically opaque, this suggests that the emission temperature being used in the models is derived from a point too high in the DALR (and hence too cold), thereby exaggerating warming.

    If it is “only” a 5% effect I’d worry but perhaps it isn’t “important” (bearing in mind that the entire post-LIA warming in degrees Kelvin is order of 0.5%, so small numbers cannot safely be ignored when trying to explain the 0.1-0.2% that might be attributable to anthropogenic causes). If Mauna Loa is off by 10% or more for any reason whatsoever, that cannot possibly be ignorable. For example, if top-of-the-troposphere global mean CO_2 were really 370 ppm and not 400 ppm, that’s a huge difference.

    One wonders if there are any reliable controls. The other possibility is of course that the satellite isn’t correctly normalized. One wonders if contemporaneous soundings support one or the other. That actually might be a large enough error to kick the GCM spaghetti back down into agreement with nature all by itself, although systematically correcting them back in time is going to be very difficult. One also wonders why Mauna Loa produces so very different a number (just as one wonders why LTT is diverging from land surface temperature records, or was until they stopped adjusting it because they had to after one final push of all the older temperatures down).

    rgb
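
rgb’s column-average argument can be made concrete with a toy profile. Everything below is assumed for illustration only (a real retrieval involves averaging kernels, and real profiles vary): a satellite reports a pressure-weighted mean over the whole column, while a station like Mauna Loa samples one altitude, roughly the 700-600 hPa layer at ~3.4 km.

```python
# Toy illustration (assumed numbers, not a real retrieval): compare a
# pressure-weighted column-average mixing ratio with a single-altitude
# station reading, given a profile that declines with height.
edges_hpa = [1013, 900, 800, 700, 600, 500, 400, 300, 200, 100]
ppm = [405, 403, 401, 399, 397, 395, 393, 391, 389]  # assumed layer values

# each layer's weight is the pressure interval (air mass) it spans
weights = [edges_hpa[i] - edges_hpa[i + 1] for i in range(len(ppm))]
column_mean = sum(w * c for w, c in zip(weights, ppm)) / sum(weights)

station = ppm[3]  # the 700-600 hPa layer, about Mauna Loa's altitude

print(f"column-average: {column_mean:.1f} ppm, station level: {station} ppm")
```

Note the direction of the effect in this toy case: because roughly two-thirds of the column’s mass lies above the station altitude, a declining profile puts the column mean a couple of ppm below the station value, the same sign as the satellite-versus-Mauna-Loa gap being debated here.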

  348. I agree, rgb. Look at an IR spectrum emitted from the earth, and the CO2 region is at ~-55C, around 40,000 – 50,000 ft (?). So that’s the relevant level/concentration. But I’d assume someone’s been monitoring that. (?)

  349. And rgb, spectra of IR from the poles show CO2 emitting instead of absorbing. This goes with Chiefio’s idea that the TOA (defined as no convection) at the poles during an inversion is actually at the surface, with CO2 acting as a coolant instead of an insulator. So the standard theory of ~1C warming from CO2 doubling might not be so solid, IMO.

  350. rgbatduke says:
    June 18, 2013 at 7:51 am
    1) I understand the satellite measures the entire column of atmospheric CO2, so even if CO2 is more uniform in the upper atmosphere, the reading would be skewed by the levels closer to the surface; and vice-versa, although less so because the absolute amount of CO2 decreases with altitude.

    Did you notice the other oddity? There is no possible way that the Japanese satellite measurement OF the entire air column supports global CO_2 on the edge of 400 ppm. Eyeballing the graphs the means should be what, 380 ppm, and as you note, it should be skewed high compared to Mauna Loa, not low.

    The data I’m looking at for April is consistent with 400ppm:

    https://data.gosat.nies.go.jp/GosatBrowseImage/browseImage/fts_l2_swir_co2_gallery_en_image.html?image=46

  351. rgbatduke says (June 18, 2013 at 7:51 am): “Did you notice the other oddity? There is no possible way that the Japanese satellite measurement OF the entire air column supports global CO_2 on the edge of 400 ppm. Eyeballing the graphs the means should be what, 380 ppm, and as you note, it should be skewed high compared to Mauna Loa, not low.”

    Well, I’m not so sure. In the 2013/04 map, for example, Hawaii doesn’t seem out of line with the rest of the northern hemisphere, and in fact several places are even redder. The maps seem to cover most CO2 sources, but a lot of space is blank. Plus, as you mention later in your comment, calibration/sensitivity/reliability of this new satellite tool is unknown. And this says CO2 at the South Pole is within 6 ppm or less of Mauna Loa. Most or even all of the difference is explainable as a gradient from the major CO2 sources to the north.

    BTW, the GOSAT site also has a neat animation of global CO2 distribution. It’s a simulation, so take with a grain of salt, but it smooths out the monthly maps nicely.

    Last night I re-watched the “Seasonal Forests” episode of the BBC’s “Planet Earth”. It mentioned that the vast northern forests are a major source of atmospheric oxygen. In the GOSAT simulation, Canada and Siberia turn dark blue (low CO2) in late summer, indicating massive photosynthesis and oxygen production. Watching the simulation unfold, I felt the same thrill up my leg that Chris Matthews gets listening to Obama. :-)

  352. Phil. says:
    June 18, 2013 at 10:41 am

    The data I’m looking at for April is consistent with 400ppm:
    Phil, if you look at the picture you posted, you see between 390 and 400 in the Northern Hemisphere, with maybe a few darker spots south of Japan, and around 390 and some lower in the Southern Hemisphere.
    To my eyes, the average of that is in no case 400.

  353. Lars P. says:
    June 18, 2013 at 12:42 pm
    Phil. says:
    June 18, 2013 at 10:41 am

    “The data I’m looking at for April is consistent with 400ppm:”
    “Phil, if you look at the picture you posted, you see between 390 and 400 in the Northern Hemisphere, with maybe a few darker spots south of Japan, and around 390 and some lower in the Southern Hemisphere.”
    To my eyes, the average of that is in no case 400.

    Well Lars, I opened the map in Photoshop and examined the data point next to Hawaii using the ColorSync utility, and it came out at the same RGB value as 400 ppm on the scale bar!
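
Reading a value off a published map by matching pixel colors, as described here, can also be scripted: pick the scale-bar entry whose color is nearest to the sampled pixel. The palette below is a hypothetical stand-in for a 380-400 ppm scale, not GOSAT’s actual colors.

```python
# Sketch: recover a data value from a map pixel by nearest-color matching
# against the scale bar. Colors are hypothetical stand-ins, not GOSAT's.
scale = {
    380: (0, 0, 255),    # blue
    385: (0, 255, 255),  # cyan
    390: (0, 255, 0),    # green
    395: (255, 165, 0),  # orange
    400: (255, 0, 0),    # red
}

def ppm_from_pixel(rgb):
    """Return the scale value whose color is nearest (squared Euclidean) to rgb."""
    return min(scale, key=lambda v: sum((a - b) ** 2 for a, b in zip(scale[v], rgb)))

print(ppm_from_pixel((250, 10, 5)))  # a reddish pixel reads as 400
```

The obvious caveat is quantization: the method can only return values that appear on the scale bar, and anti-aliased or interpolated map colors land on whichever entry happens to be nearest.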

  354. From the above and other postings by Nick Stokes and Mr. T.O.O there is no doubt in my mind that they are paid trolls who receive a salary to come to the skeptical sites. The TEAM is very concerned about the influence of climate skeptic sites and the effect they are having on public opinion. My two cents’ worth.

  355. Superimposing the temperature curve and its least-squares linear-regression trend on the statistical insignificance region bounded by the means of the trends on these published uncertainties since January 1996 demonstrates that there has been no statistically-significant warming in 17 years 4 months:

    Some Moncktonesque statistical treatment here: what exactly does “bounded by the means of the trends on these published uncertainties” mean in relation to statistical significance? Using the same data I obtained a trend of 0.089±0.118 ºC/decade, which indicates statistically significant warming at the 85% level. To say, as Monckton does, that there has been “no statistically-significant warming in 17 years 4 months” is meaningless.
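
For reference, the “±” in a figure like 0.089±0.118 ºC/decade is a confidence half-width on the least-squares slope. A sketch on synthetic monthly anomalies follows; the noise level is invented, and the naive standard error below assumes independent residuals, whereas real monthly anomalies are autocorrelated, which widens real-world intervals like the one quoted here considerably.

```python
import math
import random

# Sketch: OLS slope of a monthly anomaly series and a ~95% confidence
# half-width from the naive (independent-residuals) standard error.
def trend_with_ci(y, z=1.96):
    """Return (slope, z * standard error of slope) for y against 0..n-1."""
    n = len(y)
    xbar = (n - 1) / 2
    ybar = sum(y) / n
    sxx = sum((i - xbar) ** 2 for i in range(n))
    slope = sum((i - xbar) * (v - ybar) for i, v in enumerate(y)) / sxx
    intercept = ybar - slope * xbar
    resid = [v - (intercept + slope * i) for i, v in enumerate(y)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    return slope, z * se

random.seed(2)
months = 208                 # 17 years 4 months of monthly anomalies
true_trend = 0.089 / 120     # 0.089 C/decade expressed per month
y = [true_trend * i + random.gauss(0, 0.1) for i in range(months)]

slope, half = trend_with_ci(y)
print(f"trend: {slope * 120:+.3f} +/- {half * 120:.3f} C/decade (naive CI)")
print("CI excludes zero:", abs(slope) > half)
```

With white noise the interval comes out much tighter than ±0.118; accounting for the serial correlation in real temperature data inflates the standard error, which is how the same central trend can fall short of 95% significance.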

  356. The data I’m looking at for April is consistent with 400ppm:

    https://data.gosat.nies.go.jp/GosatBrowseImage/browseImage/fts_l2_swir_co2_gallery_en_image.html?image=46

    Fair enough, but one wonders why August of 2012 was then so very different from April 2013. Did they renormalize their detectors to get agreement?

    As for the comment about polar cooling and TOA descending to ground level, this is something that has occurred to me as well. The DALR from ground level to the tropopause is determined, among other things, by the height where GHGs cease to be optically opaque and lose their heat content to space. Above the tropopause the stratosphere begins, which is hotter than the top of the troposphere, in part because the ordinary atmosphere no longer has any radiative cooling channel. In polar inversions with very dry air (and hence very little GHE), one essentially drops the stratosphere/tropopause down to the ground — I recall there is a spectrograph pair in Petty that illustrates this case.

    This makes me less sure about just what the temperature profile would be in a mythical planet covered in just oxygen-nitrogen-argon, no H2O or CO2 or ozone or GHGs per se, or better yet, a planet with a Helium atmosphere. The ground would radiatively heat and cool unopposed, like the moon, so from one point of view one would expect it to get very hot in the day, very cold at night, not unlike the low humidity desert does now.

    But not exactly like the desert, because the DALR doesn’t break down over the desert and in this mythical planet the tropopause would basically be at ground level, and temperatures would ascend from the ground up, just as the stratosphere warms to the thermosphere only far overhead. Then my imagination breaks down. Such an atmosphere would have basically no cooling mechanism overhead until it gets hot enough to activate SWIR bands in the non-GHG atmosphere, say thousands of degrees. It might well be that it cools by contact with the ground at nighttime because the ground is an efficient radiator where the atmosphere is not. The atmosphere would, however, be densest and coolest at the bottom, and cooling there would not generate surface convection.

    However there would be differential heating and cooling between the equator and the poles, and differential heating and cooling from daytime to nighttime, so there would be some general circulation — cold air from the poles being pushed south, uplifting as it heats, displacing hotter upper air poleward, and forcing it down to contact the colder surface there, which might well still make such an atmosphere net heating. IIRC, one of the moons — Triton? — has such an atmosphere and a moderately uniform surface and might serve as a sort of laboratory for this kind of model, although its density and mean temperature are of course way off for applicability to Earth.

    One of our many problems is that we just don’t have enough, simple enough, planets to study to get a good feel for planetary climatology. It is so easy to make some simplistic assertion that captures one “linearized” cause-effect relationship and then fail to realize that in the real world four other nonlinear relationships are coupled in such a way that the hypothesized linearized relationship is not correct, it is at best valid in the neighborhood of “now” in a single projective direction of a complex, curved, surface, the projection of the first term in a Taylor series expansion of a solution to a multivariate nonlinear partial differential equation.

    So stating that CO_2 is “cooling at the poles” might better be stated as “sometimes, when conditions are just right, CO_2 can contribute to net cooling at the poles”. But this is probably a smaller effect globally than the statement that “sometimes, when the conditions are just right, water vapor is strongly net cooling; other times when the conditions are just right, water vapor is net warming; the conditions that regulate which it is might well depend on aerosol levels, soot/particulate levels, solar magnetic state, geomagnetic state, time of year, macroscopic state (e.g. water content) of the stratosphere, the phase of the decadal oscillations, the phase of the moon, and who won the superbowl”.

    And since water vapor is by far the dominant GHG, getting the global climate answer approximately correct depends on getting it right first, and only then worrying about what happens with CO_2.

    rgb

  357. Some Moncktonesque statistical treatment here, what exactly does “bounded by the means of the trends on these published uncertainties” mean in relation to statistical significance? Using the same data I obtained a trend of 0.089±0.118 ºC/decade which indicates statistically significant warming at the 85% level. To say that there has been “no statistically-significant warming in 17 years 4 months” as Monckton does is meaningless.

    Not wanting to speak for Mr. Monckton, of course, but I suspect he is referring to R^2 for the linear fit, usually interpreted as a measure of the significance of the fit compared to the null hypothesis of no relationship. I also think he is using “accepted” values for the conclusion, which a lot of people have grudgingly been coming to accept in the climate community.
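For what it's worth, the algebraic link between R^2 and the significance of a simple linear fit can be sketched directly (synthetic data; the 0.005 C/yr slope and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic monthly series: an assumed weak trend plus noise.
n = 208
x = np.arange(n) / 12.0
y = 0.005 * x + rng.normal(0.0, 0.12, n)

# Slope, R^2, and the t-statistic for the null hypothesis slope = 0.
slope, intercept = np.polyfit(x, y, 1)
yhat = slope * x + intercept
ss_res = np.sum((y - yhat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

# For simple regression, t^2 = R^2 (n - 2) / (1 - R^2).
t_stat = np.sign(slope) * np.sqrt(r2 * (n - 2) / (1.0 - r2))
```

Since t^2 = R^2 (n - 2) / (1 - R^2) for simple regression, a modest R^2 over 200-odd months can still clear an 85% or even 95% threshold; the R^2 description and the trend-plus-error-bar description are the same test in different clothing.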

    At the same time, I completely agree with you. There is nothing special or magical about 17 years, or 16 years, or 13 years. Or rather, there might well be, but we don’t know what it is because we do not actually know the relevant timescales for variation of the climate as opposed to the weather. The climate as reflected in some sort of average global temperature derived from either the thermometric record or proxies has never been constant, never been linear, never had anything like a single time constant or frequency that could be associated with anything but a Taylor series/power series fit to some sufficiently small chord or Fourier transform ditto. Well, I take that back — there is a pretty clear Fourier signal associated with Milankovitch processes over the last 3.5 million years, only the period changes without warning three times over that interval (most recently to roughly 100 ky) and we don’t know why.

    The much better way to assert precisely the point Monckton makes above is by asserting no point at all — simply presenting the actual e.g. LTT and SST and LST records over the last 34 years where LTT, at least, is consistently and accurately measured, SST is increasingly precisely measured, and sadly, LST estimates are comparatively dubious. Over that entire interval, LTT (as arguably the best measure of actual warming for a variety of reasons) suggests a non-catastrophic warming rate on the order of 0.1 C/decade but with (obviously!) large error bars. SSTs lead to a similar conclusion. Measurements of SLR (any mix of tide gauge data and satellite) lead to a similar conclusion.

    Deconstructing the causes of the warming, decomposing it into (say) a flat fit pre-1997 and a second flat fit post-1999 (which reveals that most of the warming occurred in a single discrete event associated with the super-El Nino/La Nina pair in between as far as we can tell from the data) or a linear fit, or an exponential fit, or throwing new Fourier, linear, or otherwise functional components in coincidence with decadal oscillations, the solar cycle, the level of solar activity, CO_2 concentrations, stratospheric water vapor, stratospheric ozone, or the density of kitchen sinks (per household) in Zimbabwe is in some sense all complex numerology, climatological astrology. The temperature record is what it is, not what we would fit it to be, not the stories we make up to explain it because we damn sure cannot compute it!
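The "two flat fits joined at the 1997/98 ENSO" decomposition versus a single linear trend can be illustrated on synthetic data (the 0.3 C step size and the noise level are assumptions made for the sketch, not fitted values):

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative annual anomalies: flat, then an assumed +0.3 C step at
# the 1997/98 ENSO, then flat again.
years = np.arange(1979, 2013) + 0.5
temps = np.where(years < 1997.5, 0.0, 0.3) + rng.normal(0.0, 0.08, years.size)

# Model A: one straight line (two free parameters).
lin = np.polyfit(years, temps, 1)
sse_linear = np.sum((temps - np.polyval(lin, years)) ** 2)

# Model B: two flat segments joined at the step (also two parameters).
pre = temps[years < 1997.5]
post = temps[years >= 1997.5]
sse_step = np.sum((pre - pre.mean()) ** 2) + np.sum((post - post.mean()) ** 2)
```

Both models spend two free parameters, so neither is privileged a priori; on data actually generated by a step, the step fit leaves the smaller residual, which is the sense in which the record underdetermines the story told about it.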

    That’s the real mistake Monckton made — he presented an aggregate view of the GCMs, because that is what the IPCC did in its notorious AR4 summary for policy makers, which contains an absolutely horrendous abuse of statistics by using the averages and standard deviations of model results over many completely different GCMs to make quantitative assertions of the probabilities of various warming scenarios as if “structure of a particular climate GCM” were an independent, identically distributed variable and reality were somehow bound to the mean behavior averaged over this variable by the central limit theorem, which is sheer madness. This isn’t Monckton’s error, it is a standard error made by the IPCC (an error where incompetence in statistical analysis is beautifully married to political purpose to the detriment of scientific reasoning), but he perhaps should have avoided perpetuating it (as Nick Stokes rather overvehemently insisted above) and just presented the spaghetti snarl of actual GCM model results themselves, as they say precisely the same thing, only better, without the irrelevant and incorrectly computed probabilities.
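The statistical abuse described here is easy to demonstrate: treat a handful of structurally different models as if they were i.i.d. samples of the truth and the quoted uncertainty shrinks by sqrt(N) for free (the sensitivity numbers below are hypothetical, not actual GCM output):

```python
import numpy as np

# Hypothetical per-model warming projections (NOT actual CMIP output),
# standing in for "many completely different GCMs".
model_warming = np.array([1.5, 2.1, 2.8, 3.4, 4.1, 2.0, 3.0, 2.6])

mean = model_warming.mean()
spread = model_warming.std(ddof=1)           # honest structural spread

# The abuse: treat the models as i.i.d. draws from "truth", so the
# quoted uncertainty of the ensemble mean shrinks by sqrt(N).
sem = spread / np.sqrt(model_warming.size)
```

Adding a ninth model would shrink the SEM further while telling us nothing new about the real climate; the honest measure of structural uncertainty is the spread itself, not spread/sqrt(N).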

    Madness and incorrect computation that is, of course, perpetuated and reflected in your citing 0.089±0.118 ºC/decade which indicates statistically significant warming at the 85% level. Let’s restate this in Bayesian language. If there is no bias in the process associated with generating the points being fit (an assumption we can frame as a prior probability of there being occult bias), if the underlying climate is a linear function (an assumption we can frame as the prior probability of the climate behaving linearly over some interval of time, an assumption we can actually make at least semi-quantitative if one can believe the proxy record over the Holocene and bewares the fact that much of that proxy is intrinsically coarse grain averaged over intervals longer than 33 years), if the error bar you obtain from a linear fit (presumably from the distribution of Pearson’s \chi^2) over a remarkably short interval where we do not know the timescales of the relevant noise compared to the linear trend is relevant (again, one can try to guesstimate the probable timescales of noise compared to the fit interval, but the curve itself strongly suggests that the two are comparable as it decomposes into two distinct and quite good fits joined at the ENSO in the middle), and if there is nothing else going on that we, in our ignorance, should be correcting for, then your linear fit yields warming with a considerably wider variance than you are allowing for — none of the Bayesian probabilities above are optimal for the linear fit to be precisely meaningful, and the uncertainties they introduce all broaden the uncertainty of the result or worse, reflect the probability that the linear fit itself is nonsense and cannot be extrapolated with any real confidence at all.
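The Bayesian dilution argument can be made semi-quantitative with made-up priors (all four probabilities below are assumptions chosen purely for illustration):

```python
# Made-up priors, purely for illustration of how untested assumptions
# dilute a headline significance level.
p_sig = 0.85            # "significant at 85%", taken at face value
p_unbiased = 0.9        # prior: the temperature record has no occult bias
p_linear_ok = 0.8       # prior: a linear trend is the right model here
p_noise_ok = 0.8        # prior: the white-noise error bars are apt

p_assumptions = p_unbiased * p_linear_ok * p_noise_ok
# If any assumption fails, take a neutral 0.5 (we learn nothing either way).
p_warming = p_sig * p_assumptions + 0.5 * (1.0 - p_assumptions)
print(round(p_warming, 4))
```

Under these priors the headline 85% drops to roughly 70%; the point is not the particular numbers but that every untested assumption multiplies in and can only broaden the final uncertainty.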

    I strongly suggest that you read the Koutsoyiannis paper on hydrology that has as its first graphic a function plus noise at a succession of timescales (in fact, Anthony, this graph should be a front-page feature on WUWT somewhere, IMO, as a permanent criticism of the plague of linearizing a function we know is nontrivially nonlinear). On a short enough timescale it appears linear. Then it appears exponential. Then it appears sinusoidal. But is that really its behavior? Or is the sinusoidal merely systematic noise on a longer term linear behavior? Note that no possible statistical analysis on the original fit interval can reveal the truer longer time behavior, and at no time can one separate out the unknown longer time behavior from the fit.
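The Koutsoyiannis graphic can be imitated in a few lines: a slow sinusoid riding on a gentle trend, sampled monthly (the 60-year period, 0.01 C/yr trend, and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# A slow sinusoid (assumed 60-"year" period) riding on an assumed
# 0.01 C/yr trend, sampled monthly with noise.
t = np.arange(0, 120, 1.0 / 12.0)
series = (0.01 * t + 0.15 * np.sin(2 * np.pi * t / 60.0)
          + rng.normal(0.0, 0.05, t.size))

def fitted_trend(t0, t1):
    """OLS slope over the window [t0, t1)."""
    m = (t >= t0) & (t < t1)
    return np.polyfit(t[m], series[m], 1)[0]

short_flat = fitted_trend(15, 30)    # falling limb: looks like a "pause"
short_steep = fitted_trend(45, 60)   # rising limb: looks like rapid warming
full = fitted_trend(0, 120)          # two full cycles: near the true trend
```

A 15-year chord on the falling limb reports essentially zero trend while a 15-year chord on the rising limb reports roughly double the true one; only the two-full-cycle fit lands near the underlying 0.01 C/yr, and nothing inside a short window reveals that.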

    In the meantime, here is a statement that perhaps everybody — even Nick Stokes and in other venues Joel Shore — can agree with. The last 15 to 17 years of the climate record — depending on which side of the discrete “event” of the 1997/1998 ENSO you wish to start on — are not strong evidence supporting the hypothesis of catastrophic anthropogenic whatever, global warming, climate change. So much so that the warmist community stopped using the phrase global warming over this interval, substituting the non-falsifiable term “climate change” instead because any weather event can safely be attributed to human activity and who can prove you wrong other than by asserting statistical arguments so abstruse that any member of the lay population will fall asleep long before you finish them, where the catastrophe itself is always immediate and exciting and real to them. Every experience is a peak experience if you are demented and forget past experiences, after all.

    Note that I carefully avoid stating that the data “falsifies” the assertion of CAGW, AGW, or any particular “scenario” put forth in AR4’s summary for policy makers. To make such an assertion I would have to have prior knowledge that not only I lack, but everybody lacks. The issue of e.g. UHI bias and cherrypicking in the contemporary land surface temperature record is an open question at this point, with evidence supporting both sides. Who really knows if there is a bias, what the sign of the bias is, what the magnitude of the bias is? Even given the best possible intentions and totally honest scientists constructing it, totally honest scientists have biases and often cannot help incorporating them into their database — an assertion I make with considerable empirical evidence derived from meta-studies in e.g. medical science.

    There is considerable unacknowledged uncertainty in climate science — the egregious treatment of various Bayesian priors as equal to unity or zero (in a way that reflects one’s prejudices on the issue) in order to avoid diluting the purity of one’s conclusions with the inconvenient truth of uncertainty. But quite independent of Bayes, it is entirely safe to say that an interval of essentially flat temperatures does not support the assertion of aggressive, catastrophic, global warming. Indeed, 13 years (starting MOST generously in the year 2000) is 1/8th of a century. If 1/8th of the twenty first century has been climate neutral in spite of a significant increase in CO_2 in that time and over an interval in which the GCMs unanimously call for aggressive warming, one would have to lack simple common sense to assert that this is evidence for the correctness of the GCMs and the likelihood of a catastrophic warming increasingly confined to the 7/8 of the century remaining.

    rgb

  358. rgbatduke says (June 19, 2013 at 7:01 am): “This makes me less sure about just what the temperature profile would be in a mythical planet covered in just oxygen-nitrogen-argon, no H2O or CO2 or ozone or GHGs per se, or better yet, a planet with a Helium atmosphere.”

    In case you haven’t seen it, Dr. Spencer discusses a no-GHG Earth here.

    If WUWT commenter “Konrad” is reading this thread he may add a somewhat different view.

  359. rgbatduke says (June 19, 2013 at 8:07 am): “Let’s restate this in Bayesian language.”

    I gather from what follows the above that the “Bayesian” language must be spoken in very long and very complex sentences. :-)

    No worries, though. I ran it through Google Translate and after about 15 minutes of processing it spit out “It’s not statistically significant, Phil.”. :-)

    One final thought: We must at all costs keep RGB away from the cryptic Steve Mosher, lest their mutual annihilation destroy the entire planet. :-)

  360. rgbatduke says:
    June 19, 2013 at 7:01 am
    The data I’m looking at for April is consistent with 400ppm:

    https://data.gosat.nies.go.jp/GosatBrowseImage/browseImage/fts_l2_swir_co2_gallery_en_image.html?image=46

    Fair enough, but one wonders why August of 2012 was then so very different from April 2013. Did they renormalize their detectors to get agreement?

    I didn’t look at last year but I’d expect it to have been ~6ppm lower based on ML data.

    As for the comment about polar cooling and TOA descending to ground level, this is something that has occurred to me as well. The DALR from ground level to the tropopause is determined, among other things, by the height where GHGs cease to be optically opaque and lose their heat content to space. Above the tropopause the stratosphere begins, which is hotter than the top of the troposphere, in part because the ordinary atmosphere no longer has any radiative cooling channel. In polar inversions with very dry air (and hence very little GHE), one essentially drops the stratosphere/tropopause down to the ground — I recall there is a spectrograph pair in Petty that illustrates this case.

    I don’t think this is correct, see the following for example:

    http://tinyurl.com/l33n3cv

  361. rgbatduke says:
    June 19, 2013 at 8:07 am
    “Some Moncktonesque statistical treatment here, what exactly does “bounded by the means of the trends on these published uncertainties” mean in relation to statistical significance? Using the same data I obtained a trend of 0.089±0.118 ºC/decade which indicates statistically significant warming at the 85% level. To say that there has been “no statistically-significant warming in 17 years 4 months” as Monckton does is meaningless.”

    Not wanting to speak for Mr. Monckton, of course, but I suspect he is referring to R^2 for the linear fit, usually interpreted as a measure of the significance of the fit compared to the null hypothesis of no relationship. I also think he is using “accepted” values for the conclusion, which a lot of people have grudgingly been coming to accept in the climate community.

    You might think that but the values shown on his graph don’t correspond with that, hence my question.

    The much better way to assert precisely the point Monckton makes above is by asserting no point at all — simply presenting the actual e.g. LTT and SST and LST records over the last 34 years……

    Certainly, but he doesn’t do that!

    That’s the real mistake Monckton made — he presented an aggregate view of the GCMs, because that is what the IPCC did in its notorious AR4 summary for policy makers, which contains an absolutely horrendous abuse of statistics by using the averages and standard deviations of model results over many completely different GCMs to make quantitative assertions of the probabilities of various warming scenarios as if “structure of a particular climate GCM” is an independent, identically distributed variable and reality is somehow bound to the mean behavior averaged over this variable by the central limit theorem which is sheer madness.

    Agreed.
    Madness and incorrect computation that is, of course, perpetuated and reflected in your citing 0.089±0.118 ºC/decade which indicates statistically significant warming at the 85% level…..

    Which has nothing to do with the GCMs, it’s the corrected version of Monckton’s statistical analysis. Your argument re Bayesian statistics shows that Monckton’s attempt to show that there is no significant trend in the data is invalid.

    So much so that the warmist community stopped using the phrase global warming over this interval, substituting the non-falsifiable term “climate change” instead because any weather event can safely be attributed to human activity and who can prove you wrong other than by asserting statistical arguments so abstruse that any member of the lay population will fall asleep long before you finish them,

    An often repeated canard, since the term ‘climate change’ was already in use when the IPCC was founded in 1988!

  362. Gary Hladik says:
    June 19, 2013 at 8:53 am
    rgbatduke says (June 19, 2013 at 8:07 am): “Let’s restate this in Bayesian language.”

    I gather from what follows the above that the “Bayesian” language must be spoken in very long and very complex sentences. :-)

    No worries, though. I ran it through Google Translate and after about 15 minutes of processing it spit out “It’s not statistically significant, Phil.”. :-)

    Check out your translator, it should have said “RGB thinks that the method used by Monckton isn’t capable of determining the significance of the trend”.

  363. Phil. says:
    June 18, 2013 at 2:52 pm

    Well Lars I opened the map in Photoshop and examined the data point next to Hawaii using the colorsync utility and it came out at the same rgb value as 400ppm on the scale bar!

    Phil, wonderful, you spotted one spot that looks like 400. To have an average of 400 one would need roughly as many readings at 410 as at 390.

    Also remember that you look at the whole column of CO2 which makes me wonder as the records vary more:

    http://m4gw.com/the_photosynthesis_effect/

  364. Lars P. says:
    June 19, 2013 at 12:44 pm
    Phil. says:
    June 18, 2013 at 2:52 pm

    “Well Lars I opened the map in Photoshop and examined the data point next to Hawaii using the colorsync utility and it came out at the same rgb value as 400ppm on the scale bar!”

    Phil, wonderful, you spotted one spot that looks like 400. To have an average of 400 one would need roughly as many readings at 410 as at 390.

    RGB was talking about ML CO2 readings so I picked the closest which was 400, there were many others with the same value and plenty which were greater!

    Also remember that you look at the whole column of CO2 which makes me wonder as the records vary more:

    http://m4gw.com/the_photosynthesis_effect/

    There will be variation near the surface due to the presence of sources and sinks but higher in the atmosphere it will be fairly constant up to the tropopause. Near growing crops there will be strong diurnal variation from about 300-450, bear in mind that the GOSAT data is a monthly average.

  365. jai mitchell says:
    June 15, 2013 at 3:53 pm

    Latitude says:

    June 15, 2013 at 12:54 pm

    do you understand the term “human history”?

    just how far back do you suppose that goes?
    ———————————–

    History began with writing, i.e. about 5000 years ago in the ancient Near East, but more recently elsewhere. Before writing is the realm of prehistory or archaeology. Before that, paleontology.

  366. Gary Hladik says:
    June 19, 2013 at 1:55 pm
    Phil. says (June 19, 2013 at 9:11 am): “I didn’t look at last year but I’d expect it to have been ~6ppm lower based on ML data.”

    April 2012: 396.18 ppm
    April 2013: 398.4 ppm

    Yes but RGB said: Fair enough, but one wonders why August of 2012 was then so very different from April 2013. Did they renormalize their detectors to get agreement?
    August 2012: 392.41

  367. rgbatduke says: June 19, 2013 at 7:01 am
    “Fair enough, but one wonders why August of 2012 was then so very different from April 2013. Did they renormalize their detectors to get agreement?”

    There’s an annual cycle at ML. Peaks in April, min about Sept. Amplitude about 6ppm.
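    Nick’s numbers are enough to account for the April/August puzzle without any detector renormalization. A toy model (the 2.1 ppm/yr growth rate and pure-cosine seasonal shape are simplifying assumptions; the real MLO cycle is asymmetric):

```python
import numpy as np

# Toy Mauna Loa model (assumptions: ~2.1 ppm/yr growth, a pure-cosine
# seasonal cycle of ~6 ppm peak-to-trough peaking in April; a pure
# cosine bottoms in October rather than the observed late September).
def co2(year, month, base=395.0, growth=2.1, amplitude=6.0):
    t = (year - 2012) + (month - 1) / 12.0
    seasonal = (amplitude / 2.0) * np.cos(2.0 * np.pi * (month - 4) / 12.0)
    return base + growth * t + seasonal

apr_2013 = co2(2013, 4)
aug_2012 = co2(2012, 8)
gap = apr_2013 - aug_2012
```

    The model gives an April 2013 minus August 2012 gap of about 5.9 ppm, in line with the observed 398.4 - 392.41 = 5.99 ppm: roughly a year of growth plus the swing from the seasonal peak down to August.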

  368. Phil. says (June 19, 2013 at 3:03 pm): “Yes but RGB said: Fair enough, but one wonders why August of 2012 was then so very different from April 2013. Did they renormalize their detectors to get agreement?
    August 2012: 392.41”

    Ah. My mistake. I hastily assumed he was comparing same month, because as Nick points out, there's an annual cycle.

Comments are closed.