A Note on the 50-50 Attribution Argument between Judith Curry and Gavin Schmidt

Guest essay by Bob Tisdale | Judith Curry and Gavin Schmidt are arguing once again about how much of the global warming we’ve experienced since 1950 is attributable to human-induced global warming.  Judith’s argument was presented in her post The 50-50 argument at ClimateEtc (where this morning there were more than 700 comments…wow…so that thread may take a few moments to download.)  Gavin’s response can be found at the RealClimate post IPCC attribution statements redux: A response to Judith Curry.

Gavin’s first illustration is described by the caption:

The probability density function for the fraction of warming attributable to human activity (derived from Fig. 10.5 in IPCC AR5). The bulk of the probability is far to the right of the “50%” line, and the peak is around 110%.

I’ve included Gavin’s illustration as my Figure 1.

Figure 1 - RealClimate attribution

So the discussion is about the warming rate of global surface temperature anomalies since 1950. Figure 2 presents the global GISS Land-Ocean Temperature Index data for the period of 1950 to 2013. I’m using the GISS data because Gavin was recently promoted to head of GISS. (BTW, congrats, Gavin.)  As illustrated, the global warming rate from 1950 to 2013 is 0.12 deg C/decade, according to the GISS data.
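The +0.12 deg C/decade figure is just an ordinary least-squares trend on the anomalies. A minimal sketch of that calculation (using a synthetic stand-in series rather than the actual GISS file, which you’d download separately):

```python
import numpy as np

def decadal_trend(years, anomalies):
    """OLS slope of temperature anomaly vs. time, in deg C per decade."""
    slope, _intercept = np.polyfit(years, anomalies, 1)
    return slope * 10.0  # deg C/yr -> deg C/decade

# Synthetic stand-in for the GISS Land-Ocean Temperature Index, 1950-2013:
# a 0.012 deg C/yr ramp, i.e. exactly 0.12 deg C/decade, with no noise.
years = np.arange(1950, 2014)
anoms = 0.012 * (years - 1950)

print(round(decadal_trend(years, anoms), 2))  # -> 0.12
```

The same function applied to the 1976-2005 slice of the real data would return the higher tuning-period trend discussed below.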

Figure 2

For this discussion, let’s overlook the two hiatus periods during the period of 1950 to 2013…whether they were caused by aerosols or naturally occurring multidecadal variations in known coupled ocean-atmosphere processes, such as the Atlantic Multidecadal Oscillation (AMO) and the dominance of El Niño or La Niña events (ENSO).  Let’s also overlook for this discussion any arguments about how much of the warming from the mid-1970s to the turn of the century was caused by manmade greenhouse gases or the naturally occurring multidecadal variations in the AMO and ENSO.

Bottom line, according to Gavin:

The bottom line is that multiple studies indicate with very strong confidence that human activity is the dominant component in the warming of the last 50 to 60 years, and that our best estimates are that pretty much all of the rise is anthropogenic.

Or in other words, all the warming of global surfaces from 1950 to 2013 is caused by anthropogenic sources.  Curiously, that’s only a warming rate of +0.12 deg C/decade. He’s not saying that all of the warming, at a higher rate, from the mid-1970s to the turn of the century is anthropogenic.  His focus is the period starting in 1950 with the lower warming rate.

HOWEVER

Climate models are not tuned to the period starting in 1950.  They are tuned to a cherry-picked period with a much higher warming rate…the period of 1976-2005 according to Mauritsen, et al. (2012) Tuning the Climate of a Global Model [paywalled].  A preprint edition is here.  As shown in Figure 3, the period of 1976 to 2005 has a much higher warming rate, about +0.19 deg C/decade. And that’s the starting trend for the long-term projections, not the lower, longer-term trend.

Figure 3

And that’s why climate model warming rates appear to go off on a tangent when compared to the observed warming rate for the period of 1950 to 2013, the period for which, according to Gavin, “our best estimates are that pretty much all of the rise is anthropogenic”.  The modelers have started their projections from a cherry-picked period with a high warming rate.

Figure 4 shows the warming rates for multi-model ensemble-member mean of the CMIP5-archived models using RCP6.0 and RCP8.5 scenarios for the period of 2001-2030.  RCP6.0 basically has the same warming rate as the observations from 1976-2005, which is the model tuning period, but that’s much higher than the warming rate from 1950-2013.  And the trend of the business-as-usual RCP8.5 scenario seems to be skyrocketing off with no basis in reality.

Figure 4

And in Figure 5, the modeled warming rates for the same scenarios are shown through 2100.

Figure 5

CLOSING

I’ve asked a similar question before:  Why would the climate modelers base their projections of global warming on the trends of a cherry-picked period with a high warming rate?  The models being out of phase with the longer-term trends exaggerates the doom-and-gloom scenarios, of course.

But we purposely overlooked a couple of things in this post…that there are, in fact, naturally occurring ocean-atmosphere processes that contributed to the warming from the mid-1970s to the turn of the century—ENSO and the AMO.  The climate models are not only out of phase with the long-term data, they are out of touch with reality.

SOURCES

The GISS Land-Ocean Temperature Index data are available here, and the CMIP5 climate model outputs are available through the KNMI Climate Explorer, specifically the Monthly CMIP5 scenario runs webpage.


177 thoughts on “A Note on the 50-50 Attribution Argument between Judith Curry and Gavin Schmidt”

  1. correction: As illustrated, the global warming rate from 1950 to 2013 is 0.02 deg C/decade, according to the GISS raw data.

    according to Gavin:

    correction: The bottom line is that multiple studies indicate with very strong confidence that human activity is the dominant component in the warming of the last 50 to 60 years, and that our best estimates are that pretty much all of the rise is adjustments and algorithms.

  2. The more the alarmists claim a huge CO2 effect during the warming periods, the more impossible their task of explaining the whyfor of the pause, when CO2 increases have not paused. Their cause and effect have become totally disjointed.

    • Besides the current plateau, they also have to explain the cooling from c. 1944 to 1976 under rising CO2, & the rising temperature during the 1920s to ’40s on falling CO2.

      Temperature accidentally happened to rise during c. 1977-96, while CO2 was climbing, because of the switch to the warm phase of the PDO in 1977.

      • Between CO2 & temperature there isn’t even good correlation, let alone causation.

        On longer time scales, there is correlation & causation between rising T & CO2, but T is the cause & CO2 the effect.

      • Latecommer: Correlation does not prove causation, but if there is causation there will be correlation.

      • Being true believing Warmistas, they turned the clock speed down on their super-dupe computers, hence reducing the rate of warming. It’s that simple!

      • Chris4692
        August 28, 2014 at 12:16 pm
        Latecommer: Correlation does not prove causation, but if there is causation there will be correlation.

        … except when there is no correlation.

        If, for the sake of argument, we accept that rising CO2 will cause rising global warming, then the currently rising CO2 levels should be causing rising temperatures. But currently there is no observable correlation between rising CO2 and global temps.

        There is no compelling, simple explanation for this lack of correlation (assuming AGW). In fact, there are at least 37 explanations for it, none of them compelling enough to displace the others.

      • It only appears to be in fairly recent times that there is any correlation between CO2 & temperature. And even that looks more like B causes A (or possibly C causes A and B).

    • Col Mosby: “The more the alarmists claim a huge CO2 effect during the warming periods, the more impossible their task of explaining the whyfor of the pause, when CO2 increases have not paused. Their cause and effect have become totally disjointed”

      Au contraire – Dr. Mann has a recent study which shows the cooling phase is due to the AMO/PDO while none of the warming is due to the flip side of the AMO/PDO cycles. (I may be oversimplifying his conclusion, though that is the general gist of his study.)

      The irony is that many of the skeptics have pointed out the amo/pdo cycles partly explaining both the warming phase and the cooling phase and the most likely cause of the current pause (skeptics have proffered this explanation since the late 1990′s) yet the high priest of climate science has only recently acknowledged 1/2 of the cycle. ( Mann obviously knows more science than us mere mortals)
      Another irony, is that Mark Steyn pointed this out circa 2009.

  3. Fig 1 looks smack dab right in the center to me. What is this “far to the right” mumbo jumbo?? Couldn’t be more ho hum , big deal, if I had plotted it myself.

    You need new spectacles Gavin; and no, I didn’t mean for you to go to Burning Man, I meant, get some new glasses !

  4. Gav says: “The bottom line is that multiple studies indicate with very strong confidence that human activity is the dominant component in the warming of the last 50 to 60 years, and that our best estimates are that pretty much all of the rise is anthropogenic.”

    This is not consistent. “Dominant” just means largest; it does not mean more than everything else added together.

    If the issue is split into many parts like human, volcanic, ENSO, solar… the “dominant” factor could actually be quite a small percentage, much less than half.

    The previous AR4 “majority of warming” is not the same thing as AR5 “dominant cause” of warming.

    This is a climbdown in the IPCC position that seems to have gone mainly unnoticed.

    Quite where Gav gets his “best estimates are that pretty much all of” I don’t know, but it’s a far more extreme claim than either AR4 or AR5.

  5. It appears everyone is using adjusted temperatures, so the error bar should be as large as the adjustment. I do not believe in land-based, corrupted temp records, and hold that any forcing caused by man is automatically absorbed and compensated for by nature. That is why we do not have “runaway climate”. There is no proof man can override natural climate processes for any extended period.

    • Shouldn’t the error bars be somewhat larger than the adjustment?
      Since you still need to factor in the accuracy and precision of the original readings. Together with that applicable to any “data processing” involved.

  6. Gavin Schmidt is prevaricating as usual. Global warming since the LIA is composed of natural step changes. Those steps are exactly the same — whether CO2 was low, or high. Therefore, there is no “fingerprint of AGW”. It is clearly shown here in über-Warmist Dr. Phil Jones’ chart:

    • Very good point.
      I wonder if SkS will keep pushing their escalator line if that is pointed out.

      Not sure Dr. Phil Jones is an über-Warmist though. He always struck me as more wrong than wronging.

    • Mr Schmidt should examine the graph above closely to see the overall warming from 1850 to 2013. It’s about 0.9 degrees C over that long-term period and works out to about 0.55 deg C per CENTURY. Peanuts. The biggest fraud in history over a few tenths of a degree.

    • Yeah, db, a point that I, and Lindzen, and many others have tried to emphasize in discussion. You can actually take HADCRUT4 from the first half of the 20th century and the second half of the 20th century and put them side by side on similar scales but with the time scale hidden and ask which graph occurred with the help of anthropogenic CO_2? Not so easy to tell, unless you are aware of the individual features such as the terminating super-ENSO in the late 20th century. I sometimes think that the last round of tampering in the GISS anomaly was designed as much to erase this similarity and as much of the pause as possible without quite making it laughable compared to LTT. But that game is up — there will be no more adjustments of GISS or HADCRUT4 to further warm the present as they are now UAH/RSS constrained.

      That hasn’t stopped them from trying to further cool the past, and now newcomers are appearing that re-krige and infill and homogenize areas that “haven’t shown enough warming” because they are less constrained by LTT; this further obfuscates if nothing else. HADCRUT4 — and earlier versions of HADCRUT even more — clearly give the lie to the assertion of “unprecedented” warming, though, in precisely this graph (which anybody can make, BTW, at least piecewise on woodfortrees).

      However, even this graph omits the display of or discussion of two critical problems with assertions of warming or cooling or plain old knowledge of temperature.

      The first and most glaring omission is the absence of any error bar or estimate on the data. This is insane! In what other field of human endeavor are so many data-derived graphs shown to so many people utterly devoid of error estimates? Note the obvious impact of error visible in the Jones curve. Does Jones, or anyone else, really think that the global average surface temperature anomaly was 10 times more volatile in the 1800′s, with the planet warming by 0.6C over as little as a year and then plunging down into 0.6C of cooling relative to some ill-defined mean in a year more? Because that’s what the error-bar free data shows.

      Of course not! What the graph is showing is the impact of the sparseness of the record in the 19th century. With order of 10x as much variance, there is order of 100x less data contributing in the 19th century compared to the present. In the 19th century most of the Earth’s surface area was completely unsampled (I mean “most” literally — 70% of the surface that is ocean, the bulk of at least 3 or 4 continents were either terra incognita altogether, e.g. Antarctica, or barely penetrated by a thermometer — if you will excuse the image — and consider the Amazon, central Africa, much of Siberia and central Asia, Tibet, even much of the U.S.). The parts that were sampled were obviously quite volatile — one imagines that the bulk of what is producing these large variations were things like heat waves in Europe.
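The claim that roughly 100x less data goes hand in hand with roughly 10x the scatter follows from the standard error of a mean scaling as 1/sqrt(N). A quick Monte Carlo sketch of that scaling (station counts here are purely illustrative, not the real network sizes):

```python
import numpy as np

rng = np.random.default_rng(0)

def scatter_of_mean(n_stations, n_trials=4000):
    """Std. dev. of the mean of n_stations unit-variance readings."""
    samples = rng.standard_normal((n_trials, n_stations))
    return samples.mean(axis=1).std()

sparse = scatter_of_mean(10)    # sparse, 19th-century-style coverage
dense = scatter_of_mean(1000)   # 100x more stations

# Standard error scales as 1/sqrt(N), so 100x more data
# shrinks the scatter in the mean by about a factor of 10.
print(round(sparse / dense, 1))
```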

      The variance quiets quite a lot when the colonial gold rush really gets underway in the 1880s and colonials carry thermometers with them to their newly annexed territories. The ocean remained a problem then, and remains a huge problem now with ARGO pitifully undersampling 70% of the Earth’s surface even today, and that in a highly biased fashion with buoys that float with thermohaline currents or are trapped in eddies (both unlikely to reflect their surrounding environment adequately) rather than be distributed according to a simple random number generator in Monte Carlo style (which would have a computable statistical error instead of an unknown bias). There is a surprising amount of variance for a global temperature anomaly today, but at least between the thermometric record and the LTT satellite record, we can think about resolving features of the presumably much less volatile actual anomaly from the statistical noise, by comparing the various “modelled” average temperatures. The error is almost certainly larger than the difference between, say, GISS and HADCRUT4 or Cowtan and Way, and at present these numbers are easily 0.2C or thereabouts apart much of the time.

      HADCRUT4 acknowledges — IIRC — 0.15C of error in the present. I think this is an underestimate but let’s go with it, as the existence of the number, we hope, means that they actually computed it instead of pulling it out of their nether regions, as was done for the error estimates on graphs in the leaked early AR5 draft (figure 1.4?), which were obviously created by a graphical artist and not by anything like an algorithm. The scaling of the variance then suggests that the error estimate in the mid-1800s ought to be a whopping 1.5C — the eye suggests that a more modest 0.4 C error bar might encapsulate 60% of the data such as it is, but that is really the error for the sampled territories only and is a lower bound on the error estimate for global temperature. I’d suggest that 0.7C is a compromise — one can probably find proxies (with their own error and resolution problems) that constrain the error to be less than 1 C. This statistical — not systematic — error would then systematically, but slowly, shrink from then to now. It wouldn’t really be linear — as I said, there is a relatively rapid diminishment in the late 19th century followed by a slower decrease into the late 20th, but it is likely fair to say that it is at least 0.3 to 0.4C for most of the record prior to the satellite era and ARGO, as only these have made it possible to push it down to the ballpark of 0.2C.

      If one includes the error estimate on the graphs, our certainty of any particular thermal history substantially diminishes. Maybe it warmed since the mid-1800′s. Maybe it has cooled. Maybe it warmed a lot more. Maybe the single 20 year period in the late 20th century when warming occurred has the steepest slope in the thermometric record, or — most importantly — maybe it does not! That’s the big statistical lie even in Jones’ relatively honest portrayal of the HADCRUT4 trends above. If one actually fit the data, with errors, and used e.g. a measure like Pearson’s \chi^2 to estimate the robustness of the linear trend, how likely it is that the slope is actually much larger or smaller than the simple regression fit, I promise that in the leftmost chord of the data we have almost no friggin’ idea what the linear temperature trend really was beyond “probably positive” (that is, maybe it is 0.16 \pm 0.12 or something like that), that in the second chord we can probably say that it is — again guesstimating since I don’t have the data and cannot do a better analysis — 0.15 \pm 0.05, and that only the last push is known reasonably accurately at 0.16 \pm 0.02.
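The point about fitting with errors can be put in code: the standard error of an OLS slope grows in proportion to the scatter of the residuals, so a noisy early-record chord pins down its trend far less tightly than a modern one. A sketch with made-up numbers (not the actual HADCRUT4 series):

```python
import numpy as np

def slope_and_stderr(x, y):
    """OLS slope and its standard error for y = a + b*x + noise."""
    n = len(x)
    b, a = np.polyfit(x, y, 1)
    resid = y - (a + b * x)
    s2 = (resid ** 2).sum() / (n - 2)               # residual variance
    se = np.sqrt(s2 / ((x - x.mean()) ** 2).sum())  # std. error of slope
    return b, se

rng = np.random.default_rng(42)
years = np.arange(1850.0, 1900.0)
true_trend = 0.016  # deg C/yr, i.e. 0.16 deg C/decade

# Same underlying trend, sparse-era scatter (0.4 C) vs modern scatter (0.05 C).
results = {}
for sigma in (0.4, 0.05):
    y = true_trend * (years - years[0]) + rng.normal(0, sigma, len(years))
    b, se = slope_and_stderr(years, y)
    results[sigma] = se
    print(f"sigma={sigma}: trend = {b * 10:.2f} +/- {se * 10:.2f} C/decade")
```

The noisy series yields a slope uncertainty several times larger than the quiet one, which is the sense in which the early chords are only "probably positive".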

      In other words, it could have warmed faster in either the mid-1800s or the early 1900s than it warmed in the late 1900s. It isn’t even improbable. It is even odds that one or the other of these warming trends was larger than the best fit slope, and 25% of the time they would both be larger, and larger by just a bit is enough to confound the assertion that the more strongly constrained third linear trend is the largest.

      So much for “unprecedented warming” or the necessity for CO_2 forcing as an explanatory mechanism for warming at the rates in Jones’ figure above.

      The second problem is that we are left with a profound paradox in all discussions of global average surface temperature. Even NASA GISS acknowledges that we have very little idea what it is. It is often given as 288 K, but this obscures the simple fact that no two models for computing it, working from the same or largely overlapping surface data, get numbers that are within half a degree of one another! Or even a degree. The most honest way to present the number might be 288 \pm 1K. Or 287 \pm 1K. It’s hard to say, and depends on who is doing the averaging and with what model for kriging, infilling, homogenizing, and dealing with error. It is also impossible to generate a proper estimate for the probable error including all sources, because what one can estimate is only the range of values produced by the models, which is (again) a strict lower bound in any honest error estimate. Since the models tend to share data sources they are hardly independent, and yet there is a spread of more than a degree in their average. Statistics 101 — the variance of sample means drawn from overlapping populations is too small because the number of independent and identically distributed samples is smaller than the number of samples that produced the variance.
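The “Statistics 101” point above is easy to demonstrate: when two estimates of a mean share most of their input data, the spread between them is much smaller than the true sampling error of either. A toy Monte Carlo (all numbers invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
n, shared, trials = 100, 80, 20000  # each "model" averages 100 samples, 80 shared

diff_overlap = np.empty(trials)
diff_indep = np.empty(trials)
for t in range(trials):
    common = rng.standard_normal(shared)
    a = np.concatenate([common, rng.standard_normal(n - shared)]).mean()
    b = np.concatenate([common, rng.standard_normal(n - shared)]).mean()
    diff_overlap[t] = a - b
    # Fully independent estimates, for comparison:
    diff_indep[t] = rng.standard_normal(n).mean() - rng.standard_normal(n).mean()

spread_overlap = diff_overlap.std()
spread_indep = diff_indep.std()

# With 80% shared data the apparent disagreement shrinks to
# sqrt(1 - 0.8) ~ 45% of the true independent sampling spread.
print(round(spread_overlap / spread_indep, 2))
```

So the observed spread between products built from overlapping station data is, as the comment says, a strict lower bound on the honest error.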

      To fix this is enormously difficult and requires some pretty serious statistical mojo. Indeed, it would probably be simplest to fix via Monte Carlo and just plain sampling — generate a simulated smooth temperature field with the “approximately correct” surface temperature moments, pull samples at the overlapping locations and feed them into the different models, determining both the distribution of the absolute error of the models (per model) given the data compared to the precisely known average temperature, as well how that variance compares to the multimodel variance with overlapping samples. This might then provide some sort of quantitative basis for determining the actual probable absolute global average surface temperature — note well not the anomaly — as well as a probable error estimate that has a quantitative basis (subject to various assumptions, but given time we could even investigate the effect of varying those assumptions).

      In the meantime, we persist in the belief that we can measure and compute the anomaly in global average surface temperature almost an order of magnitude more precisely than we can compute the average surface temperature itself. In most systems, susceptibilities (effectively, the anomaly) are second moments and their error estimates are fourth moments of the underlying distribution — the variance of the variance, so to speak. We generally know the higher order cumulants of a distribution less accurately than we know the mean/first order. This isn’t always true, of course — sometimes what we measure is a deviation, not the absolute — but thermometers don’t measure deviations from an unknown or poorly known mean, they measure temperature, the absolute quantity in question. The argument is that there is a systematic bias in the trend of each contributing thermometer (say, we have 100 thermometers at different places, all perfectly accurate): if 40 of them show warming of 1 degree, 30 of them show no change, and 30 of them show cooling of 1 degree, then we can conclude that there has been a statistically significant systematic trend in the anomaly of (40 – 30 = 10)/100 = 0.1 C even if, when we compute the actual statistical mean and standard error of the temperatures measured by those thermometers over whatever spatial region they are sampling, the error is 1 C!
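The thermometer thought experiment above is easy to put in numbers (taking the counts as 40 warming, 30 flat, and 30 cooling so that they total 100):

```python
import numpy as np

# 100 perfectly accurate thermometers: 40 trend +1 C, 30 flat, 30 trend -1 C.
trends = np.array([1.0] * 40 + [0.0] * 30 + [-1.0] * 30)

mean_trend = trends.mean()  # the systematic signal in the anomaly: +0.1 C
scatter = trends.std()      # station-to-station scatter: about 0.8 C

print(mean_trend, round(scatter, 2))
```

The mean trend (+0.1 C) is an order of magnitude smaller than the scatter of the individual trends, which is exactly the claimed situation.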

      This isn’t impossible, of course. We can certainly imagine systems where we could reliably measure the anomaly accurately but the mean inaccurately, the simplest one being that all of the thermometers themselves were perfectly accurate, but that a demented child scribed the scales on the side so that the supposed “zero” of the all of the thermometers was randomly distributed on some wide range. Each thermometer would then precisely record deltas/displacements, but the origin of their coordinates would be a random variable. But is that a reasonable assumption for the thermometric record? It seems equally plausible (for example) that the glass bore of (say) a mercury thermometer and the actual volume of the mercury in the thermometer are random variables, but that the person who zero’d the thermometer scale was an obsessive compulsive. In that case the absolute measurement of the thermometer might be very accurate, at least when it was made at temperatures close to the reference temperature used to set the scale, but the anomaly might have a bias that might, or might not, be randomly distributed.

      This problem has hardly gone away now. Anthony has actually tested supposedly accurate electronic thermometers in personal weather station kits obtained (for example) from China and found that they experience substantial absolute error and time-dependent drift. Now and in the past, even a thermometer that was precisely made, and carefully zero’d and scaled with respect to multiple reference temperatures so that it worked perfectly the first day it was hung up in a weather station, could easily experience a systematic, and biased, drift over a decade or five of usage. Spring thermometers gradually anneal and become less springy. Liquid thermometers outgas and deform. We assume that things remain the same over long times because we can’t see them moving, but they don’t.

      Throw in biases recorded in weather station metadata, throw in all of the occult, slow biases not recoverable from any sort of metadata — a tree line that slowly grows over time, the UHI effect as a station that was initially rural finds itself in the middle of a prosperous concrete jungle, throw in unrecorded and variable idiosyncrasies of the humans who performed the measurements as they changed over the decades, and you have substantial variance not only in the absolute temperatures any given thermometer might measure, but in the trend, in the anomaly. And some of those biases might well be slow, systematic, unrecorded and virtually impossible to retroactively correct for.

      Again, we could probably learn quite a bit from simulations of the models used to compute the anomalies, by simply generating an (ensemble of) simulated smooth temperatures on the surface of a sphere with a given, known time variation that has or doesn’t have any given trend. Sample it, and add noise to the samples, both white unbiased noise and trended noise that might (for example) model the UHI on urban stations, or delta-correlated shifts that might occur when station personnel change, or trended noise that might represent various distributions of slow non-UHI environmental shifts — conversion of surrounding countryside from forest to pasture, the building of impoundments that transform small rivers into vast lakes (this has happened, for example, in the immediate vicinity of RDU airport, the source of our “official” temperature — Falls Lake and Lake Jordan are between them tens of thousands of acres and flank the airport, adding yet another confounding factor when comparing temperatures before the early 80′s to temperatures afterwards at this site). Where is that accounted for in the site metadata?

      Who even knows what sort of effect turning a mix of forest and human-occupied farmland into 60 or 70 thousand acres’ worth of reservoirs might have on the surrounding temperatures and “climate”, at the same time that the weather station itself went from being a tiny regional airport to being a hub for a large commercial carrier, at the same time the surrounding farmland turned into one giant suburban and urban mega-community? We don’t know, of course — and not even BEST can account for or correct for this — but we might, perhaps, simulate some range of the possibilities and see what they do not to the anomaly itself — per model, it is what it is — but to the best estimate for the uncertainty in the anomaly when any given model ignores a source of potential systematic bias. As (apparently) HADCRUT4 does when they do not correct for UHI at all, however eager they are to cool the past or warm the present in other ways.

      rgb

    • This kind of graph not only shows no relation to CO2 (human or “natural”), it also shows that the main driver(s) must be something cyclic. Yet it was only very recently that the PDO and AMO were identified, and we don’t appear to fully understand either.

    • When the assumptions, taken as true, that the GCMs rest on become increasingly “wrong” (sign and magnitude), their outputs become increasingly absurd and result in ever more bizarre claims. We see the results of that with each new paper trying to explain the pause.

      • Robert,
        In the 90s and for the 2000-2006 period, much of it likely looked quite on track. The big cracks appeared with the climate gate fraud exposure in 2009. But now in 2014.5 the GCM temp divergence with reality is becoming untenable, hence all the alternative ali is are coming out every week now. Most certInly there were a few bad apples in 1998 & forward who used chicanery, data manipulations and suppression of data from rivals that were contrary to their data and results in the past temps records, results they would need to build a case against Mann’s continued Carbon intensive energy sources. Those individuals should be banished by science journals editors for Life.
        In the US, democrats began to see dancing trucks loads of carbon tax dollars to spend. Enviros saw a way to de-industrialize and shutdown Big Oil, their arch enemy.

        But I get the sense that guys like Trenberth really do want to be true to science, but with so much reputation riding on AGW it’s a hard thing to finally let go of a dying baby you birthed and nurtured in good faith. But the time to let CAGW go is past; now they are just desperately clinging to AGW starting back up in 20 or so years.

  7. how is it possible that humans have contributed 110% of the warming (best guess)? are they saying that otherwise there would have been cooling?

    why did temps rise from 1910-1940 almost identical to 1970-2000? It wasn’t CO2, so what was it? Why did temps pause from 1940-1970? How is the current pause any different? If the pause lasts from 2000-2030, how is this any different than the pause from 1940-1970?

    Why did the [climate] models not see the cyclical pattern, that [your] average 6th graders would have caught? Do they not know that nature is cyclical, not linear?

  8. Having crossed swords with Schmidt some years ago on unRealClimate, I came to the conclusion not to believe anything he says. I’ve never been back there since, as it is full of pseudo-science presented by pseudo-scientists.

  9. Scientist have a different way of talking than the public. Well at least in my experience. Words don’t always mean the same thing to them as the general public. I don’t know what “Conspire” means in the context of what Gavin Schmidt is saying below. I hope it doesn’t mean they got together in a sinister way to plan what they did.

    ——————————–

    Climate models projected stronger warming over the past 15 years than has been seen in observations. Conspiring factors of errors in volcanic and solar inputs, representations of aerosols, and El Niño evolution, may explain most of the discrepancy.

    http://www.nature.com/ngeo/journal/v7/n3/full/ngeo2105.html

    Volcanoes, the sun, aerosols, & El Nino conspired to make the models wrong.

    HT/ Maksimovich From Curry’s blog.

    • Gavin is English, I think. In England, the word conspire means ‘work together’ or ‘work in tandem’. It doesn’t necessarily have a sinister meaning.

      • Partly right, BUT the whole point of ‘conspire’ is that it is a plan, and usually for nefarious means. See below – it’s all bad, dude! If Gavin is or was ‘conspiring’, be prepared for nonsense.

        *****************************

        “to agree together, especially secretly, to do something wrong, evil, or illegal:
        “They conspired to kill the king.”

        “to act or work together toward the same result or goal”.

        verb (used with object), conspired, conspiring.

        “to plot” (something wrong, evil, or illegal).

      • I am English and in the context it is used I don’t see anything sinister. He surely means merely to work together.

        Tonyb

      • I think the correct word for that would be “collaborate”: to labor or work together.
        Con-spire is to breathe together, like telling secrets…

      • I’m 57 years old and have been English all my life and I have to disagree with that. A closer meaning for conspire than ‘work together’ is ‘plot together’. It definitely does have mildly sinister connotations. If Gavin meant ‘work together’ or ‘work in tandem’, imo he would have used the word collaborate.

      • “late Middle English: from Old French conspirer, from Latin conspirare ‘agree, plot’, from con- ‘together with’ + spirare ‘breathe’.”

        Or more to the point whispering together. Definite hush hush.

        If we are talking about plotting in the open, that’s collaboration or co-operation.

    • Ok, so “volcanoes” contributed to the last 17 years of steady climate temperatures – DESPITE ever-higher CO2 levels in the atmosphere.

      If you believe that theory, show us the measured real, demonstrated decrease in atmospheric clarity – which has remained absolutely steady the past 21 years!

      Well, for two months in 2009 clarity did drop. But neither temperatures nor ice coverage changed when the atmospheric clarity DID drop that one time!

      The excuse is proved wrong. Again.

  10. If the ‘best guess’ is that 110% of warming is attributable to man, are they saying it would have got colder without our efforts? By deduction they must be confident they understand the natural variability component, which I sincerely doubt.

  11. “The climate models are not only out of phase with the long-term data, they are out of touch with reality.”

    But, importantly, they are not out of touch with funding.

  12. Ok. I feel entirely dumb. What is a 110% probability??? What is 110% of the entirety of something??? What is, say, being 110% responsible for the making of 110% of a car? I told you I feel dumb.

      • When I was young, the probability of event X was the ratio between all outcomes resulting in X and all the possible outcomes. It’s probability theory, the most basic-basic-basic of it. You can’t have a total of outcomes (of whatever) with 10% more outcomes than the total of possible outcomes. So, I’m dumbfounded. It could be colloquial usage, as in “I’m 120% sure that…” But I guess colloquial is inappropriate in the context of the discussion.

        And there my reading was completely blocked.

    • I think the 110% comes in from extrapolation of Mann’s Hockey stick they all are in love with, which up to 1900 showed gradual cooling. If you assume that the cooling would have continued without Man’s input, then the observed warming is actually less since some of Man’s warming was negated by the natural cooling that they claim should have been occurring.

    • I think he’s saying that the mode of the probability is that natural variation would have resulted in cooling, but man’s interference caused warming equal to 110% of the warming observed. I.e., if it warmed one degree, it would have cooled a tenth of a degree without man. The area under the curve is only 100%, though. That is, the percentage he’s talking about is a percentage of warming, not a probability.

    • Josualdo – Gavin means that the amount of calculated CO2 warming is 110% of the measured. But his wording “fraction of warming attributable” shows that he does not understand climate. Climate, as clearly stated by the IPCC, is a complex, coupled non-linear system. That means that there are many factors involved, they affect each other, and the results are chaotic (things sometimes happen in certain conditions, and sometimes don’t). In the real world, the amount attributable to human activity is the difference between what it would have been without the human activity and what it actually was. [The reason that it's this way round is that the non-human stuff was always going to be there. It's the human stuff that is different. If things end up exactly where they were going to be anyway, for example, then the human impact is zero, regardless of any calculations of what the human stuff does.]
      If we look at the last, say, 200 years, or the last 10,000 years, then it is pretty clear that Earth would likely have warmed up a bit (we can’t be sure, because we really don’t know how Earth’s climate behaves). That automatically puts the fraction of warming attributable to human activity at less than 100%. So how does Gavin come up with 110%? He has used linear thinking – a big no-no in a non-linear system – he has compared his calculated human effect with the measured temperature.

      • Thanks, Mike, I think I got the gist of the thing. So, no probabilities here.

        Having been interested in chaos theory, fractals and all that formerly fancy stuff, and knowing — well, at least that was the meme — that butterfly wings might affect the weather somewhere else, I find all this certainty very strange. There’s an anecdote that would almost apply, but I guess it would not survive the translation (and my telling of it.)

  13. The interesting part, Bob, is that Gavin felt a reply of this sort was needed at all. I suspect that between the pause falsifying the models by the CAGW gang’s own previously published standards (btw, your tuning argument has been made by many, including Akasofu), and all the stuff now coming out about inexplicable and, in an increasing number of cases, inexcusable homogenization (BOM Rutherglen in Australia, BEST station 166900), reality is really starting to bite hard.

  14. Perhaps we could ask Gavin to pop into a parallel universe, where man died out on Earth a few tens of thousands of years ago.

    He could then take all the measurements needed to determine exactly how much of the past 70 years’ mild warming is due to the activities of man. Even in the wacky world of ‘climate science’, this is unlikely to happen anytime soon.

    Bottom line: None of us have a clue how much of the recent mild warming has been due to the activities of man. Those who are worried about their future salary cheques argue, “A lot.” While those who are worried about beggaring the world economy for no apparent reason, argue, “Not a lot.”

  15. When are we going to throw these frauds in jail? Oh wait. If that happened, the psychopathic politicians might take a hit. Nothing to see here. Carry on.

  16. I’m having trouble interpreting figure 1. How can the fraction of global warming caused by anthropogenic causes be greater than 1?

    • I guess that begs the question of whether or not GISS’ rate of temperature ‘homogenisation’, designed to cool the recent past, will accelerate or not under Gavin’s stewardship.

      Under Hansen, better known for his antics rather than his science, ‘homogenisation’ ran wild at GISS.

  17. Gavin is an intelligent scientist. Due to the ever longer ‘pause’, he must see that the writing is on the wall, but his current position doesn’t offer him an acceptable alternative.

    • Agree. But if you have a conscience and scientific integrity, when do you get to the point where you can’t sleep at night from the lies?

  18. Even using the most adjusted data set of all, and assuming ALL of the warming in the 1900s was man made and a 110% warming rate… they still CAN’T get anywhere close to a 2C temperature rise from 2000-2100. There is NO CAGW. At 0.13C per decade, that is 1.3C per century.
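
    The arithmetic in that last sentence is a naive linear extrapolation of the quoted decadal trend, which can be sketched directly (illustrative arithmetic only, not a forecast):

```python
# Naive linear extrapolation of the quoted 0.13 deg C/decade trend
# across the 21st century (2000-2100). Purely illustrative.
trend_per_decade = 0.13   # deg C/decade, figure from the comment
decades = 10              # 2000-2100 spans ten decades
projected = trend_per_decade * decades
print(f"{projected:.1f} deg C")   # -> 1.3 deg C, well short of 2 deg C
```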

  19. Since the 110% is getting people confused, let me explain a little bit. The 110% is the theoretical AGW (the warming that–according to the IPCC and Gavin–would have occurred if there were no natural cooling influence) divided by the real warming that was actually measured. I discussed this in the guest post I mentioned earlier.

    http://judithcurry.com/2014/01/29/the-big-question/

    It’s not just counterintuitive, it also has some insane consequences. The longer the pause lasts, the more certain the AGW dominance becomes. The catch is that it’s an ever slower rate of warming, and therefore you have to expect slower warming in the future anyway. It’s pretty misleading, but I think Gavin is so steeped in this mode of thinking, he actually believes it’s the right way to calculate.

    • Remember, the research is already years old. I am sure that if the numbers were run today, the most likely percentage of AGW would now be 140%, and it would be extremely likely that more than 75% is AGW. Like you say, the longer the pause, the more certain it is that the temperature rise is AGW! If the temperature falls over the next 5 years, look out: certainty will skyrocket even higher that CAGW is coming!

    • So those 110%, -150% or 250% “probabilities” are actually something like slopes, or differences, which are added and subtracted? In that case they aren’t probabilities, of course. If I got the gist of your interesting post.

      • That’s correct. Those aren’t probabilities. The probability is the notorious 95% and the statistical method practically guarantees that it will increase.

    • Folks here are confusing effect with probability.

      Probability of an occurrence can never exceed 1.0.

      Effects can cancel out other effects, and thus individually contribute more than 100% of an observed integrated output such as the global temperature response.
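
      A quick numeric sketch of that point (all numbers hypothetical, chosen only to reproduce a 110% figure): when one contribution is negative, another contribution can exceed 100% of the observed total, and no probability is involved.

```python
# Hypothetical decomposition of observed warming (deg C) into an
# anthropogenic and a natural contribution. If the natural term is
# negative (cooling), the anthropogenic fraction of the observed
# warming exceeds 100% -- it is a fraction of an effect, not a
# probability.
observed = 0.65                       # observed warming, illustrative
natural = -0.065                      # assumed slight natural cooling
anthropogenic = observed - natural    # 0.715 deg C

fraction = anthropogenic / observed
print(f"{fraction:.0%}")              # -> 110%
```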

  20. The rate remains defiantly linear in nearly all of the oldest real thermometer records, despite both urban heating effects and the overall greenhouse effect, plus or minus feedbacks:

    The same exact thing is seen in nearly every long-running tide gauge record.

    So indeed, 110% needs to be invoked, since an unprecedented cooling spell has to have been averted for a mere trend continuation to be blamed on emissions. It’s amazing how the ghost of debunked hockey sticks lives on as a background assumption to conceal these old records, which debunk anthropogenic claims quite strongly as far as traditional scientific rigor is concerned.

  21. Yeah, been watching and commenting at JC’s site on this. Fascinating, as is Bob’s response here.

    Once again, my apologies if anyone is offended by this, but this remains in my mind like “two fleas arguing over who owns the dog they are riding on” (Crocodile Dundee).

    It begs sanity, IMHO, that we are even having this discussion at all.

    There are only 2 possibilities here. That’s it!

    1. The Holocene would just have continued blithely along, presumably forever, were it not for anthropogenic disturbances, AGW etc.

    2. The AGW hypothesis is correct which makes Ruddiman’s Early Anthropogenic Hypothesis also correct. The Holocene may well be over and we are living in the Anthropocene now. Interglacial conditions extended by AGW.

    On possibility 1, here is my detailed look at the Holocene conundrum http://wattsupwiththat.com/2012/03/16/the-end-holocene-or-how-to-make-out-like-a-madoff-climate-change-insurer/

    On possibility 2, we find ourselves faced with perhaps ending the Anthropocene by stripping the CO2/GHG “climate security blanket” from the atmosphere. If the AGW hypothesis is correct, that would leave glacial inception as the only other climate state, wouldn’t it?

    The Pretzel Logic here is simply gobsmacking!!

    You cannot be right about the “Anthropocene”, or ending it, without getting a hated tipping point, but of the opposite sign to the one expected. If CO2/GHGs are holding us in interglacial conditions, wouldn’t removing the excess tip us into the next glacial inception?

    Getting deep into the Judith/Gavin weeds is, of course, a very interesting discussion. “I suggest a new strategy, R2: let the Wookiee win!” – C3PO. Because the real fun begins if we cede that Gavin is right: the choice then is really about extending the Holocene, or removing the “climate security blanket” so we can get on with our overdue glacial inception.

    Muller and Pross (2007) provide one of the more poignant quotes in all of climate science:

    “The possible explanation as to why we are still in an interglacial relates to the early anthropogenic hypothesis of Ruddiman (2003, 2005). According to that hypothesis, the anomalous increase of CO2 and CH4 concentrations in the atmosphere as observed in mid- to late Holocene ice-cores results from anthropogenic deforestation and rice irrigation, which started in the early Neolithic at 8000 and 5000 yr BP, respectively. Ruddiman proposes that these early human greenhouse gas emissions prevented the inception of an overdue glacial that otherwise would have already started.”

    http://folk.uib.no/abo007/share/papers/eemian_and_lgi/mueller_pross07.qsr.pdf

      • OK, so let me take a stab at responding.

        “you did not get it at all” provides only an ad hominem. The comment link, on the other hand provides what I think is enough information to suggest that your point is there is no anthropogenic influence.

        Stepping out on that limb is putting forth a hypothesis. I do not disagree with your hypothesis. However, one of the key steps one takes as a scientist when thinking about proposing their hypothesis is to adopt the opposing position(s) as a means of testing the hypothesis. Standard science.

        So the opposing viewpoint to adopt, standard in science, is that there is a decisive climate impact from CO2/GHGs. And if that were correct, then we are living in the Anthropocene extension of the Holocene interglacial. So, with our standard-science adopted opposite viewpoint, we now come to: what do we do if we are right? Strip CO2/GHGs from the Anthropocene atmosphere, and where does THAT leave us?

        The only other state would be getting on with that overdue glacial inception.

        I am in no way saying you are wrong. I am saying what if you are wrong and the AGW crowd is right? Would not being right about AGW, and quelling its atmospheric presence, actually be the wrong thing to do?

    • Except under Ruddiman, the Holocene would scarcely have been an interglacial at all. The Eemian lasted 16,000 years & the MIS 11 interglacial tens of thousands. Those of MIS 7 & 9 were longer than the Holocene would have been under Ruddiman’s hypothesis.

      • Milodon, I would suggest not just taking a higher-end estimate for the length of the Eemian: that length is of course quoted by several authors, but it is by no means the consensus on the length of the Eemian. There probably isn’t one, but the range would seem to be somewhere between 10-13 kyrs, with 16 being an outlier, but not the furthest outlier. I do not have the time to dig all of this up anytime soon, but there is still disagreement as to whether Termination II was a single step, or a two-step one like Termination I. From memory, evidence for a 2-step deglaciation into the Eemian seems more likely as higher resolution studies pile up. From memory again, the 135 kyr start of the Eemian tends to be associated with the single-warming camp. The 2-step camp, from memory, counts the period from 135 kyrs to 125 kyrs as consisting of two warming events with a duration for both similar to the last deglaciation. ~115 kyrs ago is what I remember as one of the more frequent conclusions as to when the Eemian ran down. So something on the order of 10-20 kyrs, depending on who you quote and on whether the 10 kyr deglaciation interval is included in the estimate.

        I took a quick look in my Eemian folder and was rewarded with this 2008 paper http://journals.co-action.net/index.php/polar/article/download/6172/6851 Have a look at Figure 5 and you will catch my drift.

        This is not about tit for tat, because even on things which have happened, the science is not particularly well-settled. Which makes consideration of the science being settled on something which has not happened yet a bit unsettling…. :-)

      • It is, isn’t it?

        And it took no time at all to realize I was decidedly not the only one who had come up with such an argument. This simply cannot be had both ways. AGW either can have extended (and may already have extended) the Holocene, or it cannot. That’s pretty much it.

        The most thorough analysis is still Tzedakis’s landmark 2010 paper here http://www.clim-past.net/6/131/2010/cp-6-131-2010.pdf

  22. If the anthropogenic effect on global warming in modern times is more than 1% in total, I would be impressed.

  23. How could any blog generate 700+ argumentative comments to an article on the 97% consensus, with 110% attribution to humans, the ‘settled science’ of man-made global warming??? It seems highly improbable, unless the ‘science’ is ill supported. And the proponent of the ‘110% attribution’ does not respond directly to the blog article on ClimateEtc, choosing instead to fire his blunderbuss from behind the self-censored revetments of RealClimate, à la Kim Jong Un? (There is a bit of a resemblance….)

    Settled science doesn’t draw such spirited discussion. Unsettled science does, as does unsupported conjecture or willful deceit.

  24. WRONG
    the models are not tuned to the period this ONE PAPER reports for ONE model.

    Even here you get it wrong

    “Formulating and prioritizing our goals is challenging. To us, a global mean temperature in close absolute agreement with observations is of highest priority because it sets the stage for temperature-dependent processes to act. For this, we target the 1850-1880 observed global mean temperature of about 13.7◦C [Brohan et al., 2006].”

    • Finally. But you’re wrong too. One model, one paper. And you left out the most important part.

      Arguably, the most basic physical property that we expect global climate models to predict is how the global mean surface air temperature varies naturally, and responds to changes in atmospheric composition and solar insolation. We usually focus on temperature anomalies, rather than the absolute temperature that the models produce, and for many purposes this is sufficient.

      Figure 1 instead shows the absolute temperature evolution from 1850 till present in realizations of the coupled climate models obtained from the CMIP3 and CMIP5 multimodel datasets. There is considerable coherence between the model realizations and the observations; models are generally able to reproduce the observed 20th century warming of about 0.7 K, and details such as the years of cooling following the volcanic eruptions.

      Yet, the span between the coldest and the warmest model is almost 3 K, distributed equally far above and below the best observational estimates, while the majority of models are cold-biased. Although the inter-model span is only one percent relative to absolute zero, that argument fails to be reassuring. Relative to the 20th century warming the span is a factor four larger, while it is about the same as our best estimate of the climate response to a doubling of CO2, and about half the difference between the last glacial maximum and present.

      To parameterized processes that are non-linearly dependent on the absolute temperature it is a prerequisite that they be exposed to realistic temperatures for them to act as intended. Prime examples are processes involving phase transitions of water: Evaporation and precipitation depend non-linearly on temperature through the Clausius-Clapeyron relation, while snow, sea-ice, tundra and glacier melt are critical to freezing temperatures in certain regions. The models in CMIP3 were frequently criticized for not being able to capture the timing of the observed rapid Arctic sea-ice decline.

      While unlikely the only reason, provided that sea ice melt occurs at a specific absolute temperature, this model ensemble behavior seems not too surprising when the majority of models do start out too cold.

      In addition to targeting a TOA radiation balance and a global mean temperature, model tuning might strive to address additional objectives, such as a good representation of the atmospheric circulation, tropical variability or sea-ice seasonality. But in all these cases it is usually to be expected that improved performance arises not because uncertain or non-observable parameters match their intrinsic value – although this would clearly be desirable – rather that compensation among model errors is occurring. This raises the question as to whether tuning a model influences model-behavior, and places the burden on the model developers to articulate their tuning goals, as including quantities in model evaluation that were targeted by tuning is of little value. Evaluating models based on their ability to represent the TOA radiation balance usually reflects how closely the models were tuned to that particular target, rather than the models intrinsic qualities.

      These issues motivate our present contribution where we both document and reflect on the model tuning that accompanied the preparation of a new version of our model system for participation in CMIP5. As decisions were made, often in the interest of expediency, a nagging question remained unanswered: To what extent did our results depend on the decisions we had just made?

      Do you know the answer?

      It is mainly Bob’s argument that models are tuned to the period of the late 20th century, so it’s up to him to respond to your point specifically.

    • Steven Mosher, are you purposely trying to misdirect and misinform? My discussion of tuning is about trends. The paper you linked was about tuning to a specific absolute temperature for a specific point in time.

      • Correction. Because, Steven, you didn’t specify what you were quoting, I assumed you were quoting Brohan et al. 2006, which was referred to at the end of your comment. I’ve checked and discovered you were quoting Mauritsen, et al., which I referred to in the post.

        The tuning I was referring to for the period of 1976-2005, as discussed in Mauritsen, et al. follows. Let’s start with the closing discussion of Mauritsen et al. (2012). They begin:
        “Parameter tuning is the last step in the climate model development cycle, and invariably involves making sequences of choices that influence the behavior of the model.”

        Then under their discussion of “2.1. The tuning process”. There they write:

        “We tune the radiation balance with the main target to control the pre-industrial global mean temperature by balancing the TOA net longwave flux via the greenhouse effect and the TOA net shortwave flux via the albedo effect. The methodology of tuning the radiation balance may vary between model development groups, and is usually adapted to the specific goals and constraints of the exercise. After a problem has been identified in the coupled climate model, we iterate the following steps until a satisfactory solution is found:
        “1. Short runs of single months, or if possible one or more years, with prescribed observed SST’s and sea ice concentration; first with reference parameter settings, and then altered parameter settings.
        “2. A longer simulation with altered parameter settings obtained in step 1 and observed SST’s, currently 1976-2005 from the Atmospheric Model Intercomparison Project (AMIP), is compared with the observed climate.”
        “3. Implement the changes in the coupled model setup to run under pre-industrial conditions and evaluate the altered climate. Frequently, we make small parameter changes in this step to fine-tune the climate, without first revisiting steps 1 and 2.”

      • It’s hard to figure out what Mosher is complaining about now, as he never explains himself properly. I suppose one might argue that models are not consciously or intentionally “tuned”. They are based on various physical calculations and parameterisations. There are no “control knobs” inside models that can be turned one way or another to produce a desired output. What happens is that the parameterisations get shifted this way and that, which alters other processes and feedbacks, until eventually you’ve got something that mirrors the temperature record for a designated period of time. There must be various constraints on what is allowable and what isn’t. But to every sensible person, that’s still tuning – just not overt tuning, as such. With Mosher being Mosher, though, this distinction without a real distinction is important.

      • I haven’t heard one word about the proper “scientific process”. Models are designed and tuned to a particular set of past data using certain variables (the dependent sample). In this case the main variable is CO2 plus some water vapor feedbacks. To test the validity of the model, it is then applied to a new (independent sample). If the projections don’t fit the actual data, there is something wrong with the basic assumptions. This is clearly the case with climate models, which have thus been invalidated.

  25. dbstealey says:
    August 28, 2014 at 11:27 am
    “…Global warming since the LIA is composed of natural step changes. Those steps are exactly the same — whether CO2 was low, or high. Therefore, there is no “fingerprint of AGW”. It is clearly shown here in über-Warmist Dr. Phil Jones’ chart:”
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    No, the steps are not exactly the same. There clearly was ~0.4 deg C less cooling from 1950 to 1975 than from 1885 to 1910. Also, the warming cycles of 1910-1940 and 1975-2009 are respectively 10 and 14 years longer than the 20-year warming cycle of 1860-1880.

    These trend differences could possibly be considered fingerprints of Anthropogenic Global Warming (AGW) if we didn’t know that there was comparable warming to modern warming in the Roman Warm Period and the Medieval Warm Period. The warming trends back then were almost certainly not fingerprints of AGW.

  26. There is an interesting piece by Andy Revkin at the NY Times (really!) on the connections between the oceans and atmospheric temperatures. For me, the take-home quote from a climate scientist was:

    “The underlying anthropogenic warming trend, even with the zero rate of warming during the current hiatus, is 0.08 C per decade.* [That's 0.08 degrees Celsius, or 0.144 degrees Fahrenheit.] However, the flip side of this is that the anthropogenically forced trend is also 0.08 C per decade during the last two decades of the twentieth century when we backed out the positive contribution from the cycle….”

    http://dotearth.blogs.nytimes.com/2014/08/26/a-closer-look-at-turbulent-oceans-and-greenhouse-heating/?_php=true&_type=blogs&smid=tw-share&_r=0

    Warming of 0.8 C per century is not frightening.

    • Yes. But the warming since c. 1945 is no different from the warming in the early 20th century, & much less impressive than that in the early 18th century, among prior natural warming intervals.

    • I always figure it should be 1935. By 1938, industrial factories in Europe, Russia, and Japan were in high gear, burning coal and oil as fast as they could dig it out of the ground. The US joined that industrial fray in 1940. By 1943, US industrial output, and thus energy use, was up almost 300% over 1939. There was a big, bad recession in 1946-1947 as factories retooled.

      • You are assuming that four valleys of extreme industrial concentration (Germany’s Ruhr Valley; Pittsburgh, PA; the UK’s London, Thames and surrounds; and California’s LA basin) are typical of the rest of the world. Those four WERE extremely polluted, but they are a very, very small part of the whole world. And even around Pittsburgh, once you were a few miles from the steel mills and glass factories, the air cleaned up remarkably.

        Further, three of the four cleaned up between 1945 and 1950. (LA got worse until the early ’70s.) Pittsburgh was sandblasting downtown buildings to clean them as early as 1947.

    • Dr. Curry made the point, and it’s been mentioned many times: if CO2’s effect is only noticeable post-1950, then where did the 1910-1940 rise come from? None of these so-called climate scientists have explained how one is natural and the other is man-made. Only the portion of the second rise that exceeds the first can logically be attributed to man.

  27. @Bob Tisdale’s
    “The climate models are not only out of phase with the long-term data, they are out of touch with reality.”

    +10!

    “It is difficult to get a man to understand something when his job depends on not understanding it.”
    – Upton Sinclair

  28. G.E. Pease,

    The steps are exactly the same, when considering even microscopic error bars. Furthermore, there is no empirical evidence showing any ‘fingerprint of AGW’. None at all. There are no measurements of a fraction of a degree warming that could be directly attributable to human emissions. Thus, the default conclusion must be that all global warming is natural, unless shown to be otherwise. To show that would require verifiable measurements. But there are no such measurements.

    It is like someone doing an overlay of CO2 and temperature, and saying, “Look! Rising CO2 causes rising temperature!” They do that all the time. But a temporary, coincidental correlation proves nothing. And that T/CO2 relationship broke down, both before and after a short period from about 1980 to 1997.

    Global temperature has been rising at the same rate, as NikFromNYC shows above, for hundreds of years. There is no evidence at all that human CO2 emissions cause any warming. Any such AGW is mere speculation, and it would in any case be so minor that it can be completely disregarded.

    The onus is on the alarmist crowd to support their CAGW conjecture. They have failed miserably, so now their tactic is to make baseless assertions as if they were fact. They aren’t. And without real world measurements, their conjecture fails.

  29. I both agree and disagree with both Bob and Judith/Gavin, but on several different issues:

    First: Why is the year 1950 chosen as when AGW supposedly started? That just makes no sense. Please look at the data: take HadCRUT4, for example. It clearly shows several periods of increasing and decreasing temperatures, each about 30-34 years long, making for a 60+ year cycle.

    This can be easily and nicely shown with a MACD, which I’ve shown last year here:

    http://wattsupwiththat.com/2013/10/01/if-climate-data-were-a-stock-now-would-be-the-time-to-sell/

    Clearly, the year 1950 sits near the start of a 30+ year cooling trend that ran from 1945 to 1976. In other words: temperatures peaked in 1945 and bottomed in 1976. How then can there have been (AG) warming since 1950? That makes no sense. Cycle analyses will thus tell you when and where temperature trends change. One has to start “counting” from those trend changes. Otherwise you are mixing cyclical warming and cooling periods.

    Second: This also means that 1976 is a more appropriate year to look for any AGW signal. However, as I’ve shown in my MACD article, the increase in GSTA during the latest warming cycle, 1976-2007, is 0.019°C/yr, whereas that of the previous 30-year warming period, 1911-1945, was 0.014°C/yr. Hence, assuming all else equal (i.e. nature… very dangerous to do that in science, btw), the last period had a warming rate that was 0.005°C/yr (36%) higher than that of the previous warming period. So the maximum possible human influence is 36%, IMHO. Note that a) the MACD analysis finds the same years and warming rates as Bob presented, and b) since 2007 the temporal trend in GSTA is effectively 0.
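
    That 36% figure is easy to reproduce from the two quoted rates (taking them at face value, and assuming, as the comment does, that the earlier warming cycle was entirely natural):

```python
# Compare the two warming-cycle rates quoted above (deg C per year).
rate_1976_2007 = 0.019   # latest warming cycle
rate_1911_1945 = 0.014   # previous warming cycle, assumed all natural

excess = rate_1976_2007 - rate_1911_1945     # 0.005 deg C/yr
max_human_share = excess / rate_1911_1945    # upper bound on human influence
print(f"{max_human_share:.0%}")              # -> 36%
```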

    • 1950 makes “sense” because it makes the average rate of warming in the time frame smaller, so a larger part of the rate of warming can be attributed to AGW.

      • Cherry-picking one way or the other. It may have been chosen for whatever reason originally and then found to be “convenient”. Speaking of fruit, it’s also apples and oranges. The time frame is from 1950 to “the present”, which is different for each successive IPCC report.

  30. Current data is not supporting AGW theory. In addition, past climate changes, before this idea of AGW came about, were many times greater in magnitude than the slight warming which occurred last century. Again, the data (in this case, past data) does not support AGW.

    Yet they insist.

    For my money I attribute all climate changes due to natural causes and 0% to human activity.

  31. I noticed the difference between the proclaimed start of the “CO2” age in 1950 and most of the graphs only going back to the late 70s. But I never thought about it skewing their models as well. Kudos for pointing out the merely obvious to me!

  32. If, as we have been told, that there has been no warming for over 17 years, how can there be an argument on how much warming from 1950 to now can be attributed to man?

  33. The probability density function for the fraction of warming attributable to human activity….
    What is this? Counting the male-female ratio of angels on the head of a pin?

    “Twice nothing is still nothing.” – Cyrano Jones

    Forget the ratio. It is an intractable measure of a religious concept — impossible to test.
    What is the PDF for the absolute warming attributable to the growth in anthropogenic greenhouse gasses?
    What is the PDF for natural warming elements? For the past 6 decades? For the past 2 millennia?

    • Well said.

      In mathematics and logic it is easy to define entities that don’t exist (e.g. “Let X be the set of entities that don’t exist”). That is why mathematics usually requires existence theorems to prove that any such entities actually exist before trying to characterize and deduce truths about them.

      Where is the proof of existence for this pdf (without begging the question)?

  34. Honestly, I am astounded at the utter ignorance of the people involved in climate “science”. I have seen no decent theory backed up by experiment and evidence that CO2 has any net effect on the climate of the planet. In fact, I have seen published charts and graphs that suggest that CO2 has no or nearly no effect at all. And yet we have supposedly educated men and women claiming anthropogenic warming to a precise measure as if they knew Mother Nature’s contribution to the whole affair. Unbelievable ignorance, delusion, and arrogance.

    Of course, since the government-funded temperature data sets are now so corrupted as to be useless, how can we look for real causes of climate change? I notice that even this site calls the best European blog of last year a dispenser of “way out there theories”. Looks to me like the theories we have now are bunk and we need to be working on something else.

    • To paragraph 1, do not be astounded. Their academic or government careers, grant funding, and personal status all depend on it. To phrase it differently, climate science increasingly resembles the world’s oldest profession.
      To paragraph 2 part one, that is going to be an Achilles heel.
      To paragraph 2 part two, truth is not always found in popularity contests, despite the supposed ‘wisdom of crowds’. Were it so, then tulip bulbs would be more valuable than gold and present shareholders in the South Sea Company would be richer than Bill Gates. (h/t IIRC Mackay’s famous old book on the madness of crowds.)

    • It looks like she is merely pointing out the strategy of blaming the Kochs, and that it isn’t working.

    • She’s describing ‘the climate science communication paradigm’ and why it fails, not her position in the debate.

      This strategy hasn’t worked for a lot of reasons. The chief one that concerns me as a scientist is that strident advocacy and alarmism is causing the public to lose trust in scientists.

      It’s quite clear when you read the full interview.

  35. That pdf is the single worst thing I have seen in climate science. There is just no way you can create such an attribution given the unknowns.

    At least the hockey stick had some basis to it…

  36. The most damning aspect of Gavin’s argument that the cherry-picked 1976~2005 warming period is almost entirely attributable to CO2 forcing is that its warming trend is similar to that of the 1921~1943 warming period (0.14 C/decade and 0.19 C/decade respectively), and the 1921~43 warming trend can’t possibly be attributable to CO2, because even the IPCC admits CO2 levels were too low in the first half of the 20th century to have caused much warming.

    What these two warming periods do have in common is that the PDO was in its 30-yr warming cycle during both of these warming periods.

    http://www.woodfortrees.org/plot/hadcrut4gl/from:1850/to:1880/plot/hadcrut4gl/from:1850/to:1880/trend/plot/hadcrut4gl/from:1880/to:1921/plot/hadcrut4gl/from:1880/to:1921/trend/plot/hadcrut4gl/from:1921/to:1943/plot/hadcrut4gl/from:1921/to:1943/trend/plot/hadcrut4gl/from:1943/to:1977/plot/hadcrut4gl/from:1943/to:1977/trend/plot/hadcrut4gl/from:1977/to:2005/plot/hadcrut4gl/from:1977/to:2005/trend/plot/hadcrut4gl/from:2005/plot/hadcrut4gl/from:2005/trend

    The PDO entered its 30-yr cool cycle in 2005, and that’s precisely when global temp trends started falling again, despite record amounts of CO2 emissions.

    Earth’s warming and cooling cycles have followed PDO warming/cooling cycles almost perfectly for the past 164 years. Accordingly, it’s illogical to assume CO2 is the primary driving force behind global warming since 1950, because from 1950~1976 global temps were falling (PDO cool cycle in effect), and when a global warming trend started again in 1976, it coincided with the PDO entering its 30-yr warm cycle.

    The empirical evidence suggests that for the next 20 years, global temp trends should continue to fall, which will be the death knell for CAGW.
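The per-period trends quoted above (e.g. 0.14 C/decade) are ordinary least-squares slopes over the chosen years, the same quantity the woodfortrees trend lines plot. A minimal sketch of the computation, using a toy series rather than the actual HadCRUT4 data:

```python
import numpy as np

def trend_per_decade(years, anomalies):
    """Ordinary least-squares slope of anomalies vs. years, in deg C per decade."""
    slope_per_year = np.polyfit(years, anomalies, 1)[0]
    return slope_per_year * 10.0

# Toy series with a built-in 0.14 C/decade trend (illustrative only, not HadCRUT4).
yrs = np.arange(1977, 2006)
anoms = 0.014 * (yrs - 1977)
```

Pointing the same function at different year ranges reproduces the kind of segment-by-segment comparison linked above.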

    • It will happen within the first 5 years as temps fall. AGW as a science hypothesis just becomes untenable in that scenario.

  37. Is it just too obvious that the oceans act as an enormous heat sink that moderates atmospheric temperature? When the heat content of the entire atmosphere is the same as that of the top 10 meters of the ocean, and when there are 321 million cubic miles of ocean, most of which is at or below 4C, how is it surprising that there is incredible buffering capacity for temperature changes?

    • Bingo!! Ding ding ding ding!! Flashing lights. Winner!!!

      The oceans control the thermostat, as they have for a billion years once our sun matured. The stupid thought that man’s fossil fuel CO2 is the thermostat regulator is total BS.

  38. Thanks, Bob. Very good information about GCMs training; the models are specialists.
    Why would the climate modelers base their projections of global warming on the trends of a cherry-picked period with a high warming rate?
    To better scare the money out of our pockets, of course.
    But, it seems to have come back to haunt them.

  39. Shouldn’t Gavin repay American taxpayers for the time he spends blogging on the job?

    Why does the US hire foreigners like Gavin & Kevin, anyway? Aren’t there lots of lower paying yet cushy sinecures on their home islands for such drones?

  40. Hmmm… Did Gavin actually somehow calculate the probability distribution that his graphic shows, or did it come out of some dark hole of his, as it seems to me?

  41. Gavin? Gavin who?

    Isn’t that the bloke who claimed he was too important to debate a real scientist face to face?

    And now he’s trying the very old ‘Argumentum ad Verecundiam’ (argument from authority) as if Gavin has any authority or respect left in science that he could use as a basis to support his funny numbers. “Thus sayeth the lord rascal of GISS…”

    Gavin is so desperate and eager to support the case for man causing warmth through CO2 emissions that he happily tailors his graphs to a very short period of time. Not forgetting that he must cherry-pick.

    One would think that such a highly placed NASA executive would utilize all the resources, historical documents and data available to him. Which makes it very odd that Gavin instead places all of his eggs into a very shallow, tippy and flimsy basket.

    (Good point and question Nylo!)

    • Gavin just took over the directorship of GISS from James Hansen the (Venus) CO2 expert. How on Earth can he deviate from his holy party line?

  42. I have yet to see any attempt at modelling the entropy between TOA and BOA. If Trenberth’s calculations are to be believed (given the huge margins of error): if the earth as a system is retaining more energy than previously due to CO2, why would this change the mean temperature at BOA? That response would only be expected from the simplest of systems, not one in which H2O dominates and exists in all three states simultaneously. If you haven’t determined all possible arrangements of energy in the system, why on earth would you look for a signal in average kinetic energy at BOA? And why would you then manipulate the data and claim you have a signal in data where only the most obtuse would expect to see one?

    • I think that is rather Trenberth’s point: the ocean is in fact retaining a lot of the supposed surplus energy, buffering the surface temperature. However, the effect is enormously marginal. We are talking hundredths of a degree here, as the ocean is huge and has a gazillion times the heat capacity of the atmosphere. One can, and probably should, question whether there is any way in hell we can resolve this sort of tiny temperature variation from statistical noise, natural drift, instrumental error, and the substantial errors that result from undersampling the ocean to depth by a half-dozen orders of magnitude compared to what might reasonably be expected to yield that sort of precision, given that the ACTUAL record doesn’t even use homogeneous instrumentation and is even now derived from only a few thousand samples.

      But if true, it is great news! The ocean could eat the heat for a century and not change temperature by a half degree even in the top kilometer only. And by then, who knows where we’ll be?

      rgb
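A back-of-envelope check of that “half degree in the top kilometer” figure, with my own rounded numbers (the ~0.5 W/m² imbalance is an assumed value for illustration, not a figure from the thread):

```python
# All constants are rounded, order-of-magnitude values (my assumptions).
SECONDS_PER_CENTURY = 100 * 365.25 * 24 * 3600   # ~3.16e9 s
EARTH_SURFACE_M2 = 5.1e14                        # total planetary surface area
OCEAN_AREA_M2 = 3.6e14                           # ~70% of the surface
IMBALANCE_W_M2 = 0.5                             # assumed TOA energy imbalance

# Total energy retained over a century if the imbalance persisted unchanged.
joules = IMBALANCE_W_M2 * EARTH_SURFACE_M2 * SECONDS_PER_CENTURY

# Heat capacity of the top kilometer of ocean
# (density ~1025 kg/m^3, specific heat ~3990 J/(kg K)).
top_km_mass_kg = OCEAN_AREA_M2 * 1000 * 1025
delta_t = joules / (top_km_mass_kg * 3990)       # roughly half a degree
```

The result lands near 0.5 K per century, consistent with the point above about the ocean's buffering capacity.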

  43. The period of the AMO is about 60 years. If you use this period in a trend analysis, the oscillation due to the AMO will cancel out, and the trend reflects the greenhouse effect. But the temperature rise due to the greenhouse effect is basically nonlinear in time. Therefore a linear trend analysis will run into error if the period used for the trend calculation is too long. Better, you should make a least-squares fit to the data with a fitting function TA(t) = c0 + c1*t + c2*t^2 + c3*sin(wt+phi).

    • I have made this analysis now. My results for GISS LOTI data are listed in the following table. Temperatures are anomalies relative to 1900 (reference interval 1886-1915) in °C. Data (D) and forecasts (F) are annual means. 5 different assumptions are made: (1) no temperature change, (2) 30 yr linear trend, (3) 60 yr linear trend, (4) TA(t) = c0+c1*t+c2*t^2, and (5) TA(t) = c0+c1*t+c2*t^2+c3*sin(wt+phi), where ci, w and phi are adjustable parameters.

                 | (1) | (2) | (3) | (4) | (5)
      F Jul 2074 | 0.9 | 1.9 | 1.7 | 2.1 | 1.9
      D Jul 2014 | 0.9 | 0.9 | 0.9 | 0.9 | 0.9
      F Jul 2014 | 0.2 | 0.6 | 0.6 | 2.0 | 0.4

      I have checked the fitting procedure by using data only until July 1954 for a 60 yr forecast to 2014. As you see in the third line of the table, the best agreement between data and forecast is obtained for the linear trend forecasts. The reason for this is that before 1954 the anomalous temperature rise was small; therefore the fitting functions (4) and (5) were not a good description of reality.
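For what it’s worth, a fit of the form TA(t) = c0 + c1*t + c2*t^2 + c3*sin(wt+phi) can be sketched with scipy’s curve_fit. The series below is a synthetic stand-in for the GISS LOTI data (my own numbers), and the initial guess for w (a ~60-year period, per the AMO) matters, because the model is nonlinear in w:

```python
import numpy as np
from scipy.optimize import curve_fit

def ta(t, c0, c1, c2, c3, w, phi):
    """Quadratic background plus one sinusoidal (AMO-like) term."""
    return c0 + c1 * t + c2 * t**2 + c3 * np.sin(w * t + phi)

# Synthetic anomaly series (illustrative, not the real GISS LOTI data):
# quadratic rise plus a 60-year oscillation plus noise.
rng = np.random.default_rng(0)
t = np.arange(128.0)  # years since 1886, say
true_params = (0.0, 0.004, 4e-5, 0.12, 2 * np.pi / 60, 0.5)
y = ta(t, *true_params) + rng.normal(0.0, 0.02, t.size)

# Nonlinear least squares; start w near the expected ~60-yr period.
p0 = (0.0, 0.005, 0.0, 0.1, 2 * np.pi / 60, 0.0)
popt, _ = curve_fit(ta, t, y, p0=p0)
fitted_period = 2 * np.pi / abs(popt[4])  # should come back near 60 years
```

With a poor starting value for w the fit can land in the wrong local minimum, which is one practical caveat to the approach described above.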

  44. 110% certainty = 100% certainty that 10% of the warming MIGHT have been caused by human activity.

  45. All data before 1950 is suspect, for the following reasons:
    a) no re-calibration done on thermometers
    b) human observations (usually 4 x day) versus automatic recording every second
    c) after 1970 we changed from thermometers to thermocouples

    etc.

    It is better to stick with data only from the 1970s

    http://blogs.24.com/henryp/2013/02/21/henrys-pool-tables-on-global-warmingcooling/

    and to make your own conclusions from those…

  46. Goddard Institute for Space Studies

    Gavin and every other climate modeler who spends the taxpayer funds that should go towards space study instead on thermometers on the surface of the earth needs to be fired for misusing taxpayer funds, and GISS needs to be put back to its true purpose, space.

    Gavin is a public leech who misuses taxpayer funds and the reputation of a public institution for his own pet issue. He and the rest of the climate modelers at GISS should have been fired a long, long time ago.

  47. I raised the case of the CET summer/winter temperature trends dichotomy on the RealClimate blog, but no viable CO2 explanation was offered; instead it was dismissed as a regional coincidence.
    The CET instrumental record goes back to the nadir of the Little Ice Age.
    It shows that summer temperatures have stayed almost constant during the last 350 years, or, to be more accurate, they rose from 15.1C to 15.45C, less than 0.1C/century.
    Meanwhile, the winter temperatures have risen from 3.05C to 4.35C, or nearly 0.4C/century.

    It is more than clear that these changes have nothing to do with CO2, and that what Gavin Schmidt and his sorry band of cataclysmates are advocating is nonsense.

    • I am just speculating, but I wonder whether the reason for the rise of winter temps in densely populated areas [where the meters are] has to do with [1] the quick removal of snow, which would normally deflect light (energy) if we had not interfered with it. In addition there might also be [2] a more noticeable UHI effect in winter. In winter we have inversion here (South Africa, inland), i.e. warm [smoky] layers trapped underneath a cold layer when there is no wind. I don’t know if that could be [3] a factor in Europe as well. I doubt it, because it seems to me that in Europe there is always wind….

      • Hi Henry
        Met office does adjustments for the UHI factor (TonyB has looked into this).
        It is more complex than just UHI, I think is to do with the specifics of the N. Atlantic as I outline here:
        Ocean heat transfer of a major consequence is permanently active in the far North Atlantic, to the south west of Iceland (mainly in the winter months) and to the Iceland’s north (throughout the year).

        http://www.grida.no/climate/ipcc_tar/slides/04.18.htm

        Fact:
        Cold Arctic winds remove the surface heat at rates of several hundred watts per square meter (W/m2)
        Assumed:
        There is a twofold effect of this phenomenon:

        1 – a rising plume of warm air affects the meandering of the polar jet stream, causing short-term temperature (weather) variation across the N. Hemisphere temperate region.
        2 – wind-cooled saline surface waters sink to depths of 1-2000m. This deep water convection is the engine of the oceanic thermohaline conveyor circulation. Changes here have a long-term effect on the strength of the northward horizontal flow of the Atlantic’s upper warm layer, thereby altering the oceanic poleward heat transport and the distribution of sea surface temperature (SST – AMO), the presumed source of the (climate) natural variability.

        a – The intensity of the summers’ variability is of lesser effect, mostly due to the near-constant insolation (TSI) across the decades or even centuries, which overwhelms any major variability in the external forcing.
        b – The extent of the winters’ variability is far greater due to the absence of the solar suppressing factor, with the external forcing having the full effect.
        (on the external forcing at another occasion)
        This summer / winter dichotomy in the N. Hemisphere’s temperature variability is clearly shown in the CET’s 350 year long instrumental record.

        The effect of CO2, before or since the 1950s, if any, is most likely minor.
        But I doubt that anyone with a fixed agenda would take any notice of it.

      • Thx for your comment
        I checked here in South Africa [inland] where we have this inversion,
        trapping smoke [containing large % CO2]
        whether winters were getting warmer, due to this….
        No such luck, here…
        It has to be the snow, UHI, or some GH effect
        [note that central England is getting warmer during a global cooling period, due to GH effect]
        that is causing warming winters in Europe

      • Hi Mark
        No, other than what I have written above (it is my hypothesis). When I posted the ‘no summer trend in the CET’ at Gavin’s RealClimate, apparently no one was aware of the case. Two well-known climate ‘scientists’, Daniel Bailey of Skeptical Science and Grant Foster (Tamino), went as far as to accuse me of fabricating data.
        Gavin had to go and look up the CET and put them right; not that I am much welcomed there since.
        So much for the expertise of the ‘climate science’ on whose advice many governments bring in laws and regulations.

  48. Let us say that I construct models that use methane emissions from animals’ posteriors, orange juice consumption in America, working hours in Ireland, and other parameters I carefully select to precisely match the temperature record from 1950-2000. I now state that I can prove with near 100% certainty that methane emissions from the posteriors of land animals are nearly 100% the cause of the temperature variation during the 1950-2000 period. I can prove this because I can show that the other factors in my model aren’t contributing. Yes, I had to homogenize data and do some renormalization, a little parameterization. Given some millions of dollars, I am certain any half-decent set of mathematicians could construct such a thing. It’s called curve fitting, and it’s been done millions of times.

    All Gavin seems to be saying is that he has models that were constructed to show that between 1950-2000 100% of the temperature increase was because of CO2. Therefore because he constructed models to show that it must be so.

    He then depends 100% on the legitimacy of the models and the predictions they make outside the 1950-2000 period because his models were fitted to this data and precisely engineered so that 100% of the temperature change can be attributed to Co2. Saying that he can prove that 100% of the temperature change is because of CO2 is trivial because he baked that into the models.

    The only thing that matters then is: Do the models work after 2000? If they do he may have a leg to stand on. It would still depend on the accuracy of his models but he would have a leg. Since the models fail to correspond to new data since 2000 it means the models are disproven. Pointing to how good the models work during the period he fit them to is irrelevant because as I pointed out above I could have constructed models based on animal methane emissions. I am simply regurgitating the fact I constructed a model that fit the curve. It has no legitimacy if it doesn’t match NEW data. The only new data we have contradicts the models therefore it is impossible to conclude anything using models which don’t seem to work or any of the assumptions in those models about forcings or other things.

    This is just basic science, folks. It would be just as true if Einstein’s general theory of relativity had predicted Mercury’s perihelion to be much different than it was. There is the issue of the accuracy of the models, but this is a two-sided coin. If they say the models are consistent with the 17-year history, and have error bars which put this well within the probability of the models, then they would also be saying that the temperature in the future could be virtually anything, and they would have no basis for claiming their models are able to do anything or prove anything, let alone how much CO2 contributed to warming from 1975-1998.
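The curve-fitting point is easy to demonstrate: a model with enough tuned parameters can match its training window closely and still have no skill beyond it. A toy illustration (my own, obviously not Gavin’s actual models), fitting a 12th-order polynomial to a noisy synthetic series:

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1950, 2015)
# Toy "temperature" series: mild linear trend plus noise (not real data).
temps = 0.01 * (years - 1950) + rng.normal(0.0, 0.1, years.size)

train = years <= 2000
x = (years - 1950) / 50.0   # rescale to keep the fit reasonably conditioned

# A 12th-order polynomial "explains" 1950-2000 very well...
coef = np.polyfit(x[train], temps[train], 12)
fit = np.polyval(coef, x)

in_sample_err = np.abs(fit[train] - temps[train]).mean()
out_sample_err = np.abs(fit[~train] - temps[~train]).mean()
# ...but extrapolates poorly: the post-2000 error is far larger.
```

The in-sample match proves nothing about the fitted curve; only the out-of-sample comparison does, which is the commenter's point about models tuned to 1950-2000.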

  49. I haven’t read the newest post from Gavin, but I read the earlier ones carefully when working on my Big Question essay. And there is clearly an attempt to discount natural variability through some rather vague, convoluted and indirect logic: “The final issue is whether the internal variability of the system on multi-decadal timescales has been properly characterised. For instance, it is possible that all the models grossly underestimate the internal variability, in which case any expected trend due to GHGs would be drowned out in the noise. But there is no positive evidence for this at all – as Hegerl et al point out, the estimates of multi-decadal variability in the models and observational records all overlap within their (substantial) uncertainties (arising from the shortness of the record, and the difficulty in estimating internal variability in the presence of multiple forcings).

    So while it is conceivable be that there is a bias, it is currently undetectable, which implies it can’t be that large.”

    http://www.realclimate.org/index.php/archives/2012/01/the-ar4-attribution-statement/#sthash.VQ8qhWiN.dpuf

    To me the last sentence looks a lot like: “I turned my back on the grizzly bear. Since I can’t see it, it’s currently undetectable, which implies it can’t be that large.”

  50. This is crazy. Look, if the thing you are measuring is so small that you end up in an argument as to whether or not it exists, then it is clearly, blatantly obvious that the thing is too small to worry about.
