A Note on the 50-50 Attribution Argument between Judith Curry and Gavin Schmidt

Guest essay by Bob Tisdale | Judith Curry and Gavin Schmidt are arguing once again about how much of the global warming we’ve experienced since 1950 is attributable to human activity.  Judith’s argument was presented in her post The 50-50 argument at ClimateEtc (where this morning there were more than 700 comments…wow…so that thread may take a few moments to download).  Gavin’s response can be found at the RealClimate post IPCC attribution statements redux: A response to Judith Curry.

Gavin’s first illustration is described by the caption:

The probability density function for the fraction of warming attributable to human activity (derived from Fig. 10.5 in IPCC AR5). The bulk of the probability is far to the right of the “50%” line, and the peak is around 110%.

I’ve included Gavin’s illustration as my Figure 1.

Figure 1 (RealClimate attribution)

So the discussion is about the warming rate of global surface temperature anomalies since 1950. Figure 2 presents the global GISS Land-Ocean Temperature Index data for the period of 1950 to 2013. I’m using the GISS data because Gavin has recently been promoted to head of GISS. (BTW, congrats, Gavin.)  As illustrated, the global warming rate from 1950 to 2013 is 0.12 deg C/decade, according to the GISS data.

Figure 2
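For anyone who wants to check those trend numbers, here is a minimal sketch of the calculation, assuming the monthly GISS LOTI anomalies have been saved as a simple two-column file of decimal year and anomaly (the file name and layout are placeholders, not the actual GISS download format):

```python
import numpy as np

def decadal_trend(years, anomalies, start, end):
    """Least-squares linear trend in deg C per decade over [start, end]."""
    mask = (years >= start) & (years < end + 1)
    slope_per_year = np.polyfit(years[mask], anomalies[mask], 1)[0]
    return slope_per_year * 10.0

# Hypothetical two-column file: decimal year, temperature anomaly (deg C)
years, anomalies = np.loadtxt("giss_loti_monthly.csv", delimiter=",", unpack=True)

print(f"1950-2013: {decadal_trend(years, anomalies, 1950, 2013):+.2f} deg C/decade")
print(f"1976-2005: {decadal_trend(years, anomalies, 1976, 2005):+.2f} deg C/decade")
```

Run against the real data, the first number should come out near the +0.12 deg C/decade shown in Figure 2, and the second near the +0.19 deg C/decade for the model-tuning period discussed below.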

For this discussion, let’s overlook the two hiatus periods between 1950 and 2013…whether they were caused by aerosols or by naturally occurring multidecadal variations in known coupled ocean-atmosphere processes, such as the Atlantic Multidecadal Oscillation (AMO) and the dominance of El Niño or La Niña events (ENSO).  Let’s also overlook for this discussion any arguments about how much of the warming from the mid-1970s to the turn of the century was caused by manmade greenhouse gases or by the naturally occurring multidecadal variations in the AMO and ENSO.

Bottom line, according to Gavin:

The bottom line is that multiple studies indicate with very strong confidence that human activity is the dominant component in the warming of the last 50 to 60 years, and that our best estimates are that pretty much all of the rise is anthropogenic.

Or in other words, all the warming of global surfaces from 1950 to 2013 is caused by anthropogenic sources.  Curiously, that’s only a warming rate of +0.12 deg C/decade. He’s not saying that all of the warming, at a higher rate, from the mid-1970s to the turn of the century is anthropogenic.  His focus is the period starting in 1950 with the lower warming rate.

HOWEVER

Climate models are not tuned to the period starting in 1950.  They are tuned to a cherry-picked period with a much higher warming rate…the period of 1976-2005 according to Mauritsen, et al. (2012) Tuning the Climate of a Global Model [paywalled].  A preprint edition is here.  As shown in Figure 3, the period of 1976 to 2005 has a much higher warming rate, about +0.19 deg C/decade. And that’s the starting trend for the long-term projections, not the lower, longer-term trend.

Figure 3

And that’s why climate model warming rates appear to go off on a tangent when compared to the observed warming rate for the period of 1950 to 2013, the period for which, according to Gavin, “our best estimates are that pretty much all of the rise is anthropogenic”.  The modelers have started their projections from a cherry-picked period with a high warming rate.

Figure 4 shows the warming rates for the multi-model ensemble-member means of the CMIP5-archived models using the RCP6.0 and RCP8.5 scenarios for the period of 2001-2030.  RCP6.0 basically has the same warming rate as the observations from 1976-2005, which is the model tuning period, but that’s much higher than the warming rate from 1950-2013.  And the trend of the business-as-usual RCP8.5 scenario seems to be skyrocketing off with no basis in reality.

Figure 4
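To make the model-versus-observations comparison behind Figure 4 concrete, here is a rough sketch of how one might compute the trend of a multi-model ensemble mean. The arrays below are synthetic stand-ins with invented trends, purely for the demo; in practice you would load the RCP6.0 and RCP8.5 ensemble members exported from the KNMI Climate Explorer.

```python
import numpy as np

def ensemble_mean_trend(years, runs, start, end):
    """Trend (deg C/decade) of the multi-model mean of `runs`,
    where `runs` has shape (n_models, n_times)."""
    mean_series = runs.mean(axis=0)               # average the ensemble members first
    mask = (years >= start) & (years < end + 1)
    return np.polyfit(years[mask], mean_series[mask], 1)[0] * 10.0

# Synthetic stand-ins (invented trends, 20 fake members each) -- replace with KNMI exports
years = np.arange(2001, 2031) + 0.5
rng = np.random.default_rng(0)
rcp60 = rng.normal(0.019 * (years - 2001), 0.1, (20, years.size))
rcp85 = rng.normal(0.030 * (years - 2001), 0.1, (20, years.size))

for name, runs in [("RCP6.0", rcp60), ("RCP8.5", rcp85)]:
    print(f"{name} ensemble-mean trend 2001-2030: "
          f"{ensemble_mean_trend(years, runs, 2001, 2030):+.2f} deg C/decade")
```

Averaging the members first and then fitting the trend matches the ensemble-member-mean framing used above; fitting each member and averaging the slopes gives essentially the same number but a different view of the spread.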

And in Figure 5, the modeled warming rates for the same scenarios are shown through 2100.

Figure 5

CLOSING

I’ve asked a similar question before:  Why would the climate modelers base their projections of global warming on the trends of a cherry-picked period with a high warming rate?  The models being out of phase with the longer-term trends exaggerates the doom-and-gloom scenarios, of course.

But we purposely overlooked a couple of things in this post…that there are, in fact, naturally occurring ocean-atmosphere processes that contributed to the warming from the mid-1970s to the turn of the century—ENSO and the AMO.  The climate models are not only out of phase with the long-term data, they are out of touch with reality.

SOURCES

The GISS Land-Ocean Temperature Index data are available here, and the CMIP5 climate model outputs are available through the KNMI Climate Explorer, specifically the Monthly CMIP5 scenario runs webpage.

177 Comments
Latitude
August 28, 2014 11:06 am

correction: As illustrated, the global warming rate from 1950 to 2013 is 0.02 deg C/decade, according to the GISS raw data.
according to Gavin:
correction: The bottom line is that multiple studies indicate with very strong confidence that human activity is the dominant component in the warming of the last 50 to 60 years, and that our best estimates are that pretty much all of the rise is adjustments and algorithms..

ossqss
Reply to  Latitude
August 28, 2014 12:03 pm

Bingo!

Dougmanxx
Reply to  Latitude
August 28, 2014 1:10 pm

You beat me to it. “Man made” indeed!

James the Elder
Reply to  Latitude
August 28, 2014 7:03 pm

“Estimates” and “pretty much”. GIGO

August 28, 2014 11:10 am

correction
my results show there is no man made global warming whatsoever
namely this would affect minimum temperatures,
but this is going down naturally, 100%
http://blogs.24.com/henryp/files/2013/02/henryspooltableNEWc.pdf
(last graph, on the bottom of the last table)

Kelvin Vaughan
Reply to  moreCarbonOK[&theWeatherisalwaysGood]HenryP
August 28, 2014 11:51 am

If none of the warming is man made then the warming is definitely man made..

Reply to  Kelvin Vaughan
August 28, 2014 12:16 pm

Kelvin I don’t follow
if there were man made warming you should see chaos
but the relationship of the speed of warming versus time (deceleration) is going down 100% natural

KevinM
Reply to  Kelvin Vaughan
August 28, 2014 12:30 pm

moreCarbonOK[&theWeatherisalwaysGood]HenryP:
He left off a smiley face. He means IF the warming in the world isn’t man made (raw data) THEN the warming in the charts is man made (presented data).

Owen in GA
Reply to  Kelvin Vaughan
August 28, 2014 12:30 pm

HenryP:
I think he means man made in the laboratory with a computer rather than man made due to economic activity.

August 28, 2014 11:13 am

The more the alarmists claim a huge CO2 effect during the warming periods, the more impossible their task of explaining the whyfor of the pause, when CO2 increases have not paused. Their cause and effect have become totally disjointed.

milodonharlani
Reply to  Col Mosby
August 28, 2014 11:18 am

Besides the current plateau, they also have to explain the cooling from c. 1944 to 1976 under rising CO2, & the rising temperature during the 1920s to ’40s on falling CO2.
Temperature accidentally happened to rise during c. 1977-96 during climbing CO2 because of the switch to the warm phase PDO in 1977.

latecommer2014
Reply to  milodonharlani
August 28, 2014 11:29 am

Once again correlation is not causation.

milodonharlani
Reply to  milodonharlani
August 28, 2014 11:49 am

Between CO2 & temperature there isn’t even good correlation, let alone causation.
On longer time scales, there is correlation & causation between rising T & CO2, but T is the cause & CO2 the effect.

Chris4692
Reply to  milodonharlani
August 28, 2014 12:16 pm

Latecommer: Correlation does not prove causation, but if there is causation there will be correlation.

Robert of Ottawa
Reply to  milodonharlani
August 28, 2014 3:11 pm

Being true believing Warmistas, they turned the clock speed down on their super-dupe computers, hence reducing the rate of warming. It’s that simple!

Reply to  milodonharlani
August 29, 2014 5:05 am

Chris4692
August 28, 2014 at 12:16 pm
Latecommer: Correlation does not prove causation, but if there is causation there will be correlation.

… except when there is no correlation.
If, for the sake of argument, we accept that rising CO2 will cause rising global warming, then the currently rising CO2 levels should be causing rising temperatures. But currently there is no observable correlation between rising CO2 and global temps.
There is no compelling, simple explanation for this lack of correlation (assuming AGW). In fact, there are at least 37 explanations for it, none of them compelling enough to displace the others.

Mark
Reply to  milodonharlani
August 29, 2014 11:30 pm

It only appears to be in fairly recent times that there is any correlation between CO2 & temperature, too. And even that looks more like B causes A (or possibly C causes A and B).

joe
Reply to  Col Mosby
August 29, 2014 8:14 am

Col Mosby: “The more the alarmists claim a huge CO2 effect during the warming periods, the more impossible their task of explaining the whyfor of the pause, when CO2 increases have not paused. Their cause and effect have become totally disjointed”
Au contraire – Dr. Mann has a recent study which shows the cooling phase is due to the AMO/PDO while none of the warming is due to the flip side of the AMO/PDO cycles. (I may be oversimplifying his conclusion, though that is the general gist of his study.)
The irony is that many of the skeptics have pointed out that the AMO/PDO cycles partly explain both the warming phase and the cooling phase and are the most likely cause of the current pause (skeptics have proffered this explanation since the late 1990’s), yet the high priest of climate science has only recently acknowledged 1/2 of the cycle. (Mann obviously knows more science than us mere mortals.)
Another irony, is that Mark Steyn pointed this out circa 2009.

george e. smith
August 28, 2014 11:16 am

Fig 1 looks smack dab right in the center to me. What is this “far to the right” mumbo jumbo?? Couldn’t be more ho hum, big deal, if I had plotted it myself.
You need new spectacles, Gavin; and no, I didn’t mean for you to go to Burning Man, I meant, get some new glasses!

Greg
August 28, 2014 11:18 am

Gav says: “The bottom line is that multiple studies indicate with very strong confidence that human activity is the dominant component in the warming of the last 50 to 60 years, and that our best estimates are that pretty much all of the rise is anthropogenic.”
This is not consistent. “Dominant” just means largest; it does not mean more than everything else added together.
If the issue is split into many parts, like human, volcanic, ENSO, solar… the “dominant” factor could actually be quite a small percentage, much less than half.
The previous AR4 “majority of warming” is not the same thing as AR5 “dominant cause” of warming.
This is a climbdown in the IPCC position that seems to have gone mainly unnoticed.
Quite where Gav gets his “best estimates are that pretty much all of” I don’t know, but it’s a far more extreme claim than either AR4 or AR5.

latecommer2014
August 28, 2014 11:26 am

It appears everyone is using adjusted temperatures, so the error bar should be as large as the adjustment. I do not believe in land-based, corrupted temp records, and hold that any forcing caused by man is automatically absorbed and compensated for by nature. That is why we do not have “runaway climate”. There is no proof man can override natural climate processes for any extended period.

Mark
Reply to  latecommer2014
August 29, 2014 11:37 pm

Shouldn’t the error bars be somewhat larger than the adjustment?
Since you still need to factor in the accuracy and precision of the original readings. Together with that applicable to any “data processing” involved.

August 28, 2014 11:27 am

Gavin Schmidt is prevaricating as usual. Global warming since the LIA is composed of natural step changes. Those steps are exactly the same — whether CO2 was low, or high. Therefore, there is no “fingerprint of AGW”. It is clearly shown here in über-Warmist Dr. Phil Jones’ chart:
http://jonova.s3.amazonaws.com/graphs/hadley/Hadley-global-temps-1850-2010-web.jpg

Reply to  dbstealey
August 28, 2014 12:00 pm

Very good point.
I wonder if SkS will keep pushing their escalator line if that is pointed out.
Not sure Dr. Phil Jones is an über-Warmist though. He always struck me as more wrong than wronging.

lee
Reply to  M Courtney
August 28, 2014 8:55 pm

Someone used that graph the other day on me. I pointed out Trenberth’s ‘Has Global Warming Stalled?’ ‘big jumps’ as noted by Bob Tisdale. I suggested SKS sometimes got it right for the wrong reasons. I never got a response.
http://wattsupwiththat.com/2013/06/04/open-letter-to-the-royal-meteorological-society-regarding-dr-trenberths-article-has-global-warming-stalled/

FrankKarr
Reply to  dbstealey
August 28, 2014 2:42 pm

Mr Schmidt should examine the graph above closely to see the overall warming from 1850 to 2013. It’s about 0.9 degrees C over that long-term period, which works out to about 0.55 deg C per CENTURY. Peanuts. The biggest fraud in history over a few tenths of a degree.

rgbatduke
Reply to  dbstealey
August 29, 2014 7:50 am

Yeah, db, a point that I, and Lindzen, and many others have tried to emphasize in discussion. You can actually take HADCRUT4 from the first half of the 20th century and the second half of the 20th century and put them side by side on similar scales but with the time scale hidden and ask which graph occurred with the help of anthropogenic CO_2? Not so easy to tell, unless you are aware of the individual features such as the terminating super-ENSO in the late 20th century. I sometimes think that the last round of tampering in the GISS anomaly was designed as much to erase this similarity and as much of the pause as possible without quite making it laughable compared to LTT. But that game is up — there will be no more adjustments of GISS or HADCRUT4 to further warm the present as they are now UAH/RSS constrained.
That hasn’t stopped them from trying to further cool the past, and now newcomers are appearing that re-krige and infill and homogenize areas that “haven’t shown enough warming” because they are less constrained by LTT; this further obfuscates if nothing else. HADCRUT4 — and earlier versions of HADCRUT even more — clearly give the lie to the assertion of “unprecedented” warming, though, in precisely this graph (which anybody can make, BTW, at least piecewise on woodfortrees).
However, even this graph omits the display of or discussion of two critical problems with assertions of warming or cooling or plain old knowledge of temperature.
The first and most glaring omission is the absence of any error bar or estimate on the data. This is insane! In what other field of human endeavor are so many data-derived graphs shown to so many people utterly devoid of error estimates? Note the obvious impact of error visible in the Jones curve. Does Jones, or anyone else, really think that the global average surface temperature anomaly was 10 times more volatile in the 1800’s, with the planet warming by 0.6C over as little as a year and then plunging down into 0.6C of cooling relative to some ill-defined mean in a year more? Because that’s what the error-bar free data shows.
Of course not! What the graph is showing is the impact of the sparseness of the record in the 19th century. With order of 10x as much variance, there is order of 100x less data contributing in the 19th century compared to the present. In the 19th century most of the Earth’s surface area was completely unsampled (I mean “most” literally — 70% of the surface that is ocean, the bulk of at least 3 or 4 continents were either terra incognita altogether, e.g. Antarctica, or barely penetrated by a thermometer — if you will excuse the image — and consider the Amazon, central Africa, much of Siberia and central Asia, Tibet, even much of the U.S.). The parts that were sampled were obviously quite volatile — one imagines that the bulk of what is producing these large variations were things like heat waves in Europe.
The variance quiets quite a lot when the colonial gold rush really gets underway in the 1880s and colonials carry thermometers with them to their newly annexed territories. The ocean remained a problem then, and remains a huge problem now with ARGO pitifully undersampling 70% of the Earth’s surface even today, and that in a highly biased fashion with buoys that float with thermohaline currents or are trapped in eddies (both unlikely to reflect their surrounding environment adequately) rather than be distributed according to a simple random number generator in Monte Carlo style (which would have a computable statistical error instead of an unknown bias). There is a surprising amount of variance for a global temperature anomaly today, but at least between the thermometric record and the LTT satellite record, we can think about resolving features of the presumably much less volatile actual anomaly from the statistical noise, by comparing the various “modelled” average temperatures. The error is almost certainly larger than the difference between, say, GISS and HADCRUT4 or Cowtan and Way, and at present these numbers are easily 0.2C or thereabouts apart much of the time.
HADCRUT4 acknowledges — IIRC — 0.15C of error in the present. I think this is an underestimate but let’s go with it, as the existence of the number, we hope, means that they actually computed it rather than pulling it out of their nether regions, as were the error estimates on graphs in the leaked early AR5 draft (figure 1.4?), which were obviously created by a graphical artist and not by anything like an algorithm. The scaling of the variance then suggests that the error estimate in the mid-1880s ought to be a whopping 1.5C — the eye suggests that a more modest 0.4 C error bar might encapsulate 60% of the data such as it is, but that is really the error for the sampled territories only and is a lower bound on the error estimate for global temperature. I’d suggest that 0.7C is a compromise — one can probably find proxies (with their own error and resolution problems) that constrain the error to be less than 1 C. This statistical — not systematic — error would then systematically, but slowly, shrink from then to now. It wouldn’t really be linear — as I said, there is a relatively rapid diminishment in the late 19th century followed by a slower decrease into the late 20th, but it is likely fair to say that it is at least 0.3 to 0.4C for most of the record prior to the satellite era and ARGO, as only these have made it possible to push it down to the ballpark of 0.2C.
If one includes the error estimate on the graphs, our certainty of any particular thermal history substantially diminishes. Maybe it warmed since the mid-1800’s. Maybe it has cooled. Maybe it warmed a lot more. Maybe the single 20 year period in the late 20th century when warming occurred has the steepest slope in the thermometric record, or — most importantly — maybe it does not! That’s the big statistical lie even in Jones’ relatively honest portrayal of the HADCRUT4 trends above. If one actually fit the data, with errors, and used e.g. a measure like Pearson’s χ² to estimate the robustness of the linear trend, how likely it is that the slope is actually much larger or smaller than the simple regression fit, I promise that in the leftmost chord of the data we have almost no friggin’ idea what the linear temperature trend really was beyond “probably positive” (that is, maybe it is 0.16 ± 0.12 or something like that), that in the second chord we can probably say that it is — again guesstimating since I don’t have the data and cannot do a better analysis — 0.15 ± 0.05, and that only the last push is known reasonably accurately at 0.16 ± 0.02.
In other words, it could have warmed faster in either the mid-1800s or the early 1900s than it warmed in the late 1900s. It isn’t even improbable. It is even odds that one or the other of these warming trends was larger than the best fit slope, and 25% of the time they would both be larger, and larger by just a bit is enough to confound the assertion that the more strongly constrained third linear trend is the largest.
So much for “unprecedented warming” or the necessity for CO_2 forcing as an explanatory mechanism for warming at the rates in Jones’ figure above.
The second problem is that we are left with a profound paradox in all discussions of global average surface temperature. Even NASA GISS acknowledges that we have very little idea what it is. It is often given as 288 K, but this obscures the simple fact that no two models for computing it, working from the same or largely overlapping surface data, get numbers that are within half a degree of one another! Or even a degree. The most honest way to present the number might be 288 ± 1 K. Or 287 ± 1 K. It’s hard to say, and depends on who is doing the averaging and with what model for kriging, infilling, homogenizing, and dealing with error. It is also impossible to generate a proper estimate for the probable error including all sources, because what one can estimate is only the range of values produced by the models, which is (again) a strict lower bound in any honest error estimate. Since the models tend to share data sources they are hardly independent, and yet there is a spread of more than a degree in their average. Statistics 101 — the variance of sample means drawn from overlapping populations is too small because the number of independent and identically distributed samples is smaller than the number of samples that produced the variance.
To fix this is enormously difficult and requires some pretty serious statistical mojo. Indeed, it would probably be simplest to fix via Monte Carlo and just plain sampling — generate a simulated smooth temperature field with the “approximately correct” surface temperature moments, pull samples at the overlapping locations and feed them into the different models, determining both the distribution of the absolute error of the models (per model) given the data compared to the precisely known average temperature, as well how that variance compares to the multimodel variance with overlapping samples. This might then provide some sort of quantitative basis for determining the actual probable absolute global average surface temperature — note well not the anomaly — as well as a probable error estimate that has a quantitative basis (subject to various assumptions, but given time we could even investigate the effect of varying those assumptions).
In the meantime, we persist in the belief that we can measure and compute the anomaly in global average surface temperature almost an order of magnitude more precisely than we can compute the average surface temperature itself. In most systems, susceptibilities (effectively, the anomaly) are second moments and their error estimates are fourth moments of the underlying distribution — the variance of the variance, so to speak. We generally know the higher order cumulants of a distribution less accurately than we know the mean/first order. This isn’t always true, of course — sometimes what we measure is a deviation, not the absolute — but thermometers don’t measure deviations from an unknown or poorly known mean, they measure temperature, the absolute quantity in question. The argument is that if there is a systematic trend shared by the contributing thermometers we can still detect it: say we have 100 thermometers at different places, all perfectly accurate, and 40 of them show warming of 1 degree, 20 of them show no change, and 30 of them show cooling of 1 degree; then we can conclude that there has been a statistically significant systematic trend in the anomaly of (40 − 30 = 10)/100 = 0.1 C even if, when we compute the actual statistical mean and standard error of the temperatures measured by those thermometers over whatever spatial region they are sampling, the error is 1 C!
This isn’t impossible, of course. We can certainly imagine systems where we could reliably measure the anomaly accurately but the mean inaccurately, the simplest one being that all of the thermometers themselves were perfectly accurate, but that a demented child scribed the scales on the side so that the supposed “zero” of the all of the thermometers was randomly distributed on some wide range. Each thermometer would then precisely record deltas/displacements, but the origin of their coordinates would be a random variable. But is that a reasonable assumption for the thermometric record? It seems equally plausible (for example) that the glass bore of (say) a mercury thermometer and the actual volume of the mercury in the thermometer are random variables, but that the person who zero’d the thermometer scale was an obsessive compulsive. In that case the absolute measurement of the thermometer might be very accurate, at least when it was made at temperatures close to the reference temperature used to set the scale, but the anomaly might have a bias that might, or might not, be randomly distributed.
This problem has hardly gone away now. Anthony has actually tested supposed accurate electronic thermometers in personal weather station kits obtained (for example) from China and found that they experience substantial absolute error and time dependent drift. Now and in the past, even a thermometer that was precisely made, and carefully zero’d and scaled with respect to multiple reference temperatures so that it worked perfectly the first day it was hung up in a weather station could easily experience a systematic, and biased, drift over a decade or five of usage. Spring thermometers gradually anneal and become less springy. Liquid thermometers outgas and deform. We assume that things remain the same over long times because we can’t see them moving, but they don’t.
Throw in biases recorded in weather station metadata, throw in all of the occult, slow biases not recoverable from any sort of metadata — a tree line that slowly grows over time, the UHI effect as a station that was initially rural finds itself in the middle of a prosperous concrete jungle, throw in unrecorded and variable idiosyncrasies of the humans who performed the measurements as they changed over the decades, and you have substantial variance not only in the absolute temperatures any given thermometer might measure, but in the trend, in the anomaly. And some of those biases might well be slow, systematic, unrecorded and virtually impossible to retroactively correct for.
Again, we could probably learn quite a bit from simulations of the models used to compute the anomalies, by simply generating an (ensemble of) simulated smooth temperatures on the surface of a sphere with a given, known time variation that has or doesn’t have any given trend. Sample it, and add noise to the samples, both white unbiased noise and trended noise that might (for example) model the UHI on urban stations, or delta correlated shifts that might occur when station personnel changes, or trended noise that might represent various distributions of slow non-UHI environmental shifts — conversion of surrounding countryside from forest to pasture, the building of impoundments that transform small rivers into vast lakes (this has happened, for example, in the immediate vicinity of RDU airport, the source of our “official” temperature — Falls Lake and Lake Jordan are between them tens of thousands of acres and flank the airport, adding yet another confounding factor between comparing temperatures before the early 80’s to temperatures afterwards at this site). Where is that accounted for in the site metadata?
Who even knows what sort of effect turning a mix of forest and human-occupied farmland into 60 or 70 thousand acres’ worth of reservoirs might have on the surrounding temperatures and “climate”, at the same time that the weather station itself went from being a tiny regional airport to being a hub for a large commercial carrier, at the same time the surrounding farmland turned into one giant suburban and urban mega-community? We don’t know, of course — and not even BEST can account for or correct for this — but we might, perhaps, simulate some range of the possibilities and see what they do not to the anomaly itself — per model, it is what it is — but to the best estimate for the uncertainty in the anomaly when any given model ignores a source of potential systematic bias. As (apparently) HADCRUT4 does when they do not correct for UHI at all, however eager they are to cool the past or warm the present in other ways.
rgb
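A toy version of the Monte Carlo exercise rgb sketches above, just to illustrate the sampling point. Everything here is invented (a smooth, made-up “temperature field” on a sphere and random station locations), so the numbers only show how the spread of the estimated global mean grows as the station count shrinks; they say nothing about the real record.

```python
import numpy as np

rng = np.random.default_rng(42)

def field(lat, lon):
    """A made-up smooth temperature field (K): warm tropics plus a few regional bumps."""
    return 288.0 - 30.0 * np.sin(lat) ** 2 + 2.0 * np.sin(3 * lon) * np.cos(2 * lat)

def random_sites(n):
    """n points distributed uniformly over the sphere's surface area."""
    lat = np.arcsin(rng.uniform(-1.0, 1.0, n))    # uniform in sin(lat) -> uniform in area
    lon = rng.uniform(-np.pi, np.pi, n)
    return lat, lon

# "Truth": the field averaged over a dense, area-uniform sample of the globe
truth = field(*random_sites(200_000)).mean()

# How well does a sparse station network recover that mean?
for n_stations in (50, 500, 5000):
    estimates = np.array([field(*random_sites(n_stations)).mean() for _ in range(1000)])
    print(f"{n_stations:5d} stations: spread of the estimated mean = {estimates.std():.2f} K "
          f"(true mean {truth:.1f} K)")
```

With 50 “stations” the spread of the recovered mean is several times larger than with 5000, which is the scaling-of-the-variance point made above; a real exercise would also add instrument drift, UHI-like trended noise, and the competing kriging/infilling models.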

eyesonu
Reply to  rgbatduke
August 29, 2014 10:21 pm

+10
I wish that there was a way to see how many times your comment has been read and/or linked to.

Mark
Reply to  dbstealey
August 29, 2014 11:46 pm

This kind of graph not only shows no relation to CO2 (human or “natural”) it also shows that the main driver(s) must be something cyclic. Yet it was only very recently that the PDO and AMO were identified and we don’t appear to fully understand either.

August 28, 2014 11:34 am

I had a guest post at Judith’s blog some months ago in which I tried to untangle some of the weird concepts used by the IPCC and friends and show how they lead to absurd consequences (it gets more certain the longer the divergence between data and theory lasts). http://judithcurry.com/2014/01/29/the-big-question/

Reply to  Dagfinn
August 28, 2014 12:35 pm

When the assumptions, taken as true, that the GCMs rest on become increasingly “wrong” (in sign and magnitude), their outputs become increasingly absurd and result in ever more bizarre claims. We see the results of that with each new paper trying to explain the pause.

Robert of Ottawa
Reply to  Joel O'Bryan
August 28, 2014 3:12 pm

Doesn’t this charade become fraud at some point?

Reply to  Joel O'Bryan
August 28, 2014 9:27 pm

Robert,
In the 90s and for the 2000-2006 period, much of it likely looked quite on track. The big cracks appeared with the Climategate fraud exposure in 2009. But now in 2014.5 the GCM temp divergence with reality is becoming untenable, hence all the alternative alibis are coming out every week now. Most certainly there were a few bad apples in 1998 & forward who used chicanery, data manipulations and suppression of rivals’ data that were contrary to their own data and results in the past temperature records, results they would need to build a case against man’s continued carbon-intensive energy sources. Those individuals should be banished by science journal editors for life.
In the US, democrats began to see dancing truckloads of carbon tax dollars to spend. Enviros saw a way to de-industrialize and shut down Big Oil, their arch enemy.
But I get the sense that guys like Trenberth really do want to be true to science, but with so much reputation riding on AGW it’s a hard thing to finally let go of a dying baby you birthed and nurtured in good faith. But the time to let CAGW go is past; now they are just desperately clinging to AGW starting back up in 20 or so years.

ferd berple
August 28, 2014 11:42 am

how is it possible that humans have contributed 110% of the warming (best guess)? are they saying that otherwise there would have been cooling?
why did temps rise from 1910-1940 at a rate almost identical to 1970-2000? It wasn’t CO2, so what was it? Why did temps pause from 1940-1970? How is the current pause any different? If the pause lasts from 2000-2030, how is this any different than the pause from 1940-1970?
Why did the [climate] models not see the cyclical pattern, that [your] average 6th graders would have caught? Do they not know that nature is cyclical, not linear?

Reply to  ferd berple
August 28, 2014 11:48 am

About the 110%, see my discussion of the “net warming model”, in the link above.

lee
Reply to  ferd berple
August 28, 2014 8:56 pm

They incorporate the cooling from aerosols.

August 28, 2014 11:43 am

Having crossed swords with Schmidt some years ago on unRealClimate, I came to the conclusion not to believe anything he says. I’ve never been back there since, as it is full of pseudo-science presented by pseudo-scientists.

Clyde
August 28, 2014 11:45 am

Scientists have a different way of talking than the public, at least in my experience. Words don’t always mean the same thing to them as to the general public. I don’t know what “conspire” means in the context of what Gavin Schmidt is saying below. I hope it doesn’t mean they got together in a sinister way to plan what they did.
——————————–
Climate models projected stronger warming over the past 15 years than has been seen in observations. Conspiring factors of errors in volcanic and solar inputs, representations of aerosols, and El Niño evolution, may explain most of the discrepancy.
http://www.nature.com/ngeo/journal/v7/n3/full/ngeo2105.html
Volcanoes, the sun, aerosols, & El Nino conspired to make the models wrong.
HT/ Maksimovich From Curry’s blog.

EternalOptimist
Reply to  Clyde
August 28, 2014 12:16 pm

Gavin is English, I think. In England, the word “conspire” means ‘work together’ or ‘work in tandem’. It doesn’t necessarily have a sinister meaning.

Cheshirered
Reply to  EternalOptimist
August 28, 2014 1:15 pm

Partly right, BUT the whole point of ‘conspire’ is that it is a plan, and usually for nefarious ends. See below – it’s all bad, dude! If Gavin is or was ‘conspiring’, be prepared for nonsense.
*****************************
“to agree together, especially secretly, to do something wrong, evil, or illegal:
“They conspired to kill the king.”
“to act or work together toward the same result or goal”.
verb (used with object), conspired, conspiring.
“to plot” (something wrong, evil, or illegal).

Tonyb
Reply to  EternalOptimist
August 28, 2014 1:26 pm

I am English and in the context it is used I don’t see anything sinister. He surely means merely to work together.
Tonyb

J
Reply to  EternalOptimist
August 28, 2014 1:37 pm

I think the correct word for that would be “collaborate”: to labor or work together.
Con-spire is to breathe together, like telling secrets…

Mr Green Genes
Reply to  EternalOptimist
August 29, 2014 1:07 am

I’m 57 years old and have been English all my life and I have to disagree with that. A closer meaning for conspire than ‘work together’ is ‘plot together’. It definitely does have mildly sinister connotations. If Gavin meant ‘work together’ or ‘work in tandem’, imo he would have used the word collaborate.

Reply to  EternalOptimist
August 29, 2014 7:16 am

“late Middle English: from Old French conspirer, from Latin conspirare ‘agree, plot’, from con- ‘together with’ + spirare ‘breathe’.”
Or more to the point whispering together. Definite hush hush.
If we are talking about plotting in the open, that’s collaboration or co-operation..

Tom T
Reply to  EternalOptimist
August 29, 2014 11:51 am

Gavin is anthropomorphizing the natural forces that made him look stupid.

RACookPE1978
Editor
Reply to  Clyde
August 28, 2014 10:12 pm

Ok, so “volcanoes” contributed to the last 17 years of steady climate temperatures – DESPITE ever-higher CO2 levels in the atmosphere.
If you believe that theory, show us the measured real, demonstrated decrease in atmospheric clarity – which has remained absolutely steady the past 21 years!
http://www.esrl.noaa.gov/gmd/webdata/grad/mloapt/mlo_transmission.gif
Well, for two months in 2009 clarity did drop. But neither temperatures nor ice coverage changed when the atmospheric clarity DID drop that one time!
The excuse is proved wrong. Again.

Resourceguy
August 28, 2014 11:47 am

Well said Bob, as usual

Another Gareth
August 28, 2014 11:50 am

If the ‘best guess’ is that 110% of warming is attributable to man, are they saying it would have got colder without our efforts? By deduction, they must be confident they have the natural variability component understood, which I sincerely doubt.

BallBounces
August 28, 2014 11:53 am

“The climate models are not only out of phase with the long-term data, they are out of touch with reality.”
But, importantly, they are not out of touch with funding.

Reply to  BallBounces
August 28, 2014 5:45 pm

bingo.

Reply to  dmacleo
August 28, 2014 7:35 pm

Like

August 28, 2014 11:53 am

Ok. I feel entirely dumb. What is a 110% probability??? What is 110% of the entirety of something??? What is, say, being 110% responsible for the making of 110% of a car? I told you I feel dumb.

Reply to  Josualdo
August 28, 2014 12:02 pm

Thanks for asking that, exactly what I have been wondering.
Is this a statistical term?
Or just another liberty taken by the Climatology™ IPCC Team?

Reply to  john robertson
August 28, 2014 12:34 pm

When I was young, the probability of event X was the ratio between all outcomes resulting in X and all the possible outcomes. It’s probability theory, the most basic-basic-basic of it. You can’t have a total of outcomes (of whatever) with 10% more outcomes than the total possible outcomes. So, I’m dumbfounded. It could be colloquial usage, as in “I’m 120% sure that…” But I guess colloquial is inappropriate in the context of the discussion.
And there my reading was completely blocked.

Reply to  Josualdo
August 28, 2014 12:04 pm

I think the 110% comes in from extrapolation of Mann’s Hockey stick they all are in love with, which up to 1900 showed gradual cooling. If you assume that the cooling would have continued without Man’s input, then the observed warming is actually less than Man’s contribution, since some of Man’s warming was negated by the natural cooling that they claim should have been occurring.

Reply to  Josualdo
August 28, 2014 2:56 pm

I think he’s saying that the mode of the probability is that natural variation would have resulted in cooling, but man’s interference caused warming equal to 110% of the warming observed. I.e., if it warmed one degree, it would have cooled a tenth of a degree without man. The area under the curve is only 100%, though. That is, the percentage he’s talking about is a percentage of warming, not a probability.
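A minimal numerical version of that reading, with invented numbers purely to make the arithmetic concrete:

```python
observed = 0.65    # observed warming in deg C (invented figure)
natural  = -0.065  # best-estimate natural contribution: a slight cooling (invented)
anthro   = observed - natural                 # 0.715 deg C
print(f"anthropogenic share = {anthro / observed:.0%}")   # -> 110% of the observed warming
```

The two contributions still add up to exactly what was observed; the anthropogenic piece alone is simply larger than the observed total because the natural piece is negative.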

Editor
Reply to  Josualdo
August 28, 2014 3:47 pm

Josualdo – Gavin means that the amount of calculated CO2 warming is 110% of the measured. But his wording “fraction of warming attributable” shows that he does not understand climate. Climate, as clearly stated by the IPCC, is a complex, coupled non-linear system. That means that there are many factors involved, they affect each other, and the results are chaotic (things sometimes happen in certain conditions, and sometimes don’t). In the real world, the amount attributable to human activity is the difference between what it would have been without the human activity and what it actually was. [The reason that it’s this way round is that the non-human stuff was always going to be there. It’s the human stuff that is different. If things end up exactly where they were going to be anyway, for example, then the human impact is zero, regardless of any calculations of what the human stuff does.]
If we look at the last, say 200 years, or the last 10,000 years, then it is pretty clear that Earth would likely have warmed up a bit (we can’t be sure, because we really don’t know how Earth’s climate behaves). That automatically puts the fraction of warming attributable to human activity at less than 100%. So how does Gavin come up with 110%? He has used linear thinking – a big no-no in a non-linear system – he has compared his calculated human effect with the measured temperature.

Reply to  Mike Jonas
August 29, 2014 3:54 am

Thanks, Mike, I think I got the gist of the thing. So, no probabbilties here.
Having been interested in chaos theory, fractals and all that formerly fancy stuff, and knowing — well at least that was the meme — that butterfly wings might affect the weather somewhere else, I find all this certainty very strange. There’s an anecdote that almost would apply, but I guess it would not survive the translation (and my telling it).

Reply to  Mike Jonas
August 29, 2014 3:55 am

* probabilities…

Rud Istvan
August 28, 2014 12:00 pm

The interesting part, Bob, is that Gavin felt a reply of this sort was needed at all. I suspect that between the pause falsifying the models by the CAGW gang’s own previously published standards (btw your tuning argument has been made by many, including Akasofu), and all the stuff now coming out about inexplicable and in an increasing number of cases inexcusable homogenization (BOM Rutherglen in Australia, BEST station 166900), reality is really starting to bite hard.

Peter Miller
August 28, 2014 12:00 pm

Perhaps we could ask Gavin to pop into a parallel universe, where man died out on Earth a few tens of thousands of years ago.
He could then take all the measurements needed to determine exactly how much of the past 70 years’ mild warming is due to the activities of man. Even in the wacky world of ‘climate science’, this is unlikely to happen anytime soon.
Bottom line: None of us have a clue how much of the recent mild warming has been due to the activities of man. Those who are worried about their future salary cheques argue, “A lot,” while those who are worried about beggaring the world economy for no apparent reason argue, “Not a lot.”

Justthinkin
August 28, 2014 12:01 pm

When are we going to throw these frauds in jail? Oh wait. If that happened, the psychopathic politicians might take a hit. Nothing to see here. Carry on.

Reply to  Justthinkin
August 28, 2014 9:36 pm

Narcissism is a sociopathology.

Ralph Kramden
August 28, 2014 12:03 pm

I’m having trouble interpreting Figure 1. How can the fraction of global warming attributable to anthropogenic causes be greater than 1?

Kelvin Vaughan
August 28, 2014 12:04 pm

So all of the warming is made by man and then another 10% is made by man????????????????

Reply to  Kelvin Vaughan
August 28, 2014 12:44 pm

there is no man made warming
there never was
and there probably never will be

DGH
Reply to  Kelvin Vaughan
August 29, 2014 4:14 am

If the earth would otherwise have cooled by, let’s say, 10% of the observed warming over this period, then 110% of the observed warming would be attributable to our activities.

Kelvin Vaughan
Reply to  DGH
August 29, 2014 6:21 am

No, then it would be 100%. You can’t have more than 100 per 100. 100 people have brown eyes; 110 of them are bald.

EternalOptimist
August 28, 2014 12:07 pm

Why is the keeper of the record arguing a position in the first place ?

Peter Miller
Reply to  EternalOptimist
August 28, 2014 12:31 pm

I guess that begs the question of whether GISS’ rate of temperature ‘homogenisation’, designed to cool the recent past, will accelerate or not under Gavin’s stewardship.
Under Hansen, better known for his antics rather than his science, ‘homogenisation’ ran wild at GISS.

AlecM
August 28, 2014 12:08 pm

On average there is net zero CO2-AGW; the atmosphere self-adapts.
The warming was from other causes.

Reply to  AlecM
August 28, 2014 12:48 pm

hint: Gleissberg
88 year cycle, 44 years of warming followed by 44 years of cooling

August 28, 2014 12:08 pm

Gavin is an intelligent scientist; due to the ever-longer ‘pause’, he must see that the writing is on the wall, but his current position doesn’t offer him an acceptable alternative.

Reply to  vukcevic
August 28, 2014 9:39 pm

Agree. But if you have conscience and scientific integrity, when do you get to the point where you can’t sleep at night from the lies?

Tom T
Reply to  vukcevic
August 29, 2014 11:59 am

Gavin is not even a scientist. He is a poor mathematician who plays office politics well.

August 28, 2014 12:10 pm

Even using the most adjusted data set of all and assuming ALL of the warming in the 1900s was man made and a 110% warming rate… they still CAN’T get anywhere close to a 2C temperature rise from 2000-2100. There is NO CAGW. At 0.13C per decade, that is 1.3C per century.
