A Note on the 50-50 Attribution Argument between Judith Curry and Gavin Schmidt

Guest essay by Bob Tisdale | Judith Curry and Gavin Schmidt are arguing once again about how much of the global warming we’ve experienced since 1950 is attributable to human-induced global warming.  Judith’s argument was presented in her post The 50-50 argument at ClimateEtc (where this morning there were more than 700 comments…wow…so that thread may take a few moments to download.)  Gavin’s response can be found at the RealClimate post IPCC attribution statements redux: A response to Judith Curry.

Gavin’s first illustration is described by the caption:

The probability density function for the fraction of warming attributable to human activity (derived from Fig. 10.5 in IPCC AR5). The bulk of the probability is far to the right of the “50%” line, and the peak is around 110%.

I’ve included Gavin’s illustration as my Figure 1.

Figure 1

So the discussion is about the warming rate of global surface temperature anomalies since 1950. Figure 2 presents the global GISS Land-Ocean Temperature Index data for the period of 1950 to 2013. I’m using the GISS data because Gavin was newly promoted to the head of GISS. (BTW, congrats, Gavin.)  As illustrated, the global warming rate from 1950 to 2013 is 0.12 deg C/decade, according to the GISS data.

Figure 2
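For readers who want to check that number themselves, here is a minimal sketch of the calculation, assuming the annual GISS LOTI anomalies have been saved to a simple two-column CSV (the file name and layout are my placeholders, not anything GISS ships); it fits a least-squares line and converts the slope to deg C/decade.

```python
# Minimal sketch (not Bob's exact method): least-squares warming rate from annual anomalies.
# "giss_loti_annual.csv" with year,anomaly columns is a placeholder; adapt to the file you export.
import numpy as np

years, anoms = np.loadtxt("giss_loti_annual.csv", delimiter=",", unpack=True)

mask = (years >= 1950) & (years <= 2013)
slope_per_year = np.polyfit(years[mask], anoms[mask], 1)[0]
print("1950-2013 trend: %.2f deg C/decade" % (10.0 * slope_per_year))  # ~0.12 per the post
```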

For this discussion, let's overlook the two hiatus periods during the period of 1950 to 2013…whether they were caused by aerosols or by naturally occurring multidecadal variations in known coupled ocean-atmosphere processes, such as the Atlantic Multidecadal Oscillation (AMO) and the dominance of El Niño or La Niña events (ENSO).  Let's also overlook for this discussion any arguments about how much of the warming from the mid-1970s to the turn of the century was caused by manmade greenhouse gases or by the naturally occurring multidecadal variations in the AMO and ENSO.

Bottom line, according to Gavin:

The bottom line is that multiple studies indicate with very strong confidence that human activity is the dominant component in the warming of the last 50 to 60 years, and that our best estimates are that pretty much all of the rise is anthropogenic.

Or in other words, all the warming of global surfaces from 1950 to 2013 is caused by anthropogenic sources.  Curiously, that’s only a warming rate of +0.12 deg C/decade. He’s not saying that all of the warming, at a higher rate, from the mid-1970s to the turn of the century is anthropogenic.  His focus is the period starting in 1950 with the lower warming rate.

HOWEVER

Climate models are not tuned to the period starting in 1950.  They are tuned to a cherry-picked period with a much higher warming rate…the period of 1976-2005 according to Mauritsen, et al. (2012) Tuning the Climate of a Global Model [paywalled].  A preprint edition is here.  As shown in Figure 3, the period of 1976 to 2005 has a much higher warming rate, about +0.19 deg C/decade. And that’s the starting trend for the long-term projections, not the lower, longer-term trend.

Figure 3

And that's why climate model warming rates appear to go off on a tangent when compared to the observed warming rate for the period of 1950 to 2013, the period for which, according to Gavin, "our best estimates are that pretty much all of the rise is anthropogenic".  The modelers have started their projections from a cherry-picked period with a high warming rate.

Figure 4 shows the warming rates for multi-model ensemble-member mean of the CMIP5-archived models using RCP6.0 and RCP8.5 scenarios for the period of 2001-2030.  RCP6.0 basically has the same warming rate as the observations from 1976-2005, which is the model tuning period, but that’s much higher than the warming rate from 1950-2013.  And the trend of the business-as-usual RCP8.5 scenario seems to be skyrocketing off with no basis in reality.

Figure 4
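As a rough illustration of how the scenario trends in Figure 4 can be reproduced, here is a sketch that averages the ensemble members of a scenario and fits the 2001-2030 trend. The file names and the layout (year in the first column, one column per model run) are my assumptions about a KNMI Climate Explorer export, not a documented format.

```python
# Sketch: multi-model ensemble-mean trend over 2001-2030, in deg C/decade.
# File names/layout are placeholders for KNMI Climate Explorer exports.
import numpy as np

def ensemble_trend(path, start, end):
    data = np.loadtxt(path)                    # col 0 = year, cols 1..N = individual runs
    years, runs = data[:, 0], data[:, 1:]
    ens_mean = runs.mean(axis=1)               # multi-model ensemble-member mean
    mask = (years >= start) & (years <= end)
    return 10.0 * np.polyfit(years[mask], ens_mean[mask], 1)[0]

for scenario, path in [("RCP6.0", "cmip5_rcp60_tas.txt"), ("RCP8.5", "cmip5_rcp85_tas.txt")]:
    print(scenario, "2001-2030 trend:", round(ensemble_trend(path, 2001, 2030), 2), "deg C/decade")
```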

And in Figure 5, the modeled warming rates for the same scenarios are shown through 2100.

Figure 5

CLOSING

I’ve asked a similar question before:  Why would the climate modelers base their projections of global warming on the trends of a cherry-picked period with a high warming rate?  The models being out of phase with the longer-term trends exaggerates the doom-and-gloom scenarios, of course.

But we purposely overlooked a couple of things in this post…that there are, in fact, naturally occurring ocean-atmosphere processes that contributed to the warming from the mid-1970s to the turn of the century—ENSO and the AMO.  The climate models are not only out of phase with the long-term data, they are out of touch with reality.

SOURCES

The GISS Land-Ocean Temperature Index data are available here, and the CMIP5 climate model outputs are available through the KNMI Climate Explorer, specifically the Monthly CMIP5 scenario runs webpage.

Latitude

correction: As illustrated, the global warming rate from 1950 to 2013 is 0.02 deg C/decade, according to the GISS raw data.
according to Gavin:
correction: The bottom line is that multiple studies indicate with very strong confidence that human activity is the dominant component in the warming of the last 50 to 60 years, and that our best estimates are that pretty much all of the rise is adjustments and algorithms.

ossqss

Bingo!

Dougmanxx

You beat me to it. “Man made” indeed!

James the Elder

“Estimates” and “pretty much”. GIGO

correction
my results show there is no man made global warming whatsoever
namely this would affect minimum temperatures,
but this is going down naturally, 100%
http://blogs.24.com/henryp/files/2013/02/henryspooltableNEWc.pdf
(last graph, on the bottom of the last table)

Kelvin Vaughan

If none of the warming is man made then the warming is definitely man made.

Kelvin I don’t follow
if there were man made warming you should see chaos
but the relationship of the speed of warming versus time (deceleration) is going down 100% natural

KevinM

moreCarbonOK[&theWeatherisalwaysGood]HenryP:
He left off a smiley face. He means IF the warming in the world isn’t man made (raw data) THEN the warming in the charts is man made (presented data).

Owen in GA

HenryP:
I think he means man made in the laboratory with a computer rather than man made due to economic activity.

The more the alarmists claim a huge CO2 effect during the warming periods, the more impossible their task of explaining the whyfor of the pause, when CO2 increases have not paused. Their cause and effect have become totally disjointed.

milodonharlani

Besides the current plateau, they also have to explain the cooling from c. 1944 to 1976 under rising CO2, & the rising temperature during the 1920s to ’40s on falling CO2.
Temperature accidentally happened to rise during c. 1977-96, while CO2 was climbing, because of the switch to the warm phase of the PDO in 1977.

latecommer2014

Once again correlation is not causation.

milodonharlani

Between CO2 & temperature there isn’t even good correlation, let alone causation.
On longer time scales, there is correlation & causation between rising T & CO2, but T is the cause & CO2 the effect.

Chris4692

Latecommer: Correlation does not prove causation, but if there is causation there will be correlation.

Robert of Ottawa

Being true believing Warmistas, they turned the clock speed down on their super-dupe computers, hence reducing the rate of warming. It’s that simple!

Johanus

Chris4692
August 28, 2014 at 12:16 pm
Latecommer: Correlation does not prove causation, but if there is causation there will be correlation.

… except when there is no correlation.
If, for the sake of argument, we accept that rising CO2 will cause rising global temperatures, then the currently rising CO2 levels should be causing rising temperatures. But currently there is no observable correlation between rising CO2 and global temps.
There is no compelling, simple explanation for this lack of correlation (assuming AGW). In fact, there are at least 37 explanations for it, none of them compelling enough to displace the others.

Mark

It only appears to be in fairly recent times that there is any correlation between CO2 & temperature. And even that looks more like B causes A (or possibly C causes A and B).

joe

Col Mosby: "The more the alarmists claim a huge CO2 effect during the warming periods, the more impossible their task of explaining the whyfor of the pause, when CO2 increases have not paused. Their cause and effect have become totally disjointed."
Au contraire – Dr. Mann has a recent study which shows the cooling phase is due to the AMO/PDO while none of the warming is due to the flip side of the AMO/PDO cycles. (I may be oversimplifying his conclusion, though that is the general gist of his study.)
The irony is that many of the skeptics have pointed out that the AMO/PDO cycles partly explain both the warming phase and the cooling phase and are the most likely cause of the current pause (skeptics have proffered this explanation since the late 1990's), yet the high priest of climate science has only recently acknowledged 1/2 of the cycle. (Mann obviously knows more science than us mere mortals.)
Another irony, is that Mark Steyn pointed this out circa 2009.

george e. smith

Fig 1 looks smack dab right in the center to me. What is this "far to the right" mumbo jumbo?? Couldn't be more ho hum, big deal, if I had plotted it myself.
You need new spectacles Gavin; and no, I didn’t mean for you to go to Burning Man, I meant, get some new glasses !

Greg

Gav says: “The bottom line is that multiple studies indicate with very strong confidence that human activity is the dominant component in the warming of the last 50 to 60 years, and that our best estimates are that pretty much all of the rise is anthropogenic.”
This is not consistent. "Dominant" just means largest; it does not mean more than everything else added together.
If the issue is split into many parts like human, volcanic, ENSO, solar… the "dominant" factor could actually be quite a small percentage, much less than half.
The previous AR4 “majority of warming” is not the same thing as AR5 “dominant cause” of warming.
This is a climbdown in the IPCC position that seems to have gone mainly unnoticed.
Quite where Gav gets his "best estimates are that pretty much all of" I don't know, but it's a far more extreme claim than either AR4 or AR5.

latecommer2014

It appears everyone is using adjusted temperatures, so the error bar should be as large as the adjustment. I do not believe in land based, corrupted temp records, and hold that any forcing caused by man is automatically absorbed and compensated for by nature. That is why we do not have "runaway climate". There is no proof man can override natural climate processes for any extended period.

Mark

Shouldn’t the error bars be somewhat larger than the adjustment?
Since you still need to factor in the accuracy and precision of the original readings. Together with that applicable to any “data processing” involved.

Gavin Schmidt is prevaricating as usual. Global warming since the LIA is composed of natural step changes. Those steps are exactly the same — whether CO2 was low, or high. Therefore, there is no “fingerprint of AGW”. It is clearly shown here in über-Warmist Dr. Phil Jones’ chart:
http://jonova.s3.amazonaws.com/graphs/hadley/Hadley-global-temps-1850-2010-web.jpg

M Courtney

Very good point.
I wonder if SkS will keep pushing their escalator line if that is pointed out.
Not sure Dr. Phil Jones is an über-Warmist though. He always struck me as more wrong than wronging.

lee

Someone used that graph the other day on me. I pointed out Trenberth’s ‘Has Global Warming Stalled?’ ‘big jumps’ as noted by Bob Tisdale. I suggested SKS sometimes got it right for the wrong reasons. I never got a response.
http://wattsupwiththat.com/2013/06/04/open-letter-to-the-royal-meteorological-society-regarding-dr-trenberths-article-has-global-warming-stalled/

FrankKarr

Mr Schmidt should examine the graph above closely to see the overall warming from 1850 to 2013. It's about 0.9 degrees C over that long term period and works out to about 0.55 deg C per CENTURY. Peanuts. The biggest fraud in history over a few tenths of a degree.

rgbatduke

Yeah, db, a point that I, and Lindzen, and many others have tried to emphasize in discussion. You can actually take HADCRUT4 from the first half of the 20th century and the second half of the 20th century and put them side by side on similar scales but with the time scale hidden and ask which graph occurred with the help of anthropogenic CO_2? Not so easy to tell, unless you are aware of the individual features such as the terminating super-ENSO in the late 20th century. I sometimes think that the last round of tampering in the GISS anomaly was designed as much to erase this similarity and as much of the pause as possible without quite making it laughable compared to LTT. But that game is up — there will be no more adjustments of GISS or HADCRUT4 to further warm the present as they are now UAH/RSS constrained.
That hasn’t stopped them from trying to further cool the past, and now newcomers are appearing that re-krige and infill and homogenize areas that “haven’t shown enough warming” because they are less constrained by LTT; this further obfuscates if nothing else. HADCRUT4 — and earlier versions of HADCRUT even more — clearly give the lie to the assertion of “unprecedented” warming, though, in precisely this graph (which anybody can make, BTW, at least piecewise on woodfortrees).
However, even this graph omits the display of or discussion of two critical problems with assertions of warming or cooling or plain old knowledge of temperature.
The first and most glaring omission is the absence of any error bar or estimate on the data. This is insane! In what other field of human endeavor are so many data-derived graphs shown to so many people utterly devoid of error estimates? Note the obvious impact of error visible in the Jones curve. Does Jones, or anyone else, really think that the global average surface temperature anomaly was 10 times more volatile in the 1800’s, with the planet warming by 0.6C over as little as a year and then plunging down into 0.6C of cooling relative to some ill-defined mean in a year more? Because that’s what the error-bar free data shows.
Of course not! What the graph is showing is the impact of the sparseness of the record in the 19th century. With order of 10x as much variance, there is order of 100x less data contributing in the 19th century compared to the present. In the 19th century most of the Earth’s surface area was completely unsampled (I mean “most” literally — 70% of the surface that is ocean, the bulk of at least 3 or 4 continents were either terra incognita altogether, e.g. Antarctica, or barely penetrated by a thermometer — if you will excuse the image — and consider the Amazon, central Africa, much of Siberia and central Asia, Tibet, even much of the U.S.). The parts that were sampled were obviously quite volatile — one imagines that the bulk of what is producing these large variations were things like heat waves in Europe.
The variance quiets quite a lot when the colonial gold rush really gets underway in the 1880s and colonials carry thermometers with them to their newly annexed territories. The ocean remained a problem then, and remains a huge problem now with ARGO pitifully undersampling 70% of the Earth’s surface even today, and that in a highly biased fashion with buoys that float with thermohaline currents or are trapped in eddies (both unlikely to reflect their surrounding environment adequately) rather than be distributed according to a simple random number generator in Monte Carlo style (which would have a computable statistical error instead of an unknown bias). There is a surprising amount of variance for a global temperature anomaly today, but at least between the thermometric record and the LTT satellite record, we can think about resolving features of the presumably much less volatile actual anomaly from the statistical noise, by comparing the various “modelled” average temperatures. The error is almost certainly larger than the difference between, say, GISS and HADCRUT4 or Cowtan and Way, and at present these numbers are easily 0.2C or thereabouts apart much of the time.
HADCRUT4 acknowledges — IIRC — 0.15C of error in the present. I think this is an underestimate but let's go with it, as the existence of the number, we hope, means that they actually computed it instead of pulling it out of their nether regions, as were the error estimates on graphs in the leaked early AR5 draft (figure 1.4?), which were obviously created by a graphical artist and not by anything like an algorithm. The scaling of the variance then suggests that the error estimate in the mid-1880s ought to be a whopping 1.5C — the eye suggests that a more modest 0.4 C error bar might encapsulate 60% of the data such as it is, but that is really the error for the sampled territories only and is a lower bound on the error estimate for global temperature. I'd suggest that 0.7C is a compromise — one can probably find proxies (with their own error and resolution problems) that constrain the error to be less than 1 C. This statistical — not systematic — error would then systematically, but slowly, shrink from then to now. It wouldn't really be linear — as I said, there is a relatively rapid diminishment in the late 19th century followed by a slower decrease into the late 20th, but it is likely fair to say that it is at least 0.3 to 0.4C for most of the record prior to the satellite era and ARGO, as only these have made it possible to push it down to the ballpark of 0.2C.
If one includes the error estimate on the graphs, our certainty of any particular thermal history substantially diminishes. Maybe it warmed since the mid-1800s. Maybe it has cooled. Maybe it warmed a lot more. Maybe the single 20 year period in the late 20th century when warming occurred has the steepest slope in the thermometric record, or — most importantly — maybe it does not! That's the big statistical lie even in Jones' relatively honest portrayal of the HADCRUT4 trends above. If one actually fit the data, with errors, and used e.g. a measure like Pearson's χ² to estimate the robustness of the linear trend, how likely it is that the slope is actually much larger or smaller than the simple regression fit, I promise that in the leftmost chord of the data we have almost no friggin' idea what the linear temperature trend really was beyond "probably positive" (that is, maybe it is 0.16 ± 0.12 or something like that), that in the second chord we can probably say that it is — again guesstimating since I don't have the data and cannot do a better analysis — 0.15 ± 0.05, and that only the last push is known reasonably accurately at 0.16 ± 0.02.
In other words, it could have warmed faster in either the mid-1800s or the early 1900s than it warmed in the late 1900s. It isn’t even improbable. It is even odds that one or the other of these warming trends was larger than the best fit slope, and 25% of the time they would both be larger, and larger by just a bit is enough to confound the assertion that the more strongly constrained third linear trend is the largest.
So much for “unprecedented warming” or the necessity for CO_2 forcing as an explanatory mechanism for warming at the rates in Jones’ figure above.
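To make the point about overlapping trend uncertainties concrete, here is a toy sketch, using the guesstimated slopes and standard errors above rather than a real fit; it treats each fitted trend as a roughly Gaussian estimate and asks how often the early-1900s chord's true trend would exceed the late-1900s one.

```python
# Toy sketch: compare two fitted trends that each carry a standard error
# (illustrative numbers from the comment above; a real analysis would get the
# slope and its standard error from e.g. scipy.stats.linregress on each chord).
from math import sqrt
from scipy.stats import norm

def p_first_exceeds_second(m1, s1, m2, s2):
    """P(true trend 1 > true trend 2) for independent, roughly Gaussian slope estimates."""
    return 1.0 - norm.cdf(0.0, loc=m1 - m2, scale=sqrt(s1**2 + s2**2))

# early-1900s chord vs late-1900s chord, deg C/decade
print(p_first_exceeds_second(0.15, 0.05, 0.16, 0.02))  # ~0.43: the ranking is far from settled
```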
The second problem is that we are left with a profound paradox in all discussions of global average surface temperature. Even NASA GISS acknowledges that we have very little idea what it is. It is often given as 288 K, but this obscures the simple fact that no two models for computing it, working from the same or largely overlapping surface data, get numbers that are within half a degree of one another! Or even a degree. The most honest way to present the number might be 288 ± 1 K. Or 287 ± 1 K. It's hard to say, and depends on who is doing the averaging and with what model for kriging, infilling, homogenizing, and dealing with error. It is also impossible to generate a proper estimate for the probable error including all sources, because what one can estimate is only the range of values produced by the models, which is (again) a strict lower bound in any honest error estimate. Since the models tend to share data sources they are hardly independent, and yet there is a spread of more than a degree in their average. Statistics 101 — the variance of sample means drawn from overlapping populations is too small because the number of independent and identically distributed samples is smaller than the number of samples that produced the variance.
To fix this is enormously difficult and requires some pretty serious statistical mojo. Indeed, it would probably be simplest to fix via Monte Carlo and just plain sampling — generate a simulated smooth temperature field with the “approximately correct” surface temperature moments, pull samples at the overlapping locations and feed them into the different models, determining both the distribution of the absolute error of the models (per model) given the data compared to the precisely known average temperature, as well how that variance compares to the multimodel variance with overlapping samples. This might then provide some sort of quantitative basis for determining the actual probable absolute global average surface temperature — note well not the anomaly — as well as a probable error estimate that has a quantitative basis (subject to various assumptions, but given time we could even investigate the effect of varying those assumptions).
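A bare-bones toy version of that Monte Carlo idea might look like the sketch below (entirely my own construction: a made-up smooth "true" field and two crude averaging schemes standing in for real products). Because the schemes share the same sparse samples, the spread between them comes out much smaller than their actual error against the known truth, which is exactly the lower-bound problem described above.

```python
# Toy Monte Carlo: a known smooth "true" field, sparse shared samples, and two
# averaging schemes; the inter-scheme spread understates the true error of either.
import numpy as np

rng = np.random.default_rng(1)

def true_field(lat_deg):                      # smooth, latitude-only toy temperature field (deg C)
    return 30.0 * np.cos(np.radians(lat_deg)) - 5.0

lats = np.linspace(-90.0, 90.0, 1801)
true_global_mean = np.average(true_field(lats), weights=np.cos(np.radians(lats)))

errs_a, errs_b, spreads = [], [], []
for _ in range(1000):
    stations = rng.normal(45.0, 20.0, size=60).clip(-90.0, 90.0)      # sparse, NH-biased sampling
    obs = true_field(stations) + rng.normal(0.0, 0.5, size=60)        # both schemes see the same data
    scheme_a = obs.mean()                                             # plain station mean
    scheme_b = np.average(obs, weights=np.cos(np.radians(stations)))  # crude area weighting
    errs_a.append(scheme_a - true_global_mean)
    errs_b.append(scheme_b - true_global_mean)
    spreads.append(abs(scheme_a - scheme_b))

print("RMS error, scheme A: %.2f  scheme B: %.2f  mean spread between schemes: %.2f"
      % (np.sqrt(np.mean(np.square(errs_a))),
         np.sqrt(np.mean(np.square(errs_b))),
         np.mean(spreads)))
```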
In the meantime, we persist in the belief that we can measure and compute the anomaly in global average surface temperature almost an order of magnitude more precisely than we can compute the average surface temperature itself. In most systems, susceptibilities (effectively, the anomaly) are second moments and their error estimates are fourth moments of the underlying distribution — the variance of the variance, so to speak. We generally know the higher order cumulants of a distribution less accurately than we know the mean/first order. This isn't always true, of course — sometimes what we measure is a deviation, not the absolute — but thermometers don't measure deviations from an unknown or poorly known mean, they measure temperature, the absolute quantity in question. The argument is that if there is a systematic bias in the trend of each contributing thermometer (say, we have 100 thermometers at different places, all perfectly accurate), and if 40 of them show warming of 1 degree, 20 of them show no change, and 30 of them show cooling of 1 degree, then we can conclude that there has been a statistically significant systematic trend in the anomaly of (40 – 30 = 10)/100 = 0.1 C even if, when we compute the actual statistical mean and standard error of the temperatures measured by those thermometers over whatever spatial region they are sampling, the error is 1 C!
This isn't impossible, of course. We can certainly imagine systems where we could reliably measure the anomaly accurately but the mean inaccurately, the simplest one being that all of the thermometers themselves were perfectly accurate, but that a demented child scribed the scales on the side so that the supposed "zero" of all of the thermometers was randomly distributed on some wide range. Each thermometer would then precisely record deltas/displacements, but the origin of their coordinates would be a random variable. But is that a reasonable assumption for the thermometric record? It seems equally plausible (for example) that the glass bore of (say) a mercury thermometer and the actual volume of the mercury in the thermometer are random variables, but that the person who zero'd the thermometer scale was an obsessive compulsive. In that case the absolute measurement of the thermometer might be very accurate, at least when it was made at temperatures close to the reference temperature used to set the scale, but the anomaly might have a bias that might, or might not, be randomly distributed.
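That "demented child" case is easy to simulate; the toy sketch below (my own illustration, with made-up numbers) gives each station an accurate trend but a random zero error, and shows the network's anomaly trend coming back cleanly while the absolute level is only as good as the unknown average of the offsets.

```python
# Toy sketch: stations with accurate *changes* but randomly mis-scribed zeros.
# The anomaly trend is recovered well; the absolute level is not.
import numpy as np

rng = np.random.default_rng(0)
n_stations = 100
years = np.arange(1950, 2014)
true_level, true_trend = 14.0, 0.012                 # deg C and deg C/yr, purely illustrative

offsets = rng.normal(0.0, 2.0, size=n_stations)                       # mis-scribed zeros
noise = rng.normal(0.0, 0.2, size=(n_stations, years.size))           # local weather noise
readings = true_level + true_trend * (years - years[0]) + offsets[:, None] + noise

# Anomalies relative to each station's own 1950-1979 mean remove the zero errors entirely
anomalies = readings - readings[:, :30].mean(axis=1, keepdims=True)
trend = 10.0 * np.polyfit(years, anomalies.mean(axis=0), 1)[0]

print("recovered anomaly trend: %.3f deg C/decade (true 0.120)" % trend)
print("recovered 1950 level: %.2f deg C (true %.2f)" % (readings[:, 0].mean(), true_level))
```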
This problem has hardly gone away now. Anthony has actually tested supposedly accurate electronic thermometers in personal weather station kits obtained (for example) from China and found that they experience substantial absolute error and time dependent drift. Now and in the past, even a thermometer that was precisely made, and carefully zeroed and scaled with respect to multiple reference temperatures so that it worked perfectly the first day it was hung up in a weather station, could easily experience a systematic, and biased, drift over a decade or five of usage. Spring thermometers gradually anneal and become less springy. Liquid thermometers outgas and deform. We assume that things remain the same over long times because we can't see them moving, but they don't.
Throw in biases recorded in weather station metadata, throw in all of the occult, slow biases not recoverable from any sort of metadata — a tree line that slowly grows over time, the UHI effect as a station that was initially rural finds itself in the middle of a prosperous concrete jungle, throw in unrecorded and variable idiosyncrasies of the humans who performed the measurements as they changed over the decades, and you have substantial variance not only in the absolute temperatures any given thermometer might measure, but in the trend, in the anomaly. And some of those biases might well be slow, systematic, unrecorded and virtually impossible to retroactively correct for.
Again, we could probably learn quite a bit from simulations of the models used to compute the anomalies, by simply generating an (ensemble of) simulated smooth temperatures on the surface of a sphere with a given, known time variation that has or doesn’t have any given trend. Sample it, and add noise to the samples, both white unbiased noise and trended noise that might (for example) model the UHI on urban stations, or delta correlated shifts that might occur when station personnel changes, or trended noise that might represent various distributions of slow non-UHI environmental shifts — conversion of surrounding countryside from forest to pasture, the building of impoundments that transform small rivers into vast lakes (this has happened, for example, in the immediate vicinity of RDU airport, the source of our “official” temperature — Falls Lake and Lake Jordan are between them tens of thousands of acres and flank the airport, adding yet another confounding factor between comparing temperatures before the early 80’s to temperatures afterwards at this site). Where is that accounted for in the site metadata?
Who even knows what sort of effect turning a mix of forest and human occupied farmland into 60 or 70 thousand acres' worth of reservoirs might have on the surrounding temperatures and "climate", at the same time that the weather station itself went from being a tiny regional airport to being a hub for a large commercial carrier, at the same time the surrounding farmland turned into one giant suburban and urban mega-community? We don't know, of course — and not even BEST can account for or correct for this — but we might, perhaps, simulate some range of the possibilities and see what they do not to the anomaly itself — per model, it is what it is — but to the best estimate for the uncertainty in the anomaly when any given model ignores a source of potential systematic bias. As (apparently) HADCRUT4 does when they do not correct for UHI at all, however eager they are to cool the past or warm the present in other ways.
rgb

eyesonu

+10
I wish that there was a way to see how many times your comment has been read and/or linked to.

Mark

This kind of graph not only shows no relation to CO2 (human or "natural"), it also shows that the main driver(s) must be something cyclic. Yet it was only very recently that the PDO and AMO were identified, and we don't appear to fully understand either.

I had a guest post at Judith’s blog some months ago in which I tried to untangle some of the weird concepts used by the IPCC and friends and show how they lead to absurd consequences (it gets more certain the longer the divergence between data and theory lasts). http://judithcurry.com/2014/01/29/the-big-question/

joelobryan

When the assumptions, taken as true, that the GCMs rest on become increasingly "wrong" (in sign and magnitude), their outputs become increasingly absurd and result in ever more bizarre claims. We see the results of that with each new paper trying to explain the pause.

Robert of Ottawa

Doesn’t this charade become fraud at some point?

joelobryan

Robert,
In the 90s and for the 2000-2006 period, much of it likely looked quite on track. The big cracks appeared with the Climategate fraud exposure in 2009. But now, in mid-2014, the GCM temp divergence with reality is becoming untenable, hence all the alternative alibis coming out every week now. Most certainly there were a few bad apples in 1998 and onward who used chicanery, data manipulation and suppression of rivals' data that was contrary to their own data and results in the past temperature records, results they would need to build a case against man's continued carbon-intensive energy sources. Those individuals should be banished by science journal editors for life.
In the US, Democrats began to see dancing truckloads of carbon tax dollars to spend. Enviros saw a way to de-industrialize and shut down Big Oil, their arch enemy.
But I get the sense that guys like Trenberth really do want to be true to science, but with so much reputation riding on AGW it's a hard thing to finally let go of a dying baby you birthed and nurtured in good faith. But the time to let CAGW go is past; now they are just desperately clinging to AGW starting back up in 20 or so years.

ferd berple

how is it possible that humans have contributed 110% of the warming (best guess)? are they saying that otherwise there would have been cooling?
why did temps rise from 1910-1940 at a rate almost identical to 1970-2000? It wasn't CO2, so what was it? Why did temps pause from 1940-1970? How is the current pause any different? If the pause lasts from 2000-2030, how is this any different than the pause from 1940-1970?
Why did the [climate] models not see the cyclical pattern, that [your] average 6th graders would have caught? Do they not know that nature is cyclical, not linear?

About the 110%, see my discussion of the “net warming model”, in the link above.

lee

They incorporate the cooling from aerosols.

Having crossed swords with Schmidt some years ago on unRealClimate, I came to the conclusion not to believe anything he says. I’ve never been back there since, as it is full of pseudo-science presented by pseudo-scientists.

Clyde

Scientists have a different way of talking than the public, at least in my experience. Words don't always mean the same thing to them as to the general public. I don't know what "conspire" means in the context of what Gavin Schmidt is saying below. I hope it doesn't mean they got together in a sinister way to plan what they did.
——————————–
Climate models projected stronger warming over the past 15 years than has been seen in observations. Conspiring factors of errors in volcanic and solar inputs, representations of aerosols, and El Niño evolution, may explain most of the discrepancy.
http://www.nature.com/ngeo/journal/v7/n3/full/ngeo2105.html
Volcanoes, the sun, aerosols, & El Nino conspired to make the models wrong.
HT/ Maksimovich From Curry’s blog.

EternalOptimist

Gavin is English, I think. In England, the word conspire means 'work together' or 'work in tandem'. It doesn't necessarily have a sinister meaning.

Cheshirered

Partly right, BUT the whole point of ‘conspire’ is that it is a plan, and usually for nefarious means. See below – it’s all bad, dude! If Gavin is or was ‘conspiring’, be prepared for nonsense.
*****************************
“to agree together, especially secretly, to do something wrong, evil, or illegal:
“They conspired to kill the king.”
“to act or work together toward the same result or goal”.
verb (used with object), conspired, conspiring.
“to plot” (something wrong, evil, or illegal).

Tonyb

I am English and in the context it is used I don’t see anything sinister. He surely means merely to work together.
Tonyb

J

I think the correct word for that would be "collaborate": to labor or work together.
Con-spire is to breathe together, like telling secrets…

Mr Green Genes

I’m 57 years old and have been English all my life and I have to disagree with that. A closer meaning for conspire than ‘work together’ is ‘plot together’. It definitely does have mildly sinister connotations. If Gavin meant ‘work together’ or ‘work in tandem’, imo he would have used the word collaborate.

Leo Smith

“late Middle English: from Old French conspirer, from Latin conspirare ‘agree, plot’, from con- ‘together with’ + spirare ‘breathe’.”
Or more to the point whispering together. Definite hush hush.
If we are talking about plotting in the open, that's collaboration or co-operation.

Tom T

Gavin is anthropomorphizing the natural forces that made him look stupid.

Ok, so “volcanoes” contributed to the last 17 years of steady climate temperatures – DESPITE ever-higher CO2 levels in the atmosphere.
If you believe that theory, show us the measured real, demonstrated decrease in atmospheric clarity – which has remained absolutely steady the past 21 years!
http://www.esrl.noaa.gov/gmd/webdata/grad/mloapt/mlo_transmission.gif
Well, for two months in 2009 clarity did drop. But neither temperatures nor ice coverage changed when the atmospheric clarity DID drop that one time!
The excuse is proved wrong. Again.

Resourceguy

Well said Bob, as usual

Another Gareth

If the 'best guess' is that 110% of warming is attributable to man, are they saying it would have got colder without our efforts? By deduction they must be confident they have the natural variability component understood, which I sincerely doubt.

BallBounces

“The climate models are not only out of phase with the long-term data, they are out of touch with reality.”
But, importantly, they are not out of touch with funding.

bingo.


Ok. I feel entirely dumb. What is a 110% probability??? What is 110% of the entirety of something??? What is, say, being responsible for the making of 110% of a car? I told you I feel dumb.

Thanks for asking that, exactly what I have been wondering.
Is this a statistical term?
Or just another liberty of the Climatology ™ IPCC Team?

When I was young, the probability of event X was the ratio between all outcomes resulting in X and all the possible outcomes. It's probability theory, the most basic-basic-basic of it. You can't have a total of outcomes (of whatever) with 10% more outcomes than the total possible outcomes. So, I'm dumbfounded. It could be colloquial usage, as in "I'm 120% sure that…" But I guess colloquial is inappropriate in the context of the discussion.
And there my reading was completely blocked.

I think the 110% comes in from extrapolation of Mann’s Hockey stick they all are in love with, which up to 1900 showed gradual cooling. If you assume that the cooling would have continued without Man’s input, then the observed warming is actually less since some of Man’s warming was negated by the natural cooling that they claim should have been occurring.

jhborn

I think he's saying that the mode of the probability distribution is that natural variation would have resulted in cooling, but man's interference caused warming equal to 110% of the warming observed. I.e., if it warmed one degree, it would have cooled a tenth of a degree without man. The area under the curve is only 100%, though. That is, the percentage he's talking about is a percentage of warming, not a probability.

Josualdo – Gavin means that the amount of calculated CO2 warming is 110% of the measured. But his wording “fraction of warming attributable” shows that he does not understand climate. Climate, as clearly stated by the IPCC, is a complex, coupled non-linear system. That means that there are many factors involved, they affect each other, and the results are chaotic (things sometimes happen in certain conditions, and sometimes don’t). In the real world, the amount attributable to human activity is the difference between what it would have been without the human activity and what it actually was. [The reason that it’s this way round is that the non-human stuff was always going to be there. It’s the human stuff that is different. If things end up exactly where they were going to be anyway, for example, then the human impact is zero, regardless of any calculations of what the human stuff does.]
If we look at the last, say 200 years, or the last 10,000 years, then it is pretty clear that Earth would likely have warmed up a bit (we can't be sure, because we really don't know how Earth's climate behaves). That automatically puts the fraction of warming attributable to human activity at less than 100%. So how does Gavin come up with 110%? He has used linear thinking – a big no-no in a non-linear system – he has compared his calculated human effect with the measured temperature.

Josualdo

Thanks, Mike, I think I got the gist of the thing. So, no probabbilties here.
Having been interested in chaos theory, fractals and all that formerly fancy stuff, and knowing — well at least that was the meme — that butterfly wings might affect the weather somewhere else, I find all this certainty very strange. There's an anecdote that almost would apply, but I guess it would not survive the translation (and my telling it.)

Josualdo

* probabilities…

Rud Istvan

The interesting part, Bob, is that Gavin felt a reply of this sort was needed at all. I suspect that between the pause falsifying the models by the CAGW gang's own previously published standards (btw your tuning argument has been made by many including Akasofu), and all the stuff now coming out about inexplicable and in an increasing number of cases inexcusable homogenization (BOM Rutherglen in Australia, BEST station 166900), reality is really starting to bite hard.

Peter Miller

Perhaps we could ask Gavin to pop into a parallel universe, where man died out on Earth a few tens of thousands of years ago.
He could then take all the measurements needed to determine exactly how much of the past 70 years’ mild warming is due to the activities of man. Even in the wacky world of ‘climate science’, this is unlikely to happen anytime soon.
Bottom line: None of us have a clue how much of the recent mild warming has been due to the activities of man. Those who are worried about their future salary cheques argue, “A lot.” While those who are worried about beggaring the world economy for no apparent reason, argue, “Not a lot.”

Justthinkin

When are we going to throw these frauds in jail? Oh wait. If that happened, the psychopathic politicians might take a hit. Nothing to see here. Carry on.

joelobryan

Narcissism is a sociopathology.

Ralph Kramden

I'm having trouble interpreting figure 1. How can the fraction of global warming caused by anthropogenic causes be greater than 1?

Kelvin Vaughan

So all of the warming is made by man and then another 10% is made by man????????????????

there is no man made warming
there never was
and there probably never will be

DGH

If the earth might have otherwise cooled by, let’s say, 10% over this period then 110% of the observed warming would be attributable to our activities.

Kelvin Vaughan

No, then it would be 100%. You can't have more than 100 per 100. 100 people have brown eyes, 110 of them are bald.

EternalOptimist

Why is the keeper of the record arguing a position in the first place ?

Peter Miller

I guess that begs the question of whether or not GISS' rate of temperature 'homogenisation', designed to cool the recent past, will accelerate under Gavin's stewardship.
Under Hansen, better known for his antics rather than his science, ‘homogenisation’ ran wild at GISS.

AlecM

On average there is net zero CO2-AGW; the atmosphere self-adapts.
The warming was from other causes.

hint: Gleissberg
88 year cycle, 44 years of warming followed by 44 years of cooling

Gavin is an intelligent scientist. Due to the ever longer 'pause', he must see that the writing is on the wall, but his current position doesn't offer him an acceptable alternative.

joelobryan

Agree. But if you have a conscience and scientific integrity, when do you get to the point where you can't sleep at night from the lies?

Tom T

Gavin is not even a scientist. He is a poor mathematician who plays office politics well.

Even using the most adjusted data set of all and assuming ALL of the warming in the 1900s was man made and a 110% warming rate… they still CAN'T get anywhere close to a 2C temperature rise from 2000-2100. There is NO CAGW. At 0.13C per decade, that is 1.3C per century.

Since the 110% is getting people confused, let me explain a little bit. The 110% is the theoretical AGW (the warming that–according to the IPCC and Gavin–would have occurred if there were no natural cooling influence) divided by the real warming that was actually measured. I discussed this in the guest post I mentioned earlier.
http://judithcurry.com/2014/01/29/the-big-question/
It's not just counterintuitive, it also has some insane consequences. The longer the pause lasts, the more certain the AGW dominance becomes. The catch is that it's an ever slower rate of warming, and therefore you have to expect slower warming in the future anyway. It's pretty misleading, but I think Gavin is so steeped in this mode of thinking, he actually believes it's the right way to calculate.
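A toy numerical version of that bookkeeping (the numbers below are made up purely for illustration, not the IPCC's actual figures): if the calculated greenhouse warming is 0.77 deg C and the assumed natural contribution is a 0.07 deg C cooling, the observed warming is 0.70 deg C and the "attributable fraction" comes out at 110%.

```python
# Toy sketch of the arithmetic behind ">100% attribution" (illustrative numbers only).
anthropogenic = 0.77       # calculated greenhouse warming, deg C
natural = -0.07            # assumed natural contribution (a slight cooling), deg C

observed = anthropogenic + natural          # 0.70 deg C actually measured
fraction = anthropogenic / observed         # 1.1, i.e. "110% of the observed warming"
print("attributable fraction: %.0f%%" % (100 * fraction))
```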

Dan

Remember the research is already years old. I am sure if the numbers were run today the most likely percent of AGW would now be 140%, and extremely likely more than 75% is AGW. Like you say, the longer the pause, the more certain it is that the temperature rise is AGW! If the temperature falls the next 5 years, look out, certainty will skyrocket even higher that CAGW is coming!

So those 110%, -150% or 250% "probabilities" are actually something like slopes, or differences, which are added and subtracted? In that case they aren't probabilities, of course. If I got your interesting post right.

That’s correct. Those aren’t probabilities. The probability is the notorious 95% and the statistical method practically guarantees that it will increase.

They’re introducing too many epicycles by now.

joelobryan

Folks here are confusing effect with probability.
Probability of an occurrence can never exceed 1.0.
Effects can cancel out other effects, and thus individually contribute more than 100% of an observed integrated output such as the global temp response.

NikFromNYC

The rate remains defiantly linear in nearly all of the oldest real thermometer records, despite both urban heating effects and the overall greenhouse effect plus or minus feedbacks:
http://s6.postimg.org/uv8srv94h/id_AOo_E.gif
The same exact thing is seen in nearly every long running tide gauge record.
So indeed, 110% needs to be invoked, since an unprecedented cooling spell has to have been averted for a mere trend continuation to be blamed on emissions. It's amazing how the ghost of debunked hockey sticks lives on as a background assumption to conceal these old records that debunk anthropogenic claims quite strongly as far as traditional scientific rigor is concerned.

so how do you account for the change in recording from mercury thermometers (not re-calibrated before 1950) and human observation (usually 4 times per day) to thermocouples and records measured every second of the day?

NikFromNYC August 28, 2014 at 12:37 pm
Do you have data on the trend lines, assuming a linear function for the full record and, say, for 1900 onward?

William McClenney

Yeah, been watching and commenting at JC’s site on this. Fascinating, as is Bob’s response here.
Once again, my apologies if anyone is offended by this, but this remains in my mind like “two fleas arguing over who owns the dog they are riding on” (Crocodile Dundee).
It begs sanity, IMHO, that we are even having this discussion at all.
There are only 2 possibilities here. That’s it!
1. The Holocene would just have continued blithely along, presumably forever, were it not for Anthropogenic disturbances, AGW etc.
2. The AGW hypothesis is correct which makes Ruddiman’s Early Anthropogenic Hypothesis also correct. The Holocene may well be over and we are living in the Anthropocene now. Interglacial conditions extended by AGW.
On possibility 1, here is my detailed look at the Holocene conundrum http://wattsupwiththat.com/2012/03/16/the-end-holocene-or-how-to-make-out-like-a-madoff-climate-change-insurer/
On possibility 2, we find ourselves faced with perhaps ending the Anthropocene by stripping the CO2/GHG "climate security blanket" from the atmosphere. If the AGW hypothesis is correct, that would leave glacial inception as the only other climate state, wouldn't it?
The Pretzel Logic here is simply gobsmacking!!
You cannot be right about the “Anthropocene”, or ending it, without getting a hated tipping point, but of the opposite sign to the one expected. If CO2/GHGs are holding us in interglacial conditions, wouldn’t removing the excess tip us into the next glacial inception?
Getting deep into the Judith/Gavin weeds is, of course, a very interesting discussion. "I suggest a new strategy R2, let the Wookie win!", C3PO. Because the real fun begins if we cede that Gavin is right: the choice is really about extending the Holocene, or removing the "climate security blanket" so we can get on with our overdue glacial inception.
Muller and Pross (2007) provide one of the more poignant quotes in all of climate science:
“The possible explanation as to why we are still in an interglacial relates to the early anthropogenic hypothesis of Ruddiman (2003, 2005). According to that hypothesis, the anomalous increase of CO2 and CH4 concentrations in the atmosphere as observed in mid- to late Holocene ice-cores results from anthropogenic deforestation and rice irrigation, which started in the early Neolithic at 8000 and 5000 yr BP, respectively. Ruddiman proposes that these early human greenhouse gas emissions prevented the inception of an overdue glacial that otherwise would have already started.”
http://folk.uib.no/abo007/share/papers/eemian_and_lgi/mueller_pross07.qsr.pdf

William McClenney

You are correct. I’m not following you at all.

William McClenney

OK, so let me take a stab at responding.
“you did not get it at all” provides only an ad hominem. The comment link, on the other hand provides what I think is enough information to suggest that your point is there is no anthropogenic influence.
Stepping out on that limb is putting forth a hypothesis. I do not disagree with your hypothesis. However, one of the key steps one takes as a scientist when thinking about proposing their hypothesis is to adopt the opposing position(s) as a means of testing the hypothesis. Standard science.
So adopting the opposing viewpoint, standard in science, is that there is a decisive climate impact from CO2/GHGs. And if that were correct, then we are living in the Anthropocene extension of the Holocene interglacial. So, with our standard-science adopted opposite viewpoint, we now come to what to do if we are right. Strip CO2/GHGs from the Anthropocene atmosphere, and where does THAT leave us?
The only other state would be getting on with that overdue glacial inception.
I am in no way saying you are wrong. I am saying what if you are wrong and the AGW crowd is right? Would not being right about AGW, and quelling its atmospheric presence, actually be the wrong thing to do?

milodonharlani

Except under Ruddiman, the Holocene would scarcely have been an interglacial at all. The Eemian lasted 16,000 years & the MIS 11 interglacial tens of thousands. Those of MIS 7 & 9 were longer than the Holocene would have been under Ruddiman’s hypothesis.

William McClenney

Milodon, I would suggest that instead of just taking a higher-end estimate for the length of the Eemian, which of course is a length quoted by several authors, it is by no means the consensus on the length of the Eemian. There probably isn’t one, but the range would seem to be somewhere between 10-13kyrs with 16 being an outlier, but not the furthest outlier. I do not have the time to dig all of this up anytime soon, but there is still disagreement as to whether Termination II was a single step, or a two-step one like Termination I. From memory, it seems like evidence for a 2-step deglaciation into the Eemian seems more likely as higher resolution studies pile-up. From memory again, the 135kyr start of the Eemian tends to be associated with the single warming camp. The 2-step camp, from memory counts the period from 135kyrs to 125kyrs as consisting of two warming events with a duration for both similar to the last deglaciation. ~115kyrs ago is what I remember as being one of the more frequent conclusions as to when the Eemian ran down. So something on the order of 10-20kyrs, depending on who you quote and depending on whether the 10kyr deglaciation interval is included in the estimate.
I took a quick look in my Eemian folder and was rewarded with this 2008 paper http://journals.co-action.net/index.php/polar/article/download/6172/6851 Have a look at Figure 5 and you will catch my drift.
This is not about tit for tat, because even on things which have happened, the science is not particularly well-settled. Which makes consideration of the science being settled on something which has not happened yet a bit unsettling…. 🙂

William McClenney

My bad! I meant Figure 6 (dang keyboard)

gaelansclark

That is a fascinating argument!

William McClenney

It is, isn’t it?
And it took no time at all to realize I was decidedly not the only one who had come upon such an argument. This simply cannot be had both ways. AGW either can (and may already have) extended the Holocene, or it cannot. That's pretty much it.
The most thorough analysis is still Tzedakis 2010 landmark paper here http://www.clim-past.net/6/131/2010/cp-6-131-2010.pdf

JimS

If anthropogenic effect in global warming in the modern times is more than 1% in total, I would be impressed.

Mac the Knife

How could any blog generate 700+ argumentative comments on an article about the 97% consensus, with 110% attribution to humans, 'settled science' of man made global warming??? It seems highly improbable, unless the 'science' is ill supported. And the proponent of the '110% attribution' does not respond directly to the blog article on ClimateEtc, choosing to fire his blunderbuss from behind the self-censored revetments of RealClimate, a la Kim Jong Un? (There is a bit of a resemblance….)
Settled science doesn’t draw such spirited discussion. Unsettled science does, as does unsupported conjecture or willful deceit.

Rud Istvan

Outstanding observation. That Gavin felt compelled to rebut Judith is the big news. Downright unsettling…

William McClenney

Struth.

You only have to take a sample of weather stations to see what’s happening
http://blogs.24.com/henryp/2013/02/21/henrys-pool-tables-on-global-warmingcooling/

WRONG
the models are not tuned to the period this ONE PAPER reports for ONE model.
Even here you get it wrong
"Formulating and prioritizing our goals is challenging. To us, a global mean temperature in close absolute agreement with observations is of highest priority because it sets the stage for temperature-dependent processes to act. For this, we target the 1850-1880 observed global mean temperature of about 13.7°C [Brohan et al., 2006]."

mouruanh

Finally. But you’re wrong too. One model, one paper. And you left out the most important part.
Arguably, the most basic physical property that we expect global climate models to predict is how the global mean surface air temperature varies naturally, and responds to changes in atmospheric composition and solar insolation. We usually focus on temperature anomalies, rather than the absolute temperature that the models produce, and for many purposes this is sufficient.
Figure 1 instead shows the absolute temperature evolution from 1850 till present in realizations of the coupled climate models obtained from the CMIP3 and CMIP5 multimodel datasets. There is considerable coherence between the model realizations and the observations; models are generally able to reproduce the observed 20th century warming of about 0.7 K, and details such as the years of cooling following the volcanic eruptions.
Yet, the span between the coldest and the warmest model is almost 3 K, distributed equally far above and below the best observational estimates, while the majority of models are cold-biased. Although the inter-model span is only one percent relative to absolute zero, that argument fails to be reassuring. Relative to the 20th century warming the span is a factor four larger, while it is about the same as our best estimate of the climate response to a doubling of CO2, and about half the difference between the last glacial maximum and present.

http://curryja.files.wordpress.com/2013/10/figure.jpg
To parameterized processes that are non-linearly dependent on the absolute temperature it is a prerequisite that they be exposed to realistic temperatures for them to act as intended. Prime examples are processes involving phase transitions of water: Evaporation and precipitation depend non-linearly on temperature through the Clausius-Clapeyron relation, while snow, sea-ice, tundra and glacier melt are critical to freezing temperatures in certain regions. The models in CMIP3 were frequently criticized for not being able to capture the timing of the observed rapid Arctic sea-ice decline.
While unlikely the only reason, provided that sea ice melt occurs at a specific absolute temperature, this model ensemble behavior seems not too surprising when the majority of models do start out too cold.
In addition to targeting a TOA radiation balance and a global mean temperature, model tuning might strive to address additional objectives, such as a good representation of the atmospheric circulation, tropical variability or sea-ice seasonality. But in all these cases it is usually to be expected that improved performance arises not because uncertain or non-observable parameters match their intrinsic value – although this would clearly be desirable – rather that compensation among model errors is occurring. This raises the question as to whether tuning a model influences model-behavior, and places the burden on the model developers to articulate their tuning goals, as including quantities in model evaluation that were targeted by tuning is of little value. Evaluating models based on their ability to represent the TOA radiation balance usually reflects how closely the models were tuned to that particular target, rather than the models intrinsic qualities.
These issues motivate our present contribution where we both document and reflect on the model tuning that accompanied the preparation of a new version of our model system for participation in CMIP5. As decisions were made, often in the interest of expediency, a nagging question remained unanswered: To what extent did our results depend on the decisions we had just made?

Do you know the answer?
It is mainly Bob’s argument that models are tuned to the period of the late 20th century, so it’s up to him to respond to your point specifically.

So Mosher, why the hell have the models gone so wrong and off target? ;>(

James Macdonald

I haven't heard one word about the proper "scientific process". Models are designed and tuned to a particular set of past data using certain variables (the dependent sample). In this case the main variable is CO2 plus some water vapor feedbacks. To test the validity of the model, it is then applied to a new (independent) sample. If the projections don't fit the actual data, there is something wrong with the basic assumptions. This is clearly the case with climate models, which have thus been invalidated.

G. E. Pease

dbstealey says:
August 28, 2014 at 11:27 am
“…Global warming since the LIA is composed of natural step changes. Those steps are exactly the same — whether CO2 was low, or high. Therefore, there is no “fingerprint of AGW”. It is clearly shown here in über-Warmist Dr. Phil Jones’ chart:”
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
No, the steps are not exactly the same. There clearly was ~0.4 deg C less cooling from 1950 to 1975 than from 1885 to 1910. Also, the warming cycles from 1910-1940 and 1975-2009 are respectively 10 years and 14 years longer than the 20 year warming cycle from 1860-1880.
These trend differences could possibly be considered fingerprints of Anthropogenic Global Warming (AGW) if we didn't know that there was warming comparable to modern warming in the Roman Warm Period and the Medieval Warm Period. The warming trends back then were almost certainly not fingerprints of AGW.

Don B

There is an interesting piece by Andy Revkin at the NY Times (really!) on the connections between the oceans and atmospheric temperatures. For me, the take-home quote from a climate scientist was
“The underlying anthropogenic warming trend, even with the zero rate of warming during the current hiatus, is 0.08 C per decade.* [That’s 0.08 degrees Celsius, or 0.144 degrees Fahrenheit.] However, the flip side of this is that the anthropogenically forced trend is also 0.08 C per decade during the last two decades of the twentieth century when we backed out the positive contribution from the cycle….”
http://dotearth.blogs.nytimes.com/2014/08/26/a-closer-look-at-turbulent-oceans-and-greenhouse-heating/?_php=true&_type=blogs&smid=tw-share&_r=0
Warming of 0.8 C per century is not frightening.

joelobryan

CAGW is in a dying screaming death spiral.

Typhoon

The comment by Carl Wunsch gets to the heart of the matter.
For example, 0.08 C +/- 0.1 C is consistent with the null hypothesis of zero.
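A quick check of that point, assuming the +/- is a symmetric confidence half-width (that assumption is mine, not Wunsch's):

# The stated uncertainty straddles zero, so the trend cannot be
# distinguished from "no trend" under this assumption.
trend = 0.08        # deg C per decade
half_width = 0.10   # deg C per decade, assumed confidence half-width
low, high = trend - half_width, trend + half_width
print(f"interval: [{low:+.2f}, {high:+.2f}] deg C/decade")
print("includes zero:", low <= 0.0 <= high)            # True
print(f"central estimate per century: {10 * trend:.1f} deg C")  # the 0.8 C/century noted above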

Wouldn't 1945 be a better starting point, considering that is when man-made CO2 really started up, up and away and became the blade of a hockey stick?
http://sunshinehours.files.wordpress.com/2014/04/cdiac-co2.jpg

milodonharlani

Yes. But the warming since c. 1945 is no different from the warming in the early 20th century, & much less impressive than that in the early 18th century, among prior natural warming intervals.

joelobryan

I always figure it should be 1935. By 1938 industrial factories in Europe, Russia, and Japan were in high gear, burning coal and oil as fast as they could dig it out of the ground. The US joined in that industrial fray in 1940. By 1943 US industrial output, and thus energy use, was up almost 300% over 1939. There was a big, bad recession in 1946-1947 as factories retooled.

You are assuming that four valleys of extreme industrial concentration (Germany's Ruhr Valley, Pittsburgh, PA, the UK's London (Thames and surrounds) and California's LA basin) are typical of the rest of the world. Those four WERE extremely polluted, but are a very, very small part of the whole world. And, even around Pittsburgh, once you were a few miles from the steel mills and glass factories, the air cleaned up remarkably.
Further, three of the four cleaned up between 1945 and 1950. (LA got worse until the early '70s.) Pittsburgh was sandblasting downtown to clean buildings as early as 1947.

Note that we are globally cooling.
http://blogs.24.com/henryp/files/2013/02/henryspooltableNEWc.pdf
That might become a challenge?

William McClenney
BarryW

Dr. Curry made the point, and it's been mentioned many times: if CO2's effect is only noticeable post-1950, then where did the 1910-1940 rise come from? None of these so-called climate scientists have explained how one is natural and the other is man-made. Only the portion of the second rise that exceeds the first can logically be attributed to man.

You must be able to show me a certificate for a re-calibrated thermometer from before 1945, then.

pdtillman

@Bob Tisdale’s
“The climate models are not only out of phase with the long-term data, they are out of touch with reality.”
+10!
“It is difficult to get a man to understand something when his job depends on not understanding it.”
— Upton Sinclair

G.E. Pease,
The steps are exactly the same, when considering even microscopic error bars. Furthermore, there is no empirical evidence showing any ‘fingerprint of AGW’. None at all. There are no measurements of a fraction of a degree warming that could be directly attributable to human emissions. Thus, the default conclusion must be that all global warming is natural, unless shown to be otherwise. To show that would require verifiable measurements. But there are no such measurements.
It is like someone doing an overlay of CO2 and temperature, and saying, "Look! Rising CO2 causes rising temperature!" They do that all the time. But a temporary, coincidental correlation proves nothing. And that T/CO2 relationship broke down, both before and after a short period from about 1980 to 1997.
Global temperature has been rising at the same rate, as NikFromNYC shows above, for hundreds of years. There is no evidence at all that human CO2 emissions cause any warming. Any such AGW is mere speculation, and it would anyway be so minor that it can be completely disregarded.
The onus is on the alarmist crowd to support their CAGW conjecture. They have failed miserably, so now their tactic is to make baseless assertions as if they were fact. They aren’t. And without real world measurements, their conjecture fails.

david dohbro

I both agree and disagree with both Bob and Judith/Gavin, but on several different issues:
First: Why is the year 1950 chosen as when AGW supposedly started? That just makes no sense. Please look at the data: Take HadCRUT4 for example. It clearly shows several periods of increasing and decreasing temperatures, each of about 30-34 yrs long, making for a 60+ year cycle.
This can be easily and nicely shown with a MACD, which I’ve shown last year here:
http://wattsupwiththat.com/2013/10/01/if-climate-data-were-a-stock-now-would-be-the-time-to-sell/
Clearly the year 1950 falls within a 30+ year cooling trend that started in 1945 and ended in 1976. In other words: temperatures peaked in 1945 and bottomed in 1976. How then can there have been (AG) warming since 1950? That makes no sense. Cycle analyses will, thus, tell you when and where temperature trends change. One has to start "counting" from those trend changes. Otherwise you are mixing cyclical warming and cooling periods.
Second: This also means that 1976 is a more appropriate year to look for any AGW signal. However, as I've shown in my MACD article, the increase in GSTA during the latest warming cycle, 1976-2007, is 0.019°C/yr, whereas that of the previous 30-yr warming period, 1911-1945, was 0.014°C/yr. Hence, assuming all else equal (i.e. nature... very dangerous to do that in science, btw), the last period had a warming rate that was 0.005°C/yr (36%) higher than that of the previous warming period. So the maximum possible human influence is 36% IMHO. Note that a) the MACD analyses find the same years and warming rates as Bob presented, and b) since 2007 the temporal trend in GSTA is effectively 0.
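Here is a minimal Python sketch of that comparison, using only the rates quoted in the comment, plus a bare-bones MACD helper (fast EMA minus slow EMA) of the sort used in the linked article; no real HadCRUT4 data are embedded:

# Rate comparison from the comment above, plus a tiny MACD helper.
import numpy as np

rate_1911_1945 = 0.014   # deg C/yr, earlier warming cycle (quoted in the comment)
rate_1976_2007 = 0.019   # deg C/yr, later warming cycle (quoted in the comment)
excess = rate_1976_2007 - rate_1911_1945
print(f"excess rate: {excess:.3f} deg C/yr "
      f"({100*excess/rate_1911_1945:.0f}% of the earlier rate)")

def ema(x, span):
    """Exponential moving average with smoothing factor 2/(span+1)."""
    alpha = 2.0 / (span + 1.0)
    out = np.empty_like(x, dtype=float)
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1.0 - alpha) * out[i - 1]
    return out

def macd(series, fast=12, slow=26):
    """MACD line: fast EMA minus slow EMA; sign changes flag trend changes."""
    return ema(series, fast) - ema(series, slow)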

1950 makes “sense” because it makes the average rate of warming in the time frame smaller, so a larger part of the rate of warming can be attributed to AGW.

joelobryan

aka cherry picking.

Cherry-picking one way or the other. It may have been chosen for whatever reason originally and then found to be “convenient”. Speaking of fruit, it’s also apples and oranges. The time frame is from 1950 to “the present”, which is different for each successive IPCC report.

Richard M

Exactly correct.

Current data are not supporting AGW theory. In addition, past climate changes, before this idea of AGW came about, were many times greater in magnitude than the slight warming which occurred last century. Again, the data (in this case past data) do not support AGW.
Yet they insist.
For my money, I attribute all climate change to natural causes and 0% to human activity.

I noticed the difference between the start of the proclaimed "CO2 age" in 1950 and most of the graphs only going back to the late '70s, but I never thought about it skewing their models as well. Kudos for pointing out the obvious to me!

Lil Fella from OZ

If, as we have been told, there has been no warming for over 17 years, how can there be an argument about how much of the warming from 1950 to now can be attributed to man?

milodonharlani

No warming from 1950 to 1976, warming from 1977 to 1996, then no warming again from 1997 to 2014 & counting. That’s 20 years of warming (with some down years) vs. 44 (inclusive) years of no warming (or cooling), all the while CO2 has been rising monotonously. CACA was born falsified.

The probability density function for the fraction of warming attributable to human activity….
What is this? Counting the male-female ratio of angels on the head of a pin?
"Twice nothing is still nothing." – Cyrano Jones
Forget the ratio. It is an intractable measure of a religious concept — impossible to test.
What is the PDF for the absolute warming attributable to the growth in anthropogenic greenhouse gases?
What is the PDF for natural warming elements? For the past 6 decades? For the past 2 millennia?

Johanus

Well said.
In mathematics and logic it is easy to define entities that don't exist (e.g. "Let X be the set of entities that don't exist"). That is why mathematics usually requires existence theorems to prove that any such entities actually exist before trying to characterize them and deduce truths from them.
Where is the proof of existence for this pdf (without begging the question)?
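One plausible construction (my assumption of how such a PDF is usually built, not necessarily how Gavin's Figure 1 was made): treat the estimated anthropogenic warming and the observed warming as uncertain quantities and sample their ratio. All numbers below are illustrative assumptions, not AR5 values:

# Hypothetical Monte Carlo construction of an "attributable fraction" PDF.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
observed = rng.normal(0.65, 0.06, n)   # assumed observed warming, deg C
anthro = rng.normal(0.70, 0.15, n)     # assumed anthropogenic contribution, deg C
fraction = anthro / observed           # fraction of warming attributable to humans

print(f"median fraction: {np.median(fraction):.2f}")
print(f"P(fraction > 0.5): {np.mean(fraction > 0.5):.3f}")
# If the central anthropogenic estimate exceeds the observed change (i.e. the
# natural contribution is slightly negative), the peak sits above 100%.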

Robertvd

Honestly, I am astounded at the utter ignorance of the people involved in climate “science”. I have seen no decent theory backed up by experiment and evidence that CO2 has any net effect on the climate of the planet. In fact, I have seen published charts and graphs that suggest that CO2 has no or nearly no effect at all. And yet we have supposedly educated men and women claiming anthropogenic warming to a precise measure as if they knew Mother Nature’s contribution to the whole affair. Unbelievable ignorance, delusion, and arrogance.
Of course, since the government-funded temperature data sets are now so corrupted as to be useless, how can we look for real causes of climate change? I notice that even this site calls the best European blog of last year a dispenser of "way out there theories". Looks to me like the theories we have now are bunk and we need to be working on something else.

To paragraph 1, do not be astounded. Their academic or government careers, grant funding, and personal status all depend on it. To phrase it differently, climate science increasingly resembles the world's oldest profession.
To paragraph 2, part one: that is going to be an Achilles' heel.
To paragraph 2, part two: truth is not always found in popularity contests, despite the supposed 'wisdom of crowds'. Were it so, then tulip bulbs would be more valuable than gold and present shareholders in the South Sea Company would be richer than Bill Gates. (h/t, IIRC, Mackay's famous old book on the madness of crowds.)

David Archibald

Why would anyone take any notice of Judith Curry? She is not a dispassionate seeker after truth. In this interview she refers to the "Koch-funded climate denial machine": http://oilprice.com/Interviews/The-Kardashians-and-Climate-Change-Interview-with-Judith-Curry.html

Bart

It looks like she is merely pointing out the strategy of blaming the Kochs, and that it isn’t working.

mouruanh

She’s describing ‘the climate science communication paradigm’ and why it fails, not her position in the debate.
This strategy hasn’t worked for a lot of reasons. The chief one that concerns me as a scientist is that strident advocacy and alarmism is causing the public to lose trust in scientists.
It’s quite clear when you read the full interview.

David
She is making reference to the paradigm, not her position
tonyb

Tom T

You think he doesn't know that? He is bomb-throwing; he simply doesn't care.

pete

That PDF is the single worst thing I have seen in climate science. There is just no way you can create such an attribution given the unknowns.
At least the hockey stick had some basis to it…

SAMURAI

The most damning aspect of Gavin's argument that the cherry-picked 1976~2005 warming period is almost entirely attributable to CO2 forcing is that its warming trend is similar to that of the 1921~1943 warming period (0.14 and 0.19 deg C/decade for 1921~1943 and 1976~2005, respectively), and the 1921~43 warming trend can't possibly be attributable to CO2, because even the IPCC admits CO2 levels were too low in the first half of the 20th century to have caused much warming.
What these two warming periods do have in common is that the PDO was in its 30-yr warm cycle during both.
http://www.woodfortrees.org/plot/hadcrut4gl/from:1850/to:1880/plot/hadcrut4gl/from:1850/to:1880/trend/plot/hadcrut4gl/from:1880/to:1921/plot/hadcrut4gl/from:1880/to:1921/trend/plot/hadcrut4gl/from:1921/to:1943/plot/hadcrut4gl/from:1921/to:1943/trend/plot/hadcrut4gl/from:1943/to:1977/plot/hadcrut4gl/from:1943/to:1977/trend/plot/hadcrut4gl/from:1977/to:2005/plot/hadcrut4gl/from:1977/to:2005/trend/plot/hadcrut4gl/from:2005/plot/hadcrut4gl/from:2005/trend
The PDO entered its 30-yr cool cycle in 2005, and that’s precisely when global temp trends started falling again, despite record amounts of CO2 emissions.
Earth's warming and cooling cycles have followed PDO warming/cooling cycles almost perfectly for the past 164 years. Accordingly, it's illogical to assume CO2 is the primary driving force behind global warming since 1950, because from 1950~1976 global temps were falling (PDO cool cycle in effect), and when a global warming trend started again in 1976, it coincided with the PDO entering its 30-yr warm cycle.
The empirical evidence suggests that for the next 20 years, global temp trends should continue to fall, which will be the death knell for CAGW.
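For anyone who wants to reproduce those segment trends rather than eyeball the woodfortrees chart, here is a small Python helper; the interval endpoints come from the link above, but the HadCRUT4 series itself has to be supplied by the reader (none is embedded here):

# Least-squares trend per interval; caller supplies annual years/anomalies.
import numpy as np

INTERVALS = [(1850, 1880), (1880, 1921), (1921, 1943),
             (1943, 1977), (1977, 2005), (2005, 2014)]

def decadal_trend(years, anoms, start, end):
    """Least-squares trend in deg C/decade over [start, end] inclusive."""
    years = np.asarray(years)
    anoms = np.asarray(anoms, dtype=float)
    mask = (years >= start) & (years <= end)
    slope_per_year = np.polyfit(years[mask], anoms[mask], 1)[0]
    return 10.0 * slope_per_year

def report(years, anoms):
    for start, end in INTERVALS:
        print(f"{start}-{end}: {decadal_trend(years, anoms, start, end):+.3f} deg C/decade")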

joelobryan

It will happen within the first 5 years as temps fall. AGW as a science hypothesis just becomes untenable in that scenario.

GeneDoc

Is it just too obvious that the oceans act as an enormous heat sink that moderates atmospheric temperature? When the heat content of the entire atmosphere is about the same as that of the top 10 meters of the ocean, and when there are 321 million cubic miles of ocean, most of which is at or below 4 C, how is it surprising that there is incredible buffering capacity for temperature changes?
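A back-of-envelope check of that buffering argument, using rounded textbook values (these are my assumptions, not numbers from the thread); with these values the whole atmosphere's heat capacity is matched by only the top few meters of ocean, which if anything strengthens the point:

# Rough comparison of atmospheric and upper-ocean heat capacities.
ATM_MASS = 5.1e18          # kg, total mass of the atmosphere (rounded)
CP_AIR = 1004.0            # J/(kg K), specific heat of air at constant pressure
OCEAN_AREA = 3.6e14        # m^2, ocean surface area (rounded)
RHO_SEAWATER = 1025.0      # kg/m^3
CP_SEAWATER = 3990.0       # J/(kg K)

atm_heat_capacity = ATM_MASS * CP_AIR                       # J/K
ocean_per_metre = OCEAN_AREA * RHO_SEAWATER * CP_SEAWATER   # J/K per metre of depth

equivalent_depth = atm_heat_capacity / ocean_per_metre
print(f"atmosphere:        {atm_heat_capacity:.2e} J/K")
print(f"top 10 m of ocean: {10.0 * ocean_per_metre:.2e} J/K")
print(f"ocean depth with the same heat capacity as the atmosphere: {equivalent_depth:.1f} m")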

joelobryan

Bingo!! Ding ding ding ding!! Flashing lights. Winner!!!
The oceans control the thermostat, as they have for a billion years, ever since our sun matured. The stupid thought that man's fossil-fuel CO2 is the thermostat regulator is total BS.

Thanks, Bob. Very good information about how GCMs are trained; the models are specialists.
Why would the climate modelers base their projections of global warming on the trends of a cherry-picked period with a high warming rate?
To better scare the money out of our pockets, of course.
But, it seems to have come back to haunt them.