By Christopher Monckton of Brenchley
As Anthony and others have pointed out, even the New York Times has at last been constrained to admit what Dr. Pachauri of the IPCC conceded some months ago: there has been no global warming statistically distinguishable from zero for getting on for two decades.
The NYT says the absence of warming arises because skeptics cherry-pick 1998, the year of the Great El Niño, as their starting point. However, as Anthony explained yesterday, the stasis goes back farther than that. He says we shall soon be approaching Dr. Ben Santer’s 17-year test: if there is no warming for 17 years, the models are wrong.
Usefully, the latest version of the Hadley Centre/Climatic Research Unit monthly global mean surface temperature anomaly series provides not only the anomalies themselves but also the 2σ uncertainties.
Superimposing the temperature curve and its least-squares linear-regression trend on the statistical insignificance region bounded by the means of the trends on these published uncertainties since January 1996 demonstrates that there has been no statistically-significant warming in 17 years 4 months:
On Dr. Santer’s 17-year test, then, the models may have failed. A rethink is needed.
The fact that an apparent warming rate equivalent to almost 0.9 Cº/century is statistically insignificant may seem surprising at first sight, but there are two reasons for it. First, the published uncertainties are substantial: approximately 0.15 Cº either side of the central estimate.
Secondly, one weakness of linear regression is that it is unduly influenced by outliers. Visibly, the Great El Niño of 1998 is one such outlier.
If 1998 were the only outlier, and particularly if it were the largest, going back to 1996 would be much the same as cherry-picking 1998 itself as the start date.
However, the magnitude of the 1998 positive outlier is countervailed by that of the 1996/7 La Niña. Also, there is a still more substantial positive outlier in the shape of the 2007 El Niño, against which the La Niña of 2008 countervails.
In passing, note that the cooling from January 2007 to January 2008 is the fastest January-to-January cooling in the HadCRUT4 record going back to 1850.
Bearing these considerations in mind, going back to January 1996 is a fair test for statistical significance. And, as the graph shows, there has been no warming that we can statistically distinguish from zero throughout that period, for even the rightmost endpoint of the regression trend-line falls (albeit barely) within the region of statistical insignificance.
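For readers who want to reproduce this kind of significance test themselves, here is a minimal sketch in Python. It assumes the HadCRUT4 monthly anomalies have been saved to a local two-column CSV (decimal year, anomaly) under the hypothetical name hadcrut4_monthly.csv, and it uses the plain ordinary-least-squares standard error of the slope; a full treatment would also allow for autocorrelation in the residuals, so treat it as illustrative only.

```python
# Sketch: fit an OLS trend to monthly anomalies from January 1996 onward and
# ask whether the 2-sigma interval on the slope includes zero.
# Assumes a hypothetical local file "hadcrut4_monthly.csv": decimal year, anomaly.
import numpy as np

yr, anom = np.loadtxt("hadcrut4_monthly.csv", delimiter=",", unpack=True)
mask = yr >= 1996.0
t, y = yr[mask], anom[mask]

slope, intercept = np.polyfit(t, y, 1)          # deg C per year
resid = y - (slope * t + intercept)
se_slope = np.sqrt(np.sum(resid**2) / (len(t) - 2) / np.sum((t - t.mean())**2))

lo, hi = slope - 2 * se_slope, slope + 2 * se_slope
print(f"trend: {slope*100:+.2f} C/century, 2-sigma range [{lo*100:+.2f}, {hi*100:+.2f}]")
print("indistinguishable from zero" if lo < 0 < hi else "significant at roughly 2 sigma")
```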
Be that as it may, one should beware of focusing the debate solely on how many years and months have passed without significant global warming. Another strong el Niño could – at least temporarily – bring the long period without warming to an end. If so, the cry-babies will screech that catastrophic global warming has resumed, the models were right all along, etc., etc.
It is better to focus on the ever-widening discrepancy between predicted and observed warming rates. The IPCC’s forthcoming Fifth Assessment Report backcasts the interval of 34 models’ global warming projections to 2005, since when the world should have been warming at a rate equivalent to 2.33 Cº/century. Instead, it has been cooling at a rate equivalent to a statistically-insignificant 0.87 Cº/century:
The variance between prediction and observation over the 100 months from January 2005 to April 2013 is thus equivalent to 3.2 Cº/century.
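As a quick check on the arithmetic, the quoted centennial-equivalent rates combine as follows (a trivial sketch using only the figures given above):

```python
# Reproducing the 3.2 C/century figure from the two rates quoted above.
predicted = 2.33    # deg C/century equivalent, AR5 backcast from 2005
observed = -0.87    # deg C/century equivalent, statistically insignificant cooling

gap = predicted - observed
print(f"prediction minus observation: {gap:.1f} C/century")           # about 3.2
print(f"equivalent gap over the 100-month window: {gap * 100 / 1200:.2f} C")
```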
The correlation coefficient is low, the period of record is short, and I have not yet obtained the monthly projected-anomaly data from the modelers to allow a proper p-value comparison.
Yet it is becoming difficult to suggest with a straight face that the models’ projections are healthily on track.
From now on, I propose to publish a monthly index of the variance between the IPCC’s predicted global warming and the thermometers’ measurements. That variance may well inexorably widen over time.
In any event, the index will limit the scope for false claims that the world continues to warm at an unprecedented and dangerous rate.
UPDATE: Lucia’s Blackboard has a detailed essay analyzing the recent trend, written by SteveF, using an improved index for accounting for ENSO, volcanic aerosols, and solar cycles. He concludes the best estimate rate of warming from 1997 to 2012 is less than 1/3 the rate of warming from 1979 to 1996. Also, the original version of this story incorrectly referred to the Washington Post, when it was actually the New York Times article by Justin Gillis. That reference has been corrected. – Anthony
Related articles
- The warming ‘plateau’ may extend back even further (wattsupwiththat.com)
- Are We in a Pause or a Decline? (Now Includes at Least April* Data) (wattsupwiththat.com)
- The Met Drops Its Basis For Claim Of “Significant” Warming (papundits.wordpress.com)
- Benchmarking IPCC’s warming predictions (wattsupwiththat.com)
- WUWT: 150 million hits and counting (wattsupwiththat.com)
The one time a “Cherry-picking” accusation fails is when you use the present day as an anchor & look back into the past.
The observed temperature differential just doesn’t meet any definition of “catastrophic,” “runaway,” “emergency,” “critical,” or any synonym you can pull out of the (unwarming) air to justify the multitude of draconian measures ALREADY IN PLACE that curtail world economies or subsidize failing alternative energy attempts!!!
I like to use RSS because it is not contaminated with UHI, extrapolation and infilling. As indicated above, the trend has been perfectly flat for 16.5 years (since Dec. 1996). At some point in the near future (given the current cooling, that could be later this year) the starting point could move back to the start of 1995. That would mean around 19 years with a zero trend.
I like to use the following graph because it demonstrates a change from the warming regime of the PDO to the cooling regime. It also shows how you could have many of the warmest years despite the lack of warming over the entire interval.
http://www.woodfortrees.org/plot/rss/from:1996.9/to/plot/rss/from:1996.9/to/trend/plot/rss/from:1996.9/to:2005/trend/plot/rss/from:2005/to/trend
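For anyone who prefers to check the woodfortrees plot offline, a rough equivalent can be computed as below; it assumes the RSS TLT monthly anomalies have been exported to a hypothetical local file rss_tlt_monthly.txt with two columns (decimal year, anomaly).

```python
# Rough offline equivalent of the woodfortrees plot linked above: the overall
# trend since late 1996 and the sub-trends before and after 2005.
import numpy as np

yr, anom = np.loadtxt("rss_tlt_monthly.txt", unpack=True)

def trend_per_decade(t0, t1):
    m = (yr >= t0) & (yr < t1)
    return 10.0 * np.polyfit(yr[m], anom[m], 1)[0]   # deg C per decade

latest = yr.max() + 0.01
print("1996.9 to now :", round(trend_per_decade(1996.9, latest), 3), "C/decade")
print("1996.9 to 2005:", round(trend_per_decade(1996.9, 2005.0), 3), "C/decade")
print("2005 to now   :", round(trend_per_decade(2005.0, latest), 3), "C/decade")
```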
How long before the warmists make 1998 go away like they did with the MWP? Funny how 1998 was the shot across the bow warning when it was on the right side of the graph but an inconvenient truth on the left.
M Courtney says:
June 13, 2013 at 5:25 am
“It is a shame that a lively, left-wing forum has decided to commit suicide by out-sourcing moderation to alleged scientists who can’t defend their position.”
Guardian, Spiegel and NYT are the modern versions of the Pravda for the West. I read them to know what the 5 minute hate of the day is.
Looks like Lucia’s website is overloaded. I can get through on the main page but I can’t open SteveF’s post without getting an error message. I tried to leave him the following comment:
SteveF: As far as I can tell, your model assumes a linear relationship between your ENSO index and global surface temperatures.
Trenberth et al (2002)…
http://www.cgd.ucar.edu/cas/papers/2000JD000298.pdf
…cautioned against this. They wrote, “Although it is possible to use regression to eliminate the linear portion of the global mean temperature signal associated with ENSO, the processes that contribute regionally to the global mean differ considerably, and the linear approach likely leaves an ENSO residual.”
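For what it is worth, the “linear portion” removal that Trenberth et al caution about amounts to something like the sketch below: regress global anomalies on a lagged ENSO index and subtract the fit. The file names, column layout and the three-month lag are all assumptions for illustration, not anyone’s published method.

```python
# Illustrative sketch of linearly regressing out ENSO, the approach the
# Trenberth et al quotation warns leaves a residual. File names and the
# 3-month lag are assumptions, not anyone's published method.
import numpy as np

yr, gmst = np.loadtxt("gmst_monthly.txt", unpack=True)      # global mean anomalies
_, nino34 = np.loadtxt("nino34_monthly.txt", unpack=True)   # ENSO index, same months

lag = 3                       # assume ENSO leads global temperature by ~3 months
x, y, t = nino34[:-lag], gmst[lag:], yr[lag:]

beta, alpha = np.polyfit(x, y, 1)        # linear "ENSO contribution"
residual = y - (beta * x + alpha)

# If ENSO really were just symmetric noise, removing the fit would barely change
# the trend; any difference hints at the kind of residual the quotation mentions.
print("raw trend    :", round(np.polyfit(t, y, 1)[0] * 10, 4), "C/decade")
print("ENSO-removed :", round(np.polyfit(t, residual, 1)[0] * 10, 4), "C/decade")
```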
Compo and Sardeshmukh (2010)…
http://journals.ametsoc.org/doi/abs/10.1175/2009JCLI2735.1?journalCode=clim
…note that it should not be treated as noise that can be removed. Their abstract begins: “An important question in assessing twentieth-century climate change is to what extent have ENSO-related variations contributed to the observed trends. Isolating such contributions is challenging for several reasons, including ambiguities arising from how ENSO itself is defined. In particular, defining ENSO in terms of a single index and ENSO-related variations in terms of regressions on that index, as done in many previous studies, can lead to wrong conclusions. This paper argues that ENSO is best viewed not as a number but as an evolving dynamical process for this purpose…”
I’ve been illustrating and discussing for a couple of years that the sea surface temperatures of the East Pacific (90S-90N, 180-80W) show that it is the only portion of the global oceans that responds linearly to ENSO, but that the sea surface temperatures there haven’t warmed in 31 years:
http://oi47.tinypic.com/hv8lcx.jpg
On the other hand, the sea surface temperature anomalies of the Atlantic, Indian and West Pacific (90S-90N, 80W-180) warm in El Niño-induced steps (the result of leftover warm water from the El Niños) that cannot be accounted for with your model:
http://oi49.tinypic.com/29le06e.jpg
A more detailed, but introductory level, explanation of the processes that cause those shifts can be found here [42MB .pdf]:
http://bobtisdale.files.wordpress.com/2013/01/the-manmade-global-warming-challenge.pdf
And what fuels the El Niños? Sunlight. Even Trenberth et al (2002), linked above, acknowledge that fact. They write, “The negative feedback between SST and surface fluxes can be interpreted as showing the importance of the discharge of heat during El Niño events and of the recharge of heat during La Niña events. Relatively clear skies in the central and eastern tropical Pacific allow solar radiation to enter the ocean, apparently offsetting the below normal SSTs, but the heat is carried away by Ekman drift, ocean currents, and adjustments through ocean Rossby and Kelvin waves, and the heat is stored in the western Pacific tropics. This is not simply a rearrangement of the ocean heat, but also a restoration of heat in the ocean.”
In other words, ENSO acts as a chaotic recharge-discharge oscillator, where the discharge events (El Niños) are occasionally capable of raising global temperatures, where they remain relatively stable for periods of a decade or longer.
In summary, you’re treating ENSO as noise, while data indicate that it is responsible for much of the warming over the past 30 years.
Regards
I wonder if ACGW advocates feel a little like advocates of the Iraq invasion felt when no WMDs were discovered? Just a random thought.
I got through. There must’ve been a temporary mad rush to Lucia’s Blackboard for a few minutes.
Rather off-topic, but there are 4 questions that I would like the answer to:
1. We are told the concentration of CO2 in the atmosphere is 0.039%, but what is the concentration of CO2 at different heights above the earth’s surface? As CO2 is ‘heavier than air’ one would expect it to be at higher percentages near the earth’s surface.
2. Do the CO2 molecules rise as they absorb heat during the day from the sun? And how far?
3. Do the CO2 molecules fall at night when they no longer get any heat input from the sun?
4. When a CO2 molecule is heated, does it re-radiate equally in all directions, assuming the surroundings are cooler, or does it radiate heat in proportion to the difference in temperature in any particular direction?
Any comments gratefully received.
Lucia’s site not currently available…
Human beings caused the largest extinction rate in the planet’s history (the Pleistocene extinctions). Those extinctions came at a different time and at a different rate than the climate changes, and it is clear that wild climate swings over (relatively) short periods of time did pretty much nothing to the earth’s species on any significant scale. It is exactly the same now. We are still causing extinctions at a record rate, simply by being here, not by “altering” the climate; and even if we did (or are) altering the climate, then this effect on the planet is insignificant next to the simple fact that we are just “here”. So-called “climate scientists” are often no such thing: they do not understand the basics of pre-historic climate change and the parameters involved, and they completely ignore the most important evidence. Large animals in Africa alone survived the Pleistocene extinctions simply by having evolved alongside humans; as soon as humans left Africa at a very fast rate, they pretty much wiped out the megafauna everywhere else. It is this pattern of human behaviour that is statistically significant, not fractions of a degree Celsius. I wish alarmists would actually study a bit more!
I suppose we could always wait until 2018, by which time the world will be bankrupt and it won’t matter. Alternatively we could start applying the precautionary principle the other way round. How about: a clear lack of correlation between hypothesis and reality should preclude precipitate action beyond that which is prudent and can be shown to have a benefit.
First of all, skeptics didn’t pick 1998, the NOAA did in the 2008 State of the Climate report.
That report says, “Near-zero and even negative trends are common for intervals of a decade or less in the simulations, due to the model’s internal climate variability. The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”
It does not say “The simulations rule out (at the 95% level) zero trends for intervals of 15 years or more, except intervals starting in 1998…”
Second, I don’t know why anyone is bending over backwards to try to find statistical significance (or lack thereof) in a goalpost changing 17 year trend when we already have an unambiguous test for the models straight from the NOAA. Why bother with ever changing warmist arguments? Just throw the above at them and let them argue with the NOAA over it.
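The NOAA criterion quoted above is easy to check against any observational series: scan every 15-year (180-month) window for zero or negative least-squares trends. A minimal sketch, assuming the monthly anomalies sit in a hypothetical local CSV (decimal year, anomaly):

```python
# Sketch of the NOAA 2008 test quoted above: count 15-year (180-month) windows
# with zero or negative least-squares trends in an observational series.
# Assumes a hypothetical local CSV of monthly anomalies: decimal year, anomaly.
import numpy as np

yr, anom = np.loadtxt("monthly_anomalies.csv", delimiter=",", unpack=True)

window = 180
flat_or_cooling = []
for start in range(len(yr) - window + 1):
    sl = slice(start, start + window)
    slope = np.polyfit(yr[sl], anom[sl], 1)[0]
    if slope <= 0:
        flat_or_cooling.append((yr[start], slope * 100))     # start year, C/century

total = len(yr) - window + 1
print(f"{len(flat_or_cooling)} of {total} 15-year windows show a zero or negative trend")
for start_year, rate in flat_or_cooling[-5:]:
    print(f"  window starting {start_year:.2f}: {rate:+.2f} C/century")
```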
The problem is that models of catastrophic climate change are being used by futurists and tech companies and rent seekers generally to argue that our written constitutions need to be jettisoned and new governance structures created that rely more on Big Data and supercomputers, to deal with the global warming crisis. I wish I were making this up, but I wrote today about the political and social economy, and about using education globally to get there, based primarily on Marina Gorbis’ April 2013 book The Nature of The Future and Willis Harman’s 1988 Global Mind Change.
You can’t let actual temps get in the way of such a transformation. Do you have any idea how many well-connected people have decided we are all on the menu? Existing merely to finance their future plans and to do as we are told.
This analysis of the UAH data (and the implied future that it provides) says that the short term (< 60 years, anyway) may all be cyclic – not a linear trend of any form during that period.
http://s1291.photobucket.com/user/RichardLH/media/uahtrendsinflectionfuture_zps7451ccf9.png.html
That could turn in time into a 'Short Term Climate Predictor' 🙂
The Guardian is left-wing. That won’t be popular with people who aren’t.
But it wasn’t dumbed down. It wasn’t anti-democratic. It wasn’t just hate.
The Guardian was part of the civil society in which the political awareness that a democracy needs develops.
So was the Telegraph from the other side.
But the Guardian has abandoned debate. That is the death of the Guardian. A loss which will be a weakening of the UK’s and the entire West’s political life.
Interesting, and by the way:
On March 13, WUWT announced that Climategate 3.0 had occurred.
What happened to it?
Everybody just ignoring it ever happened?
Because of the thermal inertia of the oceans, and the fact that we should really be measuring the enthalpy of the system, the best metric for temperature is the SST data, which varies much more closely with enthalpy than land temperatures. The NOAA data at ftp://ftp.ncdc.noaa.gov/pub/data/anomalies/annual.ocean.90S.90N.df_1901-2000mean. show no net warming since 1997, and also show that the warming trend peaked in about 2003 and that the earth has been in a slight cooling trend since then. This trend will likely steepen and last for at least 20 years, and perhaps for hundreds of years beyond that if, as seems likely, the warming peak represents a peak in both the 60- and 1000-year solar cycles.
For a discussion and detailed forecast see
http://climatesense-norpag.blogspot.com/2013/04/global-cooling-methods-and-testable.html
StephenP: The CO2 concentration is essentially constant throughout the atmosphere. Winds ensure that the atmosphere is stirred enough that the small density difference doesn’t matter. Nor does absorption or emission of photons cause the molecules to move up or down. CO2 molecules radiate equally in all directions.
Scott, “It is meaningless to say that there is warming, just not statistically significant warming. Someone who says that does not know what statistical significance is.”
I’d say that, on the contrary, anyone who thinks a measured trend that is larger than zero but does not quite reach statistical significance is the same as no trend doesn’t know enough about statistics. Compare these three measurements: 0.9 ± 1, 0 ± 1 and -0.9 ± 1. None of them is statistically different from zero, but the first one allows values as high as 1.9 while the last one allows values as low as -1.9.
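The three cases above can be written out numerically; treating the ±1 as a 2σ half-width (purely illustrative numbers), the point is that all three get the same “not significant” verdict while allowing very different ranges of trend:

```python
# The three cases above, each read as an estimate with a +-1 (2-sigma) half-width.
# Same "not significant" verdict, very different ranges of compatible values.
cases = {"warming estimate": 0.9, "flat estimate": 0.0, "cooling estimate": -0.9}
half_width = 1.0

for name, est in cases.items():
    lo, hi = est - half_width, est + half_width
    verdict = "not significant" if lo < 0.0 < hi else "significant"
    print(f"{name:>17}: {est:+.1f} +/- {half_width}  ->  [{lo:+.1f}, {hi:+.1f}]  ({verdict})")
```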
Thomas says:
June 13, 2013 at 6:56 am
“I’d say that, on the contrary, anyone who thinks a measured trend that is larger than zero but does not quite reach statistical significance is the same as no trend doesn’t know enough about statistics.”
And without sufficient knowledge of what the future actually provides (or an accurate model :-)), drawing any conclusions based on which end of any distribution the values currently lie at is just a glorified guess.
If you were to draw conclusions about the consistency with which the data has moved towards a limit, you would have a better statistical idea of what the data is really saying.
Saying that we need to wait for a certain interval in order to conclude that “the models are wrong” is dangerous and incorrect for two reasons. First — and this is a point that is stunningly ignored — there are a lot of different models out there, all supposedly built on top of physics, and yet no two of them give anywhere near the same results!
This is reflected in the graphs Monckton publishes above, where the AR5 trend line is the average over all of these models and in spite of the number of contributors the variance of the models is huge. It is also clearly evident if one publishes a “spaghetti graph” of the individual model projections (as Roy Spencer recently did in another thread) — it looks like the frayed end of a rope, not like a coherent spread around some physics supported result.
Note the implicit swindle in this graph — by forming a mean and standard deviation over model projections and then using the mean as a “most likely” projection and the variance as representative of the range of the error, one is treating the differences between the models as if they are uncorrelated random variates causing deviation around a true mean!
Say what?
This is such a horrendous abuse of statistics that it is difficult to know how to begin to address it. One simply wishes to bitch-slap whoever it was that assembled the graph and ensure that they never work or publish in the field of science or statistics ever again. One cannot generate an ensemble of independent and identically distributed models that have different code. One might, possibly, generate a single model that generates an ensemble of predictions by using uniform deviates (random numbers) to seed “noise” (representing uncertainty) in the inputs.
What I’m trying to say is that the variance and mean of the “ensemble” of models are completely meaningless, statistically, because the inputs do not possess the most basic properties required for a meaningful interpretation. They are not independent, their differences are not based on a random distribution of errors, and there is no reason whatsoever to believe that the errors or differences are unbiased (given that the only way humans can generate unbiased anything is through the use of e.g. dice or other objectively random instruments).
So why buy into this nonsense by doing linear fits to a function — global temperature — that has never in its entire history been linear, although of course it has always been approximately smooth so one can always do a Taylor series expansion in some sufficiently small interval and get a linear term that — by the nature of Taylor series fits to nonlinear functions — is guaranteed to fail if extrapolated as higher order nonlinear terms kick in and ultimately dominate? Why even pay lip service to the notion that R² or p for a linear fit, or for a Kolmogorov-Smirnov comparison of the real temperature record and the extrapolated model prediction, has some meaning? It has none.
Let me repeat this. It has no meaning! It is indefensible within the theory and practice of statistical analysis. You might as well use a ouija board as the basis of claims about the future climate history as the ensemble average of different computational physical models that do not differ by truly random variations and are subject to all sorts of omitted variable, selected variable, implementation, and initialization bias. The board might give you the right answer, might not, but good luck justifying the answer it gives on some sort of rational basis.
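The statistical point can be made concrete with a toy example: average a handful of deterministic “models” whose differences are systematic biases rather than independent random errors, and the ensemble mean and sigma simply describe which models happened to be included. All numbers below are invented for illustration.

```python
# Toy illustration: deterministic "models" that differ by systematic biases.
# The ensemble mean and sigma then summarize the arbitrary choice of models,
# not any physical uncertainty. All numbers are invented.
import numpy as np

biased_trends = np.array([0.030, 0.026, 0.022, 0.018, 0.014, 0.010])   # C/yr per "model"

def ensemble_summary(trends):
    return trends.mean(), trends.std(ddof=1)

full_mean, full_sigma = ensemble_summary(biased_trends)
warm_mean, warm_sigma = ensemble_summary(biased_trends[:3])    # drop the cooler half

print(f"all six models : {full_mean:.3f} +/- {full_sigma:.3f} C/yr")
print(f"warm half only : {warm_mean:.3f} +/- {warm_sigma:.3f} C/yr")
# Each summary looks internally tidy, yet they disagree, because nothing forces
# the model-to-model differences to behave like IID noise around the truth.
```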
Let’s invert this process and actually apply statistical analysis to the distribution of model results Re: the claim that they all correctly implement well-known physics. For example, if I attempt to do an a priori computation of the quantum structure of, say, a carbon atom, I might begin by solving a single electron model, treating the electron-electron interaction using the probability distribution from the single electron model to generate a spherically symmetric “density” of electrons around the nucleus, and then performing a self-consistent field theory iteration (resolving the single electron model for the new potential) until it converges. (This is known as the Hartree approximation.)
Somebody else could say “Wait, this ignores the Pauli exclusion principle” and the requirement that the electron wavefunction be fully antisymmetric. One could then make the (still single electron) model more complicated and construct a Slater determinant to use as a fully antisymmetric representation of the electron wavefunctions, generate the density, and perform the self-consistent field computation to convergence. (This is Hartree-Fock.)
A third party could then note that this still underestimates what is called the “correlation energy” of the system, because treating the electron cloud as a continuous distribution through which electrons move ignores the fact that individual electrons strongly repel and hence do not like to get near one another. Both of the former approaches underestimate the size of the electron hole, and hence they make the atom “too small” and “too tightly bound”. A variety of schemes are proposed to overcome this problem — using a semi-empirical local density functional being probably the most successful.
A fourth party might then observe that the Universe is really relativistic, and that by ignoring relativity theory and doing a classical computation we introduce an error into all of the above (although it might be included in the semi-empirical LDF approach heuristically).
In the end, one might well have an “ensemble” of models, all of which are based on physics. In fact, the differences are also based on physics — the physics omitted from one try to another, or the means used to approximate and try to include physics we cannot include in a first-principles computation (note how I sneaked a semi-empirical note in with the LDF, although one can derive some density functionals from first principles (e.g. Thomas-Fermi approximation), they usually don’t do particularly well because they aren’t valid across the full range of densities observed in actual atoms). Note well, doing the precise computation is not an option. We cannot solve the many body atomic state problem in quantum theory exactly any more than we can solve the many body problem exactly in classical theory or the set of open, nonlinear, coupled, damped, driven chaotic Navier-Stokes equations in a non-inertial reference frame that represent the climate system.
Note well that solving for the exact, fully correlated nonlinear many electron wavefunction of the humble carbon atom — or the far more complex Uranium atom — is trivially simple (in computational terms) compared to the climate problem. We can’t compute either one, but we can come a damn sight closer to consistently approximating the solution to the former compared to the latter.
So, should we take the mean of the ensemble of “physics based” models for the quantum electronic structure of atomic carbon and treat it as the best prediction of carbon’s quantum structure? Only if we are very stupid or insane or want to sell something. If you read what I said carefully (and you may not have — eyes tend to glaze over when one reviews a year or so of graduate quantum theory applied to electronics in a few paragraphs, even though I left out perturbation theory, Feynman diagrams, and ever so much more :-) you will note that I cheated — I ran in a semi-empirical method.
Which of these is going to be the winner? LDF, of course. Why? Because the parameters are adjusted to give the best fit to the actual empirical spectrum of Carbon. All of the others are going to underestimate the correlation hole, and their errors will be systematically deviant from the correct spectrum. Their mean will be systematically deviant, and by weighting Hartree (the dumbest reasonable “physics based approach”) the same as LDF in the “ensemble” average, you guarantee that the error in this “mean” will be significant.
Suppose one did not know (as, at one time, we did not know) which of the models gave the best result. Suppose that nobody had actually measured the spectrum of Carbon, so its empirical quantum structure was unknown. Would the ensemble mean be reasonable then? Of course not. I presented the models in the way physics itself predicts improvement — adding back details that ought to be important that are omitted in Hartree. One cannot be certain that adding back these details will actually improve things, by the way, because it is always possible that the corrections are not monotonic (and eventually, at higher orders in perturbation theory, they most certainly are not!) Still, nobody would pretend that the average of a theory with an improved theory is “likely” to be better than the improved theory itself, because that would make no sense. Nor would anyone claim that diagrammatic perturbation theory results (for which there is a clear a priori derived justification) are necessarily going to beat semi-heuristic methods like LDF because in fact they often do not.
What one would do in the real world is measure the spectrum of Carbon, compare it to the predictions of the models, and then hand out the ribbons to the winners! Not the other way around. And since none of the winners is going to be exact — indeed, for decades and decades of work, none of the winners was even particularly close to observed/measured spectra in spite of using supercomputers (admittedly, supercomputers that were slower than your cell phone is today) to do the computations — one would then return to the drawing board and code entry console to try to do better.
Can we apply this sort of thoughtful reasoning the spaghetti snarl of GCMs and their highly divergent results? You bet we can! First of all, we could stop pretending that “ensemble” mean and variance have any meaning whatsoever by not computing them. Why compute a number that has no meaning? Second, we could take the actual climate record from some “epoch starting point” — one that does not matter in the long run, and we’ll have to continue the comparison for the long run because in any short run from any starting point noise of a variety of sorts will obscure systematic errors — and we can just compare reality to the models. We can then sort out the models by putting (say) all but the top five or so into a “failed” bin and stop including them in any sort of analysis or policy decisioning whatsoever unless or until they start to actually agree with reality.
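A sketch of that “hand out the ribbons” step (score each model run against observations over a common period and keep only the best few) might look like this, with made-up arrays standing in for real anomaly series:

```python
# Sketch of "compare reality to the models, keep the winners": rank model runs
# by RMSE against observations and set the rest aside. The series here are
# made up purely to show the mechanics.
import numpy as np

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

def keep_best(obs, model_runs, n_keep=5):
    """Rank model runs by RMSE against observations; return (kept, binned)."""
    ranked = sorted(model_runs.items(), key=lambda kv: rmse(obs, kv[1]))
    return ranked[:n_keep], ranked[n_keep:]

rng = np.random.default_rng(1)
months = np.arange(120)
obs = 0.0005 * months + rng.normal(0, 0.1, size=120)          # near-flat, noisy
model_runs = {f"model_{i:02d}": 0.0002 * (i + 1) * months + rng.normal(0, 0.1, size=120)
              for i in range(34)}

winners, binned = keep_best(obs, model_runs, n_keep=5)
print("kept  :", [name for name, _ in winners])
print("binned:", len(binned), "runs mothballed until they start matching reality")
```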
Then real scientists might contemplate sitting down with those five winners and meditate upon what makes them winners — what makes them come out the closest to reality — and see if they could figure out ways of making them work even better. For example, if they are egregiously high and diverging from the empirical data, one might consider adding previously omitted physics, semi-empirical or heuristic corrections, or adjusting input parameters to improve the fit.
Then comes the hard part. Waiting. The climate is not as simple as a Carbon atom. The latter’s spectrum never changes, it is a fixed target. The former is never the same. Either one’s dynamical model is never the same and mirrors the variation of reality or one has to conclude that the problem is unsolved and the implementation of the physics is wrong, however “well-known” that physics is. So one has to wait and see if one’s model, adjusted and improved to better fit the past up to the present, actually has any predictive value.
Worst of all, one cannot easily use statistics to determine when or if one’s predictions are failing, because damn, climate is nonlinear, non-Markovian, chaotic, and is apparently influenced in nontrivial ways by a world-sized bucket of competing, occasionally cancelling, poorly understood factors. Soot. Aerosols. GHGs. Clouds. Ice. Decadal oscillations. Defects spun off from the chaotic process that cause global, persistent changes in atmospheric circulation on a local basis (e.g. blocking highs that sit out on the Atlantic for half a year) that have a huge impact on annual or monthly temperatures and rainfall and so on. Orbital factors. Solar factors. Changes in the composition of the troposphere, the stratosphere, the thermosphere. Volcanoes. Land use changes. Algae blooms.
And somewhere, that damn butterfly. Somebody needs to squash the damn thing, because trying to ensemble average a small sample from a chaotic system is so stupid that I cannot begin to describe it. Everything works just fine as long as you average over an interval short enough that you are bound to a given attractor, oscillating away, things look predictable and then — damn, you change attractors. Everything changes! All the precious parameters you empirically tuned to balance out this and that for the old attractor suddenly require new values to work.
This is why it is actually wrong-headed to acquiesce in the notion that any sort of p-value or R² derived from an AR5 mean has any meaning. It gives up the high ground (even though one is using it for a good purpose, trying to argue that this “ensemble” fails elementary statistical tests). But statistical testing is a shaky enough theory as it is, open to data dredging and horrendous error alike, and that’s when it really is governed by underlying IID processes (see “Green Jelly Beans Cause Acne”). One cannot naively apply a criterion like rejection if p < 0.05, and all that means under the best of circumstances is that the current observations are improbable given the null hypothesis at 19 to 1. People win and lose bets at this level all the time. One time in 20, in fact. We make a lot of bets!
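The “19 to 1” remark is easy to see in a quick simulation: run twenty significance tests on pure noise at p < 0.05 and count the spurious hits (illustrative only; requires scipy):

```python
# Quick simulation of "one time in 20": test twenty batches of pure noise for a
# nonzero mean at p < 0.05 and count how many come out "significant" by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
false_alarms = 0
for _ in range(20):
    noise = rng.normal(0.0, 1.0, size=100)        # no real signal at all
    _, p = stats.ttest_1samp(noise, popmean=0.0)
    false_alarms += p < 0.05

print(f"{false_alarms} of 20 null tests came out 'significant' at p < 0.05")
```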
So I would recommend — modestly — that skeptics try very hard not to buy into this and redirect all such discussions to questions such as why the models are in such terrible disagreement with each other, even when applied to identical toy problems that are far simpler than the actual Earth, and why we aren’t using empirical evidence (as it accumulates) to reject failing models and concentrate on the ones that come closest to working, while also not using the models that are obviously not working in any sort of “average” claim for future warming. Maybe they could hire themselves a Bayesian or two and get them to recompute the AR curves, I dunno.
It would take me, in my comparative ignorance, around five minutes to throw out all but the best 10% of the GCMs (which are still diverging from the empirical data, but arguably are well within the expected fluctuation range on the DATA side), sort the remainder into top-half models that should probably be kept around and possibly improved, and bottom half models whose continued use I would defund as a waste of time. That wouldn’t make them actually disappear, of course, only mothball them. If the future climate ever magically popped back up to agree with them, it is a matter of a few seconds to retrieve them from the archives and put them back into use.
Of course if one does this, the GCM predicted climate sensitivity plunges from the totally statistically fraudulent 2.5 C/century to a far more plausible and still possibly wrong ~1 C/century, which — surprise — more or less continues the post-LIA warming trend with a small possible anthropogenic contribution. This large a change would bring out pitchforks and torches as people realize just how badly they’ve been used by a small group of scientists and politicians, how much they are the victims of indefensible abuse of statistics to average in the terrible with the merely poor as if they are all equally likely to be true with randomly distributed differences.
rgb
Going back to 1998 is small potatoes. Let’s go back 1000 years, 2000, 5000, even back to the last interglacial. The best data we have show that all of those times were warmer than now.
17 years? Piffle.
As I understand it, running the same model twice in a row with the same parameters won’t even produce the same results. But somehow averaging the results together is meaningful? Riiiight. As meaningful as a “global temperature” which is not at all.
Steven said:
“Since when is weather/climate a linear behavorist?… I realize this is a short timescale and things may look linear but they are not. Not even close.”
Absolutely spot-on Steven. Drawing lines all over data that is patently non-linear in its behaviour is a key part of the CAGW hoax.
RichardLH, the context of the discussion is Monckton’s statement that “On Dr. Santer’s 17-year test, then, the models may have failed. A rethink is needed.” This statement is based on a (IMHO probably intentional) mixing of the measured trend which is what Santer was talking about and whether the trend is statistically significant or not. How can a model be falsified by a value of the trend that isn’t significantly different from the expected?
This whole argument is the most ridiculous thing I’ve ever seen…
…who in their right mind would argue with these nutters when you start out by letting them define what’s “normal”
You guys have sat back and let the enemy define where that “normal” line is drawn…
….and then you argue with them that it’s above or below “normal”
Look at any paleo temp record……and realize how stupid this argument is
http://www.foresight.org/nanodot/wp-content/uploads/2009/12/histo4.png