By Christopher Monckton of Brenchley
As Anthony and others have pointed out, even the New York Times has at last been constrained to admit what Dr. Pachauri of the IPCC conceded some months ago: there has been no global warming statistically distinguishable from zero for getting on for two decades.
The NYT says the absence of warming arises because skeptics cherry-pick 1998, the year of the Great El Niño, as their starting point. However, as Anthony explained yesterday, the stasis goes back farther than that. He says we shall soon be approaching Dr. Ben Santer’s 17-year test: if there is no warming for 17 years, the models are wrong.
Usefully, the latest version of the Hadley Centre/Climatic Research Unit monthly global mean surface temperature anomaly series provides not only the anomalies themselves but also the 2 σ uncertainties.
Superimposing the temperature curve and its least-squares linear-regression trend on the statistical insignificance region bounded by the means of the trends on these published uncertainties since January 1996 demonstrates that there has been no statistically-significant warming in 17 years 4 months:
On Dr. Santer’s 17-year test, then, the models may have failed. A rethink is needed.
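As a rough sketch of the significance test just described, one might check whether the endpoint of the least-squares trend stays inside the ±2 σ band about the mean. The series below is synthetic stand-in data, not the real HadCRUT4 anomalies (which must be downloaded separately), and the 0.15 Cº half-width is simply the figure quoted in this article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the HadCRUT4 monthly anomalies from
# January 1996; the real series must be downloaded separately.
n_months = 208                      # 17 years 4 months
t = np.arange(n_months) / 12.0      # time in years
anoms = 0.004 * t + rng.normal(0.0, 0.1, n_months)

# Least-squares linear trend (polyfit returns slope first).
slope, intercept = np.polyfit(t, anoms, 1)
trend_end = intercept + slope * t[-1]

# Published 2-sigma measurement uncertainty: roughly 0.15 C either
# side of the central estimate, per the article.
two_sigma = 0.15
mean_anom = anoms.mean()

# Treat the trend as statistically indistinguishable from zero if
# its endpoint stays inside the +/- 2-sigma band about the mean.
insignificant = abs(trend_end - mean_anom) < two_sigma
print(f"trend = {slope * 100:.2f} C/century, "
      f"insignificant = {insignificant}")
```

This is only a caricature of the test, but it shows the mechanics: a small trend plus a wide uncertainty band yields statistical insignificance.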
The fact that an apparent warming rate equivalent to almost 0.9 Cº/century is statistically insignificant may seem surprising at first sight, but there are two reasons for it. First, the published uncertainties are substantial: approximately 0.15 Cº either side of the central estimate.
Second, one weakness of linear regression is that it is unduly influenced by outliers. Visibly, the Great El Niño of 1998 is one such outlier.
If 1998 were the only outlier, and particularly if it were the largest, going back to 1996 would be much the same as cherry-picking 1998 itself as the start date.
However, the magnitude of the 1998 positive outlier is countervailed by that of the 1996/7 La Niña. Also, there is a still more substantial positive outlier in the shape of the 2007 El Niño, against which the La Niña of 2008 countervails.
In passing, note that the cooling from January 2007 to January 2008 is the fastest January-to-January cooling in the HadCRUT4 record going back to 1850.
Bearing these considerations in mind, going back to January 1996 is a fair test for statistical significance. And, as the graph shows, there has been no warming that we can statistically distinguish from zero throughout that period, for even the rightmost endpoint of the regression trend-line falls (albeit barely) within the region of statistical insignificance.
Be that as it may, one should beware of focusing the debate solely on how many years and months have passed without significant global warming. Another strong El Niño could – at least temporarily – bring the long period without warming to an end. If so, the cry-babies will screech that catastrophic global warming has resumed, the models were right all along, etc., etc.
It is better to focus on the ever-widening discrepancy between predicted and observed warming rates. The IPCC’s forthcoming Fifth Assessment Report backcasts the interval of 34 models’ global warming projections to 2005, since when the world should have been warming at a rate equivalent to 2.33 Cº/century. Instead, it has been cooling at a rate equivalent to a statistically-insignificant 0.87 Cº/century:
The variance between prediction and observation over the 100 months from January 2005 to April 2013 is thus equivalent to 3.2 Cº/century.
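As a sanity check, the 3.2 Cº/century figure is simply the gap between the two quoted rates; in code (numbers taken from the text above):

```python
# Figures quoted in the article above (C per century).
predicted = 2.33     # AR5 models' implied warming rate since 2005
observed = -0.87     # observed, statistically insignificant cooling rate
discrepancy = predicted - observed
print(f"discrepancy = {discrepancy:.1f} C/century")  # 3.2
```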
The correlation coefficient is low, the period of record is short, and I have not yet obtained the monthly projected-anomaly data from the modelers to allow a proper p-value comparison.
Yet it is becoming difficult to suggest with a straight face that the models’ projections are healthily on track.
From now on, I propose to publish a monthly index of the variance between the IPCC’s predicted global warming and the thermometers’ measurements. That variance may well inexorably widen over time.
In any event, the index will limit the scope for false claims that the world continues to warm at an unprecedented and dangerous rate.
UPDATE: Lucia’s Blackboard has a detailed essay analyzing the recent trend, written by SteveF, using an improved index for accounting for ENSO, volcanic aerosols, and solar cycles. He concludes the best estimate rate of warming from 1997 to 2012 is less than 1/3 the rate of warming from 1979 to 1996. Also, the original version of this story incorrectly referred to the Washington Post, when it was actually the New York Times article by Justin Gillis. That reference has been corrected. - Anthony
Related articles
- The warming ‘plateau’ may extend back even further (wattsupwiththat.com)
- Are We in a Pause or a Decline? (Now Includes at Least April* Data) (wattsupwiththat.com)
- The Met Drops Its Basis For Claim Of “Significant” Warming (papundits.wordpress.com)
- Benchmarking IPCC’s warming predictions (wattsupwiththat.com)
- WUWT: 150 million hits and counting (wattsupwiththat.com)
Lars said… Only models which have been validated by real data should continue to be used.
I’m a little confused. If you are using real data, then I was taught, and have experienced, that I do not need a model. Right now (8:30 AM MDT), my thermometer outside my south-facing window, in the shade, shows it is +5 C. After a little further checking, yup, it is indeed June 14, 2013, not November, so it is cool out. I do not need a model to tell me that! The only model I need is the one of the F-104 (in 1/48th scale) I helped my then 12-year-old step sister build, which still hangs in her bedroom. It is no more a reality capable of doing anything other than collecting dust than a climate model is. And the rednekk truck I will use today to get to the lake, pulling the boat I will use for fishing (fingers crossed), is a reality, not a model.
Has everybody forgotten GIGO?
Climate, in loose terms, is the average of the variability.
By that reckoning, the seasons should be indistinguishable.
There is no reason to presume that, given an ever-increasing forcing, climate should be cyclical. On geological time scales stretching to hundreds of millions of years, there is no cyclical behaviour. There is no reason to expect it on every time scale. The cyclical, or oscillating, processes we are sure of (ENSO, the solar cycle on a multi-decadal scale) are the variability within the climate system. You appear to be arguing that the world’s climate has oscillated roughly evenly around a mean for the length of its existence. Surely you know that this is wrong.
Another strong el Niño could – at least temporarily – bring the long period without warming to an end.
That is true. It will certainly not be CO2 that brings the long period without warming to an end. Look at the following graph for RSS.
http://www.woodfortrees.org/plot/rss/from:1996.9/plot/rss/from:1996.9/trend
The area on the left that is below the green flat slope line needs a 1998- or 2010-scale El Niño to counter it. Any El Niño that is less strong will merely move the start time for a flat slope in RSS from December 1996 towards December 1997.
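The "start time for a flat slope" idea can be made concrete: scan candidate start months and take the earliest one from which the least-squares trend to the end of the record is no longer positive. The series below is a deterministic toy (ten years of warming, then ten years of exactly flat temperatures), not the real RSS data at the link above:

```python
import numpy as np

# Toy stand-in for the RSS anomalies: steady warming for 120 months,
# then an exactly flat plateau. Illustrative only.
months = np.arange(240)
anoms = 0.002 * np.minimum(months, 120)

def earliest_flat_start(series, min_len=60):
    """Earliest start index from which the least-squares trend to
    the end of the series is no longer positive (a 'flat slope')."""
    end = len(series)
    for start in range(end - min_len):
        slope = np.polyfit(np.arange(start, end), series[start:end], 1)[0]
        if slope <= 1e-9:          # tolerate floating-point dust
            return start
    return None

print(earliest_flat_start(anoms))  # 120: the first month of the plateau
```

On this toy series the flat period begins exactly where the plateau starts; on noisy real data the answer shifts with every strong El Niño or La Niña, which is the commenter's point.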
Monckton of Brenchley:
Though correlation indeed does not imply causation, absence of correlation necessarily implies absence of causation.
You have me puzzling on this one. If true, then RSA encryption should be impossible. The input causes the output, but as I understand it (probably incorrectly) it is next to impossible to find a correlation between the two. I don’t immediately see how your statement is a logical necessity.
Juan, it would be the encryption algorithm that causes the output, wouldn’t it? Inspecting these two, you could discover a correlation to the output.
I think Monckton’s point holds. Consider two kinds of event which are uncorrelated. What would you take as evidence that “in spite of complete lack of correlation, events of type A cause events of type B”? I don’t think anything would count as evidence, do you? I can’t imagine a possible world in which there is such evidence. The meaning of “causation” and “complete lack of correlation” just don’t overlap. So, I would conclude that absence of correlation necessarily implies absence of causation.
juan slayton says:
June 14, 2013 at 9:15 am
“…it is next to impossible to find a correlation between the two.”
You generally do not have “the two”, just the one, the output.
All those little adjustments upwards in recent history have come back to haunt the alarmists. The temperatures keep failing to rise, so they have to keep adjusting just to keep the trend flat, hehe
For those with an interest: several months ago the University of Kentucky hosted a forum on climate change with three excellent speakers, all self-described conservatives. Liberals reported that they came away better understanding that there are thoughtful conservative perspectives on, and solutions to, climate change, thus allowing for a broadened public discussion. In turn, conservatives in attendance learned the same thing. You can watch the recording of this event at http://bit.ly/135gvNa. The starting time for each speaker is noted at that page, so you can listen to the speakers of greatest interest to you.
The hockey stick is primarily an AGW industry marketing tool created by Michael Mann, Limited Liability Climatologist (LLC), in response to a pressing market need for a scientific-looking analysis product which eliminates the Medieval Warm Period.
But do the climate modelers take the hockey stick seriously enough to incorporate its purported historical data into their hindcasts and/or their predictions, either directly or indirectly? Perhaps someone can give us some informed comment as to whether they do or they don’t.
In any case, what ever happens with the future trend in global mean temperature — up, down, or flat — the climate science community as a whole will never abandon its AGW dogma.
The great majority of climate scientists — 80%, 90%, 97%, whatever percentage it actually is — will continue with “It’s the CO2, and nothing else but the CO2, so help us God”, regardless of how convoluted the explanations must become to support that narrative.
I am trying, just for fun, as a kind of a game, to imagine how the politicians, public, and warmists would react to global cooling. … It is curiously difficult to imagine the scenario of global cooling after 20 years of nonstop media discussions, scientific papers, IPCC reports, yearly climate conferences, and books all pushing global warming as a crisis. …
To imagine global cooling, it seems it is necessary to pretend or try to imagine the warming of the last 70 years had nothing to do with the increase in atmospheric CO2. Try to imagine that the warming was 100% due to solar magnetic cycle changes. (That makes it possible for the warming to be reversible.) Got that picture? Now imagine the onset of significant cooling, back to 1850’s climate. The cooling will be significant and rapid, occurring over roughly 5 years. Can you picture that change?
http://www.solen.info/solar/images/comparison_recent_cycles.png
Will the public request a scientific explanation for the onset of significant planetary cooling? Will the media start to interview the so-called ‘skeptics’? Will the media connect the sudden slowdown of the solar magnetic cycle with the planetary cooling? … Will the media ask why no one noticed that there are cycles of warming and cooling in the paleo-climate record that correlate with solar magnetic cycle changes? The warming and cooling cycles are clearly evident. There are peer-reviewed papers that connected past solar magnetic cycle changes with the warming and cooling cycles. How is it possible that this evidence was ignored? When there were 17 years without warming, why did no one re-examine the theory?
How long will the public accept massive subsidies for scam green energy if there is unequivocal, significant evidence that the planet is cooling? Add a stock market crash and a currency crisis to the picture.
Greenland ice temperature, last 11,000 years determined from ice core analysis, Richard Alley’s paper.
http://www.climate4you.com/images/GISP2%20TemperatureSince10700%20BP%20with%20CO2%20from%20EPICA%20DomeC.gif
http://www.dailymail.co.uk/news/article-2341484/Floods-droughts-snow-May-Britains-weather-got-bad-Met-Office-worried.html#ixzz2WBzcNZIc
http://en.wikipedia.org/wiki/Little_Ice_Age
Little Ice Age
The Little Ice Age (LIA) was a period of cooling that occurred after the Medieval Warm Period (Medieval Climate Optimum).[1] While it was not a true ice age, the term was introduced into the scientific literature by François E. Matthes in 1939.[2] It has been conventionally defined as a period extending from the 16th to the 19th centuries,[3][4][5] or alternatively, from about 1350 to about 1850,[6]….
Europe/North America
… The population of Iceland fell by half, but this was perhaps caused by fluorosis after the eruption of the volcano Laki in 1783.[20] Iceland also suffered failures of cereal crops, and people moved away from a grain-based diet.[21] The Norse colonies in Greenland starved and vanished (by the early 15th century), as crops failed and livestock … Hubert Lamb said that in many years, “snowfall was much heavier … Crop practices throughout Europe had to be altered to adapt to the shortened, less reliable growing season, and there were many years of dearth and famine (such as the Great Famine of 1315–1317, although this may have been before the LIA proper).[25] According to Elizabeth Ewan and Janay Nugent, “Famines in France 1693–94, Norway 1695–96 and Sweden 1696–97 claimed roughly 10% of the population of each country. In Estonia and Finland in 1696–97, losses have been estimated at a fifth and a third of the national populations, respectively.”[26] Viticulture disappeared from some northern regions. Violent storms caused serious flooding and loss of life. Some of these resulted in permanent loss of large areas of land from the Danish, German and Dutch coasts.[24] … Historian Wolfgang Behringer has linked intensive witch-hunting episodes in Europe to agricultural failures during the Little Ice Age.[36]
Comment:
As the planet has suddenly started to cool, I would assume GCR now again modulates planetary cloud cover. We certainly appear to live in interesting times.
http://ocean.dmi.dk/arctic/meant80n.uk.php
http://nsidc.org/data/seaice_index/images/daily_images/S_timeseries.png
Hypothesis: 17y 4m > 17y.
Hey warmists, lmao.
Accuracy was never the goal of climate models — there’s no money in that. Scientists were forced to “Chicken Little” the results to try and spur action by governments. Seventeen years later and the sky hasn’t fallen. The new “Chicken Little” meme is “Extreme Climate Events”.
Steven says:
June 13, 2013 at 4:36 am
I keep seeing these graphs with linear progressions. Seriously. I mean seriously. Since when is weather/climate a linear behaviorist? … I am glad someone has the tolerance to deal with these idiots. I certainly don’t.
>>>>>>>>>>>>>>>>>>>
Monckton and others use the assumptions made by the Warmists, like linear behavior and use their much abused/fudged data sets and STILL win the scientific debate. No wonder the Climastrologists refused to debate knowledgeable people or even entertain questions about warming from the lay audience. Only by continually moving the goal posts and silencing any and all questions can they keep the Hoax going.
ferdberple says:
June 14, 2013 at 6:19 am
Nick Stokes says:
June 14, 2013 at 1:05 am
As to averaging models, no, I don’t condemn it. It has been the practice since the beginning, and for good reason. As I said above, models generate weather, from which we try to discern climate. In reality, we just have to wait for long-term averages and patterns to emerge.
============
There is no good reason to average chaos. It is mathematical nonsense to do so, because the law of large numbers does not apply to chaotic time series. There is no mean around which the data can be expected to converge.
Averaging is a quite well-defined, and quite fictitious, process that we simply made up in our heads, like all mathematics. It’s over half a century since I last had any formal instruction in mathematics; but I do have a degree in it, so I vaguely recollect how it can sometimes be quite pedantic in its exact wording.
But in layperson lingo, it is quite simple. You have a set of numbers; hopefully each of them expressed in the same number system; binary/octal/decimal/whatever.
You add all of the numbers together, using the rules for addition that apply to whatever branch of arithmetic you are using, and then you divide the total by the number of original input numbers you started with; the result is called the “average”. Some may use the word “mean” as having the same meaning, but I prefer to be cautious and not assume that “mean” and “average” are exactly the same thing.
So that is what “average” is. Now notice, I said nothing about the original numbers, other than they all belong to the same number system. There is no assumption that the numbers are anything other than some numbers, and are quite independent of each other.
No matter, the definition of “average” doesn’t assume any connections, real or imagined, between the numbers. There also is no assumption that the “average” has ANY meaning whatsoever. It simply is the result of applying a well defined algorithm, to a set of numbers.
So it works for the money amount on your pay check, each pay interval, or for the telephone numbers in your local phone book, or for the number of “animals” (say larger than an ant) per square meter of the earth surface (if you want to bother checking the number in your yard.)
Or it also works for the number you get if you read the thermometer once per day, or once per hour, outside your back door.
In all cases, it had NO meaning, other than fitting the defined function “average”, that we made up.
If you sit down in your back yard, mark out a square meter, and then count the number of larger-than-ant-sized animals in that area, you are not likely to count a number that equals the global average value. Likewise, whatever the source of your set of numbers, you aren’t likely to ever find that average number wherever you got your set from. It’s not a “real” number; pure fiction, as distinct from the numbers you read off your back-door thermometer, which could be classified as “data”.
Averages, are not “data”; they are the defined result of applying a made up algorithm to a set of numbers; ANY set of numbers drawn from a single number system, and the only meaning they have, is that they are the “average”.
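The commenter's point is easy to demonstrate: the "average" is just an algorithm applied to a set of numbers, and nothing requires the result to be a member of the set. A trivial illustration (the readings are made up):

```python
# The "average" algorithm, exactly as described above: add the
# numbers, divide by how many there are. Nothing guarantees the
# result is itself one of the original numbers.
readings = [3, 4, 6, 9]             # e.g. four thermometer readings
average = sum(readings) / len(readings)
print(average)                       # 5.5, which appears nowhere in the set
assert average not in readings
```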
@Bart
Bart says:
June 13, 2013 at 3:51 pm
jai mitchell says:
June 13, 2013 at 3:41 pm
“However, the change in temperatures during the last 5 decades are not based on changes in the sun’s intensity since that effect is pretty much instantaneous.”
Sigh… Just another guy who does not understand the concept of frequency response.
————-
Bart,
your link simply says that the response can be delayed by up to 90 degrees. Since the period of the cycle is 11 years, 90 degrees is a quarter of the cycle, about 2.75 years.
the average over the entire cycle has not changed significantly over the last 50 years. It sounds like you don’t understand the question.
I will restate it.
If the LIA was caused solely by solar activity (and not also by an abnormal increase in volcanic activity), then the amount of warming since then would imply a significant swing in temperatures, following the current solar cycle, every 6 years or so (from trough to maximum).
Your link only says that this effect is “delayed” not “averaged” over the period of the sine wave.
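For what it is worth, the disputed frequency-response behaviour can be checked against the textbook result: a first-order low-pass system driven by a sinusoid lags by arctan(ωτ), which approaches but never exceeds 90 degrees, i.e. a quarter of the 11-year cycle (about 2.75 years). The numbers below are purely illustrative, not a climate fit:

```python
import numpy as np

# Phase lag of a first-order system (time constant tau) forced by
# an 11-year sinusoid. Illustrative numbers only.
period = 11.0                        # years
omega = 2.0 * np.pi / period
tau = 30.0                           # a long system time constant, years

# Analytic first-order low-pass lag: atan(omega * tau), which tends
# to 90 degrees (a quarter cycle) as tau grows, never beyond it.
lag_deg = np.degrees(np.arctan(omega * tau))
lag_years = (lag_deg / 360.0) * period
print(f"lag = {lag_deg:.1f} deg = {lag_years:.2f} years")
```

Even with a very long time constant, the lag saturates just below a quarter cycle; a half-cycle (5.5-year) lag is not available from a single first-order response.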
http://gizmodo.com/uh-why-is-an-artist-living-inside-a-floating-wooden-eg-512882997
Apparently this genius hasn’t gotten the memo yet……
Forgot to h/t Steve Milloy at Junk Science
M Courtney says: June 14, 2013 at 6:54 am
“That is the error that rgbatduke skewered at June 13, 2013 at 7:20 am…
Solar magnetic dipole is finally on the move
http://www.vukcevic.talktalk.net/LFC6.htm
@rgbatduke
One thing in your 1st post that I missed and is really important is your reference to Taylor’s theorem, which is a really important point.
Given small intervals, Taylor’s theorem allows one to linearise a system by ignoring higher derivatives. Normally we do this knowing that it is an approximation, and compute new results as one extends the interval from the initial condition, taking care to achieve stability. As the interval increases, one needs an increasing number of higher-order terms to describe the system. However, in an observed system such as temperature, we have difficulty in extracting the 1st derivative, let alone the higher derivatives. Hence we use linear trends, because we can’t measure the signal sufficiently accurately to do anything else.
This impinges on averaging model results. If we have several models, their outputs at T+Δt could be identical and we could say that the models were good. However, the mechanisms could be different, and the higher-order derivatives could be different at T=0. The models have been calibrated over a short period so that they conform to a data set. When one averages the “output”, one is, by implication, also averaging the initial derivatives, which seems highly questionable. As time increases, the results of the models will depend increasingly on the higher derivatives at the initial conditions, and they will then diverge. One could say that the models’ first-order term is reasonably correct, but by averaging one is also saying that the higher derivatives don’t matter.
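A minimal sketch of that divergence argument, using two invented toy "models" (not real GCMs) that agree in value and first derivative at T=0 but differ in their second derivatives:

```python
# Two toy "models" calibrated to agree in value and first derivative
# at t = 0, but carrying opposite-signed second-order terms. Near
# t = 0 they match; as t grows the higher-order terms dominate.
def model_a(t):
    return 1.0 + 0.5 * t + 0.10 * t**2

def model_b(t):
    return 1.0 + 0.5 * t - 0.10 * t**2

for t in (0.1, 1.0, 10.0):
    gap = abs(model_a(t) - model_b(t))   # = 0.2 * t**2
    print(f"t = {t:5.1f}  divergence = {gap:.3f}")
```

The gap grows quadratically: negligible over the calibration interval, dominant far beyond it, which is the commenter's concern about averaging away the higher derivatives.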
If it isn’t CO2 heating up the planet, it is coal dust blown from a moving train polluting surface waters, and so it goes. The US $40-billion-a-year coal industry fights for survival, and the green apparatchiks want to kill it: http://www.nytimes.com/2013/06/15/business/energy-environment/a-fight-over-coal-exports-and-the-industrys-future.html?_r=0&hp=&adxnnl=1&adxnnlx=1371236502-ynr7hQjeIpB7QrZ2jnxtEQ
Monckton of Brenchley says:
June 14, 2013 at 7:04 am
“Mr. Stokes vexatiously persists in maintaining that Professor Brown had criticized my graphs, long after the Professor himself has plainly stated he had criticized not my graphs but the IPCC’s graphs”
The Professor plainly stated what graphs he was criticising:
“This is reflected in the graphs Monckton publishes above, where …”
He seems to have been under the impression that they are IPCC graphs, but they aren’t, are they?
His criticisms are quite specific. No IPCC graphs have been nominated which have the kind of statistics that he criticises. Your graphs do.
I often read these threads from the bottom up, so I see the recent comments, and can go back up to see what inspired them.
So I finally got to the original post of rgbatduke, that many had referenced.
So now I know, that I made a correct decision, when I decided to forgo the pleasures of rigorous quantum mechanics; and launch into a career in industry instead. Even so, starting in electronics with a degree in Physics and Maths, instead of an EE degree, made me already a bit of an oddball.
But I also remember when I got dissatisfied with accepting that the Voltage gain for a Pentode stage was simply “gm. Rl” and I figured, I should be able to start from the actual electrode geometries inside a triode or pentode, and solve the electrostatic field equations, to figure out where all the electrons would go, so I would have a more accurate model of the circuit behavior. And this was before PCs and Spice. Well I also remember, when I decided on the total idiocy of that venture, and consigned it to the circular file.
So Prof. Robert demonstrated why sometimes, too much accuracy is less than useless, if you can’t actually use the result to solve real problems. Well I eventually accepted that Vg = gm.Rl is also good enough for both bipolar and MOS transistors too, much of the time. Well, you eventually accept that negative feedback designs are even better, and can make the active devices almost irrelevant.
It is good if your Physics can be rendered sufficiently real, so you might derive Robert’s carbon spectrum to advance atomic modeling capability, for a better understanding of what we think matter is; but no, it isn’t the way to predict the weather next week.
A recently acquired PhD physicist friend, who is currently boning up on QM at Stanford; mostly as an anti-Alzheimer’s brain stimulus, told me directly, that QM can only mess things up, more than they currently are; well unless of course, you need it.
Thanks Robert..
Nick Stokes says (June 14, 2013 at 11:55 am): “The Earth “uses the same physics” and famously gets different results, day after day.”
Different from what? AFAIK we only have one Earth.
One thing about the uncertainty in the trend since Jan 1996: the trend could be as low as −0.029 Cº per decade or as high as 0.207 Cº per decade, the two extremes being equally probable, with the central estimate around 0.089 Cº per decade.
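A trend interval like that can be reproduced with an ordinary least-squares fit and the slope's standard error. The series below is synthetic stand-in data (the real HadCRUT4 numbers must come from the Hadley Centre), so the interval it prints is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical monthly anomalies standing in for HadCRUT4 since
# January 1996. Noise level chosen for illustration only.
n = 208
t = np.arange(n) / 120.0                     # time in decades
anoms = 0.089 * t + rng.normal(0.0, 0.4, n)

# OLS slope with its standard error, then an approximate 2-sigma
# confidence interval for the trend in C per decade.
coeffs, cov = np.polyfit(t, anoms, 1, cov=True)
trend = coeffs[0]
se = np.sqrt(cov[0, 0])
lo, hi = trend - 2.0 * se, trend + 2.0 * se
print(f"trend = {trend:.3f} C/decade, ~95% CI [{lo:.3f}, {hi:.3f}]")
```

With the article's figures, the corresponding interval straddles zero, which is exactly what "statistically indistinguishable from zero" means here.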